Unverified Commit 5a06bb8a authored by: H hzcheng, committed by: GitHub

Merge branch 'master' into master

......@@ -6,10 +6,10 @@ We appreciate contributions from all developers. Feel free to follow us, fork th
Any user can report bugs to us through the [github issue tracker](https://github.com/taosdata/TDengine/issues). We appreciate a detailed description of the problem you encountered. It is better to provide the detailed steps for reproducing the bug. Otherwise, an appendix with log files generated by the bug is welcome.
## Sign the contributor license agreement
## Read the contributor license agreement
It is required to sign the Contributor Licence Agreement(CLA) before a user submitting your code patch. Follow the [TaosData CLA](https://www.taosdata.com/en/contributor/) link to access the agreement and instructions on how to sign it.
It is required to agree to the Contributor License Agreement (CLA) before submitting your code patch. Follow the [TaosData CLA](https://www.taosdata.com/en/contributor/) link to read through the agreement.
## Submit your code
Before submitting your code, make sure to [sign the contributor license agreement](#sign-the-contributor-license-agreement) beforehand. Your submission should solve an issue or add a feature registered in the [github issue tracker](https://github.com/taosdata/TDengine/issues). If no corresponding issue or feature is found in the issue tracker, please create one. When submitting your code to our repository, please create a pull request with the issue number included.
Before submitting your code, make sure to [read the contributor license agreement](#read-the-contributor-license-agreement) beforehand. If you don't accept the agreement, please do not submit your code; submitting your code means you have accepted the agreement. Your submission should solve an issue or add a feature registered in the [github issue tracker](https://github.com/taosdata/TDengine/issues). If no corresponding issue or feature is found in the issue tracker, please create one. When submitting your code to our repository, please create a pull request with the issue number included.
......@@ -23,45 +23,30 @@ For user manual, system design and architecture, engineering blogs, refer to [TD
# Building
At the moment, TDengine only supports building and running on Linux systems. You can choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) or from the source code. This quick guide is for installation from the source only.
To build TDengine, use [CMake](https://cmake.org/) 2.8 or higher versions in the project directory.
Install CMake for example on Ubuntu:
To build TDengine, use [CMake](https://cmake.org/) version 2.8 or higher in the project directory. For example, to install CMake on Ubuntu:
```
sudo apt-get install -y cmake build-essential
```
To compile and package the JDBC driver source code, you should have Java JDK 8 or higher and Apache Maven 2.7 or higher installed.
To install openjdk-8 on Ubuntu:
```
sudo apt-get install openjdk-8-jdk
```
To install Apache Maven on Ubuntu:
```
sudo apt-get install maven
```
Build TDengine:
```cmd
mkdir build && cd build
cmake .. && cmake --build .
```
# Installing
After building successfully, TDengine can be installed by:
```cmd
make install
```
Users can find more information about directories installed on the system in the [directory and files](https://www.taosdata.com/en/documentation/administrator/#Directory-and-Files) section. It should be noted that installing from source code does not configure service management for TDengine.
Users can also choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) instead.
# Running
<!-- TDengine uses _/etc/taos/taos.cfg_ as the default configuration file. This behavior can be changed with _-c_ option. For a quick start, we will make directories structured as:
```
test/
+--data/
|
+--log/
|
+--cfg/
|
+--taos.cfg
```
Then fill the configuration file _test/cfg/taos.cfg_:
```
echo -e "dataDir $(pwd)/test/data\nlogDir $(pwd)/test/log" > test/cfg/taos.cfg
``` -->
To start the TDengine server, run the command below in terminal:
# Quick Run
To quickly start a TDengine server after building, run the command below in a terminal:
```cmd
./build/bin/taosd -c test/cfg
```
......@@ -69,22 +54,30 @@ In another terminal, use the TDengine shell to connect the server:
```
./build/bin/taos -c test/cfg
```
The option "-c test/cfg" specifies the directory of the system configuration file.
# Installing
After building successfully, TDengine can be installed by:
```cmd
make install
```
Users can find more information about directories installed on the system in the [directory and files](https://www.taosdata.com/en/documentation/administrator/#Directory-and-Files) section. It should be noted that installing from source code does not configure service management for TDengine.
Users can also choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) instead.
Start the service in the terminal.
To start the service after installation, in a terminal, use:
```cmd
taosd
```
Then users can use the [TDengine shell](https://www.taosdata.com/en/getting-started/#TDengine-Shell) to connect the TDengine server.
Then users can use the [TDengine shell](https://www.taosdata.com/en/getting-started/#TDengine-Shell) to connect to the TDengine server. In a terminal, run:
```cmd
taos
```
If the terminal connects the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
If the TDengine shell connects to the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
# Try TDengine
It is easy to run SQL commands in the terminal which is the same as other SQL databases.
It is easy to run SQL commands from the TDengine shell, just as in other SQL databases.
```sql
create database db;
use db;
......
......@@ -62,7 +62,7 @@ Time series data is a sequence of data points over time. Inside a table, the dat
To reduce development complexity and improve data consistency, TDengine provides pub/sub functionality. To publish a message, you simply insert a record into a table. Compared with the popular messaging tool Kafka, you subscribe to a table or a SQL query statement instead of a topic. Once new data points arrive, TDengine will notify the application. The process works just like Kafka's.
The detailed API will be introduced in the [connectors](https://www.taosdata.com/en/documentation/advanced-features/) section.
The detailed API will be introduced in the [connectors](https://www.taosdata.com/en/documentation/connector/) section.
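The table-as-topic model can be sketched in miniature: subscribing registers a callback, and inserting a record notifies every subscriber. All names below are illustrative toys, not the actual TDengine connector API:

```c
#include <assert.h>

/* Toy model of pub/sub over a table: inserting a row is publishing, and
 * every registered subscriber is notified.  Illustrative names only. */
#define MAX_SUBS 8

typedef void (*row_cb)(double value, void *ctx);

typedef struct {
    row_cb cbs[MAX_SUBS];
    void  *ctxs[MAX_SUBS];
    int    nsubs;
} toy_table;

/* subscribe to the table, much like subscribing to a topic */
static int toy_subscribe(toy_table *t, row_cb cb, void *ctx) {
    if (t->nsubs >= MAX_SUBS) return -1;
    t->cbs[t->nsubs]  = cb;
    t->ctxs[t->nsubs] = ctx;
    t->nsubs++;
    return 0;
}

/* "publishing" is just inserting a record; subscribers are notified */
static void toy_insert(toy_table *t, double value) {
    for (int i = 0; i < t->nsubs; ++i) t->cbs[i](value, t->ctxs[i]);
}

/* sample subscriber that counts notifications */
static int g_notified = 0;
static void count_cb(double v, void *ctx) { (void)v; (void)ctx; g_notified++; }
```

The real connectors expose this through their query/subscription interfaces; the sketch only shows the shape of the data flow.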
## Caching
TDengine allocates a fixed-size buffer in memory, and newly arrived data is written into the buffer first. Every device or table gets one or more memory blocks. For typical IoT scenarios, the hot data is always the newly arrived data, which is more important for timely analysis. Based on this observation, TDengine manages the cache blocks with a First-In-First-Out (FIFO) strategy. If there is not enough space in the buffer, the oldest data is flushed to disk first and then overwritten by newly arrived data. TDengine also guarantees that every device can keep at least one block of data in the buffer.
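The eviction policy described above can be sketched as a fixed ring of cache blocks per table: when the ring is full, the oldest block is dropped (in TDengine, flushed to disk) to make room for the new one. A toy model with illustrative names:

```c
#include <assert.h>

/* Minimal FIFO cache sketch: NBLOCKS memory blocks per table, oldest first.
 * When full, the oldest block is evicted before its slot is reused. */
#define NBLOCKS 3

typedef struct {
    int blocks[NBLOCKS]; /* block ids, ring buffer, oldest at `head` */
    int head;            /* index of the oldest block */
    int used;            /* number of occupied slots */
} fifo_cache;

/* insert a block; returns the id of the evicted block, or -1 if none */
static int cache_put(fifo_cache *c, int block_id) {
    int evicted = -1;
    if (c->used == NBLOCKS) {              /* buffer full: drop the oldest */
        evicted = c->blocks[c->head];
        c->blocks[c->head] = block_id;
        c->head = (c->head + 1) % NBLOCKS;
    } else {                               /* free slot available */
        c->blocks[(c->head + c->used) % NBLOCKS] = block_id;
        c->used++;
    }
    return evicted;
}
```

With three slots, the fourth insertion evicts the first block, the fifth evicts the second, and so on; this mirrors the "oldest data is flushed first" behavior.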
......
......@@ -37,13 +37,13 @@ extern "C" {
struct SQLFunctionCtx;
typedef struct SLocalDataSrc {
typedef struct SLocalDataSource {
tExtMemBuffer *pMemBuffer;
int32_t flushoutIdx;
int32_t pageId;
int32_t rowIdx;
tFilePage filePage;
} SLocalDataSrc;
} SLocalDataSource;
enum {
TSC_LOCALREDUCE_READY = 0x0,
......@@ -52,7 +52,7 @@ enum {
};
typedef struct SLocalReducer {
SLocalDataSrc **pLocalDataSrc;
SLocalDataSource **pLocalDataSrc;
int32_t numOfBuffer;
int32_t numOfCompleted;
......
......@@ -41,21 +41,24 @@ typedef struct SParsedColElem {
} SParsedColElem;
typedef struct SParsedDataColInfo {
bool ordered; // denote if the timestamp in one data block ordered or not
int16_t numOfCols;
int16_t numOfAssignedCols;
SParsedColElem elems[TSDB_MAX_COLUMNS];
bool hasVal[TSDB_MAX_COLUMNS];
int64_t prevTimestamp;
} SParsedDataColInfo;
SInsertedDataBlocks* tscCreateDataBlock(int32_t size);
void tscDestroyDataBlock(SInsertedDataBlocks** pDataBlock);
STableDataBlocks* tscCreateDataBlock(int32_t size);
void tscDestroyDataBlock(STableDataBlocks* pDataBlock);
void tscAppendDataBlock(SDataBlockList* pList, STableDataBlocks* pBlocks);
SDataBlockList* tscCreateBlockArrayList();
void tscDestroyBlockArrayList(SDataBlockList** pList);
int32_t tscCopyDataBlockToPayload(SSqlObj* pSql, SInsertedDataBlocks* pDataBlock);
void* tscDestroyBlockArrayList(SDataBlockList* pList);
int32_t tscCopyDataBlockToPayload(SSqlObj* pSql, STableDataBlocks* pDataBlock);
void tscFreeUnusedDataBlocks(SDataBlockList* pList);
void tscMergeTableDataBlocks(SSqlObj* pSql, SDataBlockList* pDataList);
STableDataBlocks* tscGetDataBlockFromList(void* pHashList, SDataBlockList* pDataBlockList, int64_t id, int32_t size,
int32_t startOffset, int32_t rowSize, char* tableId);
STableDataBlocks* tscCreateDataBlockEx(size_t size, int32_t rowSize, int32_t startOffset, char* name);
SVnodeSidList* tscGetVnodeSidList(SMetricMeta* pMetricmeta, int32_t vnodeIdx);
SMeterSidExtInfo* tscGetMeterSidInfo(SVnodeSidList* pSidList, int32_t idx);
......@@ -66,8 +69,7 @@ bool tscIsTwoStageMergeMetricQuery(SSqlObj* pSql);
/**
*
* for the projection query on metric or point interpolation query on metric,
* we iterate all the meters, instead of invoke query on all qualified meters
* simultaneously.
* we iterate all the meters, instead of invoke query on all qualified meters simultaneously.
*
* @param pSql sql object
* @return
......@@ -124,8 +126,7 @@ void tscIncStreamExecutionCount(void* pStream);
bool tscValidateColumnId(SSqlCmd* pCmd, int32_t colId);
// get starter position of metric query condition (query on tags) in
// SSqlCmd.payload
// get starter position of metric query condition (query on tags) in SSqlCmd.payload
char* tsGetMetricQueryCondPos(STagCond* pCond);
void tscTagCondAssign(STagCond* pDst, STagCond* pSrc);
void tscTagCondRelease(STagCond* pCond);
......@@ -139,6 +140,7 @@ void tscCleanSqlCmd(SSqlCmd* pCmd);
bool tscShouldFreeAsyncSqlObj(SSqlObj* pSql);
void tscDoQuery(SSqlObj* pSql);
void sortRemoveDuplicates(STableDataBlocks* dataBuf);
#ifdef __cplusplus
}
#endif
......
......@@ -169,16 +169,22 @@ typedef struct STagCond {
char * pData;
} STagCond;
typedef struct SInsertedDataBlocks {
typedef struct STableDataBlocks {
char meterId[TSDB_METER_ID_LEN];
int64_t vgid;
int64_t size;
int64_t prevTS;
bool ordered;
int32_t numOfMeters;
int32_t rowSize;
uint32_t nAllocSize;
uint32_t numOfMeters;
union {
char *filename;
char *pData;
};
} SInsertedDataBlocks;
} STableDataBlocks;
typedef struct SDataBlockList {
int32_t idx;
......@@ -186,7 +192,7 @@ typedef struct SDataBlockList {
int32_t nAlloc;
char * userParam; /* user assigned parameters for async query */
void * udfp; /* user defined function pointer, used in async model */
SInsertedDataBlocks **pData;
STableDataBlocks **pData;
} SDataBlockList;
typedef struct {
......
......@@ -410,7 +410,7 @@ void tscAsyncInsertMultiVnodesProxy(void *param, TAOS_RES *tres, int numOfRows)
tscTrace("%p Async insertion completed, destroy data block list", pSql);
// release data block data
tscDestroyBlockArrayList(&pCmd->pDataBlocks);
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
// all data has been sent to vnode, call user function
(*pSql->fp)(pSql->param, tres, numOfRows);
......
......@@ -53,11 +53,11 @@
return TSDB_CODE_INVALID_SQL; \
} while (0)
static void setErrMsg(char* msg, char* sql);
static int32_t tscAllocateMemIfNeed(SInsertedDataBlocks* pDataBlock, int32_t rowSize);
static void setErrMsg(char *msg, char *sql);
static int32_t tscAllocateMemIfNeed(STableDataBlocks *pDataBlock, int32_t rowSize);
// get formation
static int32_t getNumericType(const char* data) {
static int32_t getNumericType(const char *data) {
if (*data == '-' || *data == '+') {
data += 1;
}
......@@ -73,7 +73,7 @@ static int32_t getNumericType(const char* data) {
}
}
static int64_t tscToInteger(char* data, char** endPtr) {
static int64_t tscToInteger(char *data, char **endPtr) {
int32_t numType = getNumericType(data);
int32_t radix = 10;
......@@ -86,14 +86,14 @@ static int64_t tscToInteger(char* data, char** endPtr) {
return strtoll(data, endPtr, radix);
}
int tsParseTime(char* value, int32_t valuelen, int64_t* time, char** next, char* error, int16_t timePrec) {
char* token;
int tsParseTime(char *value, int32_t valuelen, int64_t *time, char **next, char *error, int16_t timePrec) {
char * token;
int tokenlen;
int64_t interval;
int64_t useconds = 0;
char* pTokenEnd = *next;
char *pTokenEnd = *next;
tscGetToken(pTokenEnd, &token, &tokenlen);
if (tokenlen == 0 && strlen(value) == 0) {
INVALID_SQL_RET_MSG(error, "missing time stamp");
......@@ -166,32 +166,32 @@ int tsParseTime(char* value, int32_t valuelen, int64_t* time, char** next, char*
return TSDB_CODE_SUCCESS;
}
int32_t tsParseOneColumnData(SSchema* pSchema, char* value, int valuelen, char* payload, char* msg, char** str,
int32_t tsParseOneColumnData(SSchema *pSchema, char *value, int valuelen, char *payload, char *msg, char **str,
bool primaryKey, int16_t timePrec) {
int64_t temp;
int32_t nullInt = *(int32_t*)TSDB_DATA_NULL_STR_L;
char* endptr = NULL;
int32_t nullInt = *(int32_t *)TSDB_DATA_NULL_STR_L;
char * endptr = NULL;
errno = 0; // reset global error code
switch (pSchema->type) {
case TSDB_DATA_TYPE_BOOL: { // bool
if (valuelen == 4 && nullInt == *(int32_t*)value) {
*(uint8_t*)payload = TSDB_DATA_BOOL_NULL;
if (valuelen == 4 && nullInt == *(int32_t *)value) {
*(uint8_t *)payload = TSDB_DATA_BOOL_NULL;
} else {
if (strncmp(value, "true", valuelen) == 0) {
*(uint8_t*)payload = TSDB_TRUE;
*(uint8_t *)payload = TSDB_TRUE;
} else if (strncmp(value, "false", valuelen) == 0) {
*(uint8_t*)payload = TSDB_FALSE;
*(uint8_t *)payload = TSDB_FALSE;
} else {
int64_t v = strtoll(value, NULL, 10);
*(uint8_t*)payload = (int8_t)((v == 0) ? TSDB_FALSE : TSDB_TRUE);
*(uint8_t *)payload = (int8_t)((v == 0) ? TSDB_FALSE : TSDB_TRUE);
}
}
break;
}
case TSDB_DATA_TYPE_TINYINT:
if (valuelen == 4 && nullInt == *(int32_t*)value) {
*((int32_t*)payload) = TSDB_DATA_TINYINT_NULL;
if (valuelen == 4 && nullInt == *(int32_t *)value) {
*((int32_t *)payload) = TSDB_DATA_TINYINT_NULL;
} else {
int64_t v = tscToInteger(value, &endptr);
if (errno == ERANGE || v > INT8_MAX || v < INT8_MIN) {
......@@ -199,18 +199,18 @@ int32_t tsParseOneColumnData(SSchema* pSchema, char* value, int valuelen, char*
}
int8_t v8 = (int8_t)v;
if (isNull((char*)&v8, pSchema->type)) {
if (isNull((char *)&v8, pSchema->type)) {
INVALID_SQL_RET_MSG(msg, "data is overflow");
}
*((int8_t*)payload) = v8;
*((int8_t *)payload) = v8;
}
break;
case TSDB_DATA_TYPE_SMALLINT:
if (valuelen == 4 && nullInt == *(int32_t*)value) {
*((int32_t*)payload) = TSDB_DATA_SMALLINT_NULL;
if (valuelen == 4 && nullInt == *(int32_t *)value) {
*((int32_t *)payload) = TSDB_DATA_SMALLINT_NULL;
} else {
int64_t v = tscToInteger(value, &endptr);
......@@ -219,17 +219,17 @@ int32_t tsParseOneColumnData(SSchema* pSchema, char* value, int valuelen, char*
}
int16_t v16 = (int16_t)v;
if (isNull((char*)&v16, pSchema->type)) {
if (isNull((char *)&v16, pSchema->type)) {
INVALID_SQL_RET_MSG(msg, "data is overflow");
}
*((int16_t*)payload) = v16;
*((int16_t *)payload) = v16;
}
break;
case TSDB_DATA_TYPE_INT:
if (valuelen == 4 && nullInt == *(int32_t*)value) {
*((int32_t*)payload) = TSDB_DATA_INT_NULL;
if (valuelen == 4 && nullInt == *(int32_t *)value) {
*((int32_t *)payload) = TSDB_DATA_INT_NULL;
} else {
int64_t v = tscToInteger(value, &endptr);
......@@ -238,36 +238,36 @@ int32_t tsParseOneColumnData(SSchema* pSchema, char* value, int valuelen, char*
}
int32_t v32 = (int32_t)v;
if (isNull((char*)&v32, pSchema->type)) {
if (isNull((char *)&v32, pSchema->type)) {
INVALID_SQL_RET_MSG(msg, "data is overflow");
}
*((int32_t*)payload) = v32;
*((int32_t *)payload) = v32;
}
break;
case TSDB_DATA_TYPE_BIGINT:
if (valuelen == 4 && nullInt == *(int32_t*)value) {
*((int64_t*)payload) = TSDB_DATA_BIGINT_NULL;
if (valuelen == 4 && nullInt == *(int32_t *)value) {
*((int64_t *)payload) = TSDB_DATA_BIGINT_NULL;
} else {
int64_t v = tscToInteger(value, &endptr);
if (isNull((char*)&v, pSchema->type) || errno == ERANGE) {
if (isNull((char *)&v, pSchema->type) || errno == ERANGE) {
INVALID_SQL_RET_MSG(msg, "data is overflow");
}
*((int64_t*)payload) = v;
*((int64_t *)payload) = v;
}
break;
case TSDB_DATA_TYPE_FLOAT:
if (valuelen == 4 && nullInt == *(int32_t*)value) {
*((int32_t*)payload) = TSDB_DATA_FLOAT_NULL;
if (valuelen == 4 && nullInt == *(int32_t *)value) {
*((int32_t *)payload) = TSDB_DATA_FLOAT_NULL;
} else {
float v = (float)strtod(value, &endptr);
if (isNull((char*)&v, pSchema->type) || isinf(v) || isnan(v)) {
*((int32_t*)payload) = TSDB_DATA_FLOAT_NULL;
if (isNull((char *)&v, pSchema->type) || isinf(v) || isnan(v)) {
*((int32_t *)payload) = TSDB_DATA_FLOAT_NULL;
} else {
*((float*)payload) = v;
*((float *)payload) = v;
}
if (str != NULL) {
......@@ -280,14 +280,14 @@ int32_t tsParseOneColumnData(SSchema* pSchema, char* value, int valuelen, char*
break;
case TSDB_DATA_TYPE_DOUBLE:
if (valuelen == 4 && nullInt == *(int32_t*)value) {
*((int64_t*)payload) = TSDB_DATA_DOUBLE_NULL;
if (valuelen == 4 && nullInt == *(int32_t *)value) {
*((int64_t *)payload) = TSDB_DATA_DOUBLE_NULL;
} else {
double v = strtod(value, &endptr);
if (isNull((char*)&v, pSchema->type) || isinf(v) || isnan(v)) {
*((int32_t*)payload) = TSDB_DATA_FLOAT_NULL;
if (isNull((char *)&v, pSchema->type) || isinf(v) || isnan(v)) {
*((int32_t *)payload) = TSDB_DATA_FLOAT_NULL;
} else {
*((double*)payload) = v;
*((double *)payload) = v;
}
if (str != NULL) {
......@@ -300,8 +300,10 @@ int32_t tsParseOneColumnData(SSchema* pSchema, char* value, int valuelen, char*
break;
case TSDB_DATA_TYPE_BINARY:
// binary data cannot be null-terminated char string, otherwise the last char of the string is lost
if (valuelen == 4 && nullInt == *(int32_t*)value) {
/*
* binary data cannot be null-terminated char string, otherwise the last char of the string is lost
*/
if (valuelen == 4 && nullInt == *(int32_t *)value) {
*payload = TSDB_DATA_BINARY_NULL;
} else {
/* truncate too long string */
......@@ -312,8 +314,8 @@ int32_t tsParseOneColumnData(SSchema* pSchema, char* value, int valuelen, char*
break;
case TSDB_DATA_TYPE_NCHAR:
if (valuelen == 4 && nullInt == *(int32_t*)value) {
*(uint32_t*)payload = TSDB_DATA_NCHAR_NULL;
if (valuelen == 4 && nullInt == *(int32_t *)value) {
*(uint32_t *)payload = TSDB_DATA_NCHAR_NULL;
} else {
if (!taosMbsToUcs4(value, valuelen, payload, pSchema->bytes)) {
sprintf(msg, "%s", strerror(errno));
......@@ -323,17 +325,17 @@ int32_t tsParseOneColumnData(SSchema* pSchema, char* value, int valuelen, char*
break;
case TSDB_DATA_TYPE_TIMESTAMP: {
if (valuelen == 4 && nullInt == *(int32_t*)value) {
if (valuelen == 4 && nullInt == *(int32_t *)value) {
if (primaryKey) {
*((int64_t*)payload) = 0;
*((int64_t *)payload) = 0;
} else {
*((int64_t*)payload) = TSDB_DATA_BIGINT_NULL;
*((int64_t *)payload) = TSDB_DATA_BIGINT_NULL;
}
} else {
if (tsParseTime(value, valuelen, &temp, str, msg, timePrec) != TSDB_CODE_SUCCESS) {
return TSDB_CODE_INVALID_SQL;
}
*((int64_t*)payload) = temp;
*((int64_t *)payload) = temp;
}
break;
......@@ -344,7 +346,7 @@ int32_t tsParseOneColumnData(SSchema* pSchema, char* value, int valuelen, char*
}
// todo merge the error msg function with tSQLParser
static void setErrMsg(char* msg, char* sql) {
static void setErrMsg(char *msg, char *sql) {
const char* msgFormat = "near \"%s\" syntax error";
const int32_t BACKWARD_CHAR_STEP = 15;
......@@ -354,16 +356,18 @@ static void setErrMsg(char* msg, char* sql) {
sprintf(msg, msgFormat, buf);
}
int tsParseOneRowData(char** str, char* payload, SSchema schema[], SParsedDataColInfo* spd, char* error,
int tsParseOneRowData(char **str, STableDataBlocks *pDataBlocks, SSchema schema[], SParsedDataColInfo *spd, char *error,
int16_t timePrec) {
char* value = NULL;
char *value = NULL;
int valuelen = 0;
char *payload = pDataBlocks->pData + pDataBlocks->size;
/* 1. set the parsed value from sql string */
int32_t rowSize = 0;
for (int i = 0; i < spd->numOfAssignedCols; ++i) {
/* the start position in data block buffer of current value in sql */
char* start = payload + spd->elems[i].offset;
char * start = payload + spd->elems[i].offset;
int16_t colIndex = spd->elems[i].colIndex;
rowSize += schema[colIndex].bytes;
......@@ -394,19 +398,19 @@ int tsParseOneRowData(char** str, char* payload, SSchema schema[], SParsedDataCo
}
// once the data block is disordered, we do NOT keep previous timestamp any more
if (colIndex == PRIMARYKEY_TIMESTAMP_COL_INDEX && spd->ordered) {
TSKEY k = *(TSKEY*)start;
if (k < spd->prevTimestamp) {
spd->ordered = false;
if (colIndex == PRIMARYKEY_TIMESTAMP_COL_INDEX && pDataBlocks->ordered) {
TSKEY k = *(TSKEY *)start;
if (k <= pDataBlocks->prevTS) {
pDataBlocks->ordered = false;
}
spd->prevTimestamp = k;
pDataBlocks->prevTS = k;
}
}
/*2. set the null value for the rest columns */
if (spd->numOfAssignedCols < spd->numOfCols) {
char* ptr = payload;
char *ptr = payload;
for (int32_t i = 0; i < spd->numOfCols; ++i) {
if (!spd->hasVal[i]) {
......@@ -423,39 +427,42 @@ int tsParseOneRowData(char** str, char* payload, SSchema schema[], SParsedDataCo
return rowSize;
}
static int32_t rowDataCompar(const void* lhs, const void* rhs) {
TSKEY left = GET_INT64_VAL(lhs);
TSKEY right = GET_INT64_VAL(rhs);
DEFAULT_COMP(left, right);
static int32_t rowDataCompar(const void *lhs, const void *rhs) {
TSKEY left = *(TSKEY *)lhs;
TSKEY right = *(TSKEY *)rhs;
if (left == right) {
return 0;
} else {
return left > right ? 1 : -1;
}
}
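A comparator like `rowDataCompar` above must return a sign, not a difference: subtracting two int64 timestamps can overflow and mis-order keys that are far apart. A minimal standalone version of the three-way compare (assuming, as elsewhere in the code, that TSKEY is a 64-bit integer and the timestamp leads each row):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef int64_t TSKEY;

/* Safe three-way comparison of the leading 8-byte timestamp of each row.
 * Returning left - right instead could overflow int and mis-sort. */
static int ts_compar(const void *lhs, const void *rhs) {
    TSKEY left  = *(const TSKEY *)lhs;
    TSKEY right = *(const TSKEY *)rhs;
    if (left == right) return 0;
    return left > right ? 1 : -1;
}
```

This is the form qsort expects, and it stays correct even for keys near INT64_MIN and INT64_MAX.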
int tsParseValues(char** str, SInsertedDataBlocks* pDataBlock, SMeterMeta* pMeterMeta, int maxRows,
SParsedDataColInfo* spd, char* error) {
char* token;
int tsParseValues(char **str, STableDataBlocks *pDataBlock, SMeterMeta *pMeterMeta, int maxRows,
SParsedDataColInfo *spd, char *error) {
char *token;
int tokenlen;
SSchema* pSchema = tsGetSchema(pMeterMeta);
int16_t numOfRows = 0;
pDataBlock->size += sizeof(SShellSubmitBlock);
if (!spd->hasVal[0]) {
SSchema *pSchema = tsGetSchema(pMeterMeta);
int32_t precision = pMeterMeta->precision;
if (spd->hasVal[0] == false) {
sprintf(error, "primary timestamp column can not be null");
return -1;
}
while (1) {
char* tmp = tscGetToken(*str, &token, &tokenlen);
char *tmp = tscGetToken(*str, &token, &tokenlen);
if (tokenlen == 0 || *token != '(') break;
*str = tmp;
if (numOfRows >= maxRows ||
pDataBlock->size + pMeterMeta->rowSize + sizeof(SShellSubmitBlock) >= pDataBlock->nAllocSize) {
if (numOfRows >= maxRows || pDataBlock->size + pMeterMeta->rowSize >= pDataBlock->nAllocSize) {
maxRows += tscAllocateMemIfNeed(pDataBlock, pMeterMeta->rowSize);
}
int32_t len =
tsParseOneRowData(str, pDataBlock->pData + pDataBlock->size, pSchema, spd, error, pMeterMeta->precision);
int32_t len = tsParseOneRowData(str, pDataBlock, pSchema, spd, error, precision);
if (len <= 0) {
setErrMsg(error, *str);
return -1;
......@@ -474,26 +481,25 @@ int tsParseValues(char** str, SInsertedDataBlocks* pDataBlock, SMeterMeta* pMete
if (numOfRows <= 0) {
strcpy(error, "no any data points");
}
return -1;
} else {
return numOfRows;
}
}
static void appendDataBlock(SDataBlockList* pList, SInsertedDataBlocks* pBlocks) {
void tscAppendDataBlock(SDataBlockList *pList, STableDataBlocks *pBlocks) {
if (pList->nSize >= pList->nAlloc) {
pList->nAlloc = pList->nAlloc << 1;
pList->pData = realloc(pList->pData, (size_t)pList->nAlloc);
pList->pData = realloc(pList->pData, sizeof(void *) * (size_t)pList->nAlloc);
// reset allocated memory
memset(pList->pData + pList->nSize, 0, POINTER_BYTES * (pList->nAlloc - pList->nSize));
memset(pList->pData + pList->nSize, 0, sizeof(void *) * (pList->nAlloc - pList->nSize));
}
pList->pData[pList->nSize++] = pBlocks;
}
static void tscSetAssignedColumnInfo(SParsedDataColInfo* spd, SSchema* pSchema, int16_t numOfCols) {
spd->ordered = true;
spd->prevTimestamp = INT64_MIN;
static void tscSetAssignedColumnInfo(SParsedDataColInfo *spd, SSchema *pSchema, int32_t numOfCols) {
spd->numOfCols = numOfCols;
spd->numOfAssignedCols = numOfCols;
......@@ -507,101 +513,120 @@ static void tscSetAssignedColumnInfo(SParsedDataColInfo* spd, SSchema* pSchema,
}
}
int32_t tscAllocateMemIfNeed(SInsertedDataBlocks* pDataBlock, int32_t rowSize) {
int32_t tscAllocateMemIfNeed(STableDataBlocks *pDataBlock, int32_t rowSize) {
size_t remain = pDataBlock->nAllocSize - pDataBlock->size;
const int factor = 5;
// expand the allocated size
if (remain <= sizeof(SShellSubmitBlock) + rowSize) {
int32_t oldSize = pDataBlock->nAllocSize;
pDataBlock->nAllocSize = (uint32_t)(oldSize * 1.5);
if (remain < rowSize * factor) {
while (remain < rowSize * factor) {
pDataBlock->nAllocSize = (uint32_t) (pDataBlock->nAllocSize * 1.5);
remain = pDataBlock->nAllocSize - pDataBlock->size;
}
char* tmp = realloc(pDataBlock->pData, (size_t)pDataBlock->nAllocSize);
char *tmp = realloc(pDataBlock->pData, (size_t)pDataBlock->nAllocSize);
if (tmp != NULL) {
pDataBlock->pData = tmp;
memset(pDataBlock->pData + pDataBlock->size, 0, pDataBlock->nAllocSize - pDataBlock->size);
} else {
assert(false);
// do nothing
}
}
return (int32_t)(pDataBlock->nAllocSize - pDataBlock->size - sizeof(SShellSubmitBlock)) / rowSize;
return (int32_t)(pDataBlock->nAllocSize - pDataBlock->size) / rowSize;
}
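The growth policy in `tscAllocateMemIfNeed` can be isolated into a small sketch: grow the allocated size by 1.5x until at least `factor` rows of headroom remain, then report how many whole rows still fit. The realloc of the actual data buffer is elided here; only the size arithmetic is shown:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the buffer-growth arithmetic: keep multiplying the allocation
 * by 1.5 until `factor` rows of headroom exist, then return how many
 * whole rows fit in the remaining space. */
static int32_t rows_that_fit(uint32_t *allocSize, int64_t size, int32_t rowSize) {
    const int factor = 5;
    int64_t remain = (int64_t)*allocSize - size;
    while (remain < (int64_t)rowSize * factor) {
        *allocSize = (uint32_t)(*allocSize * 1.5);  /* geometric growth */
        remain = (int64_t)*allocSize - size;
        /* the real code realloc()s pData to the new size here */
    }
    return (int32_t)(((int64_t)*allocSize - size) / rowSize);
}
```

Geometric growth keeps the amortized cost of repeated inserts linear, at the price of up to 50% slack in the allocation.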
void tsSetBlockInfo(SShellSubmitBlock* pBlocks, const SMeterMeta* pMeterMeta, int32_t numOfRows) {
pBlocks->sid = htonl(pMeterMeta->sid);
pBlocks->uid = htobe64(pMeterMeta->uid);
pBlocks->sversion = htonl(pMeterMeta->sversion);
pBlocks->numOfRows = htons(numOfRows);
static void tsSetBlockInfo(SShellSubmitBlock *pBlocks, const SMeterMeta *pMeterMeta, int32_t numOfRows) {
pBlocks->sid = pMeterMeta->sid;
pBlocks->uid = pMeterMeta->uid;
pBlocks->sversion = pMeterMeta->sversion;
pBlocks->numOfRows += numOfRows;
}
static int32_t doParseInsertStatement(SSqlCmd* pCmd, SSqlRes* pRes, void* pDataBlockHashList, char** str,
SParsedDataColInfo* spd) {
SMeterMeta* pMeterMeta = pCmd->pMeterMeta;
int32_t numOfRows = 0;
// data block is disordered, sort it in ascending order
void sortRemoveDuplicates(STableDataBlocks *dataBuf) {
SShellSubmitBlock* pBlocks = (SShellSubmitBlock*)dataBuf->pData;
SInsertedDataBlocks** pData = (SInsertedDataBlocks**)taosGetIntHashData(pDataBlockHashList, pMeterMeta->vgid);
SInsertedDataBlocks* dataBuf = NULL;
// size is less than the total size, since duplicated rows may be removed yet.
assert(pBlocks->numOfRows * dataBuf->rowSize + sizeof(SShellSubmitBlock) == dataBuf->size);
/* no data in hash list */
if (pData == NULL) {
dataBuf = tscCreateDataBlock(TSDB_PAYLOAD_SIZE);
if (!dataBuf->ordered) {
char *pBlockData = pBlocks->payLoad;
qsort(pBlockData, pBlocks->numOfRows, dataBuf->rowSize, rowDataCompar);
/* here we only keep the pointer of chunk of buffer, not the whole buffer */
dataBuf = *(SInsertedDataBlocks**)taosAddIntHash(pDataBlockHashList, pCmd->pMeterMeta->vgid, (char*)&dataBuf);
int32_t i = 0;
int32_t j = 1;
dataBuf->size = tsInsertHeadSize;
strncpy(dataBuf->meterId, pCmd->name, tListLen(pCmd->name));
appendDataBlock(pCmd->pDataBlocks, dataBuf);
} else {
dataBuf = *pData;
while (j < pBlocks->numOfRows) {
TSKEY ti = *(TSKEY *)(pBlockData + dataBuf->rowSize * i);
TSKEY tj = *(TSKEY *)(pBlockData + dataBuf->rowSize * j);
if (ti == tj) {
++j;
continue;
}
int32_t nextPos = (++i);
if (nextPos != j) {
memmove(pBlockData + dataBuf->rowSize * nextPos, pBlockData + dataBuf->rowSize * j, dataBuf->rowSize);
}
++j;
}
dataBuf->ordered = true;
pBlocks->numOfRows = i + 1;
dataBuf->size = sizeof(SShellSubmitBlock) + dataBuf->rowSize*pBlocks->numOfRows;
}
}
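The sort-then-compact pass in `sortRemoveDuplicates` generalizes to any row buffer whose first 8 bytes are the timestamp key. A self-contained sketch of the same two steps, with illustrative names rather than the tsclient types:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef int64_t TSKEY;

static int ts_cmp(const void *l, const void *r) {
    TSKEY a = *(const TSKEY *)l, b = *(const TSKEY *)r;
    return (a == b) ? 0 : (a > b ? 1 : -1);
}

/* Sort rows by their leading timestamp, then compact the array in place so
 * that only the first row of each timestamp survives.  Returns the number
 * of rows kept. */
static int dedup_rows(char *rows, int nrows, int rowSize) {
    if (nrows == 0) return 0;
    qsort(rows, nrows, rowSize, ts_cmp);
    int i = 0, j = 1;
    while (j < nrows) {
        TSKEY ti = *(TSKEY *)(rows + rowSize * i);
        TSKEY tj = *(TSKEY *)(rows + rowSize * j);
        if (ti != tj) {              /* new timestamp: keep this row */
            ++i;
            if (i != j) memmove(rows + rowSize * i, rows + rowSize * j, rowSize);
        }                            /* equal: skip the duplicate */
        ++j;
    }
    return i + 1;
}
```

As in the diff, duplicates are removed in place with memmove, so no extra buffer is needed and the surviving rows stay contiguous.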
static int32_t doParseInsertStatement(SSqlObj *pSql, void *pTableHashList, char **str, SParsedDataColInfo *spd,
int32_t *totalNum) {
SSqlCmd * pCmd = &pSql->cmd;
SMeterMeta *pMeterMeta = pCmd->pMeterMeta;
STableDataBlocks *dataBuf =
tscGetDataBlockFromList(pTableHashList, pCmd->pDataBlocks, pMeterMeta->uid, TSDB_DEFAULT_PAYLOAD_SIZE,
sizeof(SShellSubmitBlock), pMeterMeta->rowSize, pCmd->name);
int32_t maxNumOfRows = tscAllocateMemIfNeed(dataBuf, pMeterMeta->rowSize);
int64_t startPos = dataBuf->size;
numOfRows = tsParseValues(str, dataBuf, pMeterMeta, maxNumOfRows, spd, pCmd->payload);
int32_t numOfRows = tsParseValues(str, dataBuf, pMeterMeta, maxNumOfRows, spd, pCmd->payload);
if (numOfRows <= 0) {
return TSDB_CODE_INVALID_SQL;
}
// data block is disordered, sort it in ascending order
if (!spd->ordered) {
char* pBlockData = dataBuf->pData + startPos + sizeof(SShellSubmitBlock);
qsort(pBlockData, numOfRows, pMeterMeta->rowSize, rowDataCompar);
spd->ordered = true;
}
SShellSubmitBlock* pBlocks = (SShellSubmitBlock*)(dataBuf->pData + startPos);
SShellSubmitBlock *pBlocks = (SShellSubmitBlock *)(dataBuf->pData);
tsSetBlockInfo(pBlocks, pMeterMeta, numOfRows);
dataBuf->numOfMeters += 1;
dataBuf->vgid = pMeterMeta->vgid;
dataBuf->numOfMeters = 1;
/*
* the value of pRes->numOfRows does not affect the true result of AFFECTED ROWS, which is
* actually returned from server.
*
* NOTE:
* The better way is to use a local variable to store the number of rows that
* has been extracted from sql expression string, and avoid to do the invalid write check
* the value of pRes->numOfRows does not affect the true result of AFFECTED ROWS,
* which is actually returned from server.
*/
pRes->numOfRows += numOfRows;
*totalNum += numOfRows;
return TSDB_CODE_SUCCESS;
}
static int32_t tscParseSqlForCreateTableOnDemand(char** sqlstr, SSqlObj* pSql) {
char* id = NULL;
static int32_t tscParseSqlForCreateTableOnDemand(char **sqlstr, SSqlObj *pSql) {
char * id = NULL;
int32_t idlen = 0;
int32_t code = TSDB_CODE_SUCCESS;
SSqlCmd* pCmd = &pSql->cmd;
char* sql = *sqlstr;
SSqlCmd *pCmd = &pSql->cmd;
char * sql = *sqlstr;
sql = tscGetToken(sql, &id, &idlen);
/* build the token of specified table */
SSQLToken tableToken = {.z = id, .n = idlen, .type = TK_ID};
char* cstart = NULL;
char* cend = NULL;
char *cstart = NULL;
char *cend = NULL;
/* skip possibly exists column list */
sql = tscGetToken(sql, &id, &idlen);
......@@ -630,7 +655,7 @@ static int32_t tscParseSqlForCreateTableOnDemand(char** sqlstr, SSqlObj* pSql) {
if (strncmp(id, "using", idlen) == 0 && idlen == 5) {
/* create table if not exists */
sql = tscGetToken(sql, &id, &idlen);
STagData* pTag = (STagData*)pCmd->payload;
STagData *pTag = (STagData *)pCmd->payload;
memset(pTag, 0, sizeof(STagData));
SSQLToken token1 = {idlen, TK_ID, id};
......@@ -643,8 +668,14 @@ static int32_t tscParseSqlForCreateTableOnDemand(char** sqlstr, SSqlObj* pSql) {
return code;
}
char* tagVal = pTag->data;
SSchema* pTagSchema = tsGetTagSchema(pCmd->pMeterMeta);
if (!UTIL_METER_IS_METRIC(pCmd)) {
const char* msg = "create table only from super table is allowed";
sprintf(pCmd->payload, "%s", msg);
return TSDB_CODE_INVALID_SQL;
}
char * tagVal = pTag->data;
SSchema *pTagSchema = tsGetTagSchema(pCmd->pMeterMeta);
sql = tscGetToken(sql, &id, &idlen);
if (!(strncmp(id, "tags", idlen) == 0 && idlen == 4)) {
......@@ -722,11 +753,11 @@ static int32_t tscParseSqlForCreateTableOnDemand(char** sqlstr, SSqlObj* pSql) {
return code;
}
int validateTableName(char* tblName, int len) {
int validateTableName(char *tblName, int len) {
char buf[TSDB_METER_ID_LEN] = {0};
memcpy(buf, tblName, len);
strncpy(buf, tblName, len);
SSQLToken token = {len, TK_ID, buf};
SSQLToken token = {.n = len, .type = TK_ID, .z = buf};
tSQLGetToken(buf, &token.type);
return tscValidateName(&token);
......@@ -742,18 +773,21 @@ int validateTableName(char* tblName, int len) {
* @param pSql
* @return
*/
int tsParseInsertStatement(SSqlCmd* pCmd, char* str, char* acct, char* db, SSqlObj* pSql) {
int tsParseInsertStatement(SSqlObj *pSql, char *str, char *acct, char *db) {
SSqlCmd *pCmd = &pSql->cmd;
pCmd->command = TSDB_SQL_INSERT;
pCmd->isInsertFromFile = -1;
pCmd->count = 0;
pSql->res.numOfRows = 0;
int32_t totalNum = 0;
if (!pSql->pTscObj->writeAuth) {
return TSDB_CODE_NO_RIGHTS;
}
char* id;
char *id;
int idlen;
int code = TSDB_CODE_INVALID_SQL;
@@ -766,15 +800,21 @@ int tsParseInsertStatement(SSqlCmd* pCmd, char* str, char* acct, char* db, SSqlO
return code;
}
void* pDataBlockHashList = taosInitIntHash(4, POINTER_BYTES, taosHashInt);
void *pTableHashList = taosInitIntHash(128, sizeof(void *), taosHashInt);
pSql->cmd.pDataBlocks = tscCreateBlockArrayList();
tscTrace("%p create data block list for submit data, %p", pSql, pSql->cmd.pDataBlocks);
while (1) {
tscGetToken(str, &id, &idlen);
if (idlen == 0) {
if ((pSql->res.numOfRows > 0) || (1 == pCmd->isInsertFromFile)) {
// parse file, do not release the STableDataBlock
if (pCmd->isInsertFromFile == 1) {
goto _clean;
}
if (totalNum > 0) {
break;
} else { // no data in current sql string, error
code = TSDB_CODE_INVALID_SQL;
@@ -782,10 +822,7 @@ int tsParseInsertStatement(SSqlCmd* pCmd, char* str, char* acct, char* db, SSqlO
}
}
/*
* Check the validity of the table name
*
*/
// Check whether the table name is valid
if (validateTableName(id, idlen) != TSDB_CODE_SUCCESS) {
code = TSDB_CODE_INVALID_SQL;
sprintf(pCmd->payload, "table name is invalid");
@@ -797,7 +834,7 @@ int tsParseInsertStatement(SSqlCmd* pCmd, char* str, char* acct, char* db, SSqlO
goto _error_clean;
}
void* fp = pSql->fp;
void *fp = pSql->fp;
if ((code = tscParseSqlForCreateTableOnDemand(&str, pSql)) != TSDB_CODE_SUCCESS) {
if (fp != NULL) {
goto _clean;
@@ -826,7 +863,7 @@ int tsParseInsertStatement(SSqlCmd* pCmd, char* str, char* acct, char* db, SSqlO
if (strncmp(id, "values", 6) == 0 && idlen == 6) {
SParsedDataColInfo spd = {0};
SSchema* pSchema = tsGetSchema(pCmd->pMeterMeta);
SSchema * pSchema = tsGetSchema(pCmd->pMeterMeta);
tscSetAssignedColumnInfo(&spd, pSchema, pCmd->pMeterMeta->numOfColumns);
@@ -844,7 +881,7 @@ int tsParseInsertStatement(SSqlCmd* pCmd, char* str, char* acct, char* db, SSqlO
* app here insert data in different vnodes, so we need to set the following
* data in another submit procedure using async insert routines
*/
code = doParseInsertStatement(pCmd, &pSql->res, pDataBlockHashList, &str, &spd);
code = doParseInsertStatement(pSql, pTableHashList, &str, &spd, &totalNum);
if (code != TSDB_CODE_SUCCESS) {
goto _error_clean;
}
@@ -867,34 +904,29 @@ int tsParseInsertStatement(SSqlCmd* pCmd, char* str, char* acct, char* db, SSqlO
goto _error_clean;
}
char* fname = calloc(1, idlen + 1);
memcpy(fname, id, idlen);
char fname[PATH_MAX] = {0};
strncpy(fname, id, idlen);
strdequote(fname);
wordexp_t full_path;
if (wordexp(fname, &full_path, 0) != 0) {
code = TSDB_CODE_INVALID_SQL;
sprintf(pCmd->payload, "invalid filename");
free(fname);
goto _error_clean;
}
strcpy(fname, full_path.we_wordv[0]);
wordfree(&full_path);
SInsertedDataBlocks* dataBuf = tscCreateDataBlock(strlen(fname) + sizeof(SInsertedDataBlocks) + 1);
strcpy(dataBuf->filename, fname);
dataBuf->size = strlen(fname) + 1;
free(fname);
strcpy(dataBuf->meterId, pCmd->name);
appendDataBlock(pCmd->pDataBlocks, dataBuf);
STableDataBlocks* pDataBlock = tscCreateDataBlockEx(PATH_MAX, pCmd->pMeterMeta->rowSize, sizeof(SShellSubmitBlock),
pCmd->name);
tscAppendDataBlock(pCmd->pDataBlocks, pDataBlock);
strcpy(pDataBlock->filename, fname);
str = id + idlen;
} else if (idlen == 1 && id[0] == '(') {
/* insert into tablename(col1, col2,..., coln) values(v1, v2,... vn); */
SMeterMeta* pMeterMeta = pCmd->pMeterMeta;
SSchema* pSchema = tsGetSchema(pMeterMeta);
SMeterMeta *pMeterMeta = pCmd->pMeterMeta;
SSchema * pSchema = tsGetSchema(pMeterMeta);
if (pCmd->isInsertFromFile == -1) {
pCmd->isInsertFromFile = 0;
@@ -905,8 +937,6 @@ int tsParseInsertStatement(SSqlCmd* pCmd, char* str, char* acct, char* db, SSqlO
}
SParsedDataColInfo spd = {0};
spd.ordered = true;
spd.prevTimestamp = INT64_MIN;
spd.numOfCols = pMeterMeta->numOfColumns;
int16_t offset[TSDB_MAX_COLUMNS] = {0};
@@ -925,7 +955,7 @@ int tsParseInsertStatement(SSqlCmd* pCmd, char* str, char* acct, char* db, SSqlO
// todo speedup by using hash list
for (int32_t t = 0; t < pMeterMeta->numOfColumns; ++t) {
if (strncmp(id, pSchema[t].name, idlen) == 0 && strlen(pSchema[t].name) == idlen) {
SParsedColElem* pElem = &spd.elems[spd.numOfAssignedCols++];
SParsedColElem *pElem = &spd.elems[spd.numOfAssignedCols++];
pElem->offset = offset[t];
pElem->colIndex = t;
@@ -961,7 +991,7 @@ int tsParseInsertStatement(SSqlCmd* pCmd, char* str, char* acct, char* db, SSqlO
goto _error_clean;
}
code = doParseInsertStatement(pCmd, &pSql->res, pDataBlockHashList, &str, &spd);
code = doParseInsertStatement(pSql, pTableHashList, &str, &spd, &totalNum);
if (code != TSDB_CODE_SUCCESS) {
goto _error_clean;
}
@@ -972,53 +1002,49 @@ int tsParseInsertStatement(SSqlCmd* pCmd, char* str, char* acct, char* db, SSqlO
}
}
/* submit to more than one vnode */
// submit to more than one vnode
if (pCmd->pDataBlocks->nSize > 0) {
// lihui: if import file, only malloc the size of file name
if (1 != pCmd->isInsertFromFile) {
tscFreeUnusedDataBlocks(pCmd->pDataBlocks);
// merge according to vgid
tscMergeTableDataBlocks(pSql, pCmd->pDataBlocks);
SInsertedDataBlocks* pDataBlock = pCmd->pDataBlocks->pData[0];
STableDataBlocks *pDataBlock = pCmd->pDataBlocks->pData[0];
if ((code = tscCopyDataBlockToPayload(pSql, pDataBlock)) != TSDB_CODE_SUCCESS) {
goto _error_clean;
}
}
pCmd->vnodeIdx = 1; // set the next sent data vnode index in data block arraylist
} else {
tscDestroyBlockArrayList(&pCmd->pDataBlocks);
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
}
code = TSDB_CODE_SUCCESS;
goto _clean;
_error_clean:
tscDestroyBlockArrayList(&pCmd->pDataBlocks);
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
_clean:
taosCleanUpIntHash(pDataBlockHashList);
taosCleanUpIntHash(pTableHashList);
return code;
}
int tsParseImportStatement(SSqlObj* pSql, char* str, char* acct, char* db) {
SSqlCmd* pCmd = &pSql->cmd;
int tsParseImportStatement(SSqlObj *pSql, char *str, char *acct, char *db) {
SSqlCmd *pCmd = &pSql->cmd;
pCmd->order.order = TSQL_SO_ASC;
return tsParseInsertStatement(pCmd, str, acct, db, pSql);
return tsParseInsertStatement(pSql, str, acct, db);
}
int tsParseInsertSql(SSqlObj* pSql, char* sql, char* acct, char* db) {
char* verb;
int tsParseInsertSql(SSqlObj *pSql, char *sql, char *acct, char *db) {
char *verb;
int verblen;
int code = TSDB_CODE_INVALID_SQL;
SSqlCmd* pCmd = &pSql->cmd;
tscCleanSqlCmd(pCmd);
SSqlCmd *pCmd = &pSql->cmd;
sql = tscGetToken(sql, &verb, &verblen);
if (verblen) {
if (strncmp(verb, "insert", 6) == 0 && verblen == 6) {
code = tsParseInsertStatement(pCmd, sql, acct, db, pSql);
code = tsParseInsertStatement(pSql, sql, acct, db);
} else if (strncmp(verb, "import", 6) == 0 && verblen == 6) {
code = tsParseImportStatement(pSql, sql, acct, db);
} else {
@@ -1029,27 +1055,24 @@ int tsParseInsertSql(SSqlObj* pSql, char* sql, char* acct, char* db) {
sprintf(pCmd->payload, "no keywords");
}
// sql object has not been released in async model
if (pSql->signature == pSql) {
pSql->res.numOfRows = 0;
}
return code;
}
int tsParseSql(SSqlObj* pSql, char* acct, char* db, bool multiVnodeInsertion) {
int tsParseSql(SSqlObj *pSql, char *acct, char *db, bool multiVnodeInsertion) {
int32_t ret = TSDB_CODE_SUCCESS;
tscCleanSqlCmd(&pSql->cmd);
if (tscIsInsertOrImportData(pSql->sqlstr)) {
/*
* only for async multi-vnode insertion Set the fp before parse the sql string, in case of getmetermeta failed,
* in which the error handle callback function can rightfully restore the user defined function (fp)
* only for async multi-vnode insertion
* Set the fp before parse the sql string, in case of getmetermeta failed, in which
* the error handle callback function can rightfully restore the user defined function (fp)
*/
if (pSql->fp != NULL && multiVnodeInsertion) {
assert(pSql->fetchFp == NULL);
pSql->fetchFp = pSql->fp;
/* replace user defined callback function with multi-insert proxy function*/
/* replace user defined callback function with multi-insert proxy function */
pSql->fp = tscAsyncInsertMultiVnodesProxy;
}
@@ -1057,6 +1080,9 @@ int tsParseSql(SSqlObj* pSql, char* acct, char* db, bool multiVnodeInsertion) {
} else {
SSqlInfo SQLInfo = {0};
tSQLParse(&SQLInfo, pSql->sqlstr);
tscAllocPayloadWithSize(&pSql->cmd, TSDB_DEFAULT_PAYLOAD_SIZE);
ret = tscToSQLCmd(pSql, &SQLInfo);
SQLInfoDestroy(&SQLInfo);
}
......@@ -1072,31 +1098,55 @@ int tsParseSql(SSqlObj* pSql, char* acct, char* db, bool multiVnodeInsertion) {
return ret;
}
static int tscInsertDataFromFile(SSqlObj* pSql, FILE* fp) {
// TODO : import data from file
static int doPackSendDataBlock(SSqlObj* pSql, int32_t numOfRows, STableDataBlocks* pTableDataBlocks) {
int32_t code = TSDB_CODE_SUCCESS;
SSqlCmd* pCmd = &pSql->cmd;
SMeterMeta* pMeterMeta = pCmd->pMeterMeta;
SShellSubmitBlock *pBlocks = (SShellSubmitBlock *)(pTableDataBlocks->pData);
tsSetBlockInfo(pBlocks, pMeterMeta, numOfRows);
tscMergeTableDataBlocks(pSql, pCmd->pDataBlocks);
// the pDataBlock is different from the pTableDataBlocks
STableDataBlocks *pDataBlock = pCmd->pDataBlocks->pData[0];
if ((code = tscCopyDataBlockToPayload(pSql, pDataBlock)) != TSDB_CODE_SUCCESS) {
return code;
}
if ((code = tscProcessSql(pSql)) != TSDB_CODE_SUCCESS) {
return code;
}
return TSDB_CODE_SUCCESS;
}
static int tscInsertDataFromFile(SSqlObj *pSql, FILE *fp) {
int readLen = 0;
char* line = NULL;
char * line = NULL;
size_t n = 0;
int len = 0;
uint32_t maxRows = 0;
SSqlCmd* pCmd = &pSql->cmd;
char* pStart = pCmd->payload + tsInsertHeadSize;
SMeterMeta* pMeterMeta = pCmd->pMeterMeta;
SSqlCmd * pCmd = &pSql->cmd;
SMeterMeta *pMeterMeta = pCmd->pMeterMeta;
int numOfRows = 0;
uint32_t rowSize = pMeterMeta->rowSize;
char error[128] = "\0";
SShellSubmitBlock* pBlock = (SShellSubmitBlock*)(pStart);
pStart += sizeof(SShellSubmitBlock);
int32_t rowSize = pMeterMeta->rowSize;
int32_t code = 0;
int nrows = 0;
const int32_t RESERVED_SIZE = 1024;
pCmd->pDataBlocks = tscCreateBlockArrayList();
STableDataBlocks* pTableDataBlock = tscCreateDataBlockEx(TSDB_PAYLOAD_SIZE, pMeterMeta->rowSize,
sizeof(SShellSubmitBlock), pCmd->name);
maxRows = (TSDB_PAYLOAD_SIZE - RESERVED_SIZE - sizeof(SShellSubmitBlock)) / rowSize;
tscAppendDataBlock(pCmd->pDataBlocks, pTableDataBlock);
maxRows = tscAllocateMemIfNeed(pTableDataBlock, rowSize);
if (maxRows < 1) return -1;
int count = 0;
SParsedDataColInfo spd = {0};
SSchema* pSchema = tsGetSchema(pCmd->pMeterMeta);
SParsedDataColInfo spd = {.numOfCols = pCmd->pMeterMeta->numOfColumns};
SSchema * pSchema = tsGetSchema(pCmd->pMeterMeta);
tscSetAssignedColumnInfo(&spd, pSchema, pCmd->pMeterMeta->numOfColumns);
@@ -1105,43 +1155,42 @@ static int tscInsertDataFromFile(SSqlObj* pSql, FILE* fp) {
if (('\r' == line[readLen - 1]) || ('\n' == line[readLen - 1])) line[--readLen] = 0;
if (readLen <= 0) continue;
char* lineptr = line;
char *lineptr = line;
strtolower(line, line);
len = tsParseOneRowData(&lineptr, pStart, pSchema, &spd, error, pCmd->pMeterMeta->precision);
len = tsParseOneRowData(&lineptr, pTableDataBlock, pSchema, &spd, pCmd->payload, pMeterMeta->precision);
if (len <= 0) return -1;
pStart += len;
pTableDataBlock->size += len;
count++;
nrows++;
if (count >= maxRows) {
pCmd->payloadLen = (pStart - pCmd->payload);
pBlock->sid = htonl(pMeterMeta->sid);
pBlock->numOfRows = htons(count);
pSql->res.numOfRows = 0;
if (tscProcessSql(pSql) != 0) {
return -1;
if ((code = doPackSendDataBlock(pSql, count, pTableDataBlock)) != TSDB_CODE_SUCCESS) {
return -code;
}
pTableDataBlock = pCmd->pDataBlocks->pData[0];
pTableDataBlock->size = sizeof(SShellSubmitBlock);
pTableDataBlock->rowSize = pMeterMeta->rowSize;
numOfRows += pSql->res.numOfRows;
pSql->res.numOfRows = 0;
count = 0;
memset(pCmd->payload, 0, TSDB_PAYLOAD_SIZE);
pStart = pCmd->payload + tsInsertHeadSize;
pBlock = (SShellSubmitBlock*)(pStart);
pStart += sizeof(SShellSubmitBlock);
}
}
if (count > 0) {
pCmd->payloadLen = (pStart - pCmd->payload);
pBlock->sid = htonl(pMeterMeta->sid);
pBlock->numOfRows = htons(count);
pSql->res.numOfRows = 0;
if (tscProcessSql(pSql) != 0) {
return -1;
if ((code = doPackSendDataBlock(pSql, count, pTableDataBlock)) != TSDB_CODE_SUCCESS) {
return -code;
}
numOfRows += pSql->res.numOfRows;
pSql->res.numOfRows = 0;
}
if (line) tfree(line);
return numOfRows;
}
@@ -1151,20 +1200,21 @@ static int tscInsertDataFromFile(SSqlObj* pSql, FILE* fp) {
* 2019.05.10 lihui
* Remove the code for importing records from files
*/
void tscProcessMultiVnodesInsert(SSqlObj* pSql) {
SSqlCmd* pCmd = &pSql->cmd;
void tscProcessMultiVnodesInsert(SSqlObj *pSql) {
SSqlCmd *pCmd = &pSql->cmd;
if (pCmd->command != TSDB_SQL_INSERT) {
return;
}
SInsertedDataBlocks* pDataBlock = NULL;
STableDataBlocks *pDataBlock = NULL;
int32_t affected_rows = 0;
int32_t code = TSDB_CODE_SUCCESS;
/* the first block has been sent to server in processSQL function */
assert(pCmd->isInsertFromFile != -1 && pCmd->vnodeIdx >= 1 && pCmd->pDataBlocks != NULL);
if (pCmd->vnodeIdx < pCmd->pDataBlocks->nSize) {
SDataBlockList* pDataBlocks = pCmd->pDataBlocks;
SDataBlockList *pDataBlocks = pCmd->pDataBlocks;
for (int32_t i = pCmd->vnodeIdx; i < pDataBlocks->nSize; ++i) {
pDataBlock = pDataBlocks->pData[i];
@@ -1182,59 +1232,70 @@ void tscProcessMultiVnodesInsert(SSqlObj* pSql) {
}
// all data have been submitted to the vnode, release data blocks
tscDestroyBlockArrayList(&pCmd->pDataBlocks);
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
}
/* multi-vnodes insertion in sync query model */
void tscProcessMultiVnodesInsertForFile(SSqlObj* pSql) {
SSqlCmd* pCmd = &pSql->cmd;
void tscProcessMultiVnodesInsertForFile(SSqlObj *pSql) {
SSqlCmd *pCmd = &pSql->cmd;
if (pCmd->command != TSDB_SQL_INSERT) {
return;
}
SInsertedDataBlocks* pDataBlock = NULL;
STableDataBlocks *pDataBlock = NULL;
int32_t affected_rows = 0;
assert(pCmd->isInsertFromFile == 1 && pCmd->vnodeIdx >= 1 && pCmd->pDataBlocks != NULL);
assert(pCmd->isInsertFromFile == 1 && pCmd->pDataBlocks != NULL);
SDataBlockList *pDataBlockList = pCmd->pDataBlocks;
pCmd->pDataBlocks = NULL;
SDataBlockList* pDataBlocks = pCmd->pDataBlocks;
char path[PATH_MAX] = {0};
pCmd->isInsertFromFile = 0; // for tscProcessSql()
pSql->res.numOfRows = 0;
for (int32_t i = 0; i < pDataBlocks->nSize; ++i) {
pDataBlock = pDataBlocks->pData[i];
for (int32_t i = 0; i < pDataBlockList->nSize; ++i) {
pDataBlock = pDataBlockList->pData[i];
if (pDataBlock == NULL) {
continue;
}
tscAllocPayloadWithSize(pCmd, TSDB_PAYLOAD_SIZE);
pCmd->count = 1;
FILE* fp = fopen(pDataBlock->filename, "r");
strncpy(path, pDataBlock->filename, PATH_MAX);
FILE *fp = fopen(path, "r");
if (fp == NULL) {
tscError("%p Failed to open file %s to insert data from file", pSql, pDataBlock->filename);
tscError("%p failed to open file %s to load data from file, reason:%s", pSql, path,
strerror(errno));
continue;
}
strcpy(pCmd->name, pDataBlock->meterId);
tscGetMeterMeta(pSql, pCmd->name);
memset(pDataBlock->pData, 0, pDataBlock->nAllocSize);
int32_t ret = tscGetMeterMeta(pSql, pCmd->name);
if (ret != TSDB_CODE_SUCCESS) {
tscError("%p get meter meta failed, abort", pSql);
continue;
}
int nrows = tscInsertDataFromFile(pSql, fp);
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
if (nrows < 0) {
fclose(fp);
tscTrace("%p There is no record in file %s", pSql, pDataBlock->filename);
tscTrace("%p no records in file %s", pSql, path);
continue;
}
fclose(fp);
fclose(fp);
affected_rows += nrows;
tscTrace("%p Insert data %d records from file %s", pSql, nrows, pDataBlock->filename);
tscTrace("%p Insert data %d records from file %s", pSql, nrows, path);
}
pSql->res.numOfRows = affected_rows;
// all data have been submitted to the vnode, release data blocks
tscDestroyBlockArrayList(&pCmd->pDataBlocks);
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
tscDestroyBlockArrayList(pDataBlockList);
}
@@ -143,9 +143,6 @@ int32_t tscToSQLCmd(SSqlObj* pSql, struct SSqlInfo* pInfo) {
return TSDB_CODE_INVALID_SQL;
}
tscCleanSqlCmd(pCmd);
tscAllocPayloadWithSize(pCmd, TSDB_DEFAULT_PAYLOAD_SIZE);
// transfer pInfo into select operation
switch (pInfo->sqlType) {
case DROP_TABLE:
@@ -785,7 +782,8 @@ int32_t tscToSQLCmd(SSqlObj* pSql, struct SSqlInfo* pInfo) {
// set sliding value
SSQLToken* pSliding = &pQuerySql->sliding;
if (pSliding->n != 0) {
if (!tscEmbedded) {
// pCmd->count == 1 means sql in stream function
if (!tscEmbedded && pCmd->count == 0) {
const char* msg = "not support sliding in query";
setErrMsg(pCmd, msg);
return TSDB_CODE_INVALID_SQL;
......
@@ -140,12 +140,10 @@ tSQLExpr *tSQLExprIdValueCreate(SSQLToken *pToken, int32_t optrType) {
nodePtr->val.nType = TSDB_DATA_TYPE_BIGINT;
nodePtr->nSQLOptr = TK_TIMESTAMP;
} else { // must be field id if not numbers
if (pToken != NULL) {
assert(optrType == TK_ID);
/* it must be the column name (tk_id) */
assert(optrType == TK_ALL || optrType == TK_ID);
if (pToken != NULL) { // it must be the column name (tk_id)
nodePtr->colInfo = *pToken;
} else {
assert(optrType == TK_ALL);
}
nodePtr->nSQLOptr = optrType;
......
@@ -19,14 +19,15 @@
#include <stdlib.h>
#include "tlosertree.h"
#include "tsclient.h"
#include "tlosertree.h"
#include "tscSecondaryMerge.h"
#include "tscUtil.h"
#include "tsclient.h"
#include "tutil.h"
typedef struct SCompareParam {
SLocalDataSrc ** pLocalData;
tOrderDescriptor *pDesc;
SLocalDataSource **pLocalData;
tOrderDescriptor * pDesc;
int32_t numOfElems;
int32_t groupOrderType;
} SCompareParam;
@@ -36,8 +37,8 @@ int32_t treeComparator(const void *pLeft, const void *pRight, void *param) {
int32_t pRightIdx = *(int32_t *)pRight;
SCompareParam * pParam = (SCompareParam *)param;
tOrderDescriptor *pDesc = pParam->pDesc;
SLocalDataSrc ** pLocalData = pParam->pLocalData;
tOrderDescriptor * pDesc = pParam->pDesc;
SLocalDataSource **pLocalData = pParam->pLocalData;
/* this input is exhausted, set the special value to denote this */
if (pLocalData[pLeftIdx]->rowIdx == -1) {
@@ -105,7 +106,7 @@ static void tscInitSqlContext(SSqlCmd *pCmd, SSqlRes *pRes, SLocalReducer *pRedu
}
/*
* todo error process with async process
* todo release allocated memory process with async process
*/
void tscCreateLocalReducer(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrderDescriptor *pDesc,
tColModel *finalmodel, SSqlCmd *pCmd, SSqlRes *pRes) {
@@ -133,32 +134,32 @@ void tscCreateLocalReducer(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrd
if (numOfFlush == 0 || numOfBuffer == 0) {
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, finalmodel, numOfBuffer);
tscTrace("%p retrieved no data", pSqlObjAddr);
return;
}
if (pDesc->pSchema->maxCapacity >= pMemBuffer[0]->nPageSize) {
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, finalmodel, numOfBuffer);
tscError("%p Invalid value of buffer capacity %d and page size %d ", pSqlObjAddr, pDesc->pSchema->maxCapacity,
pMemBuffer[0]->nPageSize);
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, finalmodel, numOfBuffer);
pRes->code = TSDB_CODE_APP_ERROR;
return;
}
size_t nReducerSize = sizeof(SLocalReducer) + POINTER_BYTES * numOfFlush;
size_t nReducerSize = sizeof(SLocalReducer) + sizeof(void *) * numOfFlush;
SLocalReducer *pReducer = (SLocalReducer *)calloc(1, nReducerSize);
if (pReducer == NULL) {
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, finalmodel, numOfBuffer);
tscError("%p failed to create merge structure", pSqlObjAddr);
tscLocalReducerEnvDestroy(pMemBuffer, pDesc, finalmodel, numOfBuffer);
pRes->code = TSDB_CODE_CLI_OUT_OF_MEMORY;
return;
}
pReducer->pExtMemBuffer = pMemBuffer;
pReducer->pLocalDataSrc = (SLocalDataSrc **)&pReducer[1];
pReducer->pLocalDataSrc = (SLocalDataSource **)&pReducer[1];
assert(pReducer->pLocalDataSrc != NULL);
pReducer->numOfBuffer = numOfFlush;
@@ -172,7 +173,7 @@ void tscCreateLocalReducer(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrd
int32_t numOfFlushoutInFile = pMemBuffer[i]->fileMeta.flushoutData.nLength;
for (int32_t j = 0; j < numOfFlushoutInFile; ++j) {
SLocalDataSrc *pDS = (SLocalDataSrc *)malloc(sizeof(SLocalDataSrc) + pMemBuffer[0]->nPageSize);
SLocalDataSource *pDS = (SLocalDataSource *)malloc(sizeof(SLocalDataSource) + pMemBuffer[0]->nPageSize);
if (pDS == NULL) {
tscError("%p failed to create merge structure", pSqlObjAddr);
pRes->code = TSDB_CODE_CLI_OUT_OF_MEMORY;
@@ -468,9 +469,7 @@ static int32_t createOrderDescriptor(tOrderDescriptor **pOrderDesc, SSqlCmd *pCm
}
if (pCmd->nAggTimeInterval != 0) {
/*
* the first column is the timestamp, handles queries like "interval(10m) group by tags"
*/
// the first column is the timestamp, handles queries like "interval(10m) group by tags"
orderIdx[numOfGroupByCols - 1] = PRIMARYKEY_TIMESTAMP_COL_INDEX;
}
}
@@ -485,29 +484,32 @@ static int32_t createOrderDescriptor(tOrderDescriptor **pOrderDesc, SSqlCmd *pCm
}
}
bool isSameGroupOfPrev(SSqlCmd *pCmd, SLocalReducer *pReducer, char *pPrev, tFilePage *tmpPage) {
bool isSameGroup(SSqlCmd *pCmd, SLocalReducer *pReducer, char *pPrev, tFilePage *tmpBuffer) {
int16_t functionId = tscSqlExprGet(pCmd, 0)->sqlFuncId;
if (functionId == TSDB_FUNC_PRJ || functionId == TSDB_FUNC_ARITHM) { // column projection query
return false; // disable merge procedure
// disable merge procedure for column projection query
if (functionId == TSDB_FUNC_PRJ || functionId == TSDB_FUNC_ARITHM) {
return false;
}
tOrderDescriptor *pOrderDesc = pReducer->pDesc;
int32_t numOfCols = pOrderDesc->orderIdx.numOfOrderedCols;
if (numOfCols > 0) {
// no group by columns, all data belongs to one group
if (numOfCols <= 0) {
return true;
}
if (pOrderDesc->orderIdx.pData[numOfCols - 1] == PRIMARYKEY_TIMESTAMP_COL_INDEX) { //<= 0
/* metric interval query */
// super table interval query
assert(pCmd->nAggTimeInterval > 0);
pOrderDesc->orderIdx.numOfOrderedCols -= 1;
} else { /* simple group by query */
} else { // simple group by query
assert(pCmd->nAggTimeInterval == 0);
}
} else {
return true;
}
// only one row exists
int32_t ret = compare_a(pOrderDesc, 1, 0, pPrev, 1, 0, tmpPage->data);
int32_t ret = compare_a(pOrderDesc, 1, 0, pPrev, 1, 0, tmpBuffer->data);
pOrderDesc->orderIdx.numOfOrderedCols = numOfCols;
return (ret == 0);
@@ -602,7 +604,7 @@ void tscLocalReducerEnvDestroy(tExtMemBuffer **pMemBuffer, tOrderDescriptor *pDe
* @param treeList
* @return the number of remain input source. if ret == 0, all data has been handled
*/
int32_t loadNewDataFromDiskFor(SLocalReducer *pLocalReducer, SLocalDataSrc *pOneInterDataSrc,
int32_t loadNewDataFromDiskFor(SLocalReducer *pLocalReducer, SLocalDataSource *pOneInterDataSrc,
bool *needAdjustLoserTree) {
pOneInterDataSrc->rowIdx = 0;
pOneInterDataSrc->pageId += 1;
@@ -629,7 +631,7 @@ int32_t loadNewDataFromDiskFor(SLocalReducer *pLocalReducer, SLocalDataSrc *pOne
return pLocalReducer->numOfBuffer;
}
void loadDataIntoMemAndAdjustLoserTree(SLocalReducer *pLocalReducer, SLocalDataSrc *pOneInterDataSrc,
void adjustLoserTreeFromNewData(SLocalReducer *pLocalReducer, SLocalDataSource *pOneInterDataSrc,
SLoserTreeInfo *pTree) {
/*
* load a new data page into memory for intermediate dataset source,
@@ -662,10 +664,10 @@ void loadDataIntoMemAndAdjustLoserTree(SLocalReducer *pLocalReducer, SLocalDataS
}
}
void savePrevRecordAndSetupInterpoInfo(SLocalReducer *pLocalReducer, SSqlCmd *pCmd,
SInterpolationInfo *pInterpoInfo) { // discard following dataset in the
// same group and reset the
// interpolation information
void savePrevRecordAndSetupInterpoInfo(
SLocalReducer *pLocalReducer, SSqlCmd *pCmd,
SInterpolationInfo
*pInterpoInfo) { // discard following dataset in the same group and reset the interpolation information
int64_t stime = (pCmd->stime < pCmd->etime) ? pCmd->stime : pCmd->etime;
int64_t revisedSTime = taosGetIntervalStartTimestamp(stime, pCmd->nAggTimeInterval, pCmd->intervalTimeUnit);
@@ -749,7 +751,7 @@ static void doInterpolateResult(SSqlObj *pSql, SLocalReducer *pLocalReducer, boo
tColModelErase(pLocalReducer->resColModel, pFinalDataPage, prevSize, 0, pCmd->limit.offset - 1);
/* remove the hole in column model */
tColModelCompress(pLocalReducer->resColModel, pFinalDataPage, prevSize);
tColModelCompact(pLocalReducer->resColModel, pFinalDataPage, prevSize);
pRes->numOfRows -= pCmd->limit.offset;
pRes->numOfTotal -= pCmd->limit.offset;
@@ -772,7 +774,7 @@ static void doInterpolateResult(SSqlObj *pSql, SLocalReducer *pLocalReducer, boo
pRes->numOfRows -= overFlow;
pFinalDataPage->numOfElems -= overFlow;
tColModelCompress(pLocalReducer->resColModel, pFinalDataPage, prevSize);
tColModelCompact(pLocalReducer->resColModel, pFinalDataPage, prevSize);
/* set remain data to be discarded, and reset the interpolation information */
savePrevRecordAndSetupInterpoInfo(pLocalReducer, pCmd, &pLocalReducer->interpolationInfo);
@@ -892,21 +894,21 @@ static void doInterpolateResult(SSqlObj *pSql, SLocalReducer *pLocalReducer, boo
free(srcData);
}
static void savePrevRecord(SLocalReducer *pLocalReducer, tFilePage *tmpPages) {
static void savePreviousRow(SLocalReducer *pLocalReducer, tFilePage *tmpBuffer) {
tColModel *pColModel = pLocalReducer->pDesc->pSchema;
assert(pColModel->maxCapacity == 1 && tmpPages->numOfElems == 1);
assert(pColModel->maxCapacity == 1 && tmpBuffer->numOfElems == 1);
// copy to previous temp buffer
for (int32_t i = 0; i < pLocalReducer->pDesc->pSchema->numOfCols; ++i) {
memcpy(pLocalReducer->prevRowOfInput + pColModel->colOffset[i], tmpPages->data + pColModel->colOffset[i],
memcpy(pLocalReducer->prevRowOfInput + pColModel->colOffset[i], tmpBuffer->data + pColModel->colOffset[i],
pColModel->pFields[i].bytes);
}
tmpPages->numOfElems = 0;
tmpBuffer->numOfElems = 0;
pLocalReducer->hasPrevRow = true;
}
static void handleUnprocessedRow(SLocalReducer *pLocalReducer, SSqlCmd *pCmd, tFilePage *tmpPages) {
static void handleUnprocessedRow(SLocalReducer *pLocalReducer, SSqlCmd *pCmd, tFilePage *tmpBuffer) {
if (pLocalReducer->hasUnprocessedRow) {
for (int32_t j = 0; j < pCmd->fieldsInfo.numOfOutputCols; ++j) {
SSqlExpr *pExpr = tscSqlExprGet(pCmd, j);
@@ -922,7 +924,7 @@ static void handleUnprocessedRow(SLocalReducer *pLocalReducer, SSqlCmd *pCmd, tF
pLocalReducer->hasUnprocessedRow = false;
// copy to previous temp buffer
savePrevRecord(pLocalReducer, tmpPages);
savePreviousRow(pLocalReducer, tmpBuffer);
}
}
@@ -1005,7 +1007,7 @@ int32_t finalizeRes(SSqlCmd *pCmd, SLocalReducer *pLocalReducer) {
* results generated by simple aggregation function, we merge them all into one point
* *Exception*: column projection query, which requires no merge procedure
*/
bool needToMerge(SSqlCmd *pCmd, SLocalReducer *pLocalReducer, tFilePage *tmpPages) {
bool needToMerge(SSqlCmd *pCmd, SLocalReducer *pLocalReducer, tFilePage *tmpBuffer) {
int32_t ret = 0; // merge all result by default
int16_t functionId = tscSqlExprGet(pCmd, 0)->sqlFuncId;
@@ -1016,9 +1018,9 @@ bool needToMerge(SSqlCmd *pCmd, SLocalReducer *pLocalReducer, tFilePage *tmpPage
if (pDesc->orderIdx.numOfOrderedCols > 0) {
if (pDesc->tsOrder == TSQL_SO_ASC) { // asc
// todo refactor comparator
ret = compare_a(pLocalReducer->pDesc, 1, 0, pLocalReducer->prevRowOfInput, 1, 0, tmpPages->data);
ret = compare_a(pLocalReducer->pDesc, 1, 0, pLocalReducer->prevRowOfInput, 1, 0, tmpBuffer->data);
} else { // desc
ret = compare_d(pLocalReducer->pDesc, 1, 0, pLocalReducer->prevRowOfInput, 1, 0, tmpPages->data);
ret = compare_d(pLocalReducer->pDesc, 1, 0, pLocalReducer->prevRowOfInput, 1, 0, tmpBuffer->data);
}
}
}
@@ -1027,23 +1029,55 @@ bool needToMerge(SSqlCmd *pCmd, SLocalReducer *pLocalReducer, tFilePage *tmpPage
return (ret == 0);
}
void savePreGroupNumOfRes(SSqlRes *pRes) {
// pRes->numOfGroups += 1;
// pRes->pGroupRec = realloc(pRes->pGroupRec,
// pRes->numOfGroups*sizeof(SResRec));
//
static bool reachGroupResultLimit(SSqlCmd *pCmd, SSqlRes *pRes) {
return (pRes->numOfGroups >= pCmd->glimit.limit && pCmd->glimit.limit >= 0);
}
static bool saveGroupResultInfo(SSqlObj *pSql) {
SSqlCmd *pCmd = &pSql->cmd;
SSqlRes *pRes = &pSql->res;
pRes->numOfGroups += 1;
// the output group is limited by the glimit clause
if (reachGroupResultLimit(pCmd, pRes)) {
return true;
}
// pRes->pGroupRec = realloc(pRes->pGroupRec, pRes->numOfGroups*sizeof(SResRec));
// pRes->pGroupRec[pRes->numOfGroups-1].numOfRows = pRes->numOfRows;
// pRes->pGroupRec[pRes->numOfGroups-1].numOfTotal = pRes->numOfTotal;
return false;
}
void doGenerateFinalResults(SSqlObj *pSql, SLocalReducer *pLocalReducer,
bool doneOuput) { // there are merged results in buffer, flush to client
/**
*
* @param pSql
* @param pLocalReducer
* @param noMoreCurrentGroupRes
* @return false if the current group is skipped; in that case it is NOT recorded in pRes->numOfGroups
*/
bool doGenerateFinalResults(SSqlObj *pSql, SLocalReducer *pLocalReducer, bool noMoreCurrentGroupRes) {
SSqlCmd * pCmd = &pSql->cmd;
SSqlRes * pRes = &pSql->res;
tFilePage *pResBuf = pLocalReducer->pResultBuf;
tColModel *pModel = pLocalReducer->resColModel;
tColModelCompress(pModel, pResBuf, pModel->maxCapacity);
pRes->code = TSDB_CODE_SUCCESS;
/*
* Ignore the output of the current group, since this group is skipped by the user.
* Set numOfRows to 0 and discard the possibly remaining results.
*/
if (pCmd->glimit.offset > 0) {
pRes->numOfRows = 0;
pCmd->glimit.offset -= 1;
pLocalReducer->discard = !noMoreCurrentGroupRes;
return false;
}
tColModelCompact(pModel, pResBuf, pModel->maxCapacity);
memcpy(pLocalReducer->pBufForInterpo, pResBuf->data, pLocalReducer->nResultBufSize);
#ifdef _DEBUG_VIEW
@@ -1061,9 +1095,9 @@ void doGenerateFinalResults(SSqlObj *pSql, SLocalReducer *pLocalReducer,
}
taosInterpoSetStartInfo(&pLocalReducer->interpolationInfo, pResBuf->numOfElems, pCmd->interpoType);
doInterpolateResult(pSql, pLocalReducer, doneOuput);
doInterpolateResult(pSql, pLocalReducer, noMoreCurrentGroupRes);
pRes->code = TSDB_CODE_SUCCESS;
return true;
}
void resetOutputBuf(SSqlCmd *pCmd, SLocalReducer *pLocalReducer) { // reset output buffer to the beginning
@@ -1075,10 +1109,8 @@ void resetOutputBuf(SSqlCmd *pCmd, SLocalReducer *pLocalReducer) { // reset out
memset(pLocalReducer->pResultBuf, 0, pLocalReducer->nResultBufSize + sizeof(tFilePage));
}
static void setUpForNewGroupRes(SSqlRes *pRes, SSqlCmd *pCmd, SLocalReducer *pLocalReducer) {
/*
* In handling data in other groups, we need to reset the interpolation information for a new group data
*/
static void resetEnvForNewResultset(SSqlRes *pRes, SSqlCmd *pCmd, SLocalReducer *pLocalReducer) {
// When handling data in other groups, we need to reset the interpolation information for the new group's data
pRes->numOfRows = 0;
pRes->numOfTotal = 0;
pCmd->limit.offset = pLocalReducer->offset;
@@ -1093,41 +1125,49 @@ static void setUpForNewGroupRes(SSqlRes *pRes, SSqlCmd *pCmd, SLocalReducer *pLo
}
}
static bool isAllSourcesCompleted(SLocalReducer *pLocalReducer) {
return (pLocalReducer->numOfBuffer == pLocalReducer->numOfCompleted);
}
static bool doInterpolationForCurrentGroup(SSqlObj *pSql) {
SSqlCmd *pCmd = &pSql->cmd;
SSqlRes *pRes = &pSql->res;
SLocalReducer *pLocalReducer = pRes->pLocalReducer;
SInterpolationInfo *pInterpoInfo = &pLocalReducer->interpolationInfo;
if (taosHasRemainsDataForInterpolation(pInterpoInfo)) {
assert(pCmd->interpoType != TSDB_INTERPO_NONE);
tFilePage *pFinalDataBuf = pLocalReducer->pResultBuf;
int64_t etime = *(int64_t *)(pFinalDataBuf->data + TSDB_KEYSIZE * (pInterpoInfo->numOfRawDataInRows - 1));
int32_t remain = taosNumOfRemainPoints(pInterpoInfo);
TSKEY ekey = taosGetRevisedEndKey(etime, pCmd->order.order, pCmd->nAggTimeInterval, pCmd->intervalTimeUnit);
int32_t rows = taosGetNumOfResultWithInterpo(pInterpoInfo, (TSKEY *)pLocalReducer->pBufForInterpo, remain,
pCmd->nAggTimeInterval, ekey, pLocalReducer->resColModel->maxCapacity);
if (rows > 0) { // do interpo
doInterpolateResult(pSql, pLocalReducer, false);
}
return true;
} else {
return false;
}
}
static bool doHandleLastRemainData(SSqlObj *pSql) {
SSqlCmd *pCmd = &pSql->cmd;
SSqlRes *pRes = &pSql->res;
SLocalReducer * pLocalReducer = pRes->pLocalReducer;
SInterpolationInfo *pInterpoInfo = &pLocalReducer->interpolationInfo;
bool prevGroupCompleted = (!pLocalReducer->discard) && pLocalReducer->hasUnprocessedRow;
if ((isAllSourcesCompleted(pLocalReducer) && !pLocalReducer->hasPrevRow) || pLocalReducer->pLocalDataSrc[0] == NULL ||
prevGroupCompleted) {
// if interpoType == TSDB_INTERPO_NONE, return directly
if (pCmd->interpoType != TSDB_INTERPO_NONE) {
int64_t etime = (pCmd->stime < pCmd->etime) ? pCmd->etime : pCmd->stime;
......@@ -1139,54 +1179,117 @@ int32_t tscLocalDoReduce(SSqlObj *pSql) {
}
}
/*
* 1. numOfRows == 0, means no interpolation results are generated.
* 2. if all local data sources are consumed, and no un-processed rows exist.
*
* No results will be generated and query completed.
*/
if (pRes->numOfRows > 0 || (isAllSourcesCompleted(pLocalReducer) && (!pLocalReducer->hasUnprocessedRow))) {
return true;
}
// start to process result for a new group and save the result info of previous group
if (saveGroupResultInfo(pSql)) {
return true;
}
resetEnvForNewResultset(pRes, pCmd, pLocalReducer);
}
return false;
}
static void doMergeWithPrevRows(SSqlObj *pSql, int32_t numOfRes) {
SSqlCmd *pCmd = &pSql->cmd;
SSqlRes *pRes = &pSql->res;
SLocalReducer *pLocalReducer = pRes->pLocalReducer;
for (int32_t k = 0; k < pCmd->fieldsInfo.numOfOutputCols; ++k) {
SSqlExpr *pExpr = tscSqlExprGet(pCmd, k);
pLocalReducer->pCtx[k].aOutputBuf += pLocalReducer->pCtx[k].outputBytes * numOfRes;
// set the correct output timestamp column position
if (pExpr->sqlFuncId == TSDB_FUNC_TOP_DST || pExpr->sqlFuncId == TSDB_FUNC_BOTTOM_DST) {
pLocalReducer->pCtx[k].ptsOutputBuf = ((char *)pLocalReducer->pCtx[k].ptsOutputBuf + TSDB_KEYSIZE * numOfRes);
}
/* set the parameters for the SQLFunctionCtx */
tVariantAssign(&pLocalReducer->pCtx[k].param[0], &pExpr->param[0]);
aAggs[pExpr->sqlFuncId].init(&pLocalReducer->pCtx[k]);
pLocalReducer->pCtx[k].currentStage = SECONDARY_STAGE_MERGE;
aAggs[pExpr->sqlFuncId].distSecondaryMergeFunc(&pLocalReducer->pCtx[k]);
}
}
static void doExecuteSecondaryMerge(SSqlObj *pSql) {
SSqlCmd * pCmd = &pSql->cmd;
SSqlRes * pRes = &pSql->res;
SLocalReducer *pLocalReducer = pRes->pLocalReducer;
for (int32_t j = 0; j < pCmd->fieldsInfo.numOfOutputCols; ++j) {
SSqlExpr *pExpr = tscSqlExprGet(pCmd, j);
tVariantAssign(&pLocalReducer->pCtx[j].param[0], &pExpr->param[0]);
pLocalReducer->pCtx[j].numOfIteratedElems = 0;
pLocalReducer->pCtx[j].currentStage = 0;
aAggs[pExpr->sqlFuncId].init(&pLocalReducer->pCtx[j]);
pLocalReducer->pCtx[j].currentStage = SECONDARY_STAGE_MERGE;
aAggs[pExpr->sqlFuncId].distSecondaryMergeFunc(&pLocalReducer->pCtx[j]);
}
}
int32_t tscLocalDoReduce(SSqlObj *pSql) {
SSqlCmd *pCmd = &pSql->cmd;
SSqlRes *pRes = &pSql->res;
if (pSql->signature != pSql || pRes == NULL || pRes->pLocalReducer == NULL) { // all data has been processed
tscTrace("%s call the drop local reducer", __FUNCTION__);
tscDestroyLocalReducer(pSql);
pRes->numOfRows = 0;
pRes->row = 0;
return 0;
}
pRes->row = 0;
pRes->numOfRows = 0;
SLocalReducer *pLocalReducer = pRes->pLocalReducer;
// set the data merge in progress
int32_t prevStatus =
__sync_val_compare_and_swap_32(&pLocalReducer->status, TSC_LOCALREDUCE_READY, TSC_LOCALREDUCE_IN_PROGRESS);
if (prevStatus != TSC_LOCALREDUCE_READY || pLocalReducer == NULL) {
assert(prevStatus == TSC_LOCALREDUCE_TOBE_FREED);
/* it is in tscDestroyLocalReducer function already */
return TSDB_CODE_SUCCESS;
}
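The status transition above relies on an atomic compare-and-swap so that a concurrent `taos_free_result` cannot free the reducer while the merge is running. A minimal sketch of the same guard pattern, using the legacy GCC `__sync` builtins as the original does (status names and helper functions here are illustrative, not TDengine API):

```c
#include <assert.h>

/* illustrative status values, mirroring TSC_LOCALREDUCE_* */
enum { READY = 1, IN_PROGRESS = 2, TOBE_FREED = 3 };

/* Try to move status from READY to IN_PROGRESS atomically.
 * Returns 1 on success; returns 0 if another thread already
 * changed the status (e.g. marked the object TOBE_FREED). */
static int try_enter(volatile int *status) {
  int prev = __sync_val_compare_and_swap(status, READY, IN_PROGRESS);
  return prev == READY;
}

/* Publish READY again; only the thread that entered may do this. */
static void leave(volatile int *status) {
  __sync_synchronize(); /* release barrier before the store */
  *status = READY;
}
```

A second `try_enter` on the same object fails until `leave` is called, which is exactly what keeps the merge and the destroy path from overlapping.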
tFilePage *tmpBuffer = pLocalReducer->pTempBuffer;
if (doHandleLastRemainData(pSql)) {
pLocalReducer->status = TSC_LOCALREDUCE_READY; // set the flag, taos_free_result can release this result.
return TSDB_CODE_SUCCESS;
}
if (doInterpolationForCurrentGroup(pSql)) {
pLocalReducer->status = TSC_LOCALREDUCE_READY; // set the flag, taos_free_result can release this result.
return TSDB_CODE_SUCCESS;
}
SLoserTreeInfo *pTree = pLocalReducer->pLoserTree;
// clear buffer
handleUnprocessedRow(pLocalReducer, pCmd, tmpBuffer);
tColModel *pModel = pLocalReducer->pDesc->pSchema;
while (1) {
if (isAllSourcesCompleted(pLocalReducer)) {
break;
}
......@@ -1194,12 +1297,12 @@ int32_t tscLocalDoReduce(SSqlObj *pSql) {
printf("chosen data in pTree[0] = %d\n", pTree->pNode[0].index);
#endif
assert((pTree->pNode[0].index < pLocalReducer->numOfBuffer) && (pTree->pNode[0].index >= 0) &&
tmpBuffer->numOfElems == 0);
// chosen from loser tree
SLocalDataSource *pOneDataSrc = pLocalReducer->pLocalDataSrc[pTree->pNode[0].index];
tColModelAppend(pModel, tmpBuffer, pOneDataSrc->filePage.data, pOneDataSrc->rowIdx, 1,
pOneDataSrc->pMemBuffer->pColModel->maxCapacity);
#if defined(_DEBUG_VIEW)
......@@ -1207,35 +1310,42 @@ int32_t tscLocalDoReduce(SSqlObj *pSql) {
SSrcColumnInfo colInfo[256] = {0};
tscGetSrcColumnInfo(colInfo, pCmd);
tColModelDisplayEx(pModel, tmpBuffer->data, tmpBuffer->numOfElems, pModel->maxCapacity, colInfo);
#endif
if (pLocalReducer->discard) {
assert(pLocalReducer->hasUnprocessedRow == false);
/* current record belongs to the same group of previous record, need to discard it */
if (isSameGroup(pCmd, pLocalReducer, pLocalReducer->discardData->data, tmpBuffer)) {
tmpBuffer->numOfElems = 0;
pOneDataSrc->rowIdx += 1;
adjustLoserTreeFromNewData(pLocalReducer, pOneDataSrc, pTree);
// all inputs are exhausted, abort current process
if (isAllSourcesCompleted(pLocalReducer)) {
break;
}
// data belonging to the same group needs to be discarded
continue;
} else {
pLocalReducer->discard = false;
pLocalReducer->discardData->numOfElems = 0;
if (saveGroupResultInfo(pSql)) {
pLocalReducer->status = TSC_LOCALREDUCE_READY;
return TSDB_CODE_SUCCESS;
}
resetEnvForNewResultset(pRes, pCmd, pLocalReducer);
}
}
if (pLocalReducer->hasPrevRow) {
if (needToMerge(pCmd, pLocalReducer, tmpBuffer)) {
// belongs to the group of the previous row, continue processing it
for (int32_t j = 0; j < pCmd->fieldsInfo.numOfOutputCols; ++j) {
SSqlExpr *pExpr = tscSqlExprGet(pCmd, j);
tVariantAssign(&pLocalReducer->pCtx[j].param[0], &pExpr->param[0]);
......@@ -1244,109 +1354,86 @@ int32_t tscLocalDoReduce(SSqlObj *pSql) {
}
// copy to buffer
savePreviousRow(pLocalReducer, tmpBuffer);
} else {
/*
* the current row does not belong to the group of the previous row,
* so the processing of the previous group is completed.
*/
int32_t numOfRes = finalizeRes(pCmd, pLocalReducer);
bool sameGroup = isSameGroup(pCmd, pLocalReducer, pLocalReducer->prevRowOfInput, tmpBuffer);
tFilePage *pResBuf = pLocalReducer->pResultBuf;
/*
* if the previous group does NOT generate any result (pResBuf->numOfElems == 0),
* continue to process results instead of return results.
*/
if ((!sameGroup && pResBuf->numOfElems > 0) ||
(pResBuf->numOfElems == pLocalReducer->resColModel->maxCapacity)) {
// does not belong to the same group
assert(pResBuf->numOfElems > 0);
bool notSkipped = doGenerateFinalResults(pSql, pLocalReducer, !sameGroup);
// this row needs to be discarded, since it belongs to the previous group
if (pLocalReducer->discard && sameGroup) {
pLocalReducer->hasUnprocessedRow = false;
tmpBuffer->numOfElems = 0;
} else {
// the current row does not belong to the previous group, so it has not been handled yet.
pLocalReducer->hasUnprocessedRow = true;
}
resetOutputBuf(pCmd, pLocalReducer);
pOneDataSrc->rowIdx += 1;
// here we do not check the return value
adjustLoserTreeFromNewData(pLocalReducer, pOneDataSrc, pTree);
assert(pLocalReducer->status == TSC_LOCALREDUCE_IN_PROGRESS);
if (pRes->numOfRows == 0) {
handleUnprocessedRow(pLocalReducer, pCmd, tmpBuffer);
if (!sameGroup) {
/*
* previous group is done, prepare for the next group.
* If previous group is not skipped, keep it in pRes->numOfGroups
*/
if (notSkipped && saveGroupResultInfo(pSql)) {
pLocalReducer->status = TSC_LOCALREDUCE_READY;
return TSDB_CODE_SUCCESS;
}
resetEnvForNewResultset(pRes, pCmd, pLocalReducer);
}
} else {
/*
* if next record belongs to a new group, we do not handle this record here.
* We start the process in a new round.
*/
handleUnprocessedRow(pLocalReducer, pCmd, tmpBuffer);
}
// current group has no result yet
if (pRes->numOfRows == 0) {
continue;
} else {
pLocalReducer->status = TSC_LOCALREDUCE_READY; // set the flag, taos_free_result can release this result.
return TSDB_CODE_SUCCESS;
}
} else { // result buffer is not full
doMergeWithPrevRows(pSql, numOfRes);
savePreviousRow(pLocalReducer, tmpBuffer);
}
}
} else {
doExecuteSecondaryMerge(pSql);
savePreviousRow(pLocalReducer, tmpBuffer); // copy the processed row to buffer
}
pOneDataSrc->rowIdx += 1;
adjustLoserTreeFromNewData(pLocalReducer, pOneDataSrc, pTree);
}
if (pLocalReducer->hasPrevRow) {
......@@ -1358,8 +1445,7 @@ int32_t tscLocalDoReduce(SSqlObj *pSql) {
}
assert(pLocalReducer->status == TSC_LOCALREDUCE_IN_PROGRESS && pRes->row == 0);
pLocalReducer->status = TSC_LOCALREDUCE_READY; // set the flag, taos_free_result can release this result.
return TSDB_CODE_SUCCESS;
}
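The retrieval loop above is a k-way merge: it repeatedly takes the smallest row indicated by `pTree->pNode[0]` (the loser-tree winner), appends it to `tmpBuffer`, advances that data source, and re-adjusts the tree. A simplified, self-contained sketch of the same idea, substituting a linear minimum scan for the loser tree (all names here are hypothetical, not TDengine API):

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
  const int *data; /* rows of one source, sorted ascending */
  size_t     len;
  size_t     idx;  /* next unread row */
} Source;

/* Merge `n` sorted sources into `out`; returns the number of values
 * written. A loser tree replaces the O(n) scan below with an
 * O(log n) adjustment after each extraction, which is the point of
 * SLoserTreeInfo in the original code. */
static size_t kway_merge(Source *src, size_t n, int *out) {
  size_t written = 0;
  for (;;) {
    int best = -1;
    for (size_t i = 0; i < n; ++i) { /* pick the source with the smallest head */
      if (src[i].idx < src[i].len &&
          (best < 0 || src[i].data[src[i].idx] < src[best].data[src[best].idx])) {
        best = (int)i;
      }
    }
    if (best < 0) break; /* all sources completed */
    out[written++] = src[best].data[src[best].idx++];
  }
  return written;
}
```

The completion test (`best < 0`) plays the same role as `isAllSourcesCompleted` in the loop above.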
......@@ -1378,7 +1464,8 @@ void tscInitResObjForLocalQuery(SSqlObj *pObj, int32_t numOfRes, int32_t rowLen)
pRes->pLocalReducer = (SLocalReducer *)calloc(1, sizeof(SLocalReducer));
/*
* we need one additional byte space
* the sprintf function needs one additional space to put '\0' at the end of string
*/
size_t allocSize = numOfRes * rowLen + sizeof(tFilePage) + 1;
pRes->pLocalReducer->pResultBuf = (tFilePage *)calloc(1, allocSize);
......
......@@ -287,14 +287,20 @@ void *tscProcessMsgFromServer(char *msg, void *ahandle, void *thandle) {
pSql->thandle = NULL;
taosAddConnIntoCache(tscConnCache, thandle, pSql->ip, pSql->vnode, pObj->user);
if (UTIL_METER_IS_METRIC(pCmd) && pMsg->content[0] == TSDB_CODE_NOT_ACTIVE_SESSION) {
/*
* for a metric query, if any meter goes missing during the query, its sub-queries will fail,
* causing the whole metric query to fail and return TSDB_CODE_METRICMETA_EXPIRED to the app
*/
tscTrace("%p invalid meters id cause metric query failed, code:%d", pSql, pMsg->content[0]);
code = TSDB_CODE_METRICMETA_EXPIRED;
} else if ((pCmd->command == TSDB_SQL_INSERT || pCmd->command == TSDB_SQL_SELECT) &&
pMsg->content[0] == TSDB_CODE_INVALID_SESSION_ID) {
/*
* session id is invalid (e.g., less than 0 or larger than the maximum sessions per
* vnode) in submit/query msg, no retry
*/
code = TSDB_CODE_INVALID_QUERY_MSG;
} else if (pCmd->command == TSDB_SQL_CONNECT) {
code = TSDB_CODE_NETWORK_UNAVAIL;
} else if (pCmd->command == TSDB_SQL_HB) {
......@@ -358,14 +364,17 @@ void *tscProcessMsgFromServer(char *msg, void *ahandle, void *thandle) {
pRes->code = TSDB_CODE_SUCCESS;
}
/*
* There is no response callback function for the submit response.
* The actual number of inserted points is the first number in the response.
if (pMsg->msgType == TSDB_MSG_TYPE_SUBMIT_RSP) {
pRes->numOfRows += *(int32_t *)pRes->pRsp;
tscTrace("%p cmd:%d code:%d, inserted rows:%d, rsp len:%d", pSql, pCmd->command, pRes->code,
*(int32_t *)pRes->pRsp, pRes->rspLen);
} else {
tscTrace("%p cmd:%d code:%d rsp len:%d", pSql, pCmd->command, pRes->code, pRes->rspLen);
}
}
......@@ -421,7 +430,7 @@ void *tscProcessMsgFromServer(char *msg, void *ahandle, void *thandle) {
return ahandle;
}
static SSqlObj* tscCreateSqlObjForSubquery(SSqlObj *pSql, SRetrieveSupport *trsupport, SSqlObj* prevSqlObj);
static int tscLaunchMetricSubQueries(SSqlObj *pSql);
int tscProcessSql(SSqlObj *pSql) {
......@@ -430,12 +439,6 @@ int tscProcessSql(SSqlObj *pSql) {
tscTrace("%p SQL cmd:%d will be processed, name:%s", pSql, pSql->cmd.command, pSql->cmd.name);
pSql->retry = 0;
if (pSql->cmd.command < TSDB_SQL_MGMT) {
pSql->maxRetry = 2;
......@@ -595,7 +598,6 @@ int tscLaunchMetricSubQueries(SSqlObj *pSql) {
SSqlObj *pNew = tscCreateSqlObjForSubquery(pSql, trs, NULL);
tscTrace("%p sub:%p launch subquery.orderOfSub:%d", pSql, pNew, pNew->cmd.vnodeIdx);
tscProcessSql(pNew);
}
......@@ -665,7 +667,6 @@ static void tscHandleSubRetrievalError(SRetrieveSupport *trsupport, SSqlObj *pSq
tscError("%p sub:%p abort further retrieval due to other queries failure,orderOfSub:%d,code:%d",
pPObj, pSql, idx, *trsupport->code);
} else {
if (trsupport->numOfRetry++ < MAX_NUM_OF_SUBQUERY_RETRY && *(trsupport->code) == TSDB_CODE_SUCCESS) {
/*
* current query failed, and the retry count is less than the available count,
......@@ -675,6 +676,7 @@ static void tscHandleSubRetrievalError(SRetrieveSupport *trsupport, SSqlObj *pSq
// clear local saved number of results
trsupport->localBuffer->numOfElems = 0;
pthread_mutex_unlock(&trsupport->queryMutex);
SSqlObj *pNew = tscCreateSqlObjForSubquery(trsupport->pParentSqlObj, trsupport, pSql);
......@@ -689,7 +691,6 @@ static void tscHandleSubRetrievalError(SRetrieveSupport *trsupport, SSqlObj *pSq
tscError("%p sub:%p retrieve failed,code:%d,orderOfSub:%d failed.no more retry,set global code:%d",
pPObj, pSql, numOfRows, idx, *trsupport->code);
}
}
if (__sync_add_and_fetch_32(trsupport->numOfFinished, 1) < trsupport->numOfVnodes) {
......@@ -778,7 +779,7 @@ void tscRetrieveFromVnodeCallBack(void *param, TAOS_RES *tres, int numOfRows) {
tscTrace("%p sub:%p all data retrieved from ip:%u,vid:%d, numOfRows:%d, orderOfSub:%d",
pPObj, pSql, pSvd->ip, pSvd->vnode, numOfRowsFromVnode, idx);
tColModelCompact(pDesc->pSchema, trsupport->localBuffer, pDesc->pSchema->maxCapacity);
#ifdef _DEBUG_VIEW
printf("%ld rows data flushed to disk:\n", trsupport->localBuffer->numOfElems);
......@@ -877,7 +878,7 @@ void tscKillMetricQuery(SSqlObj *pSql) {
tscTrace("%p metric query is cancelled", pSql);
}
SSqlObj* tscCreateSqlObjForSubquery(SSqlObj *pSql, SRetrieveSupport *trsupport, SSqlObj* prevSqlObj) {
SSqlCmd *pCmd = &pSql->cmd;
SSqlObj *pNew = (SSqlObj *)calloc(1, sizeof(SSqlObj));
......@@ -1032,8 +1033,6 @@ int tscBuildSubmitMsg(SSqlObj *pSql) {
pShellMsg->vnode = htons(pMeterMeta->vpeerDesc[pMeterMeta->index].vnode);
pShellMsg->numOfSid = htonl(pSql->cmd.count); /* number of meters to be inserted */
pMsg += sizeof(SShellSubmitMsg);
/*
* pSql->cmd.payloadLen is set during parse sql routine, so we do not use it here
*/
......@@ -2264,8 +2263,6 @@ int tscBuildMetricMetaMsg(SSqlObj *pSql) {
SSqlGroupbyExpr *pGroupby = &pCmd->groupbyExpr;
pMetaMsg->limit = htobe64(pCmd->glimit.limit);
pMetaMsg->offset = htobe64(pCmd->glimit.offset);
pMetaMsg->numOfTags = htons(pCmd->numOfReqTags);
pMetaMsg->numOfGroupbyCols = htons(pGroupby->numOfGroupbyCols);
......@@ -2750,7 +2747,6 @@ static int32_t tscDoGetMeterMeta(SSqlObj *pSql, char *meterId) {
} else {
pNew->fp = tscMeterMetaCallBack;
pNew->param = pSql;
pNew->sqlstr = strdup(pSql->sqlstr);
code = tscProcessSql(pNew);
......
......@@ -72,7 +72,7 @@ TAOS *taos_connect_imp(char *ip, char *user, char *pass, char *db, int port, voi
pObj->signature = pObj;
strncpy(pObj->user, user, TSDB_USER_LEN);
taosEncryptPass((uint8_t *)pass, strlen(pass), pObj->pass);
pObj->mgmtPort = port ? port : tsMgmtShellPort;
if (db) {
......
......@@ -145,13 +145,14 @@ static void tscProcessStreamQueryCallback(void *param, TAOS_RES *tres, int numOf
static void tscSetTimestampForRes(SSqlStream *pStream, SSqlObj *pSql, int32_t numOfRows) {
SSqlRes *pRes = &pSql->res;
int64_t timestamp = *(int64_t *)pRes->data;
int64_t actualTimestamp = pStream->stime - pStream->interval;
if (timestamp != actualTimestamp) {
// reset the timestamp of each agg point by using start time of each interval
*((int64_t *)pRes->data) = actualTimestamp;
tscWarn("%p stream:%p, timestamp of points is:%lld, reset to %lld", pSql, pStream, timestamp, actualTimestamp);
}
}
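The reset above writes `pStream->stime - pStream->interval`, i.e. the start of the aggregation window, and `tscGetStreamStartTimestamp` later aligns an arbitrary timestamp with `(stime / interval) * interval`. A small sketch of that alignment rule (function name hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Align a timestamp to the start of its aggregation window using the
 * same (t / interval) * interval truncation as the stream code.
 * Note: for negative timestamps, C integer division truncates toward
 * zero, so pre-epoch values would need extra care. */
static int64_t align_to_interval(int64_t t, int64_t interval) {
  return (t / interval) * interval;
}
```

For example, with a 100 ms interval a point stamped anywhere inside [1200, 1300) maps to window start 1200.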
......@@ -397,7 +398,7 @@ static int64_t tscGetStreamStartTimestamp(SSqlObj *pSql, SSqlStream *pStream, in
} else { // timewindow based aggregation stream
if (stime == 0) { // no data in meter till now
stime = ((int64_t)taosGetTimestamp(pStream->precision) / pStream->interval) * pStream->interval;
tscWarn("%p stream:%p, last timestamp:0, reset to:%lld", pSql, pStream, stime, stime);
tscWarn("%p stream:%p, last timestamp:0, reset to:%lld", pSql, pStream, stime);
} else {
int64_t newStime = (stime / pStream->interval) * pStream->interval;
if (newStime != stime) {
......@@ -435,13 +436,25 @@ static int64_t tscGetLaunchTimestamp(const SSqlStream *pStream) {
return (pStream->precision == TSDB_TIME_PRECISION_MICRO) ? timer / 1000L : timer;
}
static void setErrorInfo(STscObj* pObj, int32_t code, char* info) {
if (pObj == NULL) {
return;
}
SSqlCmd* pCmd = &pObj->pSql->cmd;
pObj->pSql->res.code = code;
strncpy(pCmd->payload, info, pCmd->payloadLen);
}
TAOS_STREAM *taos_open_stream(TAOS *taos, char *sqlstr, void (*fp)(void *param, TAOS_RES *, TAOS_ROW row),
int64_t stime, void *param, void (*callback)(void *)) {
STscObj *pObj = (STscObj *)taos;
if (pObj == NULL || pObj->signature != pObj) return NULL;
SSqlObj *pSql = (SSqlObj *)calloc(1, sizeof(SSqlObj));
if (pSql == NULL) {
setErrorInfo(pObj, TSDB_CODE_CLI_OUT_OF_MEMORY, NULL);
return NULL;
}
......@@ -451,22 +464,31 @@ TAOS_STREAM *taos_open_stream(TAOS *taos, char *sqlstr, void (*fp)(void *param,
SSqlRes *pRes = &pSql->res;
tscAllocPayloadWithSize(pCmd, TSDB_DEFAULT_PAYLOAD_SIZE);
pSql->sqlstr = strdup(sqlstr);
if (pSql->sqlstr == NULL) {
setErrorInfo(pObj, TSDB_CODE_CLI_OUT_OF_MEMORY, NULL);
tfree(pSql);
return NULL;
}
sem_init(&pSql->rspSem, 0, 0);
sem_init(&pSql->emptyRspSem, 0, 1);
SSqlInfo SQLInfo = {0};
tSQLParse(&SQLInfo, pSql->sqlstr);
tscCleanSqlCmd(&pSql->cmd);
tscAllocPayloadWithSize(&pSql->cmd, TSDB_DEFAULT_PAYLOAD_SIZE);
//todo refactor later
pSql->cmd.count = 1;
pRes->code = tscToSQLCmd(pSql, &SQLInfo);
SQLInfoDestroy(&SQLInfo);
if (pRes->code != TSDB_CODE_SUCCESS) {
setErrorInfo(pObj, pRes->code, pCmd->payload);
tscError("%p open stream failed, sql:%s, reason:%s, code:%d", pSql, sqlstr, pCmd->payload, pRes->code);
tscFreeSqlObj(pSql);
return NULL;
......@@ -474,6 +496,8 @@ TAOS_STREAM *taos_open_stream(TAOS *taos, char *sqlstr, void (*fp)(void *param,
SSqlStream *pStream = (SSqlStream *)calloc(1, sizeof(SSqlStream));
if (pStream == NULL) {
setErrorInfo(pObj, TSDB_CODE_CLI_OUT_OF_MEMORY, NULL);
tscError("%p open stream failed, sql:%s, reason:%s, code:%d", pSql, sqlstr, pCmd->payload, pRes->code);
tscFreeSqlObj(pSql);
return NULL;
......
......@@ -17,6 +17,7 @@
#include <math.h>
#include <time.h>
#include "ihash.h"
#include "taosmsg.h"
#include "tcache.h"
#include "tkey.h"
......@@ -31,9 +32,10 @@
/*
* the detailed information regarding metric meta key is:
* fullmetername + '.' + querycond + '.' + [tagId1, tagId2,...] + '.' + group_orderType
*
* if querycond is null, its format is:
* fullmetername + '.' + '(nil)' + '.' + [tagId1, tagId2,...] + '.' + group_orderType
*/
void tscGetMetricMetaCacheKey(SSqlCmd* pCmd, char* keyStr) {
char* pTagCondStr = NULL;
......@@ -60,8 +62,7 @@ void tscGetMetricMetaCacheKey(SSqlCmd* pCmd, char* keyStr) {
pTagCondStr = strdup(tsGetMetricQueryCondPos(&pCmd->tagCond));
}
int32_t keyLen = sprintf(keyStr, "%s.%s.[%s].%d", pCmd->name, pTagCondStr, tagIdBuf, pCmd->groupbyExpr.orderType);
free(pTagCondStr);
assert(keyLen <= TSDB_MAX_TAGS_LEN);
......@@ -142,8 +143,7 @@ bool tscProjectionQueryOnMetric(SSqlObj* pSql) {
/*
* In following cases, return false for project query on metric
* 1. failed to get metermeta from server; 2. not a metric; 3. limit 0; 4. show query, instead of a select query
*/
if (pCmd->pMeterMeta == NULL || !UTIL_METER_IS_METRIC(pCmd) || pCmd->command == TSDB_SQL_RETRIEVE_EMPTY_RESULT ||
pCmd->exprsInfo.numOfExprs == 0) {
......@@ -252,7 +252,7 @@ void tscDestroyResPointerInfo(SSqlRes* pRes) {
}
void tscfreeSqlCmdData(SSqlCmd* pCmd) {
pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
tscTagCondRelease(&pCmd->tagCond);
tscClearFieldInfo(pCmd);
......@@ -334,20 +334,22 @@ void tscFreeSqlObj(SSqlObj* pSql) {
free(pSql);
}
STableDataBlocks* tscCreateDataBlock(int32_t size) {
STableDataBlocks* dataBuf = (STableDataBlocks*)calloc(1, sizeof(STableDataBlocks));
dataBuf->nAllocSize = (uint32_t)size;
dataBuf->pData = calloc(1, dataBuf->nAllocSize);
dataBuf->ordered = true;
dataBuf->prevTS = INT64_MIN;
return dataBuf;
}
void tscDestroyDataBlock(STableDataBlocks* pDataBlock) {
if (pDataBlock == NULL) {
return;
}
tfree(pDataBlock->pData);
tfree(pDataBlock);
}
SDataBlockList* tscCreateBlockArrayList() {
......@@ -360,29 +362,31 @@ SDataBlockList* tscCreateBlockArrayList() {
return pDataBlockArrayList;
}
void* tscDestroyBlockArrayList(SDataBlockList* pList) {
if (pList == NULL) {
return NULL;
}
for (int32_t i = 0; i < pList->nSize; i++) {
tscDestroyDataBlock(pList->pData[i]);
}
tfree(pList->pData);
tfree(pList);
return NULL;
}
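`tscDestroyBlockArrayList` now returns `NULL` so a caller can free the list and clear its pointer in one statement, as `tscfreeSqlCmdData` does with `pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks)`. A generic sketch of the idiom (names hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

/* Free a buffer and return NULL, so the caller can overwrite its own
 * pointer in the same statement and never keep a dangling value. */
static void *destroy_buffer(void *p) {
  free(p); /* free(NULL) is a harmless no-op */
  return NULL;
}
```

Typical use: `buf = destroy_buffer(buf);` leaves `buf` safely `NULL` and makes a double destroy a no-op instead of a double free.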
int32_t tscCopyDataBlockToPayload(SSqlObj* pSql, STableDataBlocks* pDataBlock) {
SSqlCmd* pCmd = &pSql->cmd;
pCmd->count = pDataBlock->numOfMeters;
strncpy(pCmd->name, pDataBlock->meterId, TSDB_METER_ID_LEN);
tscAllocPayloadWithSize(pCmd, pDataBlock->nAllocSize);
memcpy(pCmd->payload, pDataBlock->pData, pDataBlock->nAllocSize);
// set the message length
pCmd->payloadLen = pDataBlock->nAllocSize;
return tscGetMeterMeta(pSql, pCmd->name);
}
......@@ -390,12 +394,91 @@ int32_t tscCopyDataBlockToPayload(SSqlObj* pSql, SInsertedDataBlocks* pDataBlock
void tscFreeUnusedDataBlocks(SDataBlockList* pList) {
/* release additional memory consumption */
for (int32_t i = 0; i < pList->nSize; ++i) {
STableDataBlocks* pDataBlock = pList->pData[i];
pDataBlock->pData = realloc(pDataBlock->pData, pDataBlock->size);
pDataBlock->nAllocSize = (uint32_t)pDataBlock->size;
}
}
STableDataBlocks* tscCreateDataBlockEx(size_t size, int32_t rowSize, int32_t startOffset, char* name) {
STableDataBlocks* dataBuf = tscCreateDataBlock(size);
dataBuf->rowSize = rowSize;
dataBuf->size = startOffset;
strncpy(dataBuf->meterId, name, TSDB_METER_ID_LEN);
return dataBuf;
}
STableDataBlocks* tscGetDataBlockFromList(void* pHashList, SDataBlockList* pDataBlockList, int64_t id, int32_t size,
int32_t startOffset, int32_t rowSize, char* tableId) {
STableDataBlocks* dataBuf = NULL;
STableDataBlocks** t1 = (STableDataBlocks**)taosGetIntHashData(pHashList, id);
if (t1 != NULL) {
dataBuf = *t1;
}
if (dataBuf == NULL) {
dataBuf = tscCreateDataBlockEx((size_t)size, rowSize, startOffset, tableId);
dataBuf = *(STableDataBlocks**)taosAddIntHash(pHashList, id, (char*)&dataBuf);
tscAppendDataBlock(pDataBlockList, dataBuf);
}
return dataBuf;
}
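`tscGetDataBlockFromList` implements a get-or-create lookup: probe the int-hash by vnode id, and only when the block is absent create it and register it in both the hash and the flat list. A minimal sketch of the same pattern, with a linear scan standing in for the hash (all names hypothetical, not TDengine API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define MAX_BLOCKS 8

typedef struct {
  long  id;   /* lookup key, like the vnode/table id */
  char *buf;  /* payload buffer for this block */
} Block;

typedef struct {
  Block *items[MAX_BLOCKS];
  size_t n;
} BlockList;

/* Return the block for `id`, creating and registering it on first use,
 * so that all later rows for the same id land in one buffer. */
static Block *get_or_create(BlockList *list, long id, size_t size) {
  for (size_t i = 0; i < list->n; ++i) { /* probe (hash lookup in the original) */
    if (list->items[i]->id == id) return list->items[i];
  }
  if (list->n == MAX_BLOCKS) return NULL; /* list full */
  Block *b = calloc(1, sizeof(Block));
  b->id = id;
  b->buf = calloc(1, size);
  list->items[list->n++] = b; /* register so later lookups reuse it */
  return b;
}
```

The design point is that the create path runs at most once per key, so rows destined for the same vnode are accumulated in a single buffer rather than one message per row.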
void tscMergeTableDataBlocks(SSqlObj* pSql, SDataBlockList* pTableDataBlockList) {
SSqlCmd* pCmd = &pSql->cmd;
void* pVnodeDataBlockHashList = taosInitIntHash(8, sizeof(void*), taosHashInt);
SDataBlockList* pVnodeDataBlockList = tscCreateBlockArrayList();
for (int32_t i = 0; i < pTableDataBlockList->nSize; ++i) {
STableDataBlocks* pOneTableBlock = pTableDataBlockList->pData[i];
STableDataBlocks* dataBuf =
tscGetDataBlockFromList(pVnodeDataBlockHashList, pVnodeDataBlockList, pOneTableBlock->vgid, TSDB_PAYLOAD_SIZE,
tsInsertHeadSize, 0, pOneTableBlock->meterId);
int64_t destSize = dataBuf->size + pOneTableBlock->size;
if (dataBuf->nAllocSize < destSize) {
while (dataBuf->nAllocSize < destSize) {
dataBuf->nAllocSize = dataBuf->nAllocSize * 1.5;
}
char* tmp = realloc(dataBuf->pData, dataBuf->nAllocSize);
if (tmp != NULL) {
dataBuf->pData = tmp;
memset(dataBuf->pData + dataBuf->size, 0, dataBuf->nAllocSize - dataBuf->size);
} else {
// todo: handle realloc failure
}
}
SShellSubmitBlock* pBlocks = (SShellSubmitBlock*)pOneTableBlock->pData;
sortRemoveDuplicates(pOneTableBlock);
tscTrace("%p meterId:%s, sid:%d, rows:%d, sversion:%d", pSql, pOneTableBlock->meterId, pBlocks->sid,
pBlocks->numOfRows, pBlocks->sversion);
pBlocks->sid = htonl(pBlocks->sid);
pBlocks->uid = htobe64(pBlocks->uid);
pBlocks->sversion = htonl(pBlocks->sversion);
pBlocks->numOfRows = htons(pBlocks->numOfRows);
memcpy(dataBuf->pData + dataBuf->size, pOneTableBlock->pData, pOneTableBlock->size);
dataBuf->size += pOneTableBlock->size;
dataBuf->numOfMeters += 1;
}
tscDestroyBlockArrayList(pTableDataBlockList);  // free the table data blocks
pCmd->pDataBlocks = pVnodeDataBlockList;
tscFreeUnusedDataBlocks(pCmd->pDataBlocks);
taosCleanUpIntHash(pVnodeDataBlockHashList);
}
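The merge loop above grows the destination buffer geometrically (factor 1.5) until it covers destSize, reallocs once, then zeroes the newly exposed tail. A standalone sketch of that strategy, with a guard against a zero-sized starting allocation (names are illustrative):

```c
#include <stdlib.h>
#include <string.h>

// Grow buf's allocation by 1.5x steps until it can hold destSize bytes.
// Returns the (possibly moved) buffer, or NULL if realloc fails, in which
// case the caller still owns the old buffer.
static char *ensureCapacity(char *buf, size_t *allocSize, size_t used, size_t destSize) {
  if (*allocSize >= destSize) return buf;

  size_t newSize = *allocSize ? *allocSize : 16;  // avoid a zero-size loop
  while (newSize < destSize) {
    newSize += newSize / 2;  // grow by 1.5x per step
  }

  char *tmp = realloc(buf, newSize);
  if (tmp == NULL) return NULL;

  memset(tmp + used, 0, newSize - used);  // zero the newly exposed tail
  *allocSize = newSize;
  return tmp;
}
```

Geometric growth keeps the number of realloc calls logarithmic in the final size, which matters when many small table blocks are merged into one vnode payload.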
void tscCloseTscObj(STscObj* pObj) {
pObj->signature = NULL;
SSqlObj* pSql = pObj->pSql;
......@@ -821,15 +904,18 @@ int32_t tscValidateName(SSQLToken* pToken) {
pToken->n = strdequote(pToken->z);
strtrim(pToken->z);
pToken->n = (uint32_t)strlen(pToken->z);
int len = tSQLGetToken(pToken->z, &pToken->type);
// single token, validate it
if (len == pToken->n) {
return validateQuoteToken(pToken);
} else {
sep = strnchrNoquote(pToken->z, TS_PATH_DELIMITER[0], pToken->n);
if (sep == NULL) {
return TSDB_CODE_INVALID_SQL;
}
return tscValidateName(pToken);
}
} else {
......@@ -965,8 +1051,7 @@ void tscSetFreeHeatBeat(STscObj* pObj) {
SSqlObj* pHeatBeat = pObj->pHb;
assert(pHeatBeat == pHeatBeat->signature);
pHeatBeat->cmd.type = 1; // to denote the heart-beat timer close connection and free all allocated resources
}
bool tscShouldFreeHeatBeat(SSqlObj* pHb) {
......@@ -1052,7 +1137,6 @@ void tscDoQuery(SSqlObj* pSql) {
if (pCmd->command > TSDB_SQL_LOCAL) {
tscProcessLocalCmd(pSql);
} else {
// add to sql list, so that the show queries could get the query info
if (pCmd->command == TSDB_SQL_SELECT) {
tscAddIntoSqlList(pSql);
}
......@@ -1061,18 +1145,19 @@ void tscDoQuery(SSqlObj* pSql) {
pSql->cmd.vnodeIdx += 1;
}
void* fp = pSql->fp;
if (pCmd->isInsertFromFile == 1) {
tscProcessMultiVnodesInsertForFile(pSql);
} else {
// pSql may be released in this function if it is an async insertion.
tscProcessSql(pSql);
}
// handle the multi-vnode insertion for sync model
if (fp == NULL) {
assert(pSql->signature == pSql);
tscProcessMultiVnodesInsert(pSql);
}
}
}
......@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>1.0.0</version>
<version>1.0.1</version>
<name>JDBCDriver</name>
<description>TDengine JDBC Driver</description>
<properties>
......
......@@ -24,6 +24,8 @@ public abstract class TSDBConstants {
public static final String INVALID_VARIABLES = "invalid variables";
public static Map<Integer, String> DATATYPE_MAP = null;
public static final long JNI_NULL_POINTER = 0L;
public static final int JNI_SUCCESS = 0;
public static final int JNI_TDENGINE_ERROR = -1;
public static final int JNI_CONNECTION_NULL = -2;
......
......@@ -19,7 +19,6 @@ import java.sql.SQLWarning;
import java.util.List;
public class TSDBJNIConnector {
static final long INVALID_CONNECTION_POINTER_VALUE = 0l;
static volatile Boolean isInitialized = false;
static {
......@@ -29,7 +28,12 @@ public class TSDBJNIConnector {
/**
* Connection pointer used in C
*/
private long taos = TSDBConstants.JNI_NULL_POINTER;
/**
* Result set pointer for the current connection
*/
private long taosResultSetPointer = TSDBConstants.JNI_NULL_POINTER;
/**
* result set status in current connection
......@@ -41,7 +45,7 @@ public class TSDBJNIConnector {
* Whether the connection is closed
*/
public boolean isClosed() {
return this.taos == TSDBConstants.JNI_NULL_POINTER;
}
/**
......@@ -86,13 +90,13 @@ public class TSDBJNIConnector {
* @throws SQLException
*/
public boolean connect(String host, int port, String dbName, String user, String password) throws SQLException {
if (this.taos != TSDBConstants.JNI_NULL_POINTER) {
this.closeConnectionImp(this.taos);
this.taos = TSDBConstants.JNI_NULL_POINTER;
}
this.taos = this.connectImp(host, port, dbName, user, password);
if (this.taos == TSDBConstants.JNI_NULL_POINTER) {
throw new SQLException(TSDBConstants.WrapErrMsg(this.getErrMsg()), "", this.getErrCode());
}
......@@ -108,13 +112,7 @@ public class TSDBJNIConnector {
*/
public int executeQuery(String sql) throws SQLException {
if (!this.isResultsetClosed) {
freeResultSet(taosResultSetPointer);
}
int code;
......@@ -134,6 +132,13 @@ public class TSDBJNIConnector {
}
}
// Try retrieving result set for the executed SQL using the current connection pointer. If the executed
// SQL is a DML/DDL which doesn't return a result set, then taosResultSetPointer should be 0L. Otherwise,
// taosResultSetPointer should be a non-zero value.
taosResultSetPointer = this.getResultSetImp(this.taos);
if (taosResultSetPointer != TSDBConstants.JNI_NULL_POINTER) {
isResultsetClosed = false;
}
return code;
}
......@@ -162,8 +167,7 @@ public class TSDBJNIConnector {
* Each connection should have a single open result set at a time
*/
public long getResultSet() {
return taosResultSetPointer;
}
private native long getResultSetImp(long connection);
......@@ -172,11 +176,31 @@ public class TSDBJNIConnector {
* Free resultset operation from C to release resultset pointer by JNI
*/
public int freeResultSet(long result) {
int res = TSDBConstants.JNI_SUCCESS;
if (result != taosResultSetPointer && taosResultSetPointer != TSDBConstants.JNI_NULL_POINTER) {
throw new RuntimeException("Invalid result set pointer");
} else if (taosResultSetPointer != TSDBConstants.JNI_NULL_POINTER){
res = this.freeResultSetImp(this.taos, result);
isResultsetClosed = true; // reset resultSetPointer to 0 after freeResultSetImp() return
taosResultSetPointer = TSDBConstants.JNI_NULL_POINTER;
}
return res;
}
/**
* Close the open result set associated with the current connection. If the result set is already
* closed, return 0 for success.
* @return
*/
public int freeResultSet() {
int resCode = TSDBConstants.JNI_SUCCESS;
if (!isResultsetClosed) {
resCode = this.freeResultSetImp(this.taos, this.taosResultSetPointer);
taosResultSetPointer = TSDBConstants.JNI_NULL_POINTER;
}
return resCode;
}
private native int freeResultSetImp(long connection, long result);
/**
......@@ -220,7 +244,7 @@ public class TSDBJNIConnector {
if (code < 0) {
throw new SQLException(TSDBConstants.FixErrMsg(code), "", this.getErrCode());
} else if (code == 0){
this.taos = TSDBConstants.JNI_NULL_POINTER;
} else {
throw new SQLException("Undefined error code returned by TDengine when closing a connection");
}
......
......@@ -244,7 +244,7 @@ public class TSDBPreparedStatement extends TSDBStatement implements PreparedStat
@Override
public boolean execute() throws SQLException {
return super.execute(getNativeSql());
}
@Override
......
......@@ -27,8 +27,14 @@ public class TSDBStatement implements Statement {
/** Timeout for a query */
protected int queryTimeout = 0;
/**
* Status of current statement
*/
private boolean isClosed = true;
TSDBStatement(TSDBJNIConnector connecter) {
this.connecter = connecter;
this.isClosed = false;
}
public <T> T unwrap(Class<T> iface) throws SQLException {
......@@ -40,13 +46,16 @@ public class TSDBStatement implements Statement {
}
public ResultSet executeQuery(String sql) throws SQLException {
if (isClosed) {
throw new SQLException("Invalid method call on a closed statement.");
}
this.connecter.executeQuery(sql);
long resultSetPointer = this.connecter.getResultSet();
if (resultSetPointer == TSDBConstants.JNI_CONNECTION_NULL) {
throw new SQLException(TSDBConstants.FixErrMsg(TSDBConstants.JNI_CONNECTION_NULL));
} else if (resultSetPointer == TSDBConstants.JNI_NULL_POINTER) {
return null;
} else {
return new TSDBResultSet(this.connecter, resultSetPointer);
......@@ -54,7 +63,20 @@ public class TSDBStatement implements Statement {
}
public int executeUpdate(String sql) throws SQLException {
if (isClosed) {
throw new SQLException("Invalid method call on a closed statement.");
}
int res = this.connecter.executeQuery(sql);
long resultSetPointer = this.connecter.getResultSet();
if (resultSetPointer == TSDBConstants.JNI_CONNECTION_NULL) {
throw new SQLException(TSDBConstants.FixErrMsg(TSDBConstants.JNI_CONNECTION_NULL));
} else if (resultSetPointer != TSDBConstants.JNI_NULL_POINTER) {
this.connecter.freeResultSet();
throw new SQLException("The executed SQL is not a DML or a DDL");
} else {
return res;
}
}
public String getErrorMsg() {
......@@ -62,6 +84,12 @@ public class TSDBStatement implements Statement {
}
public void close() throws SQLException {
if (!isClosed) {
if (!this.connecter.isResultsetClosed()) {
this.connecter.freeResultSet();
}
isClosed = true;
}
}
public int getMaxFieldSize() throws SQLException {
......@@ -110,19 +138,38 @@ public class TSDBStatement implements Statement {
}
public boolean execute(String sql) throws SQLException {
if (isClosed) {
throw new SQLException("Invalid method call on a closed statement.");
}
boolean res = true;
this.connecter.executeQuery(sql);
long resultSetPointer = this.connecter.getResultSet();
if (resultSetPointer == TSDBConstants.JNI_CONNECTION_NULL) {
throw new SQLException(TSDBConstants.FixErrMsg(TSDBConstants.JNI_CONNECTION_NULL));
} else if (resultSetPointer == TSDBConstants.JNI_NULL_POINTER) {
// no result set is retrieved
res = false;
}
return res;
}
public ResultSet getResultSet() throws SQLException {
if (isClosed) {
throw new SQLException("Invalid method call on a closed statement.");
}
long resultSetPointer = connecter.getResultSet();
TSDBResultSet resSet = null;
if (resultSetPointer != TSDBConstants.JNI_NULL_POINTER) {
resSet = new TSDBResultSet(connecter, resultSetPointer);
}
return resSet;
}
public int getUpdateCount() throws SQLException {
if (isClosed) {
throw new SQLException("Invalid method call on a closed statement.");
}
return this.connecter.getAffectedRows();
}
......@@ -171,6 +218,9 @@ public class TSDBStatement implements Statement {
}
public int[] executeBatch() throws SQLException {
if (isClosed) {
throw new SQLException("Invalid method call on a closed statement.");
}
if (batchedArgs == null) {
throw new SQLException(TSDBConstants.WrapErrMsg("Batch is empty!"));
} else {
......@@ -223,7 +273,7 @@ public class TSDBStatement implements Statement {
}
public boolean isClosed() throws SQLException {
return isClosed;
}
public void setPoolable(boolean poolable) throws SQLException {
......
......@@ -184,7 +184,7 @@ void tColModelDisplayEx(tColModel *pModel, void *pData, int32_t numOfRows, int32
/*
* compress data into consecutive block without hole in data
*/
void tColModelCompact(tColModel *pModel, tFilePage *inputBuffer, int32_t maxElemsCapacity);
void tColModelErase(tColModel *pModel, tFilePage *inputBuffer, int32_t maxCapacity, int32_t s, int32_t e);
......
......@@ -69,7 +69,7 @@ int32_t taosGetNumOfResWithoutLimit(SInterpolationInfo *pInterpoInfo, int64_t *p
* @param pInterpoInfo
* @return
*/
bool taosHasRemainsDataForInterpolation(SInterpolationInfo *pInterpoInfo);
int32_t taosNumOfRemainPoints(SInterpolationInfo *pInterpoInfo);
......
......@@ -626,6 +626,7 @@ void source_file(TAOS *con, char *fptr) {
}
while ((read_len = getline(&line, &line_len, f)) != -1) {
if (read_len >= MAX_COMMAND_SIZE) continue;
line[--read_len] = '\0';
if (read_len == 0 || isCommentLine(line)) { // line starts with #
......
......@@ -64,14 +64,12 @@ bool gcProcessLoginRequest(HttpContext* pContext) {
//[{
// "refId": "A",
// "alias" : "taosd",
// "sql" : "select first(taosd) from sys.mem where ts > now-6h and ts < now interval(20000a)"
//},
//{
// "refId": "B",
// "alias" : "system",
// "sql" : "select first(taosd) from sys.mem where ts > now-6h and ts < now interval(20000a)"
//}]
// output
//[{
......
......@@ -20,7 +20,8 @@ char* httpMsg[] = {
"http method parse error", // 3
"http version should be 1.0, 1.1 or 1.2", // 4
"http head parse error", // 5
"request size is too big",
"http body size invalid",
"http chunked body parse error", // 8
"http url parse error", // 9
"invalid type of Authorization",
......@@ -52,7 +53,8 @@ char* httpMsg[] = {
"tags not find",
"tags size is 0",
"tags size too long", // 36
"tag is null",
"tag name is null",
"tag name length too long", // 39
"tag value type should be number or string",
"tag value is null",
......
......@@ -69,11 +69,12 @@ enum _sync_cmd {
};
enum _meter_state {
TSDB_METER_STATE_READY = 0x00,
TSDB_METER_STATE_INSERT = 0x01,
TSDB_METER_STATE_IMPORTING = 0x02,
TSDB_METER_STATE_UPDATING = 0x04,
TSDB_METER_STATE_DELETING = 0x10,
TSDB_METER_STATE_DELETED = 0x18,
};
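The revised `_meter_state` values are bit flags: INSERT, IMPORTING and UPDATING occupy distinct bits, DELETING sits above them, and DELETED extends DELETING (0x18 = 0x10 | 0x08). With that encoding, transient states can be set, tested and cleared with mask operations, and "being deleted" is a single-bit check even while another flag is raised. An illustrative sketch (not TDengine's actual `vnodeTransferMeterState` implementation):

```c
enum {
  METER_STATE_READY     = 0x00,
  METER_STATE_INSERT    = 0x01,
  METER_STATE_IMPORTING = 0x02,
  METER_STATE_UPDATING  = 0x04,
  METER_STATE_DELETING  = 0x10,
};

// Set a transient state bit and return the combined state.
static int transferState(int *state, int newState) { *state |= newState; return *state; }

// Drop a transient state bit when the operation finishes.
static void clearState(int *state, int oldState)   { *state &= ~oldState; }

// Test whether a given state bit is currently raised.
static int isState(int state, int flag)            { return (state & flag) != 0; }
```

This is why the patch replaces direct assignments like `pObj->state = TSDB_METER_STATE_READY` with clear/transfer calls: assignment would wipe unrelated bits set by concurrent operations.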
typedef struct {
......@@ -184,10 +185,10 @@ typedef struct _meter_obj {
short sqlLen;
char searchAlgorithm : 4;
char compAlgorithm : 4;
char status; // 0: ok, 1: stop stream computing
char reserved[16];
int state;
int numOfQueries;
char * pSql;
void * pStream;
......@@ -418,10 +419,6 @@ void vnodeCommitOver(SVnodeObj *pVnode);
TSKEY vnodeGetFirstKey(int vnode);
int vnodeSyncRetrieveCache(int vnode, int fd);
int vnodeSyncRestoreCache(int vnode, int fd);
pthread_t vnodeCreateCommitThread(SVnodeObj *pVnode);
void vnodeCancelCommit(SVnodeObj *pVnode);
......@@ -447,10 +444,6 @@ void *vnodeCommitToFile(void *param);
void *vnodeCommitMultiToFile(SVnodeObj *pVnode, int ssid, int esid);
int vnodeSyncRetrieveFile(int vnode, int fd, uint32_t fileId, uint64_t *fmagic);
int vnodeSyncRestoreFile(int vnode, int sfd);
int vnodeWriteBlockToFile(SMeterObj *pObj, SCompBlock *pBlock, SData *data[], SData *cdata[], int pointsRead);
int vnodeSearchPointInFile(SMeterObj *pObj, SQuery *pQuery);
......@@ -476,14 +469,8 @@ void *vnodeGetMeterPeerConnection(SMeterObj *pObj, int index);
int vnodeForwardToPeer(SMeterObj *pObj, char *msg, int msgLen, char action, int sversion);
void vnodeCloseAllSyncFds(int vnode);
void vnodeConfigVPeers(int vnode, int numOfPeers, SVPeerDesc peerDesc[]);
void vnodeStartSyncProcess(SVnodeObj *pVnode);
void vnodeCancelSync(int vnode);
void vnodeListPeerStatus(char *buffer);
void vnodeCheckOwnStatus(SVnodeObj *pVnode);
......@@ -499,7 +486,7 @@ int vnodeInitStore();
void vnodeCleanUpVnodes();
int vnodeRemoveVnode(int vnode);
int vnodeCreateVnode(int vnode, SVnodeCfg *pCfg, SVPeerDesc *pDesc);
......
......@@ -75,6 +75,12 @@ int32_t vnodeIncQueryRefCount(SQueryMeterMsg *pQueryMsg, SMeterSidExtInfo **pSid
void vnodeDecQueryRefCount(SQueryMeterMsg *pQueryMsg, SMeterObj **pMeterObjList, int32_t numOfInc);
int32_t vnodeTransferMeterState(SMeterObj* pMeterObj, int32_t state);
void vnodeClearMeterState(SMeterObj* pMeterObj, int32_t state);
bool vnodeIsMeterState(SMeterObj* pMeterObj, int32_t state);
void vnodeSetMeterDeleting(SMeterObj* pMeterObj);
bool vnodeIsSafeToDeleteMeter(SVnodeObj* pVnode, int32_t sid);
#ifdef __cplusplus
}
#endif
......
......@@ -113,7 +113,7 @@ int vnodeProcessCreateMeterRequest(char *pMsg) {
pVnode = vnodeList + vid;
if (pVnode->cfg.maxSessions <= 0) {
dError("vid:%d, not activated", vid);
code = TSDB_CODE_NOT_ACTIVE_SESSION;
goto _over;
}
......@@ -215,7 +215,7 @@ int vnodeProcessCreateMeterMsg(char *pMsg) {
if (pVnode->pCachePool == NULL) {
dError("vid:%d is not activated yet", pCreate->vnode);
vnodeSendVpeerCfgMsg(pCreate->vnode);
code = TSDB_CODE_NOT_ACTIVE_SESSION;
goto _create_over;
}
......@@ -445,7 +445,8 @@ int vnodeProcessFreeVnodeRequest(char *pMsg) {
}
dTrace("vid:%d receive free vnode message", pFree->vnode);
int32_t code = vnodeRemoveVnode(pFree->vnode);
assert(code == TSDB_CODE_SUCCESS || code == TSDB_CODE_ACTION_IN_PROGRESS);
pStart = (char *)malloc(128);
if (pStart == NULL) return 0;
......@@ -453,7 +454,7 @@ int vnodeProcessFreeVnodeRequest(char *pMsg) {
*pStart = TSDB_MSG_TYPE_FREE_VNODE_RSP;
pMsg = pStart + 1;
*pMsg = code;
vnodeSendMsgToMgmt(pStart);
return 0;
......
......@@ -44,7 +44,7 @@ void dnodeInitModules() {
tsModule[TSDB_MOD_HTTP].cleanUpFp = httpCleanUpSystem;
tsModule[TSDB_MOD_HTTP].startFp = httpStartSystem;
tsModule[TSDB_MOD_HTTP].stopFp = httpStopSystem;
tsModule[TSDB_MOD_HTTP].num = (tsEnableHttpModule == 1) ? -1 : 0;
tsModule[TSDB_MOD_HTTP].curNum = 0;
tsModule[TSDB_MOD_HTTP].equalVnodeNum = 0;
......@@ -53,7 +53,7 @@ void dnodeInitModules() {
tsModule[TSDB_MOD_MONITOR].cleanUpFp = monitorCleanUpSystem;
tsModule[TSDB_MOD_MONITOR].startFp = monitorStartSystem;
tsModule[TSDB_MOD_MONITOR].stopFp = monitorStopSystem;
tsModule[TSDB_MOD_MONITOR].num = (tsEnableMonitorModule == 1) ? -1 : 0;
tsModule[TSDB_MOD_MONITOR].curNum = 0;
tsModule[TSDB_MOD_MONITOR].equalVnodeNum = 0;
}
......
......@@ -1140,54 +1140,13 @@ static void mgmtReorganizeMetersInMetricMeta(STabObj *pMetric, SMetricMetaMsg *p
startPos[1] = (int32_t)pRes->num;
}
/*
 * sort the result according to vgid to ensure meters with the same vgid is
 * continuous in the result list
 */
__compar_fn_t functor = (pRes->nodeType == TAST_NODE_TYPE_METER_PTR) ? tabObjVGIDComparator : nodeVGIDComparator;
qsort(pRes->pRes, (size_t) pRes->num, POINTER_BYTES, functor);
free(descriptor->pTagSchema);
free(descriptor);
free(startPos);
......
......@@ -340,19 +340,33 @@ void vnodeCommitOver(SVnodeObj *pVnode) {
pthread_mutex_unlock(&pPool->vmutex);
}
static void vnodeWaitForCommitComplete(SVnodeObj *pVnode) {
SCachePool *pPool = (SCachePool *)(pVnode->pCachePool);
if (pPool == NULL) return;
// wait for 100s at most
const int32_t totalCount = 1000;
int32_t count = 0;
// all meters are marked as dropped, so the commit will abort very quickly
while(count++ < totalCount) {
int32_t commitInProcess = 0;
pthread_mutex_lock(&pPool->vmutex);
commitInProcess = pPool->commitInProcess;
pthread_mutex_unlock(&pPool->vmutex);
if (commitInProcess) {
dWarn("vid:%d still in commit, wait for completed", pVnode->vnode);
taosMsleep(10);
}
}
}
void vnodeCancelCommit(SVnodeObj *pVnode) {
SCachePool *pPool = (SCachePool *)(pVnode->pCachePool);
if (pPool == NULL) return;
vnodeWaitForCommitComplete(pVnode);
taosTmrReset(vnodeProcessCommitTimer, pVnode->cfg.commitTime * 1000, pVnode, vnodeTmrCtrl, &pVnode->commitTimer);
}
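The rewrite above replaces `pthread_cancel` of the commit thread with a bounded polling loop: read a mutex-protected flag, sleep briefly while it is still set, and give up after a fixed number of attempts. A minimal sketch of that pattern (illustrative names; the sleep is elided so the sketch stays self-contained):

```c
#include <pthread.h>

// A commit worker publishes its status through a mutex-protected flag.
typedef struct {
  pthread_mutex_t mutex;
  int inProgress;
} Commit;

// Poll the flag up to maxTries times. Returns 0 once the commit has
// finished, or -1 if it is still running after all attempts.
static int waitForCommit(Commit *c, int maxTries) {
  for (int i = 0; i < maxTries; ++i) {
    pthread_mutex_lock(&c->mutex);
    int busy = c->inProgress;          // snapshot the flag under the lock
    pthread_mutex_unlock(&c->mutex);

    if (!busy) return 0;               // commit finished
    /* in real code: sleep ~10ms here before retrying */
  }
  return -1;                           // timed out; caller decides what next
}
```

Waiting for completion is safer than cancelling: `pthread_cancel` can kill the commit thread while it holds locks or has files half-written, whereas polling lets it reach a clean stopping point.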
......
......@@ -26,6 +26,7 @@
#include "tsdb.h"
#include "vnode.h"
#include "vnodeUtil.h"
typedef struct {
int sversion;
......@@ -160,9 +161,13 @@ size_t vnodeRestoreDataFromLog(int vnode, char *fileName, uint64_t *firstV) {
if (*(int *)(cont+head.contLen) != simpleCheck) break;
SMeterObj *pObj = pVnode->meterList[head.sid];
if (pObj == NULL) {
dError("vid:%d, sid:%d not exists, ignore data in commit log, contLen:%d action:%d",
vnode, head.sid, head.contLen, head.action);
continue;
}
if (vnodeIsMeterState(pObj, TSDB_METER_STATE_DELETING)) {
dWarn("vid:%d sid:%d id:%s, meter is dropped, ignore data in commit log, contLen:%d action:%d",
vnode, head.sid, head.contLen, head.action);
continue;
}
......
......@@ -408,7 +408,6 @@ void vnodeCloseCommitFiles(SVnodeObj *pVnode) {
char dpath[TSDB_FILENAME_LEN] = "\0";
int fileId;
int ret;
int file_removed = 0;
close(pVnode->nfd);
pVnode->nfd = 0;
......@@ -449,14 +448,15 @@ void vnodeCloseCommitFiles(SVnodeObj *pVnode) {
dTrace("vid:%d, %s and %s is saved", pVnode->vnode, pVnode->cfn, pVnode->lfn);
if (pVnode->numOfFiles > pVnode->maxFiles) {
// Retention policy here
fileId = pVnode->fileId - pVnode->numOfFiles + 1;
int cfile = taosGetTimestamp(pVnode->cfg.precision)/pVnode->cfg.daysPerFile/tsMsPerDay[pVnode->cfg.precision];
while (fileId <= cfile - pVnode->maxFiles) {
vnodeRemoveFile(pVnode->vnode, fileId);
pVnode->numOfFiles--;
file_removed = 1;
fileId++;
}
if (!file_removed) vnodeUpdateFileMagic(pVnode->vnode, pVnode->commitFileId);
vnodeSaveAllMeterObjToFile(pVnode->vnode);
return;
......@@ -577,8 +577,20 @@ _again:
// read compInfo
for (sid = 0; sid < pCfg->maxSessions; ++sid) {
if (pVnode->meterList == NULL) { // vnode is being freed, abort
goto _over;
}
pObj = (SMeterObj *)(pVnode->meterList[sid]);
if (pObj == NULL) {
continue;
}
// meter is going to be deleted, abort
if (vnodeIsMeterState(pObj, TSDB_METER_STATE_DELETING)) {
dWarn("vid:%d sid:%d is dropped, ignore this meter", vnode, sid);
continue;
}
pMeter = meterInfo + sid;
pHeader = ((SCompHeader *)tmem) + sid;
......@@ -672,6 +684,7 @@ _again:
pointsReadLast = pMeter->lastBlock.numOfPoints;
query.over = 0;
headInfo.totalStorage -= (pointsReadLast * pObj->bytesPerPoint);
dTrace("vid:%d sid:%d id:%s, points:%d in last block will be merged to new block",
pObj->vnode, pObj->sid, pObj->meterId, pointsReadLast);
}
......
......@@ -24,6 +24,7 @@
#include "vnode.h"
#include "vnodeMgmt.h"
#include "vnodeShell.h"
#include "vnodeUtil.h"
#pragma GCC diagnostic ignored "-Wpointer-sign"
#pragma GCC diagnostic ignored "-Wint-conversion"
......@@ -281,14 +282,32 @@ void vnodeProcessImportTimer(void *param, void *tmrId) {
SShellObj * pShell = pImport->pShell;
pImport->retry++;
//slow query will block the import operation
int32_t state = vnodeTransferMeterState(pObj, TSDB_METER_STATE_IMPORTING);
if (state >= TSDB_METER_STATE_DELETING) {
dError("vid:%d sid:%d id:%s, meter is deleted, failed to import, state:%d",
pObj->vnode, pObj->sid, pObj->meterId, state);
return;
}
int32_t num = 0;
pthread_mutex_lock(&pVnode->vmutex);
num = pObj->numOfQueries;
pthread_mutex_unlock(&pVnode->vmutex);
//if the num == 0, it will never be increased before state is set to TSDB_METER_STATE_READY
int32_t commitInProcess = 0;
pthread_mutex_lock(&pPool->vmutex);
if (((commitInProcess = pPool->commitInProcess) == 1) || num > 0 || state != TSDB_METER_STATE_READY) {
pthread_mutex_unlock(&pPool->vmutex);
vnodeClearMeterState(pObj, TSDB_METER_STATE_IMPORTING);
if (pImport->retry < 1000) {
dTrace("vid:%d sid:%d id:%s, import failed, retry later. commit in process or queries on it, or not ready."
"commitInProcess:%d, numOfQueries:%d, state:%d", pObj->vnode, pObj->sid, pObj->meterId,
commitInProcess, num, state);
taosTmrStart(vnodeProcessImportTimer, 10, pImport, vnodeTmrCtrl);
return;
} else {
......@@ -304,7 +323,8 @@ void vnodeProcessImportTimer(void *param, void *tmrId) {
}
}
vnodeClearMeterState(pObj, TSDB_METER_STATE_IMPORTING);
pVnode->version++;
// send response back to shell
......@@ -850,10 +870,13 @@ int vnodeImportPoints(SMeterObj *pObj, char *cont, int contLen, char source, voi
}
payload = pSubmit->payLoad;
int firstId = (*(TSKEY *)payload)/pVnode->cfg.daysPerFile/tsMsPerDay[pVnode->cfg.precision];
int lastId = (*(TSKEY *)(payload+pObj->bytesPerPoint*(rows-1)))/pVnode->cfg.daysPerFile/tsMsPerDay[pVnode->cfg.precision];
int cfile = taosGetTimestamp(pVnode->cfg.precision)/pVnode->cfg.daysPerFile/tsMsPerDay[pVnode->cfg.precision];
if ((firstId <= cfile - pVnode->maxFiles) || (firstId > cfile + 1) || (lastId <= cfile - pVnode->maxFiles) || (lastId > cfile + 1)) {
dError("vid:%d sid:%d id:%s, invalid timestamp to import, firstKey: %ld lastKey: %ld",
pObj->vnode, pObj->sid, pObj->meterId, *(TSKEY *)(payload), *(TSKEY *)(payload+pObj->bytesPerPoint*(rows-1)));
return TSDB_CODE_TIMESTAMP_OUT_OF_RANGE;
}
if ( pVnode->cfg.commitLog && source != TSDB_DATA_SOURCE_LOG) {
......@@ -862,15 +885,19 @@ int vnodeImportPoints(SMeterObj *pObj, char *cont, int contLen, char source, voi
}
if (*((TSKEY *)(pSubmit->payLoad + (rows - 1) * pObj->bytesPerPoint)) > pObj->lastKey) {
vnodeClearMeterState(pObj, TSDB_METER_STATE_IMPORTING);
vnodeTransferMeterState(pObj, TSDB_METER_STATE_INSERT);
code = vnodeInsertPoints(pObj, cont, contLen, TSDB_DATA_SOURCE_LOG, NULL, pObj->sversion, &pointsImported);
if (pShell) {
pShell->code = code;
pShell->numOfTotalPoints += pointsImported;
}
vnodeClearMeterState(pObj, TSDB_METER_STATE_INSERT);
} else {
SImportInfo *pNew, import;
dTrace("vid:%d sid:%d id:%s, import %d rows data", pObj->vnode, pObj->sid, pObj->meterId, rows);
memset(&import, 0, sizeof(import));
import.firstKey = *((TSKEY *)(payload));
......@@ -880,10 +907,16 @@ int vnodeImportPoints(SMeterObj *pObj, char *cont, int contLen, char source, voi
import.payload = payload;
import.rows = rows;
int32_t num = 0;
pthread_mutex_lock(&pVnode->vmutex);
num = pObj->numOfQueries;
pthread_mutex_unlock(&pVnode->vmutex);
int32_t commitInProcess = 0;
pthread_mutex_lock(&pPool->vmutex);
if (((commitInProcess = pPool->commitInProcess) == 1) || num > 0) {
pthread_mutex_unlock(&pPool->vmutex);
pNew = (SImportInfo *)malloc(sizeof(SImportInfo));
memcpy(pNew, &import, sizeof(SImportInfo));
......@@ -892,8 +925,9 @@ int vnodeImportPoints(SMeterObj *pObj, char *cont, int contLen, char source, voi
pNew->payload = malloc(payloadLen);
memcpy(pNew->payload, payload, payloadLen);
dTrace("vid:%d sid:%d id:%s, import later, commit in process:%d, numOfQueries:%d", pObj->vnode, pObj->sid,
pObj->meterId, commitInProcess, pObj->numOfQueries);
taosTmrStart(vnodeProcessImportTimer, 10, pNew, vnodeTmrCtrl);
return 0;
} else {
......@@ -907,7 +941,6 @@ int vnodeImportPoints(SMeterObj *pObj, char *cont, int contLen, char source, voi
}
}
pVnode->version++;
if (pShell) {
......@@ -918,6 +951,7 @@ int vnodeImportPoints(SMeterObj *pObj, char *cont, int contLen, char source, voi
return 0;
}
// TODO: abort the procedure if the meter is going to be dropped
int vnodeImportData(SMeterObj *pObj, SImportInfo *pImport) {
int code = 0;
......
......@@ -47,6 +47,8 @@ void vnodeFreeMeterObj(SMeterObj *pObj) {
if (vnodeList[pObj->vnode].meterList != NULL) {
vnodeList[pObj->vnode].meterList[pObj->sid] = NULL;
}
memset(pObj->meterId, 0, tListLen(pObj->meterId));
tfree(pObj);
}
......@@ -143,7 +145,7 @@ int vnodeSaveMeterObjToFile(SMeterObj *pObj) {
memcpy(buffer, pObj, offsetof(SMeterObj, reserved));
memcpy(buffer + offsetof(SMeterObj, reserved), pObj->schema, pObj->numOfColumns * sizeof(SColumn));
memcpy(buffer + offsetof(SMeterObj, reserved) + pObj->numOfColumns * sizeof(SColumn), pObj->pSql, pObj->sqlLen);
taosCalcChecksumAppend(0, buffer, new_length);
taosCalcChecksumAppend(0, (uint8_t *)buffer, new_length);
if (offset == 0 || length < new_length) { // New, append to file end
fseek(fp, 0, SEEK_END);
......@@ -208,7 +210,7 @@ int vnodeSaveAllMeterObjToFile(int vnode) {
memcpy(buffer, pObj, offsetof(SMeterObj, reserved));
memcpy(buffer + offsetof(SMeterObj, reserved), pObj->schema, pObj->numOfColumns * sizeof(SColumn));
memcpy(buffer + offsetof(SMeterObj, reserved) + pObj->numOfColumns * sizeof(SColumn), pObj->pSql, pObj->sqlLen);
taosCalcChecksumAppend(0, buffer, new_length);
taosCalcChecksumAppend(0, (uint8_t *)buffer, new_length);
if (offset == 0 || length > new_length) { // New, append to file end
new_offset = fseek(fp, 0, SEEK_END);
......@@ -391,7 +393,7 @@ int vnodeOpenMetersVnode(int vnode) {
fseek(fp, offset, SEEK_SET);
if (fread(buffer, length, 1, fp) <= 0) break;
if (taosCheckChecksumWhole(buffer, length)) {
if (taosCheckChecksumWhole((uint8_t *)buffer, length)) {
vnodeRestoreMeterObj(buffer, length - sizeof(TSCKSUM));
} else {
dError("meter object file is broken since checksum mismatch, vnode: %d sid: %d, try to recover", vnode, sid);
......@@ -440,7 +442,7 @@ int vnodeCreateMeterObj(SMeterObj *pNew, SConnSec *pSec) {
}
dTrace("vid:%d sid:%d id:%s, update schema", pNew->vnode, pNew->sid, pNew->meterId);
if (!vnodeIsMeterState(pObj, TSDB_METER_STATE_UPDATING)) vnodeUpdateMeter(pNew, NULL);
return TSDB_CODE_SUCCESS;
}
......@@ -483,27 +485,20 @@ int vnodeRemoveMeterObj(int vnode, int sid) {
if (vnodeList[vnode].meterList == NULL) return 0;
pObj = vnodeList[vnode].meterList[sid];
if (pObj == NULL) {
return TSDB_CODE_SUCCESS;
}
if (!vnodeIsSafeToDeleteMeter(&vnodeList[vnode], sid)) {
return TSDB_CODE_ACTION_IN_PROGRESS;
}
// after remove this meter, change its stat to DELETED
// after removing this meter, change its state to DELETED
pObj->state = TSDB_METER_STATE_DELETED;
pObj->timeStamp = taosGetTimestampMs();
vnodeList[vnode].lastRemove = pObj->timeStamp;
vnodeRemoveStream(pObj);
pObj->meterId[0] = 0;
vnodeSaveMeterObjToFile(pObj);
vnodeFreeMeterObj(pObj);
......@@ -517,6 +512,7 @@ int vnodeInsertPoints(SMeterObj *pObj, char *cont, int contLen, char source, voi
SSubmitMsg *pSubmit = (SSubmitMsg *)cont;
char * pData;
TSKEY tsKey;
int cfile;
int points = 0;
int code = TSDB_CODE_SUCCESS;
SVnodeObj * pVnode = vnodeList + pObj->vnode;
......@@ -533,6 +529,7 @@ int vnodeInsertPoints(SMeterObj *pObj, char *cont, int contLen, char source, voi
// to guarantee time stamp is the same for all vnodes
pData = pSubmit->payLoad;
tsKey = taosGetTimestamp(pVnode->cfg.precision);
cfile = tsKey/pVnode->cfg.daysPerFile/tsMsPerDay[pVnode->cfg.precision];
if (*((TSKEY *)pData) == 0) {
for (i = 0; i < numOfPoints; ++i) {
*((TSKEY *)pData) = tsKey++;
......@@ -575,13 +572,24 @@ int vnodeInsertPoints(SMeterObj *pObj, char *cont, int contLen, char source, voi
code = 0;
TSKEY firstKey = *((TSKEY *)pData);
if (pVnode->lastKeyOnFile > pVnode->cfg.daysToKeep * tsMsPerDay[pVnode->cfg.precision] + firstKey) {
dError("vid:%d sid:%d id:%s, vnode lastKeyOnFile:%lld, data is too old to insert, key:%lld", pObj->vnode, pObj->sid,
pObj->meterId, pVnode->lastKeyOnFile, firstKey);
return TSDB_CODE_OTHERS;
int firstId = firstKey/pVnode->cfg.daysPerFile/tsMsPerDay[pVnode->cfg.precision];
int lastId = (*(TSKEY *)(pData + pObj->bytesPerPoint * (numOfPoints - 1)))/pVnode->cfg.daysPerFile/tsMsPerDay[pVnode->cfg.precision];
if ((firstId <= cfile - pVnode->maxFiles) || (firstId > cfile + 1) || (lastId <= cfile - pVnode->maxFiles) || (lastId > cfile + 1)) {
dError("vid:%d sid:%d id:%s, invalid timestamp to insert, firstKey: %ld lastKey: %ld ", pObj->vnode, pObj->sid,
pObj->meterId, firstKey, (*(TSKEY *)(pData + pObj->bytesPerPoint * (numOfPoints - 1))));
return TSDB_CODE_TIMESTAMP_OUT_OF_RANGE;
}
for (i = 0; i < numOfPoints; ++i) {
// meter will be dropped, abort current insertion
if (pObj->state >= TSDB_METER_STATE_DELETING) {
dWarn("vid:%d sid:%d id:%s, meter is dropped, abort insert, state:%d", pObj->vnode, pObj->sid, pObj->meterId,
pObj->state);
code = TSDB_CODE_NOT_ACTIVE_SESSION;
break;
}
if (*((TSKEY *)pData) <= pObj->lastKey) {
dWarn("vid:%d sid:%d id:%s, received key:%ld not larger than lastKey:%ld", pObj->vnode, pObj->sid, pObj->meterId,
*((TSKEY *)pData), pObj->lastKey);
......@@ -632,9 +640,11 @@ void vnodeProcessUpdateSchemaTimer(void *param, void *tmrId) {
pthread_mutex_lock(&pPool->vmutex);
if (pPool->commitInProcess) {
dTrace("vid:%d sid:%d mid:%s, commiting in process, commit later", pObj->vnode, pObj->sid, pObj->meterId);
if (taosTmrStart(vnodeProcessUpdateSchemaTimer, 10, pObj, vnodeTmrCtrl) == NULL)
pObj->state = TSDB_METER_STATE_READY;
dTrace("vid:%d sid:%d mid:%s, committing in process, commit later", pObj->vnode, pObj->sid, pObj->meterId);
if (taosTmrStart(vnodeProcessUpdateSchemaTimer, 10, pObj, vnodeTmrCtrl) == NULL) {
vnodeClearMeterState(pObj, TSDB_METER_STATE_UPDATING);
}
pthread_mutex_unlock(&pPool->vmutex);
return;
}
......@@ -649,41 +659,54 @@ void vnodeUpdateMeter(void *param, void *tmrId) {
SMeterObj *pNew = (SMeterObj *)param;
if (pNew == NULL || pNew->vnode < 0 || pNew->sid < 0) return;
if (vnodeList[pNew->vnode].meterList == NULL) {
SVnodeObj* pVnode = &vnodeList[pNew->vnode];
if (pVnode->meterList == NULL) {
dTrace("vid:%d sid:%d id:%s, vnode is deleted, abort update schema", pNew->vnode, pNew->sid, pNew->meterId);
free(pNew->schema);
free(pNew);
return;
}
SMeterObj *pObj = vnodeList[pNew->vnode].meterList[pNew->sid];
if (pObj == NULL) {
SMeterObj *pObj = pVnode->meterList[pNew->sid];
if (pObj == NULL || vnodeIsMeterState(pObj, TSDB_METER_STATE_DELETING)) {
dTrace("vid:%d sid:%d id:%s, meter is deleted, abort update schema", pNew->vnode, pNew->sid, pNew->meterId);
free(pNew->schema);
free(pNew);
return;
}
pObj->state = TSDB_METER_STATE_UPDATING;
int32_t state = vnodeTransferMeterState(pObj, TSDB_METER_STATE_UPDATING);
if (state >= TSDB_METER_STATE_DELETING) {
dError("vid:%d sid:%d id:%s, meter is deleted, failed to update, state:%d",
pObj->vnode, pObj->sid, pObj->meterId, state);
return;
}
int32_t num = 0;
pthread_mutex_lock(&pVnode->vmutex);
num = pObj->numOfQueries;
pthread_mutex_unlock(&pVnode->vmutex);
if (num > 0 || state != TSDB_METER_STATE_READY) {
dTrace("vid:%d sid:%d id:%s, update failed, retry later, numOfQueries:%d, state:%d",
pNew->vnode, pNew->sid, pNew->meterId, num, state);
if (pObj->numOfQueries > 0) {
// retry update meter in 50ms
if (taosTmrStart(vnodeUpdateMeter, 50, pNew, vnodeTmrCtrl) == NULL) {
dError("vid:%d sid:%d id:%s, failed to start update timer", pNew->vnode, pNew->sid, pNew->meterId);
pObj->state = TSDB_METER_STATE_READY;
dError("vid:%d sid:%d id:%s, failed to start update timer, no retry", pNew->vnode, pNew->sid, pNew->meterId);
free(pNew->schema);
free(pNew);
}
dTrace("vid:%d sid:%d id:%s, there are ongoing queries, update later", pNew->vnode, pNew->sid, pNew->meterId);
return;
}
// commit first
if (!vnodeIsCacheCommitted(pObj)) {
// commit
// commit data first
if (taosTmrStart(vnodeProcessUpdateSchemaTimer, 0, pObj, vnodeTmrCtrl) == NULL) {
dError("vid:%d sid:%d id:%s, failed to start commit timer", pObj->vnode, pObj->sid, pObj->meterId);
pObj->state = TSDB_METER_STATE_READY;
vnodeClearMeterState(pObj, TSDB_METER_STATE_UPDATING);
free(pNew->schema);
free(pNew);
return;
......@@ -691,13 +714,14 @@ void vnodeUpdateMeter(void *param, void *tmrId) {
if (taosTmrStart(vnodeUpdateMeter, 50, pNew, vnodeTmrCtrl) == NULL) {
dError("vid:%d sid:%d id:%s, failed to start update timer", pNew->vnode, pNew->sid, pNew->meterId);
pObj->state = TSDB_METER_STATE_READY;
vnodeClearMeterState(pObj, TSDB_METER_STATE_UPDATING);
free(pNew->schema);
free(pNew);
}
dTrace("vid:%d sid:%d meterId:%s, there are data in cache, commit first, update later",
pNew->vnode, pNew->sid, pNew->meterId);
vnodeClearMeterState(pObj, TSDB_METER_STATE_UPDATING);
return;
}
......@@ -716,7 +740,7 @@ void vnodeUpdateMeter(void *param, void *tmrId) {
pObj->sversion = pNew->sversion;
vnodeSaveMeterObjToFile(pObj);
pObj->state = TSDB_METER_STATE_READY;
vnodeClearMeterState(pObj, TSDB_METER_STATE_UPDATING);
dTrace("vid:%d sid:%d id:%s, schema is updated", pNew->vnode, pNew->sid, pNew->meterId);
free(pNew);
......
......@@ -1730,6 +1730,17 @@ static int64_t getOldestKey(int32_t numOfFiles, int64_t fileId, SVnodeCfg *pCfg)
bool isQueryKilled(SQuery *pQuery) {
SQInfo *pQInfo = (SQInfo *)GET_QINFO_ADDR(pQuery);
/*
* check if the queried meter is going to be deleted.
* if it will be deleted soon, stop the current query ASAP.
*/
SMeterObj* pMeterObj = pQInfo->pObj;
if (vnodeIsMeterState(pMeterObj, TSDB_METER_STATE_DELETING)) {
pQInfo->killed = 1;
return true;
}
return (pQInfo->killed == 1);
}
......
......@@ -15,12 +15,13 @@
#define _DEFAULT_SOURCE
#include "vnodeShell.h"
#include <arpa/inet.h>
#include <assert.h>
#include <endian.h>
#include <stdint.h>
#include "taosmsg.h"
#include "vnode.h"
#include "vnodeShell.h"
#include "tschemautil.h"
#include "textbuffer.h"
......@@ -28,6 +29,7 @@
#include "vnode.h"
#include "vnodeRead.h"
#include "vnodeUtil.h"
#pragma GCC diagnostic ignored "-Wint-conversion"
void * pShellServer = NULL;
......@@ -87,6 +89,7 @@ void *vnodeProcessMsgFromShell(char *msg, void *ahandle, void *thandle) {
dTrace("vid:%d sid:%d, msg:%s is received pConn:%p", vnode, sid, taosMsg[pMsg->msgType], thandle);
// set in query processing flag
if (pMsg->msgType == TSDB_MSG_TYPE_QUERY) {
vnodeProcessQueryRequest((char *)pMsg->content, pMsg->msgLen - sizeof(SIntMsg), pObj);
} else if (pMsg->msgType == TSDB_MSG_TYPE_RETRIEVE) {
......@@ -157,16 +160,31 @@ int vnodeOpenShellVnode(int vnode) {
return 0;
}
void vnodeCloseShellVnode(int vnode) {
taosCloseRpcChann(pShellServer, vnode);
static void vnodeDelayedFreeResource(void *param, void *tmrId) {
int32_t vnode = *(int32_t*) param;
taosCloseRpcChann(pShellServer, vnode); // close connection
tfree(shellList[vnode]);  // free SShellObj
tfree(param);
}
void vnodeCloseShellVnode(int vnode) {
if (shellList[vnode] == NULL) return;
for (int i = 0; i < vnodeList[vnode].cfg.maxSessions; ++i) {
vnodeFreeQInfo(shellList[vnode][i].qhandle, true);
}
tfree(shellList[vnode]);
int32_t* v = malloc(sizeof(int32_t));
*v = vnode;
/*
* free the connection related resource after 5sec.
* 1. The msg, as well as the SRpcConn, may still be in the task queue; freeing it immediately will cause a crash
* 2. Freeing the connection may make *(SRpcConn*)pObj->thandle invalid to access.
*/
dTrace("vid:%d, delay 5sec to free resources", vnode);
taosTmrStart(vnodeDelayedFreeResource, 5000, v, vnodeTmrCtrl);
}
void vnodeCleanUpShell() {
......@@ -230,7 +248,7 @@ int vnodeProcessQueryRequest(char *pMsg, int msgLen, SShellObj *pObj) {
}
if (pQueryMsg->numOfSids <= 0) {
code = TSDB_CODE_APP_ERROR;
code = TSDB_CODE_INVALID_QUERY_MSG;
goto _query_over;
}
......@@ -245,7 +263,7 @@ int vnodeProcessQueryRequest(char *pMsg, int msgLen, SShellObj *pObj) {
if (pVnode->cfg.maxSessions == 0) {
dError("qmsg:%p,vid:%d is not activated yet", pQueryMsg, pQueryMsg->vnode);
vnodeSendVpeerCfgMsg(pQueryMsg->vnode);
code = TSDB_CODE_INVALID_SESSION_ID;
code = TSDB_CODE_NOT_ACTIVE_SESSION;
goto _query_over;
}
......@@ -256,13 +274,13 @@ int vnodeProcessQueryRequest(char *pMsg, int msgLen, SShellObj *pObj) {
if (pQueryMsg->pSidExtInfo == 0) {
dTrace("qmsg:%p,SQueryMeterMsg wrong format", pQueryMsg);
code = TSDB_CODE_APP_ERROR;
code = TSDB_CODE_INVALID_QUERY_MSG;
goto _query_over;
}
if (pVnode->meterList == NULL) {
dError("qmsg:%p,vid:%d has been closed", pQueryMsg, pQueryMsg->vnode);
code = TSDB_CODE_INVALID_SESSION_ID;
code = TSDB_CODE_NOT_ACTIVE_SESSION;
goto _query_over;
}
......@@ -430,7 +448,7 @@ int vnodeProcessShellSubmitRequest(char *pMsg, int msgLen, SShellObj *pObj) {
if (pSubmit->numOfSid <= 0) {
dError("invalid num of meters:%d", pSubmit->numOfSid);
code = TSDB_CODE_APP_ERROR;
code = TSDB_CODE_INVALID_QUERY_MSG;
goto _submit_over;
}
......@@ -444,7 +462,7 @@ int vnodeProcessShellSubmitRequest(char *pMsg, int msgLen, SShellObj *pObj) {
if (pVnode->cfg.maxSessions == 0 || pVnode->meterList == NULL) {
dError("vid:%d is not activated for submit", pSubmit->vnode);
vnodeSendVpeerCfgMsg(pSubmit->vnode);
code = TSDB_CODE_INVALID_SESSION_ID;
code = TSDB_CODE_NOT_ACTIVE_SESSION;
goto _submit_over;
}
......@@ -488,25 +506,40 @@ int vnodeProcessShellSubmitRequest(char *pMsg, int msgLen, SShellObj *pObj) {
int subMsgLen = sizeof(pBlocks->numOfRows) + htons(pBlocks->numOfRows) * pMeterObj->bytesPerPoint;
int sversion = htonl(pBlocks->sversion);
if (pMeterObj->state == TSDB_METER_STATE_READY) {
if (pSubmit->import)
code = vnodeImportPoints(pMeterObj, (char *)&(pBlocks->numOfRows), subMsgLen, TSDB_DATA_SOURCE_SHELL, pObj,
int32_t state = TSDB_METER_STATE_READY;
if (pSubmit->import) {
state = vnodeTransferMeterState(pMeterObj, TSDB_METER_STATE_IMPORTING);
} else {
state = vnodeTransferMeterState(pMeterObj, TSDB_METER_STATE_INSERT);
}
if (state == TSDB_METER_STATE_READY) {
// meter status is ready for insert/import
if (pSubmit->import) {
code = vnodeImportPoints(pMeterObj, (char *) &(pBlocks->numOfRows), subMsgLen, TSDB_DATA_SOURCE_SHELL, pObj,
sversion, &numOfPoints);
else
code = vnodeInsertPoints(pMeterObj, (char *)&(pBlocks->numOfRows), subMsgLen, TSDB_DATA_SOURCE_SHELL, NULL,
vnodeClearMeterState(pMeterObj, TSDB_METER_STATE_IMPORTING);
} else {
code = vnodeInsertPoints(pMeterObj, (char *) &(pBlocks->numOfRows), subMsgLen, TSDB_DATA_SOURCE_SHELL, NULL,
sversion, &numOfPoints);
if (code != 0) break;
} else if (pMeterObj->state >= TSDB_METER_STATE_DELETING) {
dTrace("vid:%d sid:%d id:%s, is is removed, state:", pMeterObj->vnode, pMeterObj->sid, pMeterObj->meterId,
vnodeClearMeterState(pMeterObj, TSDB_METER_STATE_INSERT);
}
if (code != TSDB_CODE_SUCCESS) {break;}
} else {
if (vnodeIsMeterState(pMeterObj, TSDB_METER_STATE_DELETING)) {
dTrace("vid:%d sid:%d id:%s, it is removed, state:%d", pMeterObj->vnode, pMeterObj->sid, pMeterObj->meterId,
pMeterObj->state);
code = TSDB_CODE_NOT_ACTIVE_SESSION;
break;
} else { // importing state or others
dTrace("vid:%d sid:%d id:%s, try again since in state:%d", pMeterObj->vnode, pMeterObj->sid, pMeterObj->meterId,
pMeterObj->state);
} else {  // waiting for 300ms by default and try again
dTrace("vid:%d sid:%d id:%s, try submit again since in state:%d", pMeterObj->vnode, pMeterObj->sid,
pMeterObj->meterId, pMeterObj->state);
code = TSDB_CODE_ACTION_IN_PROGRESS;
break;
}
}
numOfTotalPoints += numOfPoints;
pBlocks = (SShellSubmitBlock *)((char *)pBlocks + sizeof(SShellSubmitBlock) +
......
......@@ -85,13 +85,42 @@ int vnodeOpenVnode(int vnode) {
return 0;
}
void vnodeCloseVnode(int vnode) {
if (vnodeList == NULL) return;
static int32_t vnodeMarkAllMetersDropped(SVnodeObj* pVnode) {
if (pVnode->meterList == NULL) {
assert(pVnode->cfg.maxSessions == 0);
return TSDB_CODE_SUCCESS;
}
bool ready = true;
for (int sid = 0; sid < pVnode->cfg.maxSessions; ++sid) {
if (!vnodeIsSafeToDeleteMeter(pVnode, sid)) {
ready = false;
} else { // set the meter is to be deleted
SMeterObj* pObj = pVnode->meterList[sid];
if (pObj != NULL) {
pObj->state = TSDB_METER_STATE_DELETED;
}
}
}
return ready ? TSDB_CODE_SUCCESS : TSDB_CODE_ACTION_IN_PROGRESS;
}
int vnodeCloseVnode(int vnode) {
if (vnodeList == NULL) return TSDB_CODE_SUCCESS;
SVnodeObj* pVnode = &vnodeList[vnode];
pthread_mutex_lock(&dmutex);
if (vnodeList[vnode].cfg.maxSessions == 0) {
if (pVnode->cfg.maxSessions == 0) {
pthread_mutex_unlock(&dmutex);
return;
return TSDB_CODE_SUCCESS;
}
// set the meter is dropped flag
if (vnodeMarkAllMetersDropped(pVnode) != TSDB_CODE_SUCCESS) {
pthread_mutex_unlock(&dmutex);
return TSDB_CODE_ACTION_IN_PROGRESS;
}
vnodeCloseStream(vnodeList + vnode);
......@@ -111,6 +140,7 @@ void vnodeCloseVnode(int vnode) {
vnodeCalcOpenVnodes();
pthread_mutex_unlock(&dmutex);
return TSDB_CODE_SUCCESS;
}
int vnodeCreateVnode(int vnode, SVnodeCfg *pCfg, SVPeerDesc *pDesc) {
......@@ -182,25 +212,23 @@ void vnodeRemoveDataFiles(int vnode) {
dTrace("vnode %d is removed!", vnode);
}
void vnodeRemoveVnode(int vnode) {
if (vnodeList == NULL) return;
int vnodeRemoveVnode(int vnode) {
if (vnodeList == NULL) return TSDB_CODE_SUCCESS;
if (vnodeList[vnode].cfg.maxSessions > 0) {
vnodeCloseVnode(vnode);
int32_t ret = vnodeCloseVnode(vnode);
if (ret != TSDB_CODE_SUCCESS) {
return ret;
}
vnodeRemoveDataFiles(vnode);
// sprintf(cmd, "rm -rf %s/vnode%d", tsDirectory, vnode);
// if ( system(cmd) < 0 ) {
// dError("vid:%d, failed to run command %s vnode, reason:%s", vnode, cmd, strerror(errno));
// } else {
// dTrace("vid:%d, this vnode is deleted!!!", vnode);
// }
} else {
dTrace("vid:%d, max sessions:%d, this vnode already dropped!!!", vnode, vnodeList[vnode].cfg.maxSessions);
vnodeList[vnode].cfg.maxSessions = 0;
vnodeList[vnode].cfg.maxSessions = 0; //reset value
vnodeCalcOpenVnodes();
}
return TSDB_CODE_SUCCESS;
}
int vnodeInitStore() {
......
......@@ -13,8 +13,9 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "taosmsg.h"
#include "vnode.h"
#include <taosmsg.h>
#include "vnodeUtil.h"
/* static TAOS *dbConn = NULL; */
void vnodeCloseStreamCallback(void *param);
......@@ -51,8 +52,17 @@ void vnodeProcessStreamRes(void *param, TAOS_RES *tres, TAOS_ROW row) {
}
contLen += sizeof(SSubmitMsg);
int32_t numOfPoints = 0;
int32_t state = vnodeTransferMeterState(pObj, TSDB_METER_STATE_INSERT);
if (state == TSDB_METER_STATE_READY) {
vnodeInsertPoints(pObj, (char *)pMsg, contLen, TSDB_DATA_SOURCE_SHELL, NULL, pObj->sversion, &numOfPoints);
vnodeClearMeterState(pObj, TSDB_METER_STATE_INSERT);
} else {
dError("vid:%d sid:%d id:%s, failed to insert continuous query results, state:%d", pObj->vnode, pObj->sid,
pObj->meterId, state);
}
assert(numOfPoints >= 0 && numOfPoints <= 1);
tfree(pTemp);
......@@ -76,7 +86,7 @@ void vnodeOpenStreams(void *param, void *tmrId) {
for (int sid = 0; sid < pVnode->cfg.maxSessions; ++sid) {
pObj = pVnode->meterList[sid];
if (pObj == NULL || pObj->sqlLen == 0 || pObj->status == 1 || pObj->state == TSDB_METER_STATE_DELETED) continue;
if (pObj == NULL || pObj->sqlLen == 0 || vnodeIsMeterState(pObj, TSDB_METER_STATE_DELETING)) continue;
dTrace("vid:%d sid:%d id:%s, open stream:%s", pObj->vnode, sid, pObj->meterId, pObj->pSql);
......
......@@ -361,6 +361,7 @@ void vnodeUpdateFilterColumnIndex(SQuery* pQuery) {
// TODO support k<12 and k<>9
int32_t vnodeCreateFilterInfo(void* pQInfo, SQuery* pQuery) {
for (int32_t i = 0; i < pQuery->numOfCols; ++i) {
if (pQuery->colList[i].data.filterOn > 0) {
pQuery->numOfFilterCols++;
......@@ -401,8 +402,6 @@ int32_t vnodeCreateFilterInfo(void* pQInfo, SQuery* pQuery) {
pFilterInfo->fp = rangeFilterArray[2];
}
} else {
assert(lower == TSDB_RELATION_LARGE);
if (upper == TSDB_RELATION_LESS_EQUAL) {
pFilterInfo->fp = rangeFilterArray[3];
} else {
......@@ -421,6 +420,7 @@ int32_t vnodeCreateFilterInfo(void* pQInfo, SQuery* pQuery) {
pFilterInfo->fp = filterArray[upper];
}
}
pFilterInfo->elemSize = bytes;
j++;
}
......@@ -470,6 +470,18 @@ bool vnodeIsProjectionQuery(SSqlFunctionExpr* pExpr, int32_t numOfOutput) {
return true;
}
/*
* the pMeter->state may be changed by vnodeIsSafeToDeleteMeter and the import/update processors, so the check of
* the state will not always be correct.
*
* The import/update/deleting is actually blocked by current query processing if the check of meter state is
* passed, but later queries are denied.
*
* 1. vnodeIsSafeToDelete will wait for this to complete, since it also uses the vmutex to check the numOfQueries
* 2. import will check the numOfQueries again after setting the state to TSDB_METER_STATE_IMPORTING, while the
* vmutex is also used.
* 3. insert has nothing to do with the query processing.
*/
int32_t vnodeIncQueryRefCount(SQueryMeterMsg* pQueryMsg, SMeterSidExtInfo** pSids, SMeterObj** pMeterObjList,
int32_t* numOfInc) {
SVnodeObj* pVnode = &vnodeList[pQueryMsg->vnode];
......@@ -477,21 +489,24 @@ int32_t vnodeIncQueryRefCount(SQueryMeterMsg* pQueryMsg, SMeterSidExtInfo** pSid
int32_t num = 0;
int32_t code = TSDB_CODE_SUCCESS;
// check all meter metadata to ensure all metadata are identical.
for (int32_t i = 0; i < pQueryMsg->numOfSids; ++i) {
SMeterObj* pMeter = pVnode->meterList[pSids[i]->sid];
if (pMeter == NULL || pMeter->state != TSDB_METER_STATE_READY) {
if (pMeter == NULL) {
if (pMeter == NULL || (pMeter->state > TSDB_METER_STATE_INSERT)) {
if (pMeter == NULL || vnodeIsMeterState(pMeter, TSDB_METER_STATE_DELETING)) {
code = TSDB_CODE_NOT_ACTIVE_SESSION;
dError("qmsg:%p, vid:%d sid:%d, not there", pQueryMsg, pQueryMsg->vnode, pSids[i]->sid);
dError("qmsg:%p, vid:%d sid:%d, not there or will be dropped", pQueryMsg, pQueryMsg->vnode, pSids[i]->sid);
vnodeSendMeterCfgMsg(pQueryMsg->vnode, pSids[i]->sid);
} else {
} else {  // update or import
code = TSDB_CODE_ACTION_IN_PROGRESS;
dTrace("qmsg:%p, vid:%d sid:%d id:%s, it is in state:%d, wait!", pQueryMsg, pQueryMsg->vnode, pSids[i]->sid,
pMeter->meterId, pMeter->state);
}
} else {
/*
* vnodeIsSafeToDeleteMeter will wait for this function complete, and then it can
* check if the numOfQueries is 0 or not.
*/
pMeterObjList[(*numOfInc)++] = pMeter;
__sync_fetch_and_add(&pMeter->numOfQueries, 1);
......@@ -517,7 +532,6 @@ void vnodeDecQueryRefCount(SQueryMeterMsg* pQueryMsg, SMeterObj** pMeterObjList,
SMeterObj* pMeter = pMeterObjList[i];
if (pMeter != NULL) { // here, do not need to lock to perform operations
assert(pMeter->state != TSDB_METER_STATE_DELETING && pMeter->state != TSDB_METER_STATE_DELETED);
__sync_fetch_and_sub(&pMeter->numOfQueries, 1);
if (pMeter->numOfQueries > 0) {
......@@ -571,3 +585,66 @@ void vnodeUpdateQueryColumnIndex(SQuery* pQuery, SMeterObj* pMeterObj) {
}
}
}
int32_t vnodeTransferMeterState(SMeterObj* pMeterObj, int32_t state) {
return __sync_val_compare_and_swap(&pMeterObj->state, TSDB_METER_STATE_READY, state);
}
void vnodeClearMeterState(SMeterObj* pMeterObj, int32_t state) {
pMeterObj->state &= (~state);
}
bool vnodeIsMeterState(SMeterObj* pMeterObj, int32_t state) {
if (state == TSDB_METER_STATE_READY) {
return pMeterObj->state == TSDB_METER_STATE_READY;
} else if (state == TSDB_METER_STATE_DELETING) {
return pMeterObj->state >= state;
} else {
return (((pMeterObj->state) & state) == state);
}
}
void vnodeSetMeterDeleting(SMeterObj* pMeterObj) {
if (pMeterObj == NULL) {
return;
}
pMeterObj->state |= TSDB_METER_STATE_DELETING;
}
bool vnodeIsSafeToDeleteMeter(SVnodeObj* pVnode, int32_t sid) {
SMeterObj* pObj = pVnode->meterList[sid];
if (pObj == NULL || vnodeIsMeterState(pObj, TSDB_METER_STATE_DELETED)) {
return true;
}
int32_t prev = vnodeTransferMeterState(pObj, TSDB_METER_STATE_DELETING);
/*
* if the meter is not in ready/deleting state, it must be in insert/import/update,
* set the deleting state and wait for the procedure to complete
*/
if (prev != TSDB_METER_STATE_READY && prev < TSDB_METER_STATE_DELETING) {
vnodeSetMeterDeleting(pObj);
dWarn("vid:%d sid:%d id:%s, can not be deleted, state:%d, wait", pObj->vnode, pObj->sid, pObj->meterId, prev);
return false;
}
bool ready = true;
/*
* the query will be stopped ASAP, since the state of the meter is set to TSDB_METER_STATE_DELETING,
* and new queries will abort since the meter is deleted.
*/
pthread_mutex_lock(&pVnode->vmutex);
if (pObj->numOfQueries > 0) {
dWarn("vid:%d sid:%d id:%s %d queries executing on it, wait query to be finished",
pObj->vnode, pObj->sid, pObj->meterId, pObj->numOfQueries);
ready = false;
}
pthread_mutex_unlock(&pVnode->vmutex);
return ready;
}
......@@ -1532,7 +1532,7 @@ void tColModelDisplayEx(tColModel *pModel, void *pData, int32_t numOfRows, int32
}
////////////////////////////////////////////////////////////////////////////////////////////
void tColModelCompress(tColModel *pModel, tFilePage *inputBuffer, int32_t maxElemsCapacity) {
void tColModelCompact(tColModel *pModel, tFilePage *inputBuffer, int32_t maxElemsCapacity) {
if (inputBuffer->numOfElems == 0 || maxElemsCapacity == inputBuffer->numOfElems) {
return;
}
......
......@@ -117,7 +117,7 @@ int32_t taosGetNumOfResWithoutLimit(SInterpolationInfo* pInterpoInfo, int64_t* p
}
}
bool taosHasNoneInterpoPoints(SInterpolationInfo* pInterpoInfo) { return taosNumOfRemainPoints(pInterpoInfo) > 0; }
bool taosHasRemainsDataForInterpolation(SInterpolationInfo* pInterpoInfo) { return taosNumOfRemainPoints(pInterpoInfo) > 0; }
int32_t taosNumOfRemainPoints(SInterpolationInfo* pInterpoInfo) {
if (pInterpoInfo->rowIdx == -1 || pInterpoInfo->numOfRawDataInRows == 0) {
......
char version[64] = "1.6.1.2";
char compatible_version[64] = "1.6.0.0";
char gitinfo[128] = "ddcb2519e895c2e2101089aedaf529cee5cefe04";
char buildinfo[512] = "Built by plum at 2019-07-29 10:41";
char version[64] = "1.6.1.4";
char compatible_version[64] = "1.6.1.0";
char gitinfo[128] = "36936e19eb26b5e45107bca95394133e3ac7c3a1";
char buildinfo[512] = "Built by slguan at 2019-08-05 09:24";
......@@ -63,7 +63,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>1.0.0</version>
<version>1.0.1</version>
</dependency>
</dependencies>
</project>