Unverified commit eed8b516, authored by 卡比卡比, committed by GitHub

Merge branch 'taosdata:develop' into develop

build/
.ycm_extra_conf.py
.vscode/
.idea/
cmake-build-debug/
......
@@ -191,7 +191,7 @@ cdf548465318
1. Port mapping (-p) maps a network port opened inside the container to a specified port on the host. Mounting a local directory (-v) keeps data synchronized between the host and the container, so that data is not lost after the container is removed.
```bash
$ docker run -d -v /etc/taos:/etc/taos -p 6041:6041 tdengine/tdengine
526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
$ curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
......
@@ -13,7 +13,7 @@ TDengine uses a relational data model, so databases and tables must be created first. Therefore, for a
```mysql
CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
```
The statement above creates a database named power. Its data is kept for 365 days (data older than 365 days is deleted automatically), a new data file is created every 10 days, 6 memory blocks are used, and updating existing data is allowed. For the detailed syntax and parameters, see the [data management section of TAOS SQL](https://www.taosdata.com/cn/documentation/taos-sql#management).
After creating the database, switch to it with the SQL command USE, for example:
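An illustrative snippet, using the power database created above:
```mysql
USE power;
```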
......
@@ -315,6 +315,10 @@ TDengine's asynchronous APIs all use a non-blocking calling model, so an application can use multiple thr
1. Call `taos_stmt_init` to create a parameter-binding object;
2. Call `taos_stmt_prepare` to parse the INSERT statement;
3. If the INSERT statement reserves a placeholder for the table name but not for TAGS, call `taos_stmt_set_tbname` to set the table name;
   * Starting with version 2.1.6.0, for the case of writing into many sub-tables of the same super table in one batch (each sub-table receiving only a little data, possibly a single row), a dedicated optimized interface `taos_stmt_set_sub_tbname` is provided. It saves overall processing time by loading the table meta in advance and avoiding repeated parsing of the SQL statement (this optimization does not support the auto-create-table syntax). Use it as follows (a code sketch of the basic flow follows this list):
     1. First call `taos_load_table_info` to load the table meta of all required super tables and sub-tables;
     2. Then call `taos_stmt_set_tbname` on the first sub-table of a super table to set its table name;
     3. For the remaining sub-tables, call `taos_stmt_set_sub_tbname` to set their table names.
4. If the INSERT statement reserves placeholders for both the table name and the TAGS (for example, when it uses the auto-create-table form), call `taos_stmt_set_tbname_tags` to set the table name and the TAGS values;
5. Call `taos_stmt_bind_param_batch` to set the VALUES column by column, or call `taos_stmt_bind_param` to set the VALUES row by row;
6. Call `taos_stmt_add_batch` to add the currently bound parameters to the batch;
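A minimal C sketch of the basic flow (steps 1-3, 5 and 6 above, followed by execution), assuming an already connected handle `taos` and an existing table `d1001 (ts TIMESTAMP, current FLOAT)`; the table name, schema and values are illustrative and error handling is omitted:
```c
#include <string.h>
#include <taos.h>

// Sketch only: taos is an already-connected handle; d1001 and its columns are illustrative.
static void insert_one_row(TAOS *taos) {
  TAOS_STMT *stmt = taos_stmt_init(taos);                    // 1. create the binding object
  taos_stmt_prepare(stmt, "INSERT INTO ? VALUES(?, ?)", 0);  // 2. parse the INSERT statement
  taos_stmt_set_tbname(stmt, "d1001");                       // 3. bind the table name

  int64_t ts = 1626861392589;
  float   current = 10.3f;
  TAOS_BIND params[2];
  memset(params, 0, sizeof(params));

  params[0].buffer_type   = TSDB_DATA_TYPE_TIMESTAMP;
  params[0].buffer        = &ts;
  params[0].buffer_length = sizeof(ts);
  params[0].length        = &params[0].buffer_length;

  params[1].buffer_type   = TSDB_DATA_TYPE_FLOAT;
  params[1].buffer        = &current;
  params[1].buffer_length = sizeof(current);
  params[1].length        = &params[1].buffer_length;

  taos_stmt_bind_param(stmt, params);                        // 5. bind one row of VALUES
  taos_stmt_add_batch(stmt);                                 // 6. add the bound row to the batch
  taos_stmt_execute(stmt);                                   // submit the batch
  taos_stmt_close(stmt);
}
```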
@@ -358,6 +362,12 @@ typedef struct TAOS_BIND {
(Added in version 2.1.1.0; only supported for replacing parameter values in INSERT statements)
When the table name in the SQL statement uses a `?` placeholder, this function can be used to bind a concrete table name.
- `int taos_stmt_set_sub_tbname(TAOS_STMT* stmt, const char* name)`
(Added in version 2.1.6.0; only supported for replacing, in an INSERT statement, the table names of the 2nd through n-th target sub-tables that belong to the same super table)
When the table name in the SQL statement uses a `?` placeholder, and one batch is to be written into multiple sub-tables of the same super table, this function can be used to bind the table names of all sub-tables except the first one.
*Note:* before using it, the client must call `taos_load_table_info` to load the table meta of all required super tables and sub-tables, then call `taos_stmt_set_tbname` on the first sub-table of a super table, and call `taos_stmt_set_sub_tbname` on the remaining sub-tables.
- `int taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags)`
(Added in version 2.1.2.0; only supported for replacing parameter values in INSERT statements)
......
@@ -217,7 +217,7 @@ taosd -C
| 99 | queryBufferSize | | **S** | MB | Amount of memory reserved for all concurrent queries. | | | As a rule of thumb, multiply the expected maximum number of concurrent queries by the number of tables involved, then multiply by 170. (In versions before 2.0.15 the unit of this parameter was bytes.) |
| 100 | ratioOfQueryCores | | **S** | | Maximum number of query threads. | | | The minimum value 0 means a single query thread; the maximum value 2 means at most twice as many query threads as CPU cores. The default 1 means at most as many query threads as CPU cores. Fractional values are allowed, e.g. 0.5 means at most half as many query threads as CPU cores. |
| 101 | update | | **S** | | Allow updating existing data rows | 0 \| 1 | 0 | Since version 2.0.8.0 |
| 102 | cacheLast | | **S** | | Whether to cache the most recent data of sub-tables in memory | 0: disabled; 1: cache the last row of each sub-table; 2: cache the most recent non-NULL value of each column of each sub-table; 3: enable both. (Since version 2.1.2.0 the value range is 0-3; earlier versions only accept [0, 1].) | 0 | Not supported in the taos.cfg file before versions 2.1.2.0 and 2.0.20.7. |
| 103 | numOfCommitThreads | YES | **S** | | Maximum number of commit (write) threads | | | |
| 104 | maxWildCardsLength | | **C** | bytes | Maximum allowed length of the wildcard string used with the LIKE operator | 0-16384 | 100 | Added in version 2.1.6.1. |
......
@@ -1285,6 +1285,19 @@ TDengine supports aggregation queries over data. The supported aggregate and selection functions are
Note: (this function was added in version 2.1.3.0) The number of output rows is the total number of rows in the range minus one, and no result is output for the first row. The DERIVATIVE function can be used on a super table when GROUP BY splits the data into separate timelines (i.e. GROUP BY tbname).
Example:
```mysql
taos> select derivative(current, 10m, 0) from t1;
ts | derivative(current, 10m, 0) |
========================================================
2021-08-20 10:11:22.790 | 0.500000000 |
2021-08-20 11:11:22.791 | 0.166666620 |
2021-08-20 12:11:22.791 | 0.000000000 |
2021-08-20 13:11:22.792 | 0.166666620 |
2021-08-20 14:11:22.792 | -0.666666667 |
Query OK, 5 row(s) in set (0.004883s)
```
- **SPREAD**
```mysql
SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause];
......
This file's diff has been collapsed.
@@ -284,3 +284,5 @@ keepColumnName 1
# 0 no query allowed, queries are disabled
# queryBufferSize -1

# percent of redundant data in the tsdb meta file that will trigger meta compaction; 0 means do not compact
# tsdbMetaCompactRatio 0
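# illustrative example (not part of the shipped file): compact once at least
# 30% of the meta file is redundant data
# tsdbMetaCompactRatio 30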
@@ -2678,7 +2678,7 @@ int tscProcessQueryRsp(SSqlObj *pSql) {
  return 0;
}

static void decompressQueryColData(SSqlObj *pSql, SSqlRes *pRes, SQueryInfo* pQueryInfo, char **data, int8_t compressed, int32_t compLen) {
  int32_t decompLen = 0;
  int32_t numOfCols = pQueryInfo->fieldsInfo.numOfOutput;
  int32_t *compSizes;
@@ -2715,6 +2715,9 @@ static void decompressQueryColData(SSqlRes *pRes, SQueryInfo* pQueryInfo, char *
    pData = *data + compLen + numOfCols * sizeof(int32_t);
  }

  tscDebug("0x%"PRIx64" decompress col data, compressed size:%d, decompressed size:%d",
      pSql->self, (int32_t)(compLen + numOfCols * sizeof(int32_t)), decompLen);

  int32_t tailLen = pRes->rspLen - sizeof(SRetrieveTableRsp) - decompLen;
  memmove(*data + decompLen, pData, tailLen);
  memmove(*data, outputBuf, decompLen);
@@ -2749,7 +2752,7 @@ int tscProcessRetrieveRspFromNode(SSqlObj *pSql) {
  //Decompress col data if compressed from server
  if (pRetrieve->compressed) {
    int32_t compLen = htonl(pRetrieve->compLen);
    decompressQueryColData(pSql, pRes, pQueryInfo, &pRes->data, pRetrieve->compressed, compLen);
  }

  STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
......
This file's diff has been collapsed.
@@ -517,6 +517,7 @@ void tdAppendMemRowToDataCol(SMemRow row, STSchema *pSchema, SDataCols *pCols, b
  }
}

//TODO: refactor this function to eliminate additional memory copy
int tdMergeDataCols(SDataCols *target, SDataCols *source, int rowsToMerge, int *pOffset, bool forceSetNull) {
  ASSERT(rowsToMerge > 0 && rowsToMerge <= source->numOfRows);
  ASSERT(target->numOfCols == source->numOfCols);
......
@@ -76,12 +76,11 @@ int32_t tsMaxBinaryDisplayWidth = 30;
int32_t tsCompressMsgSize = -1;

/* denote if server needs to compress the retrieved column data before adding to the rpc response message body.
 * 0: all data are compressed
 * -1: all data are not compressed
 * other values: if any retrieved column size is greater than the tsCompressColData, all data will be compressed.
 */
int32_t tsCompressColData = -1;
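/* Editor's illustration (not part of this file): with the semantics documented
 * above, a server-side check driven by tsCompressColData would look roughly
 * like this. needCompress() is a hypothetical helper; the actual check in this
 * commit is the NEEDTO_COMPRESS_QUERY() macro in the query module. */
static int needCompress(int32_t maxRetrievedColBytes) {
  if (tsCompressColData == -1) return 0;            // never compress
  if (tsCompressColData == 0)  return 1;            // always compress
  return maxRetrievedColBytes > tsCompressColData;  // threshold in bytes
}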
// client
int32_t tsMaxSQLStringLen = TSDB_MAX_ALLOWED_SQL_LEN;
@@ -95,7 +94,7 @@ int32_t tsMaxNumOfOrderedResults = 100000;
// 10 ms for sliding time, the value will be changed if the time precision changes
int32_t tsMinSlidingTime = 10;

// the maximum number of distinct query results
int32_t tsMaxNumOfDistinctResults = 1000 * 10000;

// 1 us for interval time range, changed accordingly
@@ -150,6 +149,7 @@ int32_t tsMaxVgroupsPerDb = 0;
int32_t tsMinTablePerVnode = TSDB_TABLES_STEP;
int32_t tsMaxTablePerVnode = TSDB_DEFAULT_TABLES;
int32_t tsTableIncStepPerVnode = TSDB_TABLES_STEP;
int32_t tsTsdbMetaCompactRatio = TSDB_META_COMPACT_RATIO;

// tsdb config
@@ -1006,10 +1006,10 @@ static void doInitGlobalConfig(void) {
  cfg.option = "compressColData";
  cfg.ptr = &tsCompressColData;
  cfg.valType = TAOS_CFG_VTYPE_INT32;
  cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_CLIENT | TSDB_CFG_CTYPE_B_SHOW;
  cfg.minValue = -1;
  cfg.maxValue = 100000000.0f;
  cfg.ptrLength = 0;
  cfg.unitType = TAOS_CFG_UTYPE_NONE;
  taosInitConfigOption(cfg);
@@ -1581,6 +1581,16 @@ static void doInitGlobalConfig(void) {
  cfg.unitType = TAOS_CFG_UTYPE_NONE;
  taosInitConfigOption(cfg);
cfg.option = "tsdbMetaCompactRatio";
cfg.ptr = &tsTsdbMetaCompactRatio;
cfg.valType = TAOS_CFG_VTYPE_INT32;
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG;
cfg.minValue = 0;
cfg.maxValue = 100;
cfg.ptrLength = 0;
cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg);
  assert(tsGlobalConfigNum <= TSDB_CFG_MAX_NUM);
#ifdef TD_TSZ
  // lossy compress
......
@@ -38,12 +38,12 @@ void tVariantCreate(tVariant *pVar, SStrToken *token) {
  switch (token->type) {
    case TSDB_DATA_TYPE_BOOL: {
      if (strncasecmp(token->z, "true", 4) == 0) {
        pVar->i64 = TSDB_TRUE;
      } else if (strncasecmp(token->z, "false", 5) == 0) {
        pVar->i64 = TSDB_FALSE;
      } else {
        return;
      }
      break;
......
@@ -10,7 +10,8 @@ import sys
_datetime_epoch = datetime.utcfromtimestamp(0)

def _is_not_none(obj):
    return obj != None

class TaosBind(ctypes.Structure):
    _fields_ = [
        ("buffer_type", c_int),
...@@ -299,27 +300,14 @@ class TaosMultiBind(ctypes.Structure): ...@@ -299,27 +300,14 @@ class TaosMultiBind(ctypes.Structure):
self.buffer = cast(buffer, c_void_p) self.buffer = cast(buffer, c_void_p)
self.num = len(values) self.num = len(values)
def binary(self, values): def _str_to_buffer(self, values):
self.num = len(values) self.num = len(values)
self.buffer = cast(c_char_p("".join(filter(_is_not_none, values)).encode("utf-8")), c_void_p) is_null = [1 if v == None else 0 for v in values]
self.length = (c_int * len(values))(*[len(value) if value is not None else 0 for value in values]) self.is_null = cast((c_byte * self.num)(*is_null), c_char_p)
self.buffer_type = FieldType.C_BINARY
self.is_null = cast((c_byte * self.num)(*[1 if v == None else 0 for v in values]), c_char_p) if sum(is_null) == self.num:
self.length = (c_int32 * len(values))(0 * self.num)
def timestamp(self, values, precision=PrecisionEnum.Milliseconds): return
try:
buffer = cast(values, c_void_p)
except:
buffer_type = c_int64 * len(values)
buffer = buffer_type(*[_datetime_to_timestamp(value, precision) for value in values])
self.buffer_type = FieldType.C_TIMESTAMP
self.buffer = cast(buffer, c_void_p)
self.buffer_length = sizeof(c_int64)
self.num = len(values)
def nchar(self, values):
# type: (list[str]) -> None
if sys.version_info < (3, 0): if sys.version_info < (3, 0):
_bytes = [bytes(value) if value is not None else None for value in values] _bytes = [bytes(value) if value is not None else None for value in values]
buffer_length = max(len(b) + 1 for b in _bytes if b is not None) buffer_length = max(len(b) + 1 for b in _bytes if b is not None)
...@@ -347,9 +335,26 @@ class TaosMultiBind(ctypes.Structure): ...@@ -347,9 +335,26 @@ class TaosMultiBind(ctypes.Structure):
) )
self.length = (c_int32 * len(values))(*[len(b) if b is not None else 0 for b in _bytes]) self.length = (c_int32 * len(values))(*[len(b) if b is not None else 0 for b in _bytes])
self.buffer_length = buffer_length self.buffer_length = buffer_length
def binary(self, values):
self.buffer_type = FieldType.C_BINARY
self._str_to_buffer(values)
def timestamp(self, values, precision=PrecisionEnum.Milliseconds):
try:
buffer = cast(values, c_void_p)
except:
buffer_type = c_int64 * len(values)
buffer = buffer_type(*[_datetime_to_timestamp(value, precision) for value in values])
self.buffer_type = FieldType.C_TIMESTAMP
self.buffer = cast(buffer, c_void_p)
self.buffer_length = sizeof(c_int64)
self.num = len(values) self.num = len(values)
self.is_null = cast((c_byte * self.num)(*[1 if v == None else 0 for v in values]), c_char_p)
def nchar(self, values):
# type: (list[str]) -> None
self.buffer_type = FieldType.C_NCHAR self.buffer_type = FieldType.C_NCHAR
self._str_to_buffer(values)
def tinyint_unsigned(self, values): def tinyint_unsigned(self, values):
self.buffer_type = FieldType.C_TINYINT_UNSIGNED self.buffer_type = FieldType.C_TINYINT_UNSIGNED
......
@@ -3,6 +3,9 @@
"""Constants in TDengine python
"""

import ctypes, struct

class FieldType(object):
    """TDengine Field Types"""
@@ -33,8 +36,8 @@ class FieldType(object):
    C_INT_UNSIGNED_NULL = 4294967295
    C_BIGINT_NULL = -9223372036854775808
    C_BIGINT_UNSIGNED_NULL = 18446744073709551615
    C_FLOAT_NULL = ctypes.c_float(struct.unpack("<f", b"\x00\x00\xf0\x7f")[0])
    C_DOUBLE_NULL = ctypes.c_double(struct.unpack("<d", b"\x00\x00\x00\x00\x00\xff\xff\x7f")[0])
    C_BINARY_NULL = bytearray([int("0xff", 16)])
    # Timestamp precision definition
    C_TIMESTAMP_MILLI = 0
......
from taos import *
conn = connect()
dbname = "pytest_taos_stmt_multi"
conn.execute("drop database if exists %s" % dbname)
conn.execute("create database if not exists %s" % dbname)
conn.select_db(dbname)
conn.execute(
"create table if not exists log(ts timestamp, bo bool, nil tinyint, \
ti tinyint, si smallint, ii int, bi bigint, tu tinyint unsigned, \
su smallint unsigned, iu int unsigned, bu bigint unsigned, \
ff float, dd double, bb binary(100), nn nchar(100), tt timestamp)",
)
stmt = conn.statement("insert into log values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)")
params = new_multi_binds(16)
params[0].timestamp((1626861392589, 1626861392590, 1626861392591))
params[1].bool((True, None, False))
params[2].tinyint([-128, -128, None]) # -128 is tinyint null
params[3].tinyint([0, 127, None])
params[4].smallint([3, None, 2])
params[5].int([3, 4, None])
params[6].bigint([3, 4, None])
params[7].tinyint_unsigned([3, 4, None])
params[8].smallint_unsigned([3, 4, None])
params[9].int_unsigned([3, 4, None])
params[10].bigint_unsigned([3, 4, None])
params[11].float([3, None, 1])
params[12].double([3, None, 1.2])
params[13].binary(["abc", "dddafadfadfadfadfa", None])
# params[14].nchar(["涛思数据", None, "a long string with 中文字符"])
params[14].nchar([None, None, None])
params[15].timestamp([None, None, 1626861392591])
stmt.bind_param_batch(params)
stmt.execute()
result = stmt.use_result()
assert result.affected_rows == 3
result.close()
result = conn.query("select * from log")
for row in result:
print(row)
result.close()
stmt.close()
conn.close()
@@ -88,6 +88,8 @@ extern const int32_t TYPE_BYTES[15];
#define TSDB_DEFAULT_PASS "taosdata"
#endif

#define SHELL_MAX_PASSWORD_LEN 20

#define TSDB_TRUE 1
#define TSDB_FALSE 0
#define TSDB_OK 0
@@ -275,6 +277,7 @@ do { \
#define TSDB_MAX_TABLES 10000000
#define TSDB_DEFAULT_TABLES 1000000
#define TSDB_TABLES_STEP 1000
#define TSDB_META_COMPACT_RATIO 0 // disable tsdb meta compact by default
#define TSDB_MIN_DAYS_PER_FILE 1
#define TSDB_MAX_DAYS_PER_FILE 3650
......
@@ -25,7 +25,6 @@
#define MAX_USERNAME_SIZE 64
#define MAX_DBNAME_SIZE 64
#define MAX_IP_SIZE 20
#define MAX_HISTORY_SIZE 1000
#define MAX_COMMAND_SIZE 1048586
#define HISTORY_FILE ".taos_history"
......
...@@ -66,7 +66,7 @@ void printHelp() { ...@@ -66,7 +66,7 @@ void printHelp() {
char DARWINCLIENT_VERSION[] = "Welcome to the TDengine shell from %s, Client Version:%s\n" char DARWINCLIENT_VERSION[] = "Welcome to the TDengine shell from %s, Client Version:%s\n"
"Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.\n\n"; "Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.\n\n";
char g_password[MAX_PASSWORD_SIZE]; char g_password[SHELL_MAX_PASSWORD_LEN];
void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) { void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
wordexp_t full_path; wordexp_t full_path;
...@@ -81,19 +81,25 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) { ...@@ -81,19 +81,25 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
} }
} }
// for password // for password
else if (strncmp(argv[i], "-p", 2) == 0) { else if ((strncmp(argv[i], "-p", 2) == 0)
|| (strncmp(argv[i], "--password", 10) == 0)) {
strcpy(tsOsName, "Darwin"); strcpy(tsOsName, "Darwin");
printf(DARWINCLIENT_VERSION, tsOsName, taos_get_client_info()); printf(DARWINCLIENT_VERSION, tsOsName, taos_get_client_info());
if (strlen(argv[i]) == 2) { if ((strlen(argv[i]) == 2)
|| (strncmp(argv[i], "--password", 10) == 0)) {
printf("Enter password: "); printf("Enter password: ");
taosSetConsoleEcho(false);
if (scanf("%s", g_password) > 1) { if (scanf("%s", g_password) > 1) {
fprintf(stderr, "password read error\n"); fprintf(stderr, "password read error\n");
} }
taosSetConsoleEcho(true);
getchar(); getchar();
} else { } else {
tstrncpy(g_password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE); tstrncpy(g_password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
} }
arguments->password = g_password; arguments->password = g_password;
strcpy(argv[i], "");
argc -= 1;
} }
// for management port // for management port
else if (strcmp(argv[i], "-P") == 0) { else if (strcmp(argv[i], "-P") == 0) {
......
@@ -47,7 +47,7 @@ static struct argp_option options[] = {
  {"thread", 'T', "THREADNUM", 0, "Number of threads when using multi-thread to import data."},
  {"check", 'k', "CHECK", 0, "Check tables."},
  {"database", 'd', "DATABASE", 0, "Database to use when connecting to the server."},
  {"timezone", 'z', "TIMEZONE", 0, "Time zone of the shell, default is local."},
  {"netrole", 'n', "NETROLE", 0, "Net role when network connectivity test, default is startup, options: client|server|rpc|startup|sync|speen|fqdn."},
  {"pktlen", 'l', "PKTLEN", 0, "Packet length used for net test, default is 1000 bytes."},
  {"pktnum", 'N', "PKTNUM", 0, "Packet numbers used for net test, default is 100."},
@@ -76,7 +76,7 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
      }
      break;
    case 'z':
      arguments->timezone = arg;
      break;
    case 'u':
...@@ -173,22 +173,29 @@ static struct argp argp = {options, parse_opt, args_doc, doc}; ...@@ -173,22 +173,29 @@ static struct argp argp = {options, parse_opt, args_doc, doc};
char LINUXCLIENT_VERSION[] = "Welcome to the TDengine shell from %s, Client Version:%s\n" char LINUXCLIENT_VERSION[] = "Welcome to the TDengine shell from %s, Client Version:%s\n"
"Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.\n\n"; "Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.\n\n";
char g_password[MAX_PASSWORD_SIZE]; char g_password[SHELL_MAX_PASSWORD_LEN];
static void parse_password( static void parse_args(
int argc, char *argv[], SShellArguments *arguments) { int argc, char *argv[], SShellArguments *arguments) {
for (int i = 1; i < argc; i++) { for (int i = 1; i < argc; i++) {
if (strncmp(argv[i], "-p", 2) == 0) { if ((strncmp(argv[i], "-p", 2) == 0)
|| (strncmp(argv[i], "--password", 10) == 0)) {
strcpy(tsOsName, "Linux"); strcpy(tsOsName, "Linux");
printf(LINUXCLIENT_VERSION, tsOsName, taos_get_client_info()); printf(LINUXCLIENT_VERSION, tsOsName, taos_get_client_info());
if (strlen(argv[i]) == 2) { if ((strlen(argv[i]) == 2)
|| (strncmp(argv[i], "--password", 10) == 0)) {
printf("Enter password: "); printf("Enter password: ");
taosSetConsoleEcho(false);
if (scanf("%20s", g_password) > 1) { if (scanf("%20s", g_password) > 1) {
fprintf(stderr, "password reading error\n"); fprintf(stderr, "password reading error\n");
} }
getchar(); taosSetConsoleEcho(true);
if (EOF == getchar()) {
fprintf(stderr, "getchar() return EOF\n");
}
} else { } else {
tstrncpy(g_password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE); tstrncpy(g_password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
strcpy(argv[i], "-p");
} }
arguments->password = g_password; arguments->password = g_password;
arguments->is_use_passwd = true; arguments->is_use_passwd = true;
...@@ -203,7 +210,7 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) { ...@@ -203,7 +210,7 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
argp_program_version = verType; argp_program_version = verType;
if (argc > 1) { if (argc > 1) {
parse_password(argc, argv, arguments); parse_args(argc, argv, arguments);
} }
argp_parse(&argp, argc, argv, 0, 0, arguments); argp_parse(&argp, argc, argv, 0, 0, arguments);
......
...@@ -68,7 +68,7 @@ void printHelp() { ...@@ -68,7 +68,7 @@ void printHelp() {
exit(EXIT_SUCCESS); exit(EXIT_SUCCESS);
} }
char g_password[MAX_PASSWORD_SIZE]; char g_password[SHELL_MAX_PASSWORD_LEN];
void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) { void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
for (int i = 1; i < argc; i++) { for (int i = 1; i < argc; i++) {
...@@ -82,20 +82,26 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) { ...@@ -82,20 +82,26 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
} }
} }
// for password // for password
else if (strncmp(argv[i], "-p", 2) == 0) { else if ((strncmp(argv[i], "-p", 2) == 0)
|| (strncmp(argv[i], "--password", 10) == 0)) {
arguments->is_use_passwd = true; arguments->is_use_passwd = true;
strcpy(tsOsName, "Windows"); strcpy(tsOsName, "Windows");
printf(WINCLIENT_VERSION, tsOsName, taos_get_client_info()); printf(WINCLIENT_VERSION, tsOsName, taos_get_client_info());
if (strlen(argv[i]) == 2) { if ((strlen(argv[i]) == 2)
|| (strncmp(argv[i], "--password", 10) == 0)) {
printf("Enter password: "); printf("Enter password: ");
taosSetConsoleEcho(false);
if (scanf("%s", g_password) > 1) { if (scanf("%s", g_password) > 1) {
fprintf(stderr, "password read error!\n"); fprintf(stderr, "password read error!\n");
} }
taosSetConsoleEcho(true);
getchar(); getchar();
} else { } else {
tstrncpy(g_password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE); tstrncpy(g_password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
} }
arguments->password = g_password; arguments->password = g_password;
strcpy(argv[i], "");
argc -= 1;
} }
// for management port // for management port
else if (strcmp(argv[i], "-P") == 0) { else if (strcmp(argv[i], "-P") == 0) {
......
...@@ -69,7 +69,6 @@ extern char configDir[]; ...@@ -69,7 +69,6 @@ extern char configDir[];
#define COL_BUFFER_LEN ((TSDB_COL_NAME_LEN + 15) * TSDB_MAX_COLUMNS) #define COL_BUFFER_LEN ((TSDB_COL_NAME_LEN + 15) * TSDB_MAX_COLUMNS)
#define MAX_USERNAME_SIZE 64 #define MAX_USERNAME_SIZE 64
#define MAX_PASSWORD_SIZE 20
#define MAX_HOSTNAME_SIZE 253 // https://man7.org/linux/man-pages/man7/hostname.7.html #define MAX_HOSTNAME_SIZE 253 // https://man7.org/linux/man-pages/man7/hostname.7.html
#define MAX_TB_NAME_SIZE 64 #define MAX_TB_NAME_SIZE 64
#define MAX_DATA_SIZE (16*TSDB_MAX_COLUMNS)+20 // max record len: 16*MAX_COLUMNS, timestamp string and ,('') need extra space #define MAX_DATA_SIZE (16*TSDB_MAX_COLUMNS)+20 // max record len: 16*MAX_COLUMNS, timestamp string and ,('') need extra space
...@@ -208,7 +207,7 @@ typedef struct SArguments_S { ...@@ -208,7 +207,7 @@ typedef struct SArguments_S {
uint16_t port; uint16_t port;
uint16_t iface; uint16_t iface;
char * user; char * user;
char password[MAX_PASSWORD_SIZE]; char password[SHELL_MAX_PASSWORD_LEN];
char * database; char * database;
int replica; int replica;
char * tb_prefix; char * tb_prefix;
...@@ -356,7 +355,7 @@ typedef struct SDbs_S { ...@@ -356,7 +355,7 @@ typedef struct SDbs_S {
uint16_t port; uint16_t port;
char user[MAX_USERNAME_SIZE]; char user[MAX_USERNAME_SIZE];
char password[MAX_PASSWORD_SIZE]; char password[SHELL_MAX_PASSWORD_LEN];
char resultFile[MAX_FILE_NAME_LEN]; char resultFile[MAX_FILE_NAME_LEN];
bool use_metric; bool use_metric;
bool insert_only; bool insert_only;
...@@ -422,7 +421,7 @@ typedef struct SQueryMetaInfo_S { ...@@ -422,7 +421,7 @@ typedef struct SQueryMetaInfo_S {
uint16_t port; uint16_t port;
struct sockaddr_in serv_addr; struct sockaddr_in serv_addr;
char user[MAX_USERNAME_SIZE]; char user[MAX_USERNAME_SIZE];
char password[MAX_PASSWORD_SIZE]; char password[SHELL_MAX_PASSWORD_LEN];
char dbName[TSDB_DB_NAME_LEN]; char dbName[TSDB_DB_NAME_LEN];
char queryMode[SMALL_BUFF_LEN]; // taosc, rest char queryMode[SMALL_BUFF_LEN]; // taosc, rest
...@@ -660,7 +659,21 @@ static FILE * g_fpOfInsertResult = NULL; ...@@ -660,7 +659,21 @@ static FILE * g_fpOfInsertResult = NULL;
fprintf(stderr, "PERF: "fmt, __VA_ARGS__); } while(0) fprintf(stderr, "PERF: "fmt, __VA_ARGS__); } while(0)
#define errorPrint(fmt, ...) \ #define errorPrint(fmt, ...) \
do { fprintf(stderr, " \033[31m"); fprintf(stderr, "ERROR: "fmt, __VA_ARGS__); fprintf(stderr, " \033[0m"); } while(0) do {\
struct tm Tm, *ptm;\
struct timeval timeSecs; \
time_t curTime;\
gettimeofday(&timeSecs, NULL); \
curTime = timeSecs.tv_sec;\
ptm = localtime_r(&curTime, &Tm);\
fprintf(stderr, " \033[31m");\
fprintf(stderr, "%02d/%02d %02d:%02d:%02d.%06d %08" PRId64 " ",\
ptm->tm_mon + 1, ptm->tm_mday, ptm->tm_hour,\
ptm->tm_min, ptm->tm_sec, (int32_t)timeSecs.tv_usec,\
taosGetSelfPthreadId());\
fprintf(stderr, "ERROR: "fmt, __VA_ARGS__);\
fprintf(stderr, " \033[0m");\
} while(0)
// for strncpy buffer overflow // for strncpy buffer overflow
#define min(a, b) (((a) < (b)) ? (a) : (b)) #define min(a, b) (((a) < (b)) ? (a) : (b))
...@@ -738,12 +751,13 @@ static void printHelp() { ...@@ -738,12 +751,13 @@ static void printHelp() {
"Query mode -- 0: SYNC, 1: ASYNC. Default is SYNC."); "Query mode -- 0: SYNC, 1: ASYNC. Default is SYNC.");
printf("%s%s%s%s\n", indent, "-b", indent, printf("%s%s%s%s\n", indent, "-b", indent,
"The data_type of columns, default: FLOAT, INT, FLOAT."); "The data_type of columns, default: FLOAT, INT, FLOAT.");
printf("%s%s%s%s\n", indent, "-w", indent, printf("%s%s%s%s%d\n", indent, "-w", indent,
"The length of data_type 'BINARY' or 'NCHAR'. Default is 16"); "The length of data_type 'BINARY' or 'NCHAR'. Default is ",
g_args.len_of_binary);
printf("%s%s%s%s%d%s%d\n", indent, "-l", indent, printf("%s%s%s%s%d%s%d\n", indent, "-l", indent,
"The number of columns per record. Default is ", "The number of columns per record. Demo mode by default is ",
DEFAULT_DATATYPE_NUM, DEFAULT_DATATYPE_NUM,
". Max values is ", " (float, int, float). Max values is ",
MAX_NUM_COLUMNS); MAX_NUM_COLUMNS);
printf("%s%s%s%s\n", indent, indent, indent, printf("%s%s%s%s\n", indent, indent, indent,
"All of the new column(s) type is INT. If use -b to specify column type, -l will be ignored."); "All of the new column(s) type is INT. If use -b to specify column type, -l will be ignored.");
...@@ -857,7 +871,7 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) { ...@@ -857,7 +871,7 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
} }
taosSetConsoleEcho(true); taosSetConsoleEcho(true);
} else { } else {
tstrncpy(arguments->password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE); tstrncpy(arguments->password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
} }
} else if (strcmp(argv[i], "-o") == 0) { } else if (strcmp(argv[i], "-o") == 0) {
if (argc == i+1) { if (argc == i+1) {
...@@ -3779,9 +3793,9 @@ static bool getMetaFromInsertJsonFile(cJSON* root) { ...@@ -3779,9 +3793,9 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
cJSON* password = cJSON_GetObjectItem(root, "password"); cJSON* password = cJSON_GetObjectItem(root, "password");
if (password && password->type == cJSON_String && password->valuestring != NULL) { if (password && password->type == cJSON_String && password->valuestring != NULL) {
tstrncpy(g_Dbs.password, password->valuestring, MAX_PASSWORD_SIZE); tstrncpy(g_Dbs.password, password->valuestring, SHELL_MAX_PASSWORD_LEN);
} else if (!password) { } else if (!password) {
tstrncpy(g_Dbs.password, "taosdata", MAX_PASSWORD_SIZE); tstrncpy(g_Dbs.password, "taosdata", SHELL_MAX_PASSWORD_LEN);
} }
cJSON* resultfile = cJSON_GetObjectItem(root, "result_file"); cJSON* resultfile = cJSON_GetObjectItem(root, "result_file");
...@@ -4515,9 +4529,9 @@ static bool getMetaFromQueryJsonFile(cJSON* root) { ...@@ -4515,9 +4529,9 @@ static bool getMetaFromQueryJsonFile(cJSON* root) {
cJSON* password = cJSON_GetObjectItem(root, "password"); cJSON* password = cJSON_GetObjectItem(root, "password");
if (password && password->type == cJSON_String && password->valuestring != NULL) { if (password && password->type == cJSON_String && password->valuestring != NULL) {
tstrncpy(g_queryInfo.password, password->valuestring, MAX_PASSWORD_SIZE); tstrncpy(g_queryInfo.password, password->valuestring, SHELL_MAX_PASSWORD_LEN);
} else if (!password) { } else if (!password) {
tstrncpy(g_queryInfo.password, "taosdata", MAX_PASSWORD_SIZE);; tstrncpy(g_queryInfo.password, "taosdata", SHELL_MAX_PASSWORD_LEN);;
} }
cJSON *answerPrompt = cJSON_GetObjectItem(root, "confirm_parameter_prompt"); // yes, no, cJSON *answerPrompt = cJSON_GetObjectItem(root, "confirm_parameter_prompt"); // yes, no,
...@@ -8825,7 +8839,7 @@ static void initOfInsertMeta() { ...@@ -8825,7 +8839,7 @@ static void initOfInsertMeta() {
tstrncpy(g_Dbs.host, "127.0.0.1", MAX_HOSTNAME_SIZE); tstrncpy(g_Dbs.host, "127.0.0.1", MAX_HOSTNAME_SIZE);
g_Dbs.port = 6030; g_Dbs.port = 6030;
tstrncpy(g_Dbs.user, TSDB_DEFAULT_USER, MAX_USERNAME_SIZE); tstrncpy(g_Dbs.user, TSDB_DEFAULT_USER, MAX_USERNAME_SIZE);
tstrncpy(g_Dbs.password, TSDB_DEFAULT_PASS, MAX_PASSWORD_SIZE); tstrncpy(g_Dbs.password, TSDB_DEFAULT_PASS, SHELL_MAX_PASSWORD_LEN);
g_Dbs.threadCount = 2; g_Dbs.threadCount = 2;
g_Dbs.use_metric = g_args.use_metric; g_Dbs.use_metric = g_args.use_metric;
...@@ -8838,7 +8852,7 @@ static void initOfQueryMeta() { ...@@ -8838,7 +8852,7 @@ static void initOfQueryMeta() {
tstrncpy(g_queryInfo.host, "127.0.0.1", MAX_HOSTNAME_SIZE); tstrncpy(g_queryInfo.host, "127.0.0.1", MAX_HOSTNAME_SIZE);
g_queryInfo.port = 6030; g_queryInfo.port = 6030;
tstrncpy(g_queryInfo.user, TSDB_DEFAULT_USER, MAX_USERNAME_SIZE); tstrncpy(g_queryInfo.user, TSDB_DEFAULT_USER, MAX_USERNAME_SIZE);
tstrncpy(g_queryInfo.password, TSDB_DEFAULT_PASS, MAX_PASSWORD_SIZE); tstrncpy(g_queryInfo.password, TSDB_DEFAULT_PASS, SHELL_MAX_PASSWORD_LEN);
} }
static void setParaFromArg() { static void setParaFromArg() {
...@@ -8852,7 +8866,7 @@ static void setParaFromArg() { ...@@ -8852,7 +8866,7 @@ static void setParaFromArg() {
tstrncpy(g_Dbs.user, g_args.user, MAX_USERNAME_SIZE); tstrncpy(g_Dbs.user, g_args.user, MAX_USERNAME_SIZE);
} }
tstrncpy(g_Dbs.password, g_args.password, MAX_PASSWORD_SIZE); tstrncpy(g_Dbs.password, g_args.password, SHELL_MAX_PASSWORD_LEN);
if (g_args.port) { if (g_args.port) {
g_Dbs.port = g_args.port; g_Dbs.port = g_args.port;
......
...@@ -62,6 +62,20 @@ typedef struct { ...@@ -62,6 +62,20 @@ typedef struct {
#define errorPrint(fmt, ...) \ #define errorPrint(fmt, ...) \
do { fprintf(stderr, "\033[31m"); fprintf(stderr, "ERROR: "fmt, __VA_ARGS__); fprintf(stderr, "\033[0m"); } while(0) do { fprintf(stderr, "\033[31m"); fprintf(stderr, "ERROR: "fmt, __VA_ARGS__); fprintf(stderr, "\033[0m"); } while(0)
static bool isStringNumber(char *input)
{
int len = strlen(input);
if (0 == len) {
return false;
}
for (int i = 0; i < len; i++) {
if (!isdigit(input[i]))
return false;
}
return true;
}
// -------------------------- SHOW DATABASE INTERFACE----------------------- // -------------------------- SHOW DATABASE INTERFACE-----------------------
enum _show_db_index { enum _show_db_index {
...@@ -243,19 +257,15 @@ static struct argp_option options[] = { ...@@ -243,19 +257,15 @@ static struct argp_option options[] = {
{"table-batch", 't', "TABLE_BATCH", 0, "Number of table dumpout into one output file. Default is 1.", 3}, {"table-batch", 't', "TABLE_BATCH", 0, "Number of table dumpout into one output file. Default is 1.", 3},
{"thread_num", 'T', "THREAD_NUM", 0, "Number of thread for dump in file. Default is 5.", 3}, {"thread_num", 'T', "THREAD_NUM", 0, "Number of thread for dump in file. Default is 5.", 3},
{"debug", 'g', 0, 0, "Print debug info.", 8}, {"debug", 'g', 0, 0, "Print debug info.", 8},
{"verbose", 'b', 0, 0, "Print verbose debug info.", 9},
{"performanceprint", 'm', 0, 0, "Print performance debug info.", 10},
{0} {0}
}; };
#define MAX_PASSWORD_SIZE 20
/* Used by main to communicate with parse_opt. */ /* Used by main to communicate with parse_opt. */
typedef struct arguments { typedef struct arguments {
// connection option // connection option
char *host; char *host;
char *user; char *user;
char password[MAX_PASSWORD_SIZE]; char password[SHELL_MAX_PASSWORD_LEN];
uint16_t port; uint16_t port;
char cversion[12]; char cversion[12];
uint16_t mysqlFlag; uint16_t mysqlFlag;
...@@ -432,7 +442,6 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) { ...@@ -432,7 +442,6 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
break; break;
// dump unit option // dump unit option
case 'A': case 'A':
g_args.all_databases = true;
break; break;
case 'D': case 'D':
g_args.databases = true; g_args.databases = true;
...@@ -477,6 +486,10 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) { ...@@ -477,6 +486,10 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
g_args.table_batch = atoi(arg); g_args.table_batch = atoi(arg);
break; break;
case 'T': case 'T':
if (!isStringNumber(arg)) {
errorPrint("%s", "\n\t-T need a number following!\n");
exit(EXIT_FAILURE);
}
g_args.thread_num = atoi(arg); g_args.thread_num = atoi(arg);
break; break;
case OPT_ABORT: case OPT_ABORT:
...@@ -555,11 +568,14 @@ static void parse_precision_first( ...@@ -555,11 +568,14 @@ static void parse_precision_first(
} }
} }
static void parse_password( static void parse_args(
int argc, char *argv[], SArguments *arguments) { int argc, char *argv[], SArguments *arguments) {
for (int i = 1; i < argc; i++) { for (int i = 1; i < argc; i++) {
if (strncmp(argv[i], "-p", 2) == 0) { if ((strncmp(argv[i], "-p", 2) == 0)
if (strlen(argv[i]) == 2) { || (strncmp(argv[i], "--password", 10) == 0)) {
if ((strlen(argv[i]) == 2)
|| (strncmp(argv[i], "--password", 10) == 0)) {
printf("Enter password: "); printf("Enter password: ");
taosSetConsoleEcho(false); taosSetConsoleEcho(false);
if(scanf("%20s", arguments->password) > 1) { if(scanf("%20s", arguments->password) > 1) {
...@@ -567,10 +583,22 @@ static void parse_password( ...@@ -567,10 +583,22 @@ static void parse_password(
} }
taosSetConsoleEcho(true); taosSetConsoleEcho(true);
} else { } else {
tstrncpy(arguments->password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE); tstrncpy(arguments->password, (char *)(argv[i] + 2),
SHELL_MAX_PASSWORD_LEN);
strcpy(argv[i], "-p");
} }
argv[i] = ""; } else if (strcmp(argv[i], "-gg") == 0) {
arguments->verbose_print = true;
strcpy(argv[i], "");
} else if (strcmp(argv[i], "-PP") == 0) {
arguments->performance_print = true;
strcpy(argv[i], "");
} else if (strcmp(argv[i], "-A") == 0) {
g_args.all_databases = true;
} else {
continue;
} }
} }
} }
...@@ -639,7 +667,7 @@ int main(int argc, char *argv[]) { ...@@ -639,7 +667,7 @@ int main(int argc, char *argv[]) {
if (argc > 1) { if (argc > 1) {
parse_precision_first(argc, argv, &g_args); parse_precision_first(argc, argv, &g_args);
parse_timestamp(argc, argv, &g_args); parse_timestamp(argc, argv, &g_args);
parse_password(argc, argv, &g_args); parse_args(argc, argv, &g_args);
} }
argp_parse(&argp, argc, argv, 0, 0, &g_args); argp_parse(&argp, argc, argv, 0, 0, &g_args);
......
@@ -63,12 +63,12 @@ int taosSetConsoleEcho(bool on)
  }

  if (on)
    term.c_lflag |= ECHOFLAGS;
  else
    term.c_lflag &= ~ECHOFLAGS;

  err = tcsetattr(STDIN_FILENO, TCSAFLUSH, &term);
  if (err == -1 || err == EINTR) {
    perror("Cannot set the attribution of the terminal");
    return -1;
  }
......
@@ -43,9 +43,7 @@ typedef int32_t (*__block_search_fn_t)(char* data, int32_t num, int64_t key, int
#define GET_NUM_OF_RESULTS(_r) (((_r)->outputBuf) == NULL? 0:((_r)->outputBuf)->info.rows)

#define NEEDTO_COMPRESS_QUERY(size) ((size) > tsCompressColData? 1 : 0)

enum {
  // when query starts to execute, this status will set
......
@@ -357,7 +357,7 @@ int32_t qDumpRetrieveResult(qinfo_t qinfo, SRetrieveTableRsp **pRsp, int32_t *co
  }

  (*pRsp)->precision = htons(pQueryAttr->precision);
  (*pRsp)->compressed = (int8_t)((tsCompressColData != -1) && checkNeedToCompressQueryCol(pQInfo));

  if (GET_NUM_OF_RESULTS(&(pQInfo->runtimeEnv)) > 0 && pQInfo->code == TSDB_CODE_SUCCESS) {
    doDumpQueryResult(pQInfo, (*pRsp)->data, (*pRsp)->compressed, &compLen);
@@ -367,8 +367,12 @@ int32_t qDumpRetrieveResult(qinfo_t qinfo, SRetrieveTableRsp **pRsp, int32_t *co
  if ((*pRsp)->compressed && compLen != 0) {
    int32_t numOfCols = pQueryAttr->pExpr2 ? pQueryAttr->numOfExpr2 : pQueryAttr->numOfOutput;
    int32_t origSize = pQueryAttr->resultRowSize * s;
    int32_t compSize = compLen + numOfCols * sizeof(int32_t);
    *contLen = *contLen - origSize + compSize;
    *pRsp = (SRetrieveTableRsp *)rpcReallocCont(*pRsp, *contLen);
    qDebug("QInfo:0x%"PRIx64" compress col data, uncompressed size:%d, compressed size:%d, ratio:%.2f",
        pQInfo->qId, origSize, compSize, (float)origSize / (float)compSize);
  }

  (*pRsp)->compLen = htonl(compLen);
......
@@ -42,8 +42,9 @@ typedef struct {
typedef struct {
  pthread_rwlock_t lock;

  SFSStatus* cstatus;        // current status
  SHashObj*  metaCache;      // meta cache
  SHashObj*  metaCacheComp;  // meta cache for compact
  bool       intxn;
  SFSStatus* nstatus;        // new status
} STsdbFS;
......
@@ -14,6 +14,8 @@
 */
#include "tsdbint.h"

extern int32_t tsTsdbMetaCompactRatio;

#define TSDB_MAX_SUBBLOCKS 8
static FORCE_INLINE int TSDB_KEY_FID(TSKEY key, int32_t days, int8_t precision) {
  if (key < 0) {
@@ -55,8 +57,9 @@ typedef struct {
#define TSDB_COMMIT_TXN_VERSION(ch) FS_TXN_VERSION(REPO_FS(TSDB_COMMIT_REPO(ch)))

static int  tsdbCommitMeta(STsdbRepo *pRepo);
static int  tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen, bool compact);
static int  tsdbDropMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid);
static int  tsdbCompactMetaFile(STsdbRepo *pRepo, STsdbFS *pfs, SMFile *pMFile);
static int  tsdbCommitTSData(STsdbRepo *pRepo);
static void tsdbStartCommit(STsdbRepo *pRepo);
static void tsdbEndCommit(STsdbRepo *pRepo, int eno);
@@ -261,6 +264,35 @@ int tsdbWriteBlockIdx(SDFile *pHeadf, SArray *pIdxA, void **ppBuf) {
// =================== Commit Meta Data
static int tsdbInitCommitMetaFile(STsdbRepo *pRepo, SMFile* pMf, bool open) {
STsdbFS * pfs = REPO_FS(pRepo);
SMFile * pOMFile = pfs->cstatus->pmf;
SDiskID did;
// Create/Open a meta file or open the existing file
if (pOMFile == NULL) {
// Create a new meta file
did.level = TFS_PRIMARY_LEVEL;
did.id = TFS_PRIMARY_ID;
tsdbInitMFile(pMf, did, REPO_ID(pRepo), FS_TXN_VERSION(REPO_FS(pRepo)));
if (open && tsdbCreateMFile(pMf, true) < 0) {
tsdbError("vgId:%d failed to create META file since %s", REPO_ID(pRepo), tstrerror(terrno));
return -1;
}
tsdbInfo("vgId:%d meta file %s is created to commit", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMf));
} else {
tsdbInitMFileEx(pMf, pOMFile);
if (open && tsdbOpenMFile(pMf, O_WRONLY) < 0) {
tsdbError("vgId:%d failed to open META file since %s", REPO_ID(pRepo), tstrerror(terrno));
return -1;
}
}
return 0;
}
static int tsdbCommitMeta(STsdbRepo *pRepo) { static int tsdbCommitMeta(STsdbRepo *pRepo) {
STsdbFS * pfs = REPO_FS(pRepo); STsdbFS * pfs = REPO_FS(pRepo);
SMemTable *pMem = pRepo->imem; SMemTable *pMem = pRepo->imem;
...@@ -269,34 +301,25 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) { ...@@ -269,34 +301,25 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
SActObj * pAct = NULL; SActObj * pAct = NULL;
SActCont * pCont = NULL; SActCont * pCont = NULL;
SListNode *pNode = NULL; SListNode *pNode = NULL;
SDiskID did;
ASSERT(pOMFile != NULL || listNEles(pMem->actList) > 0); ASSERT(pOMFile != NULL || listNEles(pMem->actList) > 0);
if (listNEles(pMem->actList) <= 0) { if (listNEles(pMem->actList) <= 0) {
// no meta data to commit, just keep the old meta file // no meta data to commit, just keep the old meta file
tsdbUpdateMFile(pfs, pOMFile); tsdbUpdateMFile(pfs, pOMFile);
return 0; if (tsTsdbMetaCompactRatio > 0) {
} else { if (tsdbInitCommitMetaFile(pRepo, &mf, false) < 0) {
// Create/Open a meta file or open the existing file
if (pOMFile == NULL) {
// Create a new meta file
did.level = TFS_PRIMARY_LEVEL;
did.id = TFS_PRIMARY_ID;
tsdbInitMFile(&mf, did, REPO_ID(pRepo), FS_TXN_VERSION(REPO_FS(pRepo)));
if (tsdbCreateMFile(&mf, true) < 0) {
tsdbError("vgId:%d failed to create META file since %s", REPO_ID(pRepo), tstrerror(terrno));
return -1; return -1;
} }
int ret = tsdbCompactMetaFile(pRepo, pfs, &mf);
if (ret < 0) tsdbError("compact meta file error");
tsdbInfo("vgId:%d meta file %s is created to commit", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(&mf)); return ret;
} else { }
tsdbInitMFileEx(&mf, pOMFile); return 0;
if (tsdbOpenMFile(&mf, O_WRONLY) < 0) { } else {
tsdbError("vgId:%d failed to open META file since %s", REPO_ID(pRepo), tstrerror(terrno)); if (tsdbInitCommitMetaFile(pRepo, &mf, true) < 0) {
return -1; return -1;
}
} }
} }
...@@ -305,7 +328,7 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) { ...@@ -305,7 +328,7 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
pAct = (SActObj *)pNode->data; pAct = (SActObj *)pNode->data;
if (pAct->act == TSDB_UPDATE_META) { if (pAct->act == TSDB_UPDATE_META) {
pCont = (SActCont *)POINTER_SHIFT(pAct, sizeof(SActObj)); pCont = (SActCont *)POINTER_SHIFT(pAct, sizeof(SActObj));
if (tsdbUpdateMetaRecord(pfs, &mf, pAct->uid, (void *)(pCont->cont), pCont->len) < 0) { if (tsdbUpdateMetaRecord(pfs, &mf, pAct->uid, (void *)(pCont->cont), pCont->len, false) < 0) {
tsdbError("vgId:%d failed to update META record, uid %" PRIu64 " since %s", REPO_ID(pRepo), pAct->uid, tsdbError("vgId:%d failed to update META record, uid %" PRIu64 " since %s", REPO_ID(pRepo), pAct->uid,
tstrerror(terrno)); tstrerror(terrno));
tsdbCloseMFile(&mf); tsdbCloseMFile(&mf);
...@@ -338,6 +361,10 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) { ...@@ -338,6 +361,10 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
tsdbCloseMFile(&mf); tsdbCloseMFile(&mf);
tsdbUpdateMFile(pfs, &mf); tsdbUpdateMFile(pfs, &mf);
if (tsTsdbMetaCompactRatio > 0 && tsdbCompactMetaFile(pRepo, pfs, &mf) < 0) {
tsdbError("compact meta file error");
}
return 0; return 0;
} }
...@@ -375,7 +402,7 @@ void tsdbGetRtnSnap(STsdbRepo *pRepo, SRtn *pRtn) { ...@@ -375,7 +402,7 @@ void tsdbGetRtnSnap(STsdbRepo *pRepo, SRtn *pRtn) {
pRtn->minFid, pRtn->midFid, pRtn->maxFid); pRtn->minFid, pRtn->midFid, pRtn->maxFid);
} }
static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen) { static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen, bool compact) {
char buf[64] = "\0"; char buf[64] = "\0";
void * pBuf = buf; void * pBuf = buf;
SKVRecord rInfo; SKVRecord rInfo;
...@@ -401,13 +428,18 @@ static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void ...@@ -401,13 +428,18 @@ static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void
} }
tsdbUpdateMFileMagic(pMFile, POINTER_SHIFT(cont, contLen - sizeof(TSCKSUM))); tsdbUpdateMFileMagic(pMFile, POINTER_SHIFT(cont, contLen - sizeof(TSCKSUM)));
SKVRecord *pRecord = taosHashGet(pfs->metaCache, (void *)&uid, sizeof(uid));
SHashObj* cache = compact ? pfs->metaCacheComp : pfs->metaCache;
pMFile->info.nRecords++;
SKVRecord *pRecord = taosHashGet(cache, (void *)&uid, sizeof(uid));
if (pRecord != NULL) { if (pRecord != NULL) {
pMFile->info.tombSize += (pRecord->size + sizeof(SKVRecord)); pMFile->info.tombSize += (pRecord->size + sizeof(SKVRecord));
} else { } else {
pMFile->info.nRecords++; pMFile->info.nRecords++;
} }
taosHashPut(pfs->metaCache, (void *)(&uid), sizeof(uid), (void *)(&rInfo), sizeof(rInfo)); taosHashPut(cache, (void *)(&uid), sizeof(uid), (void *)(&rInfo), sizeof(rInfo));
return 0; return 0;
} }
...@@ -442,6 +474,129 @@ static int tsdbDropMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid) { ...@@ -442,6 +474,129 @@ static int tsdbDropMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid) {
return 0; return 0;
} }
static int tsdbCompactMetaFile(STsdbRepo *pRepo, STsdbFS *pfs, SMFile *pMFile) {
float delPercent = (float)(pMFile->info.nDels) / (float)(pMFile->info.nRecords);
float tombPercent = (float)(pMFile->info.tombSize) / (float)(pMFile->info.size);
float compactRatio = (float)(tsTsdbMetaCompactRatio)/100;
if (delPercent < compactRatio && tombPercent < compactRatio) {
return 0;
}
if (tsdbOpenMFile(pMFile, O_RDONLY) < 0) {
tsdbError("open meta file %s compact fail", pMFile->f.rname);
return -1;
}
tsdbInfo("begin compact tsdb meta file, ratio:%d, nDels:%" PRId64 ",nRecords:%" PRId64 ",tombSize:%" PRId64 ",size:%" PRId64,
tsTsdbMetaCompactRatio, pMFile->info.nDels,pMFile->info.nRecords,pMFile->info.tombSize,pMFile->info.size);
SMFile mf;
SDiskID did;
// first create tmp meta file
did.level = TFS_PRIMARY_LEVEL;
did.id = TFS_PRIMARY_ID;
tsdbInitMFile(&mf, did, REPO_ID(pRepo), FS_TXN_VERSION(REPO_FS(pRepo)) + 1);
if (tsdbCreateMFile(&mf, true) < 0) {
tsdbError("vgId:%d failed to create META file since %s", REPO_ID(pRepo), tstrerror(terrno));
return -1;
}
tsdbInfo("vgId:%d meta file %s is created to compact meta data", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(&mf));
// second iterator metaCache
int code = -1;
int64_t maxBufSize = 1024;
SKVRecord *pRecord;
void *pBuf = NULL;
pBuf = malloc((size_t)maxBufSize);
if (pBuf == NULL) {
goto _err;
}
// init Comp
assert(pfs->metaCacheComp == NULL);
pfs->metaCacheComp = taosHashInit(4096, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_NO_LOCK);
if (pfs->metaCacheComp == NULL) {
goto _err;
}
pRecord = taosHashIterate(pfs->metaCache, NULL);
while (pRecord) {
if (tsdbSeekMFile(pMFile, pRecord->offset + sizeof(SKVRecord), SEEK_SET) < 0) {
tsdbError("vgId:%d failed to seek file %s since %s", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMFile),
tstrerror(terrno));
goto _err;
}
if (pRecord->size > maxBufSize) {
maxBufSize = pRecord->size;
void* tmp = realloc(pBuf, (size_t)maxBufSize);
if (tmp == NULL) {
goto _err;
}
pBuf = tmp;
}
int nread = (int)tsdbReadMFile(pMFile, pBuf, pRecord->size);
if (nread < 0) {
tsdbError("vgId:%d failed to read file %s since %s", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMFile),
tstrerror(terrno));
goto _err;
}
if (nread < pRecord->size) {
tsdbError("vgId:%d failed to read file %s since file corrupted, expected read:%" PRId64 " actual read:%d",
REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMFile), pRecord->size, nread);
goto _err;
}
if (tsdbUpdateMetaRecord(pfs, &mf, pRecord->uid, pBuf, (int)pRecord->size, true) < 0) {
tsdbError("vgId:%d failed to update META record, uid %" PRIu64 " since %s", REPO_ID(pRepo), pRecord->uid,
tstrerror(terrno));
goto _err;
}
pRecord = taosHashIterate(pfs->metaCache, pRecord);
}
code = 0;
_err:
if (code == 0) TSDB_FILE_FSYNC(&mf);
tsdbCloseMFile(&mf);
tsdbCloseMFile(pMFile);
if (code == 0) {
// rename meta.tmp -> meta
tsdbInfo("vgId:%d meta file rename %s -> %s", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(&mf), TSDB_FILE_FULL_NAME(pMFile));
taosRename(mf.f.aname,pMFile->f.aname);
tstrncpy(mf.f.aname, pMFile->f.aname, TSDB_FILENAME_LEN);
tstrncpy(mf.f.rname, pMFile->f.rname, TSDB_FILENAME_LEN);
// update current meta file info
pfs->nstatus->pmf = NULL;
tsdbUpdateMFile(pfs, &mf);
taosHashCleanup(pfs->metaCache);
pfs->metaCache = pfs->metaCacheComp;
pfs->metaCacheComp = NULL;
} else {
// remove meta.tmp file
remove(mf.f.aname);
taosHashCleanup(pfs->metaCacheComp);
pfs->metaCacheComp = NULL;
}
tfree(pBuf);
ASSERT(mf.info.nDels == 0);
ASSERT(mf.info.tombSize == 0);
tsdbInfo("end compact tsdb meta file,code:%d,nRecords:%" PRId64 ",size:%" PRId64,
code,mf.info.nRecords,mf.info.size);
return code;
}
// =================== Commit Time-Series Data // =================== Commit Time-Series Data
static int tsdbCommitTSData(STsdbRepo *pRepo) { static int tsdbCommitTSData(STsdbRepo *pRepo) {
SMemTable *pMem = pRepo->imem; SMemTable *pMem = pRepo->imem;
......
@@ -215,6 +215,7 @@ STsdbFS *tsdbNewFS(STsdbCfg *pCfg) {
  }

  pfs->intxn = false;
  pfs->metaCacheComp = NULL;

  pfs->nstatus = tsdbNewFSStatus(maxFSet);
  if (pfs->nstatus == NULL) {
......
...@@ -16,11 +16,11 @@ ...@@ -16,11 +16,11 @@
#include "tsdbint.h" #include "tsdbint.h"
static const char *TSDB_FNAME_SUFFIX[] = { static const char *TSDB_FNAME_SUFFIX[] = {
"head", // TSDB_FILE_HEAD "head", // TSDB_FILE_HEAD
"data", // TSDB_FILE_DATA "data", // TSDB_FILE_DATA
"last", // TSDB_FILE_LAST "last", // TSDB_FILE_LAST
"", // TSDB_FILE_MAX "", // TSDB_FILE_MAX
"meta" // TSDB_FILE_META "meta", // TSDB_FILE_META
}; };
static void tsdbGetFilename(int vid, int fid, uint32_t ver, TSDB_FILE_T ftype, char *fname); static void tsdbGetFilename(int vid, int fid, uint32_t ver, TSDB_FILE_T ftype, char *fname);
......
...@@ -43,6 +43,7 @@ static int tsdbRemoveTableFromStore(STsdbRepo *pRepo, STable *pTable); ...@@ -43,6 +43,7 @@ static int tsdbRemoveTableFromStore(STsdbRepo *pRepo, STable *pTable);
static int tsdbRmTableFromMeta(STsdbRepo *pRepo, STable *pTable); static int tsdbRmTableFromMeta(STsdbRepo *pRepo, STable *pTable);
static int tsdbAdjustMetaTables(STsdbRepo *pRepo, int tid); static int tsdbAdjustMetaTables(STsdbRepo *pRepo, int tid);
static int tsdbCheckTableTagVal(SKVRow *pKVRow, STSchema *pSchema); static int tsdbCheckTableTagVal(SKVRow *pKVRow, STSchema *pSchema);
static int tsdbInsertNewTableAction(STsdbRepo *pRepo, STable* pTable);
static int tsdbAddSchema(STable *pTable, STSchema *pSchema); static int tsdbAddSchema(STable *pTable, STSchema *pSchema);
static void tsdbFreeTableSchema(STable *pTable); static void tsdbFreeTableSchema(STable *pTable);
...@@ -128,21 +129,16 @@ int tsdbCreateTable(STsdbRepo *repo, STableCfg *pCfg) { ...@@ -128,21 +129,16 @@ int tsdbCreateTable(STsdbRepo *repo, STableCfg *pCfg) {
tsdbUnlockRepoMeta(pRepo); tsdbUnlockRepoMeta(pRepo);
// Write to memtable action // Write to memtable action
// TODO: refactor duplicate codes
int tlen = 0;
void *pBuf = NULL;
if (newSuper || superChanged) { if (newSuper || superChanged) {
tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, super); // add insert new super table action
pBuf = tsdbAllocBytes(pRepo, tlen); if (tsdbInsertNewTableAction(pRepo, super) != 0) {
if (pBuf == NULL) goto _err; goto _err;
void *tBuf = tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, pBuf, super); }
ASSERT(POINTER_DISTANCE(tBuf, pBuf) == tlen); }
// add insert new table action
if (tsdbInsertNewTableAction(pRepo, table) != 0) {
goto _err;
} }
tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, table);
pBuf = tsdbAllocBytes(pRepo, tlen);
if (pBuf == NULL) goto _err;
void *tBuf = tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, pBuf, table);
ASSERT(POINTER_DISTANCE(tBuf, pBuf) == tlen);
if (tsdbCheckCommit(pRepo) < 0) return -1; if (tsdbCheckCommit(pRepo) < 0) return -1;
...@@ -383,7 +379,7 @@ int tsdbUpdateTableTagValue(STsdbRepo *repo, SUpdateTableTagValMsg *pMsg) { ...@@ -383,7 +379,7 @@ int tsdbUpdateTableTagValue(STsdbRepo *repo, SUpdateTableTagValMsg *pMsg) {
tdDestroyTSchemaBuilder(&schemaBuilder); tdDestroyTSchemaBuilder(&schemaBuilder);
} }
// Chage in memory // Change in memory
if (pNewSchema != NULL) { // change super table tag schema if (pNewSchema != NULL) { // change super table tag schema
TSDB_WLOCK_TABLE(pTable->pSuper); TSDB_WLOCK_TABLE(pTable->pSuper);
STSchema *pOldSchema = pTable->pSuper->tagSchema; STSchema *pOldSchema = pTable->pSuper->tagSchema;
...@@ -426,6 +422,21 @@ int tsdbUpdateTableTagValue(STsdbRepo *repo, SUpdateTableTagValMsg *pMsg) { ...@@ -426,6 +422,21 @@ int tsdbUpdateTableTagValue(STsdbRepo *repo, SUpdateTableTagValMsg *pMsg) {
} }
// ------------------ INTERNAL FUNCTIONS ------------------ // ------------------ INTERNAL FUNCTIONS ------------------
static int tsdbInsertNewTableAction(STsdbRepo *pRepo, STable* pTable) {
int tlen = 0;
void *pBuf = NULL;
tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, pTable);
pBuf = tsdbAllocBytes(pRepo, tlen);
if (pBuf == NULL) {
return -1;
}
void *tBuf = tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, pBuf, pTable);
ASSERT(POINTER_DISTANCE(tBuf, pBuf) == tlen);
return 0;
}
STsdbMeta *tsdbNewMeta(STsdbCfg *pCfg) { STsdbMeta *tsdbNewMeta(STsdbCfg *pCfg) {
STsdbMeta *pMeta = (STsdbMeta *)calloc(1, sizeof(*pMeta)); STsdbMeta *pMeta = (STsdbMeta *)calloc(1, sizeof(*pMeta));
if (pMeta == NULL) { if (pMeta == NULL) {
...@@ -617,6 +628,7 @@ int16_t tsdbGetLastColumnsIndexByColId(STable* pTable, int16_t colId) { ...@@ -617,6 +628,7 @@ int16_t tsdbGetLastColumnsIndexByColId(STable* pTable, int16_t colId) {
if (pTable->lastCols == NULL) { if (pTable->lastCols == NULL) {
return -1; return -1;
} }
// TODO: use binary search instead
for (int16_t i = 0; i < pTable->maxColNum; ++i) { for (int16_t i = 0; i < pTable->maxColNum; ++i) {
if (pTable->lastCols[i].colId == colId) { if (pTable->lastCols[i].colId == colId) {
return i; return i;
...@@ -734,10 +746,10 @@ void tsdbUpdateTableSchema(STsdbRepo *pRepo, STable *pTable, STSchema *pSchema, ...@@ -734,10 +746,10 @@ void tsdbUpdateTableSchema(STsdbRepo *pRepo, STable *pTable, STSchema *pSchema,
TSDB_WUNLOCK_TABLE(pCTable); TSDB_WUNLOCK_TABLE(pCTable);
if (insertAct) { if (insertAct) {
int tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, pCTable); if (tsdbInsertNewTableAction(pRepo, pCTable) != 0) {
void *buf = tsdbAllocBytes(pRepo, tlen); tsdbError("vgId:%d table %s tid %d uid %" PRIu64 " tsdbInsertNewTableAction fail", REPO_ID(pRepo), TABLE_CHAR_NAME(pTable),
ASSERT(buf != NULL); TABLE_TID(pTable), TABLE_UID(pTable));
tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, buf, pCTable); }
} }
} }
...@@ -1250,8 +1262,14 @@ static int tsdbEncodeTable(void **buf, STable *pTable) { ...@@ -1250,8 +1262,14 @@ static int tsdbEncodeTable(void **buf, STable *pTable) {
tlen += taosEncodeFixedU64(buf, TABLE_SUID(pTable)); tlen += taosEncodeFixedU64(buf, TABLE_SUID(pTable));
tlen += tdEncodeKVRow(buf, pTable->tagVal); tlen += tdEncodeKVRow(buf, pTable->tagVal);
} else { } else {
tlen += taosEncodeFixedU8(buf, (uint8_t)taosArrayGetSize(pTable->schema)); uint32_t arraySize = (uint32_t)taosArrayGetSize(pTable->schema);
for (int i = 0; i < taosArrayGetSize(pTable->schema); i++) { if(arraySize > UINT8_MAX) {
tlen += taosEncodeFixedU8(buf, 0);
tlen += taosEncodeFixedU32(buf, arraySize);
} else {
tlen += taosEncodeFixedU8(buf, (uint8_t)arraySize);
}
for (uint32_t i = 0; i < arraySize; i++) {
STSchema *pSchema = taosArrayGetP(pTable->schema, i); STSchema *pSchema = taosArrayGetP(pTable->schema, i);
tlen += tdEncodeSchema(buf, pSchema); tlen += tdEncodeSchema(buf, pSchema);
} }
...@@ -1284,8 +1302,11 @@ static void *tsdbDecodeTable(void *buf, STable **pRTable) { ...@@ -1284,8 +1302,11 @@ static void *tsdbDecodeTable(void *buf, STable **pRTable) {
buf = taosDecodeFixedU64(buf, &TABLE_SUID(pTable)); buf = taosDecodeFixedU64(buf, &TABLE_SUID(pTable));
buf = tdDecodeKVRow(buf, &(pTable->tagVal)); buf = tdDecodeKVRow(buf, &(pTable->tagVal));
} else { } else {
uint8_t nSchemas; uint32_t nSchemas = 0;
buf = taosDecodeFixedU8(buf, &nSchemas); buf = taosDecodeFixedU8(buf, (uint8_t *)&nSchemas);
if(nSchemas == 0) {
buf = taosDecodeFixedU32(buf, &nSchemas);
}
for (int i = 0; i < nSchemas; i++) { for (int i = 0; i < nSchemas; i++) {
STSchema *pSchema; STSchema *pSchema;
buf = tdDecodeSchema(buf, &pSchema); buf = tdDecodeSchema(buf, &pSchema);
...@@ -1485,4 +1506,4 @@ static void tsdbFreeTableSchema(STable *pTable) { ...@@ -1485,4 +1506,4 @@ static void tsdbFreeTableSchema(STable *pTable) {
taosArrayDestroy(pTable->schema); taosArrayDestroy(pTable->schema);
} }
} }
\ No newline at end of file
...@@ -199,16 +199,7 @@ int32_t compareLenPrefixedWStr(const void *pLeft, const void *pRight) { ...@@ -199,16 +199,7 @@ int32_t compareLenPrefixedWStr(const void *pLeft, const void *pRight) {
if (len1 != len2) { if (len1 != len2) {
return len1 > len2? 1:-1; return len1 > len2? 1:-1;
} else { } else {
char *pLeftTerm = (char *)tcalloc(len1 + 1, sizeof(char)); int32_t ret = memcmp((wchar_t*) pLeft, (wchar_t*) pRight, len1);
char *pRightTerm = (char *)tcalloc(len1 + 1, sizeof(char));
memcpy(pLeftTerm, varDataVal(pLeft), len1);
memcpy(pRightTerm, varDataVal(pRight), len2);
int32_t ret = wcsncmp((wchar_t*) pLeftTerm, (wchar_t*) pRightTerm, len1/TSDB_NCHAR_SIZE);
tfree(pLeftTerm);
tfree(pRightTerm);
if (ret == 0) { if (ret == 0) {
return 0; return 0;
} else { } else {
...@@ -517,17 +508,7 @@ int32_t doCompare(const char* f1, const char* f2, int32_t type, size_t size) { ...@@ -517,17 +508,7 @@ int32_t doCompare(const char* f1, const char* f2, int32_t type, size_t size) {
if (t1->len != t2->len) { if (t1->len != t2->len) {
return t1->len > t2->len? 1:-1; return t1->len > t2->len? 1:-1;
} }
int32_t ret = memcmp((wchar_t*) t1, (wchar_t*) t2, t2->len);
char *t1_term = (char *)tcalloc(t1->len + 1, sizeof(char));
char *t2_term = (char *)tcalloc(t2->len + 1, sizeof(char));
memcpy(t1_term, t1->data, t1->len);
memcpy(t2_term, t2->data, t2->len);
int32_t ret = wcsncmp((wchar_t*) t1_term, (wchar_t*) t2_term, t2->len/TSDB_NCHAR_SIZE);
tfree(t1_term);
tfree(t2_term);
if (ret == 0) { if (ret == 0) {
return ret; return ret;
} }
......
...@@ -14,23 +14,24 @@ ...@@ -14,23 +14,24 @@
*/ */
#include "tfunctional.h" #include "tfunctional.h"
#include "tarray.h"
tGenericSavedFunc* genericSavedFuncInit(GenericVaFunc func, int numOfArgs) { tGenericSavedFunc* genericSavedFuncInit(GenericVaFunc func, int numOfArgs) {
tGenericSavedFunc* pSavedFunc = malloc(sizeof(tGenericSavedFunc) + numOfArgs * (sizeof(void*))); tGenericSavedFunc* pSavedFunc = malloc(sizeof(tGenericSavedFunc) + numOfArgs * (sizeof(void*)));
if(pSavedFunc == NULL) return NULL;
pSavedFunc->func = func; pSavedFunc->func = func;
return pSavedFunc; return pSavedFunc;
} }
tI32SavedFunc* i32SavedFuncInit(I32VaFunc func, int numOfArgs) { tI32SavedFunc* i32SavedFuncInit(I32VaFunc func, int numOfArgs) {
tI32SavedFunc* pSavedFunc = malloc(sizeof(tI32SavedFunc) + numOfArgs * sizeof(void *)); tI32SavedFunc* pSavedFunc = malloc(sizeof(tI32SavedFunc) + numOfArgs * sizeof(void *));
if(pSavedFunc == NULL) return NULL;
pSavedFunc->func = func; pSavedFunc->func = func;
return pSavedFunc; return pSavedFunc;
} }
tVoidSavedFunc* voidSavedFuncInit(VoidVaFunc func, int numOfArgs) { tVoidSavedFunc* voidSavedFuncInit(VoidVaFunc func, int numOfArgs) {
tVoidSavedFunc* pSavedFunc = malloc(sizeof(tVoidSavedFunc) + numOfArgs * sizeof(void*)); tVoidSavedFunc* pSavedFunc = malloc(sizeof(tVoidSavedFunc) + numOfArgs * sizeof(void*));
if(pSavedFunc == NULL) return NULL;
pSavedFunc->func = func; pSavedFunc->func = func;
return pSavedFunc; return pSavedFunc;
} }
......
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
import threading
import subprocess
from random import choice
class TwoClients:
def initConnection(self):
self.host = "chr03"
self.user = "root"
self.password = "taosdata"
self.config = "/etc/taos/"
self.port = 6030
self.rowNum = 10
self.ts = 1537146000000
def run(self):
# new taos client
conn1 = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config )
print(conn1)
cur1 = conn1.cursor()
tdSql.init(cur1, True)
tdSql.execute("drop database if exists db3")
# insert data with taosc
for i in range(10):
os.system("taosdemo -f manualTest/TD-5114/insertDataDb3Replica2.json -y ")
# check data correct
tdSql.execute("show databases")
tdSql.execute("use db3")
tdSql.query("select count (tbname) from stb0")
tdSql.checkData(0, 0, 20000)
tdSql.query("select count (*) from stb0")
tdSql.checkData(0, 0, 4000000)
# insert data with the python connector; to use this case, uncomment the code below.
# for x in range(10):
# dataType= [ "tinyint", "smallint", "int", "bigint", "float", "double", "bool", " binary(20)", "nchar(20)", "tinyint unsigned", "smallint unsigned", "int unsigned", "bigint unsigned"]
# tdSql.execute("drop database if exists db3")
# tdSql.execute("create database db3 keep 3650 replica 2 ")
# tdSql.execute("use db3")
# tdSql.execute('''create table test(ts timestamp, col0 tinyint, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
# col7 bool, col8 binary(20), col9 nchar(20), col10 tinyint unsigned, col11 smallint unsigned, col12 int unsigned, col13 bigint unsigned) tags(loc nchar(3000), tag1 int)''')
# rowNum2= 988
# for i in range(rowNum2):
# tdSql.execute("alter table test add column col%d %s ;" %( i+14, choice(dataType)) )
# rowNum3= 988
# for i in range(rowNum3):
# tdSql.execute("alter table test drop column col%d ;" %( i+14) )
# self.rowNum = 50
# self.rowNum2 = 2000
# self.ts = 1537146000000
# for j in range(self.rowNum2):
# tdSql.execute("create table test%d using test tags('beijing%d', 10)" % (j,j) )
# for i in range(self.rowNum):
# tdSql.execute("insert into test%d values(%d, %d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
# % (j, self.ts + i*1000, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
# # check data correct
# tdSql.execute("show databases")
# tdSql.execute("use db3")
# tdSql.query("select count (tbname) from test")
# tdSql.checkData(0, 0, 200)
# tdSql.query("select count (*) from test")
# tdSql.checkData(0, 0, 200000)
# delete useless file
testcaseFilename = os.path.split(__file__)[-1]
os.system("rm -rf ./insert_res.txt")
os.system("rm -rf manualTest/TD-5114/%s.sql" % testcaseFilename )
clients = TwoClients()
clients.initConnection()
# clients.getBuildPath()
clients.run()
\ No newline at end of file
{
"filetype": "insert",
"cfgdir": "/etc/taos",
"host": "127.0.0.1",
"port": 6030,
"user": "root",
"password": "taosdata",
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 0,
"num_of_records_per_req": 3000,
"max_sql_len": 1024000,
"databases": [{
"dbinfo": {
"name": "db3",
"drop": "yes",
"replica": 2,
"days": 10,
"cache": 50,
"blocks": 8,
"precision": "ms",
"keep": 365,
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
"update": 0
},
"super_tables": [{
"name": "stb0",
"child_table_exists":"no",
"childtable_count": 20000,
"childtable_prefix": "stb0_",
"auto_create_table": "no",
"batch_create_tbl_num": 1000,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 2000,
"childtable_limit": 0,
"childtable_offset":0,
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
}]
}]
}
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
self.ts = 1537146000000
def run(self):
tdSql.prepare()
print("======= Verify filter for bool, nchar and binary type =========")
tdLog.debug(
"create table st(ts timestamp, tbcol1 bool, tbcol2 binary(10), tbcol3 nchar(20), tbcol4 tinyint, tbcol5 smallint, tbcol6 int, tbcol7 bigint, tbcol8 float, tbcol9 double) tags(tagcol1 bool, tagcol2 binary(10), tagcol3 nchar(10))")
tdSql.execute(
"create table st(ts timestamp, tbcol1 bool, tbcol2 binary(10), tbcol3 nchar(20), tbcol4 tinyint, tbcol5 smallint, tbcol6 int, tbcol7 bigint, tbcol8 float, tbcol9 double) tags(tagcol1 bool, tagcol2 binary(10), tagcol3 nchar(10))")
tdSql.execute("create table st1 using st tags(true, 'table1', '水表')")
for i in range(1, 6):
tdSql.execute(
"insert into st1 values(%d, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d, %f, %f)" %
(self.ts + i, i %
2, i, i,
i, i, i, i, 1.0, 1.0))
# =============Data type keywords cannot be used in filter====================
# timestamp
tdSql.error("select * from st where timestamp = 1629417600")
# bool
tdSql.error("select * from st where bool = false")
#binary
tdSql.error("select * from st where binary = 'taosdata'")
# nchar
tdSql.error("select * from st where nchar = '涛思数据'")
# tinyint
tdSql.error("select * from st where tinyint = 127")
# smallint
tdSql.error("select * from st where smallint = 32767")
# int
tdSql.error("select * from st where INTEGER = 2147483647")
tdSql.error("select * from st where int = 2147483647")
# bigint
tdSql.error("select * from st where bigint = 2147483647")
# float
tdSql.error("select * from st where float = 3.4E38")
# double
tdSql.error("select * from st where double = 1.7E308")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())