Commit 60a996cb authored by W wpan

Merge branch 'develop' into feature/TD-5925

......@@ -44,13 +44,15 @@ TDengine is an efficient platform for storing, querying, and analyzing time-series big data, purpose-built
* [SQL Functions](/taos-sql#functions): aggregate, selection, and computation functions such as avg, min, diff
* [Window-Based Aggregation](/taos-sql#aggregation): slice table data by time window or other criteria, then aggregate it to reduce dimensionality
* [Boundary Limits](/taos-sql#limitation): limits on databases, tables, SQL, and more
* [UDF](/taos-sql/udf): how to create and manage user-defined functions
* [Error Codes](/taos-sql/error-code): TDengine 2.0 error codes and their corresponding decimal codes
## [Efficient Data Writing](/insert)
* [SQL Writing](/insert#sql): use the SQL INSERT command to write one or more records into one or more tables
* [Prometheus Writing](/insert#prometheus): configure Prometheus to write data directly, without writing any code
* [Telegraf Writing](/insert#telegraf): configure Telegraf to write collected data directly, without writing any code
* [SQL Writing](/insert#sql): use the SQL INSERT command to write one or more records into one or more tables
* [Schemaless Writing](/insert#schemaless): write data directly without creating tables in advance; the metadata structure is maintained automatically
* [Prometheus Writing](/insert#prometheus): configure Prometheus to write data directly, without writing any code
* [Telegraf Writing](/insert#telegraf): configure Telegraf to write collected data directly, without writing any code
* [EMQ X Broker](/insert#emq): configure EMQ X to write MQTT data directly, without writing any code
* [HiveMQ Broker](/insert#hivemq): configure HiveMQ to write MQTT data directly, without writing any code
......
......@@ -56,7 +56,6 @@ measurement,tag_set field_set timestamp
- the suffix i16 denotes SMALLINT (INT16);
- the suffix i32 denotes INT (INT32);
- the suffix i64 denotes BIGINT (INT64);
- the suffix b denotes BOOL.
* t, T, true, True, TRUE, f, F, false, False are treated directly as BOOL values.
The timestamp in the timestamp position declares its precision through a suffix, as follows:
......@@ -72,13 +71,15 @@ The timestamp in the timestamp position declares its precision through a suffix, as follows:
st,t1=3i64,t2=4f64,t3="t3" c1=3i64,c3=L"passit",c2=false,c4=4f64 1626006833639000000ns
```
Note that using the wrong letter case in a data-type suffix, or specifying an invalid data type for a value, may trigger an error and cause the write to fail.
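For illustration only, the following minimal C sketch composes the example line shown above as a string, applying the type suffixes and the `ns` timestamp suffix described in this section; how the string is then handed to the schemaless insert entry point of your connector is outside the scope of this sketch.
```c
#include <stdio.h>

int main(void) {
    // Nanosecond timestamp, declared with the "ns" suffix inside the line itself.
    long long ts_ns = 1626006833639000000LL;
    char line[256];
    snprintf(line, sizeof(line),
             "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 %lldns",
             ts_ns);
    puts(line);  // the composed line matches the example shown above
    return 0;
}
```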
### How Schemaless processes the data
Schemaless handles line data according to the following rules:
1. When the tag_set contains an ID field, its value is used as the name of the data subtable.
2. When there is no ID field, the md5 hash of `measurement + tag_value1 + tag_value2 + ...` is used as the subtable name.
3. If the specified supertable does not exist, Schemaless creates it.
4. If the specified data subtable does not exist, Schemaless creates it using the tag values.
4. If the specified data subtable does not exist, Schemaless creates it under the subtable name determined by rule 1 or 2.
5. If a tag column or regular column referenced in a data row does not exist, Schemaless adds the corresponding tag column or regular column to the supertable (columns are only ever added, never removed).
6. If some tag columns or regular columns of the supertable are not assigned values in a data row, those columns are set to NULL for that row.
7. For BINARY or NCHAR columns, if a value supplied in a data row exceeds the column's length limit, Schemaless raises the column's maximum character length (it is only ever increased, never reduced) so that the data is stored in full.
......
......@@ -776,7 +776,7 @@ curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
- data: the actual rows returned, presented row by row; if no result set is returned, only [[affected_rows]] is present. The column order within each row of data matches exactly the order described in column_meta.
- rows: the total number of rows returned.
Column types used in column_meta:
<a class="anchor" id="column_meta"></a>Column types used in column_meta:
* 1:BOOL
* 2:TINYINT
* 3:SMALLINT
......
# UDF (User-Defined Functions)
In some application scenarios, the query logic an application needs cannot be expressed with the built-in functions alone. With the UDF facility, TDengine can load user-written processing code and use it in queries, which makes it easy to cover such special cases.
Starting with version 2.2.0.0, TDengine supports defining UDFs in C/C++. The following sections explain how to use UDFs with examples.
## Defining a UDF in C/C++
TDengine ships three UDF source-code examples:
* [add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c)
* [abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c)
* [sum_double.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/sum_double.c)
### A scalar function without intermediate variables
[add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c) is the simplest UDF implementation. For every item of the input column (possibly filtered by a WHERE clause) it outputs the value plus 1, and it requires the input column to be of type INT.
The processing logic is defined in the function `void add_one(char* data, short itype, short ibytes, int numOfRows, long long* ts, char* dataOutput, char* interBuf, char* tsOutput, int* numOfOutput, short otype, short obytes, SUdfInit* buf)`. A function of this kind, which implements the core computation of a UDF, is called udfNormalFunc, the scalar computation function over a block of rows. Note that the parameter list of udfNormalFunc is fixed; it is the convention by which data is exchanged with the engine.
- The parameters of udfNormalFunc mean the following:
* data: holds the input data.
* itype: the type of the input data, expressed as a small integer; the value for each data type is listed in [column types in column_meta](https://www.taosdata.com/cn/documentation/connector#column_meta). For example, 4 denotes INT.
* ibytes: the number of bytes occupied by each value of the input data.
* numOfRows: the total number of input rows.
* ts: the column of primary-key timestamps in the input.
* dataOutput: the buffer for the output data.
* interBuf: an intermediate scratch buffer used by the system; user logic normally does not need to touch interBuf.
* tsOutput: the column of primary-key timestamps in the output.
* numOfOutput: the number of output values.
* otype: the type of the output data; its values have the same meaning as those of itype.
* obytes: the number of bytes occupied by each value of the output data.
* buf: the buffer for intermediate state used during the computation.
The buf parameter relies on a user-defined struct, SUdfInit. In this example, because add_one needs no intermediate state, SUdfInit can be defined as an empty struct. A minimal sketch of such a udfNormalFunc follows.
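As a reference point, here is a minimal, hedged sketch of what such a udfNormalFunc can look like. It follows the add_one signature quoted above, but it is not the shipped source: NULL handling is omitted, and the engine is assumed to call it with INT input and output.
```c
// add_one keeps no state across calls, so SUdfInit can stay empty
// (gcc, the compiler used below, accepts an empty struct).
typedef struct SUdfInit {
} SUdfInit;

// udfNormalFunc sketch: write value + 1 for every INT in the row block.
void add_one(char* data, short itype, short ibytes, int numOfRows,
             long long* ts, char* dataOutput, char* interBuf, char* tsOutput,
             int* numOfOutput, short otype, short obytes, SUdfInit* buf) {
    if (itype != 4) {              // 4 denotes INT in the column_meta numbering
        *numOfOutput = 0;
        return;
    }
    int* in  = (int*)data;
    int* out = (int*)dataOutput;
    for (int i = 0; i < numOfRows; i++) {
        out[i] = in[i] + 1;        // the actual +1 transformation
    }
    *numOfOutput = numOfRows;
}
```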
### An aggregate function without intermediate variables
[abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c) implements an aggregate function that returns the maximum absolute value over a set of data.
The computation proceeds as follows: the data involved in the query is split into several row blocks, and udfNormalFunc (implemented here as `abs_max`) is called on each block; udfMergeFunc (here `abs_max_merge`) then aggregates the per-block results into the result for each subtable. If the query involves a supertable, udfFinalizeFunc (here `abs_max_finalize`) finally aggregates the subtable results into the result for the supertable.
Note that udfNormalFunc, udfMergeFunc, and udfFinalizeFunc must share the same name prefix, namely the actual function name of udfNormalFunc. The `_merge` suffix of udfMergeFunc and the `_finalize` suffix of udfFinalizeFunc are part of the UDF naming rules; the system dispatches to the corresponding functionality based on these suffixes.
- udfMergeFunc aggregates the intermediate results. In this example it is implemented as `void abs_max_merge(char* data, int32_t numOfRows, char* dataOutput, int32_t* numOfOutput, SUdfInit* buf)`, whose parameters mean the following:
* data: the outputs of udfNormalFunc concatenated together, which become the input of udfMergeFunc.
* numOfRows: the number of rows in data.
* dataOutput: the buffer for the output data.
* numOfOutput: the number of output values.
* buf: the buffer for intermediate state used during the computation.
- udfFinalizeFunc performs the final aggregation of the results. In this example it is implemented as `void abs_max_finalize(char* dataOutput, char* interBuf, int* numOfOutput, SUdfInit* buf)`, whose parameters mean the following:
* dataOutput: the buffer for the output data; for udfFinalizeFunc it also serves as the input.
* interBuf: the intermediate scratch buffer used by the system, with the same meaning as the identically named parameter of udfNormalFunc.
* numOfOutput: the number of output values.
* buf: the buffer for intermediate state used during the computation.
Because the computation of abs_max likewise needs no intermediate state, SUdfInit can again be defined as an empty struct. A hedged sketch of the merge step is shown below.
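The following sketch illustrates only the merge step, under the assumption that the per-block results produced by abs_max are non-negative int64_t (BIGINT) values; it shows the calling convention rather than reproducing abs_max.c.
```c
#include <stdint.h>

typedef struct SUdfInit {
} SUdfInit;                        // abs_max also needs no intermediate state

// udfMergeFunc sketch: combine the per-block maxima produced by abs_max.
void abs_max_merge(char* data, int32_t numOfRows, char* dataOutput,
                   int32_t* numOfOutput, SUdfInit* buf) {
    int64_t* in  = (int64_t*)data;
    int64_t* out = (int64_t*)dataOutput;
    int64_t  max = 0;
    for (int32_t i = 0; i < numOfRows; i++) {
        if (in[i] > max) {
            max = in[i];           // inputs are already absolute values
        }
    }
    *out = max;
    *numOfOutput = (numOfRows > 0) ? 1 : 0;
}
```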
### An aggregate function that uses intermediate variables
[sum_double.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/sum_double.c) is also an aggregate function; it outputs a multiple of the sum over a set of data.
For demonstration purposes, this UDF's implementation uses the intermediate variable buffer buf. In this source file, therefore, SUdfInit is no longer an empty struct but defines what the buffer actually stores.
Precisely because an intermediate buffer is used, it has to be initialized and released; this is handled by udfInitFunc (here named `sum_double_init`) and udfDestroyFunc (here named `sum_double_destroy`). Their names follow the same rule: the actual function name of udfNormalFunc as the prefix, with `_init` and `_destroy` as the suffixes. The system calls the functions with these names during initialization and resource release.
- udfInitFunc initializes the variables and contents of the intermediate buffer. In this example it is implemented as `int sum_double_init(SUdfInit* buf)`, whose parameter means the following:
* buf: the buffer for intermediate state used during the computation.
- udfDestroyFunc releases the variables and contents of the intermediate buffer. In this example it is implemented as `void sum_double_destroy(SUdfInit* buf)`, whose parameter means the following:
* buf: the buffer for intermediate state used during the computation.
Note that a UDF implementation must handle the intermediate buffer carefully: misuse can cause memory leaks, excessive resource consumption, or even crashes of the system service process. A hedged sketch of the init/destroy pair follows.
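To make the init/destroy pairing concrete, here is a sketch with a made-up SUdfInit layout; the real sum_double.c defines its own fields, so the struct members and the 0/-1 return convention below are illustrative assumptions only.
```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Hypothetical intermediate state: a running sum plus a heap scratch area.
typedef struct SUdfInit {
    int64_t sum;       // accumulated across row blocks
    void*   scratch;   // extra working memory owned by this UDF
} SUdfInit;

// udfInitFunc sketch: zero the state and allocate the working memory.
int sum_double_init(SUdfInit* buf) {
    memset(buf, 0, sizeof(*buf));
    buf->scratch = calloc(1, 64);
    return (buf->scratch != NULL) ? 0 : -1;
}

// udfDestroyFunc sketch: release everything acquired in the init hook;
// skipping this is exactly the kind of leak the warning above refers to.
void sum_double_destroy(SUdfInit* buf) {
    free(buf->scratch);
    buf->scratch = NULL;
}
```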
### Summary of the UDF implementation rules
Depending on the type of UDF being implemented, the set of functions to provide differs:
* Scalar function without intermediate variables: SUdfInit may be an empty struct; implement udfNormalFunc.
* Aggregate function without intermediate variables: SUdfInit may be an empty struct; implement udfNormalFunc, udfMergeFunc, and udfFinalizeFunc.
* Scalar function with intermediate variables: SUdfInit must be defined concretely; implement udfNormalFunc, udfInitFunc, and udfDestroyFunc.
* Aggregate function with intermediate variables: SUdfInit must be defined concretely; implement udfNormalFunc, udfInitFunc, udfDestroyFunc, udfMergeFunc, and udfFinalizeFunc.
## Compiling a UDF
The C source code of a user-defined function cannot be used by TDengine directly; it must first be compiled into a .so shared library, which is then loaded into TDengine.
For example, once the source file add_one.c has been prepared according to the rules of the previous section, the following command produces the dynamic library:
```bash
gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
```
The dynamic library add_one.so is now ready and can be used when creating the UDF as described below.
## Managing and using UDFs in the system
### Creating a UDF
Through SQL statements, users can load a UDF library located on the host where the client runs (this cannot be done through the RESTful interface or the HTTP management console). Once created, the functions can be used in SQL statements by all users of the current TDengine cluster. UDFs are stored on the system's MNode, so already created UDFs remain available even after TDengine is restarted.
When creating a UDF, scalar functions and aggregate functions must be distinguished. Declaring the wrong category at creation time may cause errors when the function is later invoked from SQL. A combined C-connector sketch follows the parameter lists below.
- Create a scalar function: `CREATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) bufsize(B);`
* X: the name by which the scalar function will later be invoked in SQL; it must match the actual name of udfNormalFunc in the implementation;
* Y: the path to the dynamic library that contains the UDF implementation (the path where the library file is stored on the host of the current client, usually pointing to a .so file);
* Z: the data type of the function's result, expressed as a number whose meaning matches the itype parameter of udfNormalFunc above;
* B: the size in bytes of the intermediate scratch buffer used by the system, between 0 and 512; 128 is usually a reasonable choice.
- Create an aggregate function: `CREATE AGGREGATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) bufsize(B);`
* X: the name by which the aggregate function will later be invoked in SQL; it must match the actual name of udfNormalFunc in the implementation;
* Y: the path to the dynamic library that contains the UDF implementation (the path where the library file is stored on the host of the current client, usually pointing to a .so file);
* Z: the data type of the function's result, expressed as a number whose meaning matches the itype parameter of udfNormalFunc above;
* B: the size in bytes of the intermediate scratch buffer used by the system, between 0 and 512; 128 is usually a reasonable choice.
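As an end-to-end sketch in C, the snippet below registers a scalar UDF and then calls it through `taos_query`. The connection parameters, the library path `/usr/local/lib/add_one.so`, the table name `t1`, and the concrete spelling of the CREATE FUNCTION statement are assumptions filled in from the template above; consult the SQL reference for the authoritative syntax.
```c
#include <stdio.h>
#include <taos.h>   // TDengine C connector

// Execute one statement and report failures without aborting the sketch.
static void run(TAOS* taos, const char* sql) {
    TAOS_RES* res = taos_query(taos, sql);
    if (taos_errno(res) != 0) {
        fprintf(stderr, "'%s' failed: %s\n", sql, taos_errstr(res));
    }
    taos_free_result(res);
}

int main(void) {
    TAOS* taos = taos_connect("localhost", "root", "taosdata", "demo", 0);
    if (taos == NULL) {
        fprintf(stderr, "failed to connect\n");
        return 1;
    }
    // X = add_one, Y = /usr/local/lib/add_one.so, Z = INT, B = 128 (see above).
    run(taos, "CREATE FUNCTION add_one AS \"/usr/local/lib/add_one.so\" "
              "OUTPUTTYPE INT bufsize 128;");
    run(taos, "SELECT add_one(c1) FROM t1;");
    taos_close(taos);
    return 0;
}
```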
### Managing UDFs
- Drop a user-defined function by name: `DROP FUNCTION ids(X);`
* X: has the same meaning as the X parameter of the CREATE statement.
- List all UDFs currently available in the system: `SHOW FUNCTIONS;`
### Invoking a UDF
In SQL statements, a user-defined function is invoked simply by the name it was given when it was created in the system. For example:
```sql
SELECT X(c) FROM table/stable;
```
calls the user-defined function named X on the column named c. In SQL statements, UDFs can be combined with WHERE clauses and other query features.
## Current limitations of UDFs
In the current version, UDFs are subject to the following restrictions:
1. UDFs cannot be mixed with the built-in SQL functions in the same query;
2. A UDF accepts only a single data column as input;
3. Once created, a UDF is persisted on the MNode;
4. UDFs cannot be created through the RESTful interface;
5. The function name a UDF uses in SQL must be identical to the prefix of the interface functions in the .so library, i.e. the actual name of udfNormalFunc, and it must not collide with any built-in TDengine SQL function.
......@@ -128,8 +128,12 @@ function install_lib() {
${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.1.dylib
${csudo} ln -s ${lib_link_dir}/libtaos.1.dylib ${lib_link_dir}/libtaos.dylib
fi
${csudo} ldconfig
if [ "$osType" != "Darwin" ]; then
${csudo} ldconfig
else
${csudo} update_dyld_shared_cache
fi
}
function install_header() {
......
......@@ -109,8 +109,7 @@ extern char configDir[];
#define DEFAULT_DATATYPE_NUM 1
#define DEFAULT_CHILDTABLES 10000
#define STMT_BIND_PARAM_BATCH 0
#define STMT_BIND_PARAM_BATCH 1
char* g_sampleDataBuf = NULL;
#if STMT_BIND_PARAM_BATCH == 1
......@@ -3576,10 +3575,8 @@ static int getChildNameOfSuperTableWithLimitAndOffset(TAOS * taos,
char* childTblName = *childTblNameOfSuperTbl;
if (offset >= 0) {
snprintf(limitBuf, 100, " limit %"PRId64" offset %"PRIu64"",
limit, offset);
}
snprintf(limitBuf, 100, " limit %"PRId64" offset %"PRIu64"",
limit, offset);
//get all child table name use cmd: select tbname from superTblName;
snprintf(command, 1024, "select tbname from %s.%s %s",
......@@ -5432,7 +5429,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
g_Dbs.db[i].superTbls[j].interlaceRows = g_Dbs.db[i].superTbls[j].insertRows;
}
} else if (!stbInterlaceRows) {
g_Dbs.db[i].superTbls[j].interlaceRows = 0; // 0 means progressive mode, > 0 mean interlace mode. max value is less or equ num_of_records_per_req
g_Dbs.db[i].superTbls[j].interlaceRows = g_args.interlaceRows; // 0 means progressive mode, > 0 mean interlace mode. max value is less or equ num_of_records_per_req
} else {
errorPrint(
"%s", "failed to read json, interlace rows input mistake\n");
......@@ -7450,6 +7447,7 @@ static int32_t prepareStmtWithoutStb(
g_args.binwidth,
pThreadInfo->time_precision,
NULL)) {
free(bindArray);
return -1;
}
}
......@@ -8104,7 +8102,7 @@ static int parseSampleToStmt(
SSuperTable *stbInfo, uint32_t timePrec)
{
pThreadInfo->sampleBindArray =
calloc(1, sizeof(char *) * MAX_SAMPLES);
(char *)calloc(1, sizeof(char *) * MAX_SAMPLES);
if (pThreadInfo->sampleBindArray == NULL) {
errorPrint2("%s() LN%d, Failed to allocate %"PRIu64" bind array buffer\n",
__func__, __LINE__,
......@@ -8172,6 +8170,7 @@ static int parseSampleToStmt(
timePrec,
bindBuffer)) {
free(bindBuffer);
free(bindArray);
return -1;
}
free(bindBuffer);
......@@ -8294,7 +8293,7 @@ static uint32_t execBindParam(
}
#endif
static int32_t prepareStbStmtWithSample(
static int32_t prepareStbStmt(
threadInfo *pThreadInfo,
char *tableName,
int64_t tableSeq,
......@@ -8468,21 +8467,18 @@ static void* syncWriteInterlaceStmtBatch(threadInfo *pThreadInfo, uint32_t inter
pThreadInfo->threadID, __func__, __LINE__);
int64_t insertRows;
uint64_t maxSqlLen;
int64_t nTimeStampStep;
int64_t timeStampStep;
uint64_t insert_interval;
SSuperTable* stbInfo = pThreadInfo->stbInfo;
if (stbInfo) {
insertRows = stbInfo->insertRows;
maxSqlLen = stbInfo->maxSqlLen;
nTimeStampStep = stbInfo->timeStampStep;
timeStampStep = stbInfo->timeStampStep;
insert_interval = stbInfo->insertInterval;
} else {
insertRows = g_args.insertRows;
maxSqlLen = g_args.max_sql_len;
nTimeStampStep = g_args.timestamp_step;
timeStampStep = g_args.timestamp_step;
insert_interval = g_args.insert_interval;
}
......@@ -8491,18 +8487,14 @@ static void* syncWriteInterlaceStmtBatch(threadInfo *pThreadInfo, uint32_t inter
pThreadInfo->start_table_from,
pThreadInfo->ntables, insertRows);
uint32_t batchPerTbl = interlaceRows;
uint32_t batchPerTblTimes;
uint64_t timesInterlace = (insertRows / interlaceRows) + 1;
uint32_t precalcBatch = interlaceRows;
if (interlaceRows > g_args.reqPerReq)
interlaceRows = g_args.reqPerReq;
if (precalcBatch > g_args.reqPerReq)
precalcBatch = g_args.reqPerReq;
if ((interlaceRows > 0) && (pThreadInfo->ntables > 1)) {
batchPerTblTimes =
g_args.reqPerReq / interlaceRows;
} else {
batchPerTblTimes = 1;
}
if (precalcBatch > MAX_SAMPLES)
precalcBatch = MAX_SAMPLES;
pThreadInfo->totalInsertRows = 0;
pThreadInfo->totalAffectedRows = 0;
......@@ -8515,27 +8507,27 @@ static void* syncWriteInterlaceStmtBatch(threadInfo *pThreadInfo, uint32_t inter
uint64_t endTs;
uint64_t tableSeq = pThreadInfo->start_table_from;
int64_t startTime = pThreadInfo->start_time;
int64_t startTime;
uint64_t generatedRecPerTbl = 0;
bool flagSleep = true;
uint64_t sleepTimeTotal = 0;
int percentComplete = 0;
int64_t totalRows = insertRows * pThreadInfo->ntables;
pThreadInfo->samplePos = 0;
while(pThreadInfo->totalInsertRows < pThreadInfo->ntables * insertRows) {
for (int64_t interlace = 0;
interlace < timesInterlace; interlace ++) {
if ((flagSleep) && (insert_interval)) {
st = taosGetTimestampMs();
flagSleep = false;
}
uint32_t recOfBatch = 0;
int64_t generated = 0;
int64_t samplePos;
int32_t generated;
for (uint64_t i = 0; i < batchPerTblTimes; i ++) {
for (; tableSeq < pThreadInfo->start_table_from + pThreadInfo->ntables; tableSeq ++) {
char tableName[TSDB_TABLE_NAME_LEN];
getTableName(tableName, pThreadInfo, tableSeq);
if (0 == strlen(tableName)) {
errorPrint2("[%d] %s() LN%d, getTableName return null\n",
......@@ -8543,127 +8535,121 @@ static void* syncWriteInterlaceStmtBatch(threadInfo *pThreadInfo, uint32_t inter
return NULL;
}
if (stbInfo) {
generated = prepareStbStmtWithSample(
pThreadInfo,
tableName,
tableSeq,
batchPerTbl,
insertRows, 0,
startTime,
&(pThreadInfo->samplePos));
} else {
debugPrint("[%d] %s() LN%d, tableName:%s, batch:%d startTime:%"PRId64"\n",
pThreadInfo->threadID,
__func__, __LINE__,
tableName, batchPerTbl, startTime);
generated = prepareStmtWithoutStb(
pThreadInfo,
tableName,
batchPerTbl,
insertRows, i,
startTime);
}
samplePos = pThreadInfo->samplePos;
startTime = pThreadInfo->start_time
+ interlace * interlaceRows * timeStampStep;
uint64_t remainRecPerTbl =
insertRows - interlaceRows * interlace;
uint64_t recPerTbl = 0;
debugPrint("[%d] %s() LN%d, generated records is %d\n",
pThreadInfo->threadID, __func__, __LINE__, generated);
if (generated < 0) {
errorPrint2("[%d] %s() LN%d, generated records is %d\n",
pThreadInfo->threadID, __func__, __LINE__, generated);
goto free_of_interlace_stmt;
} else if (generated == 0) {
break;
uint64_t remainPerInterlace;
if (remainRecPerTbl > interlaceRows) {
remainPerInterlace = interlaceRows;
} else {
remainPerInterlace = remainRecPerTbl;
}
tableSeq ++;
recOfBatch += batchPerTbl;
while(remainPerInterlace > 0) {
pThreadInfo->totalInsertRows += batchPerTbl;
verbosePrint("[%d] %s() LN%d batchPerTbl=%d recOfBatch=%d\n",
pThreadInfo->threadID, __func__, __LINE__,
batchPerTbl, recOfBatch);
if (tableSeq == pThreadInfo->start_table_from + pThreadInfo->ntables) {
// turn to first table
tableSeq = pThreadInfo->start_table_from;
generatedRecPerTbl += batchPerTbl;
uint32_t batch;
if (remainPerInterlace > precalcBatch) {
batch = precalcBatch;
} else {
batch = remainPerInterlace;
}
debugPrint("[%d] %s() LN%d, tableName:%s, batch:%d startTime:%"PRId64"\n",
pThreadInfo->threadID,
__func__, __LINE__,
tableName, batch, startTime);
startTime = pThreadInfo->start_time
+ generatedRecPerTbl * nTimeStampStep;
if (stbInfo) {
generated = prepareStbStmt(
pThreadInfo,
tableName,
tableSeq,
batch,
insertRows, 0,
startTime,
&samplePos);
} else {
generated = prepareStmtWithoutStb(
pThreadInfo,
tableName,
batch,
insertRows,
interlaceRows * interlace + recPerTbl,
startTime);
}
flagSleep = true;
if (generatedRecPerTbl >= insertRows)
debugPrint("[%d] %s() LN%d, generated records is %"PRId64"\n",
pThreadInfo->threadID, __func__, __LINE__, generated);
if (generated < 0) {
errorPrint2("[%d] %s() LN%d, generated records is %"PRId64"\n",
pThreadInfo->threadID, __func__, __LINE__, generated);
goto free_of_interlace_stmt;
} else if (generated == 0) {
break;
}
int64_t remainRows = insertRows - generatedRecPerTbl;
if ((remainRows > 0) && (batchPerTbl > remainRows))
batchPerTbl = remainRows;
recPerTbl += generated;
remainPerInterlace -= generated;
pThreadInfo->totalInsertRows += generated;
if (pThreadInfo->ntables * batchPerTbl < g_args.reqPerReq)
break;
}
verbosePrint("[%d] %s() LN%d totalInsertRows=%"PRIu64"\n",
pThreadInfo->threadID, __func__, __LINE__,
pThreadInfo->totalInsertRows);
verbosePrint("[%d] %s() LN%d generatedRecPerTbl=%"PRId64" insertRows=%"PRId64"\n",
pThreadInfo->threadID, __func__, __LINE__,
generatedRecPerTbl, insertRows);
startTs = taosGetTimestampUs();
if ((g_args.reqPerReq - recOfBatch) < batchPerTbl)
break;
}
int64_t affectedRows = execInsert(pThreadInfo, generated);
verbosePrint("[%d] %s() LN%d recOfBatch=%d totalInsertRows=%"PRIu64"\n",
pThreadInfo->threadID, __func__, __LINE__, recOfBatch,
pThreadInfo->totalInsertRows);
endTs = taosGetTimestampUs();
uint64_t delay = endTs - startTs;
performancePrint("%s() LN%d, insert execution time is %10.2f ms\n",
__func__, __LINE__, delay / 1000.0);
verbosePrint("[%d] %s() LN%d affectedRows=%"PRId64"\n",
pThreadInfo->threadID,
__func__, __LINE__, affectedRows);
startTs = taosGetTimestampUs();
if (delay > pThreadInfo->maxDelay) pThreadInfo->maxDelay = delay;
if (delay < pThreadInfo->minDelay) pThreadInfo->minDelay = delay;
pThreadInfo->cntDelay++;
pThreadInfo->totalDelay += delay;
if (recOfBatch == 0) {
errorPrint2("[%d] %s() LN%d Failed to insert records of batch %d\n",
pThreadInfo->threadID, __func__, __LINE__,
batchPerTbl);
if (batchPerTbl > 0) {
errorPrint("\tIf the batch is %d, the length of the SQL to insert a row must be less then %"PRId64"\n",
batchPerTbl, maxSqlLen / batchPerTbl);
}
goto free_of_interlace_stmt;
}
int64_t affectedRows = execInsert(pThreadInfo, recOfBatch);
if (generated != affectedRows) {
errorPrint2("[%d] %s() LN%d execInsert() insert %"PRId64", affected rows: %"PRId64"\n\n",
pThreadInfo->threadID, __func__, __LINE__,
generated, affectedRows);
goto free_of_interlace_stmt;
}
endTs = taosGetTimestampUs();
uint64_t delay = endTs - startTs;
performancePrint("%s() LN%d, insert execution time is %10.2f ms\n",
__func__, __LINE__, delay / 1000.0);
verbosePrint("[%d] %s() LN%d affectedRows=%"PRId64"\n",
pThreadInfo->threadID,
__func__, __LINE__, affectedRows);
pThreadInfo->totalAffectedRows += affectedRows;
if (delay > pThreadInfo->maxDelay) pThreadInfo->maxDelay = delay;
if (delay < pThreadInfo->minDelay) pThreadInfo->minDelay = delay;
pThreadInfo->cntDelay++;
pThreadInfo->totalDelay += delay;
int currentPercent = pThreadInfo->totalAffectedRows * 100 / totalRows;
if (currentPercent > percentComplete ) {
printf("[%d]:%d%%\n", pThreadInfo->threadID, currentPercent);
percentComplete = currentPercent;
}
int64_t currentPrintTime = taosGetTimestampMs();
if (currentPrintTime - lastPrintTime > 30*1000) {
printf("thread[%d] has currently inserted rows: %"PRIu64 ", affected rows: %"PRIu64 "\n",
pThreadInfo->threadID,
pThreadInfo->totalInsertRows,
pThreadInfo->totalAffectedRows);
lastPrintTime = currentPrintTime;
}
if (recOfBatch != affectedRows) {
errorPrint2("[%d] %s() LN%d execInsert insert %d, affected rows: %"PRId64"\n\n",
pThreadInfo->threadID, __func__, __LINE__,
recOfBatch, affectedRows);
goto free_of_interlace_stmt;
startTime += (generated * timeStampStep);
}
}
pThreadInfo->samplePos = samplePos;
pThreadInfo->totalAffectedRows += affectedRows;
if (tableSeq == pThreadInfo->start_table_from
+ pThreadInfo->ntables) {
// turn to first table
tableSeq = pThreadInfo->start_table_from;
int currentPercent = pThreadInfo->totalAffectedRows * 100 / totalRows;
if (currentPercent > percentComplete ) {
printf("[%d]:%d%%\n", pThreadInfo->threadID, currentPercent);
percentComplete = currentPercent;
}
int64_t currentPrintTime = taosGetTimestampMs();
if (currentPrintTime - lastPrintTime > 30*1000) {
printf("thread[%d] has currently inserted rows: %"PRIu64 ", affected rows: %"PRIu64 "\n",
pThreadInfo->threadID,
pThreadInfo->totalInsertRows,
pThreadInfo->totalAffectedRows);
lastPrintTime = currentPrintTime;
flagSleep = true;
}
if ((insert_interval) && flagSleep) {
......@@ -8693,7 +8679,7 @@ static void* syncWriteInterlaceStmt(threadInfo *pThreadInfo, uint32_t interlaceR
int64_t insertRows;
uint64_t maxSqlLen;
int64_t nTimeStampStep;
int64_t timeStampStep;
uint64_t insert_interval;
SSuperTable* stbInfo = pThreadInfo->stbInfo;
......@@ -8701,12 +8687,12 @@ static void* syncWriteInterlaceStmt(threadInfo *pThreadInfo, uint32_t interlaceR
if (stbInfo) {
insertRows = stbInfo->insertRows;
maxSqlLen = stbInfo->maxSqlLen;
nTimeStampStep = stbInfo->timeStampStep;
timeStampStep = stbInfo->timeStampStep;
insert_interval = stbInfo->insertInterval;
} else {
insertRows = g_args.insertRows;
maxSqlLen = g_args.max_sql_len;
nTimeStampStep = g_args.timestamp_step;
timeStampStep = g_args.timestamp_step;
insert_interval = g_args.insert_interval;
}
......@@ -8767,8 +8753,12 @@ static void* syncWriteInterlaceStmt(threadInfo *pThreadInfo, uint32_t interlaceR
return NULL;
}
debugPrint("[%d] %s() LN%d, tableName:%s, batch:%d startTime:%"PRId64"\n",
pThreadInfo->threadID,
__func__, __LINE__,
tableName, batchPerTbl, startTime);
if (stbInfo) {
generated = prepareStbStmtWithSample(
generated = prepareStbStmt(
pThreadInfo,
tableName,
tableSeq,
......@@ -8777,10 +8767,6 @@ static void* syncWriteInterlaceStmt(threadInfo *pThreadInfo, uint32_t interlaceR
startTime,
&(pThreadInfo->samplePos));
} else {
debugPrint("[%d] %s() LN%d, tableName:%s, batch:%d startTime:%"PRId64"\n",
pThreadInfo->threadID,
__func__, __LINE__,
tableName, batchPerTbl, startTime);
generated = prepareStmtWithoutStb(
pThreadInfo,
tableName,
......@@ -8814,7 +8800,7 @@ static void* syncWriteInterlaceStmt(threadInfo *pThreadInfo, uint32_t interlaceR
generatedRecPerTbl += batchPerTbl;
startTime = pThreadInfo->start_time
+ generatedRecPerTbl * nTimeStampStep;
+ generatedRecPerTbl * timeStampStep;
flagSleep = true;
if (generatedRecPerTbl >= insertRows)
......@@ -8919,7 +8905,7 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo, uint32_t interlaceRows)
int64_t insertRows;
uint64_t maxSqlLen;
int64_t nTimeStampStep;
int64_t timeStampStep;
uint64_t insert_interval;
SSuperTable* stbInfo = pThreadInfo->stbInfo;
......@@ -8927,12 +8913,12 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo, uint32_t interlaceRows)
if (stbInfo) {
insertRows = stbInfo->insertRows;
maxSqlLen = stbInfo->maxSqlLen;
nTimeStampStep = stbInfo->timeStampStep;
timeStampStep = stbInfo->timeStampStep;
insert_interval = stbInfo->insertInterval;
} else {
insertRows = g_args.insertRows;
maxSqlLen = g_args.max_sql_len;
nTimeStampStep = g_args.timestamp_step;
timeStampStep = g_args.timestamp_step;
insert_interval = g_args.insert_interval;
}
......@@ -9075,7 +9061,7 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo, uint32_t interlaceRows)
generatedRecPerTbl += batchPerTbl;
startTime = pThreadInfo->start_time
+ generatedRecPerTbl * nTimeStampStep;
+ generatedRecPerTbl * timeStampStep;
flagSleep = true;
if (generatedRecPerTbl >= insertRows)
......@@ -9176,6 +9162,144 @@ free_of_interlace:
return NULL;
}
static void* syncWriteProgressiveStmt(threadInfo *pThreadInfo) {
debugPrint("%s() LN%d: ### stmt progressive write\n", __func__, __LINE__);
SSuperTable* stbInfo = pThreadInfo->stbInfo;
int64_t timeStampStep =
stbInfo?stbInfo->timeStampStep:g_args.timestamp_step;
int64_t insertRows =
(stbInfo)?stbInfo->insertRows:g_args.insertRows;
verbosePrint("%s() LN%d insertRows=%"PRId64"\n",
__func__, __LINE__, insertRows);
uint64_t lastPrintTime = taosGetTimestampMs();
uint64_t startTs = taosGetTimestampMs();
uint64_t endTs;
pThreadInfo->totalInsertRows = 0;
pThreadInfo->totalAffectedRows = 0;
pThreadInfo->samplePos = 0;
int percentComplete = 0;
int64_t totalRows = insertRows * pThreadInfo->ntables;
for (uint64_t tableSeq = pThreadInfo->start_table_from;
tableSeq <= pThreadInfo->end_table_to;
tableSeq ++) {
int64_t start_time = pThreadInfo->start_time;
for (uint64_t i = 0; i < insertRows;) {
char tableName[TSDB_TABLE_NAME_LEN];
getTableName(tableName, pThreadInfo, tableSeq);
verbosePrint("%s() LN%d: tid=%d seq=%"PRId64" tableName=%s\n",
__func__, __LINE__,
pThreadInfo->threadID, tableSeq, tableName);
if (0 == strlen(tableName)) {
errorPrint2("[%d] %s() LN%d, getTableName return null\n",
pThreadInfo->threadID, __func__, __LINE__);
return NULL;
}
// measure prepare + insert
startTs = taosGetTimestampUs();
int32_t generated;
if (stbInfo) {
generated = prepareStbStmt(
pThreadInfo,
tableName,
tableSeq,
(g_args.reqPerReq>stbInfo->insertRows)?
stbInfo->insertRows:
g_args.reqPerReq,
insertRows, i, start_time,
&(pThreadInfo->samplePos));
} else {
generated = prepareStmtWithoutStb(
pThreadInfo,
tableName,
g_args.reqPerReq,
insertRows, i,
start_time);
}
verbosePrint("[%d] %s() LN%d generated=%d\n",
pThreadInfo->threadID,
__func__, __LINE__, generated);
if (generated > 0)
i += generated;
else
goto free_of_stmt_progressive;
start_time += generated * timeStampStep;
pThreadInfo->totalInsertRows += generated;
// only measure insert
// startTs = taosGetTimestampUs();
int32_t affectedRows = execInsert(pThreadInfo, generated);
endTs = taosGetTimestampUs();
uint64_t delay = endTs - startTs;
performancePrint("%s() LN%d, insert execution time is %10.f ms\n",
__func__, __LINE__, delay/1000.0);
verbosePrint("[%d] %s() LN%d affectedRows=%d\n",
pThreadInfo->threadID,
__func__, __LINE__, affectedRows);
if (delay > pThreadInfo->maxDelay) pThreadInfo->maxDelay = delay;
if (delay < pThreadInfo->minDelay) pThreadInfo->minDelay = delay;
pThreadInfo->cntDelay++;
pThreadInfo->totalDelay += delay;
if (affectedRows < 0) {
errorPrint2("%s() LN%d, affected rows: %d\n",
__func__, __LINE__, affectedRows);
goto free_of_stmt_progressive;
}
pThreadInfo->totalAffectedRows += affectedRows;
int currentPercent = pThreadInfo->totalAffectedRows * 100 / totalRows;
if (currentPercent > percentComplete ) {
printf("[%d]:%d%%\n", pThreadInfo->threadID, currentPercent);
percentComplete = currentPercent;
}
int64_t currentPrintTime = taosGetTimestampMs();
if (currentPrintTime - lastPrintTime > 30*1000) {
printf("thread[%d] has currently inserted rows: %"PRId64 ", affected rows: %"PRId64 "\n",
pThreadInfo->threadID,
pThreadInfo->totalInsertRows,
pThreadInfo->totalAffectedRows);
lastPrintTime = currentPrintTime;
}
if (i >= insertRows)
break;
} // insertRows
if ((g_args.verbose_print) &&
(tableSeq == pThreadInfo->ntables - 1) && (stbInfo)
&& (0 == strncasecmp(
stbInfo->dataSource,
"sample", strlen("sample")))) {
verbosePrint("%s() LN%d samplePos=%"PRId64"\n",
__func__, __LINE__, pThreadInfo->samplePos);
}
} // tableSeq
if (percentComplete < 100) {
printf("[%d]:%d%%\n", pThreadInfo->threadID, percentComplete);
}
free_of_stmt_progressive:
tmfree(pThreadInfo->buffer);
printStatPerThread(pThreadInfo);
return NULL;
}
// sync insertion progressive data
static void* syncWriteProgressive(threadInfo *pThreadInfo) {
debugPrint("%s() LN%d: ### progressive write\n", __func__, __LINE__);
......@@ -9242,7 +9366,7 @@ static void* syncWriteProgressive(threadInfo *pThreadInfo) {
int32_t generated;
if (stbInfo) {
if (stbInfo->iface == STMT_IFACE) {
generated = prepareStbStmtWithSample(
generated = prepareStbStmt(
pThreadInfo,
tableName,
tableSeq,
......@@ -9374,20 +9498,28 @@ static void* syncWrite(void *sarg) {
if (interlaceRows > 0) {
// interlace mode
if (((stbInfo) && (STMT_IFACE == stbInfo->iface))
|| (STMT_IFACE == g_args.iface)) {
if (stbInfo) {
if (STMT_IFACE == stbInfo->iface) {
#if STMT_BIND_PARAM_BATCH == 1
return syncWriteInterlaceStmtBatch(pThreadInfo, interlaceRows);
return syncWriteInterlaceStmtBatch(pThreadInfo, interlaceRows);
#else
return syncWriteInterlaceStmt(pThreadInfo, interlaceRows);
return syncWriteInterlaceStmt(pThreadInfo, interlaceRows);
#endif
} else {
return syncWriteInterlace(pThreadInfo, interlaceRows);
} else {
return syncWriteInterlace(pThreadInfo, interlaceRows);
}
}
}else {
// progressive mode
return syncWriteProgressive(pThreadInfo);
} else {
// progressive mode
if (((stbInfo) && (STMT_IFACE == stbInfo->iface))
|| (STMT_IFACE == g_args.iface)) {
return syncWriteProgressiveStmt(pThreadInfo);
} else {
return syncWriteProgressive(pThreadInfo);
}
}
return NULL;
}
static void callBack(void *param, TAOS_RES *res, int code) {
......@@ -9518,24 +9650,24 @@ static void startMultiThreadInsertData(int threads, char* db_name,
}
}
int64_t start_time;
int64_t startTime;
if (stbInfo) {
if (0 == strncasecmp(stbInfo->startTimestamp, "now", 3)) {
start_time = taosGetTimestamp(timePrec);
startTime = taosGetTimestamp(timePrec);
} else {
if (TSDB_CODE_SUCCESS != taosParseTime(
stbInfo->startTimestamp,
&start_time,
&startTime,
strlen(stbInfo->startTimestamp),
timePrec, 0)) {
ERROR_EXIT("failed to parse time!\n");
}
}
} else {
start_time = DEFAULT_START_TIME;
startTime = DEFAULT_START_TIME;
}
debugPrint("%s() LN%d, start_time= %"PRId64"\n",
__func__, __LINE__, start_time);
debugPrint("%s() LN%d, startTime= %"PRId64"\n",
__func__, __LINE__, startTime);
// read sample data from file first
int ret;
......@@ -9655,14 +9787,10 @@ static void startMultiThreadInsertData(int threads, char* db_name,
}
pthread_t *pids = calloc(1, threads * sizeof(pthread_t));
assert(pids != NULL);
threadInfo *infos = calloc(1, threads * sizeof(threadInfo));
assert(pids != NULL);
assert(infos != NULL);
memset(pids, 0, threads * sizeof(pthread_t));
memset(infos, 0, threads * sizeof(threadInfo));
char *stmtBuffer = calloc(1, BUFFER_SIZE);
assert(stmtBuffer);
......@@ -9671,18 +9799,8 @@ static void startMultiThreadInsertData(int threads, char* db_name,
uint32_t batch;
if (stbInfo) {
if ((stbInfo->interlaceRows == 0)
&& (g_args.interlaceRows > 0)
) {
interlaceRows = g_args.interlaceRows;
} else {
if (stbInfo->interlaceRows < stbInfo->insertRows)
interlaceRows = stbInfo->interlaceRows;
}
if (interlaceRows > stbInfo->insertRows) {
interlaceRows = 0;
}
} else {
if (g_args.interlaceRows < g_args.insertRows)
interlaceRows = g_args.interlaceRows;
......@@ -9739,7 +9857,7 @@ static void startMultiThreadInsertData(int threads, char* db_name,
pThreadInfo->time_precision = timePrec;
pThreadInfo->stbInfo = stbInfo;
pThreadInfo->start_time = start_time;
pThreadInfo->start_time = startTime;
pThreadInfo->minDelay = UINT64_MAX;
if ((NULL == stbInfo) ||
......@@ -9955,7 +10073,7 @@ static void *readTable(void *sarg) {
char *command = calloc(1, BUFFER_SIZE);
assert(command);
uint64_t sTime = pThreadInfo->start_time;
uint64_t startTime = pThreadInfo->start_time;
char *tb_prefix = pThreadInfo->tb_prefix;
FILE *fp = fopen(pThreadInfo->filePath, "a");
if (NULL == fp) {
......@@ -9988,7 +10106,7 @@ static void *readTable(void *sarg) {
uint64_t count = 0;
for (int64_t i = 0; i < ntables; i++) {
sprintf(command, "SELECT %s FROM %s%"PRId64" WHERE ts>= %" PRIu64,
g_aggreFunc[j], tb_prefix, i, sTime);
g_aggreFunc[j], tb_prefix, i, startTime);
double t = taosGetTimestampMs();
TAOS_RES *pSql = taos_query(taos, command);
......
......@@ -183,6 +183,9 @@ python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanosub
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestInsertTime_step.py
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdumpTestNanoSupport.py
#
python3 ./test.py -f tsdb/tsdbComp.py
# update
python3 ./test.py -f update/allow_update.py
python3 ./test.py -f update/allow_update-0.py
......
......@@ -13,7 +13,6 @@
import traceback
import random
import string
from taos.error import LinesError
import time
from copy import deepcopy
......@@ -24,7 +23,6 @@ from util.sql import *
from util.common import tdCom
import threading
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
......@@ -191,7 +189,8 @@ class TDTestCase:
t4="9223372036854775807i64", t5="11.12345f32", t6="22.123456789f64", t7="\"binaryTagValue\"",
t8="L\"ncharTagValue\"", ts="1626006833639000000ns",
id_noexist_tag=None, id_change_tag=None, id_upper_tag=None, id_double_tag=None,
t_add_tag=None, t_mul_tag=None, t_multi_tag=None, t_blank_tag=None):
t_add_tag=None, t_mul_tag=None, t_multi_tag=None, c_blank_tag=None, t_blank_tag=None,
chinese_tag=None, multi_field_tag=None):
if stb_name == "":
stb_name = tdCom.getLongName(len=6, mode="letters")
if tb_name == "":
......@@ -221,8 +220,14 @@ class TDTestCase:
sql_seq = f'{stb_name} {ts} {value} t0={t0},t1={t1},t2={t2},t3={t3},t4={t4},t5={t5},t6={t6}'
if t_multi_tag is not None:
sql_seq = f'{stb_name} {ts} {value},{value} {id}=\"{tb_name}\",t0={t0},t1={t1},t2={t2},t3={t3},t4={t4},t5={t5},t6={t6}'
if t_blank_tag is not None:
if c_blank_tag is not None:
sql_seq = f'{stb_name} {ts} {id}=\"{tb_name}\",t0={t0},t1={t1},t2={t2},t3={t3},t4={t4},t5={t5},t6={t6},t7={t7},t8={t8}'
if t_blank_tag is not None:
sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}\"'
if chinese_tag is not None:
sql_seq = f'{stb_name} {ts} L"涛思数据" t0={t0},t1=L"涛思数据"'
if multi_field_tag is not None:
sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}\",t0={t0} t1={t1}'
return sql_seq, stb_name
def genMulTagColStr(self, genType, count=1):
......@@ -259,8 +264,6 @@ class TDTestCase:
def resHandle(self, query_sql, query_tag):
tdSql.execute('reset query cache')
row_info = tdSql.query(query_sql, query_tag)
print(query_sql)
print(row_info)
col_info = tdSql.getColNameList(query_sql, query_tag)
res_row_list = []
sub_list = []
......@@ -277,22 +280,6 @@ class TDTestCase:
self._conn.insert_telnet_lines([input_sql])
query_sql = f"{query_sql} {stb_name} {condition}"
res_row_list, res_field_list_without_ts, res_type_list = self.resHandle(query_sql, True)
res = tdSql.query(f'select * from {stb_name}', True)
print(res)
res = tdSql.query(f'select * from {stb_name}', True)
print(res)
time.sleep(2)
res = tdSql.query(f'select * from {stb_name}', True)
print(res)
time.sleep(2)
res = tdSql.query(f'select * from {stb_name}', True)
print(res)
time.sleep(2)
res = tdSql.query(f'select * from {stb_name}', True)
print(res)
if ts == 0:
res_ts = self.dateToTs(res_row_list[0][0])
current_time = time.time()
......@@ -535,8 +522,8 @@ class TDTestCase:
input_sql, stb_name = self.genFullTypeSql(t6=t6)
self.resCmp(input_sql, stb_name)
# * limit set to 1.797693134862316*(10**308)
for c6 in [f'{-1.797693134862316*(10**308)}f64', f'{-1.797693134862316*(10**308)}f64']:
input_sql = self.genFullTypeSql(c6=c6)[0]
for t6 in [f'{-1.797693134862316*(10**308)}f64', f'{-1.797693134862316*(10**308)}f64']:
input_sql = self.genFullTypeSql(t6=t6)[0]
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
......@@ -574,23 +561,25 @@ class TDTestCase:
"""
tdCom.cleanTb()
# i8
for c1 in ["-127i8", "127i8"]:
input_sql, stb_name = self.genFullTypeSql(c1=c1)
for value in ["-127i8", "127i8"]:
input_sql, stb_name = self.genFullTypeSql(value=value)
self.resCmp(input_sql, stb_name)
for c1 in ["-128i8", "128i8"]:
input_sql = self.genFullTypeSql(c1=c1)[0]
tdCom.cleanTb()
for value in ["-128i8", "128i8"]:
input_sql = self.genFullTypeSql(value=value)[0]
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
except LinesError as err:
tdSql.checkNotEqual(err.errno, 0)
# i16
for c2 in ["-32767i16"]:
input_sql, stb_name = self.genFullTypeSql(c2=c2)
tdCom.cleanTb()
for value in ["-32767i16"]:
input_sql, stb_name = self.genFullTypeSql(value=value)
self.resCmp(input_sql, stb_name)
for c2 in ["-32768i16", "32768i16"]:
input_sql = self.genFullTypeSql(c2=c2)[0]
tdCom.cleanTb()
for value in ["-32768i16", "32768i16"]:
input_sql = self.genFullTypeSql(value=value)[0]
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
......@@ -598,11 +587,13 @@ class TDTestCase:
tdSql.checkNotEqual(err.errno, 0)
# i32
for c3 in ["-2147483647i32"]:
input_sql, stb_name = self.genFullTypeSql(c3=c3)
tdCom.cleanTb()
for value in ["-2147483647i32"]:
input_sql, stb_name = self.genFullTypeSql(value=value)
self.resCmp(input_sql, stb_name)
for c3 in ["-2147483648i32", "2147483648i32"]:
input_sql = self.genFullTypeSql(c3=c3)[0]
tdCom.cleanTb()
for value in ["-2147483648i32", "2147483648i32"]:
input_sql = self.genFullTypeSql(value=value)[0]
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
......@@ -610,11 +601,13 @@ class TDTestCase:
tdSql.checkNotEqual(err.errno, 0)
# i64
for c4 in ["-9223372036854775807i64"]:
input_sql, stb_name = self.genFullTypeSql(c4=c4)
tdCom.cleanTb()
for value in ["-9223372036854775807i64"]:
input_sql, stb_name = self.genFullTypeSql(value=value)
self.resCmp(input_sql, stb_name)
for c4 in ["-9223372036854775808i64", "9223372036854775808i64"]:
input_sql = self.genFullTypeSql(c4=c4)[0]
tdCom.cleanTb()
for value in ["-9223372036854775808i64", "9223372036854775808i64"]:
input_sql = self.genFullTypeSql(value=value)[0]
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
......@@ -622,12 +615,14 @@ class TDTestCase:
tdSql.checkNotEqual(err.errno, 0)
# f32
for c5 in [f"{-3.4028234663852885981170418348451692544*(10**38)}f32", f"{3.4028234663852885981170418348451692544*(10**38)}f32"]:
input_sql, stb_name = self.genFullTypeSql(c5=c5)
tdCom.cleanTb()
for value in [f"{-3.4028234663852885981170418348451692544*(10**38)}f32", f"{3.4028234663852885981170418348451692544*(10**38)}f32"]:
input_sql, stb_name = self.genFullTypeSql(value=value)
self.resCmp(input_sql, stb_name)
# * limit set to 4028234664*(10**38)
for c5 in [f"{-3.4028234664*(10**38)}f32", f"{3.4028234664*(10**38)}f32"]:
input_sql = self.genFullTypeSql(c5=c5)[0]
tdCom.cleanTb()
for value in [f"{-3.4028234664*(10**38)}f32", f"{3.4028234664*(10**38)}f32"]:
input_sql = self.genFullTypeSql(value=value)[0]
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
......@@ -635,12 +630,14 @@ class TDTestCase:
tdSql.checkNotEqual(err.errno, 0)
# f64
for c6 in [f'{-1.79769313486231570814527423731704356798070567525844996598917476803157260780*(10**308)}f64', f'{-1.79769313486231570814527423731704356798070567525844996598917476803157260780*(10**308)}f64']:
input_sql, stb_name = self.genFullTypeSql(c6=c6)
tdCom.cleanTb()
for value in [f'{-1.79769313486231570814527423731704356798070567525844996598917476803157260780*(10**308)}f64', f'{-1.79769313486231570814527423731704356798070567525844996598917476803157260780*(10**308)}f64']:
input_sql, stb_name = self.genFullTypeSql(value=value)
self.resCmp(input_sql, stb_name)
# * limit set to 1.797693134862316*(10**308)
for c6 in [f'{-1.797693134862316*(10**308)}f64', f'{-1.797693134862316*(10**308)}f64']:
input_sql = self.genFullTypeSql(c6=c6)[0]
tdCom.cleanTb()
for value in [f'{-1.797693134862316*(10**308)}f64', f'{-1.797693134862316*(10**308)}f64']:
input_sql = self.genFullTypeSql(value=value)[0]
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
......@@ -648,10 +645,12 @@ class TDTestCase:
tdSql.checkNotEqual(err.errno, 0)
# # binary
tdCom.cleanTb()
stb_name = tdCom.getLongName(7, "letters")
input_sql = f'{stb_name} 1626006833639000000ns "{tdCom.getLongName(16374, "letters")}" t0=t'
self._conn.insert_telnet_lines([input_sql])
tdCom.cleanTb()
input_sql = f'{stb_name} 1626006833639000000ns "{tdCom.getLongName(16375, "letters")}" t0=t'
try:
self._conn.insert_telnet_lines([input_sql])
......@@ -661,10 +660,12 @@ class TDTestCase:
# nchar
# * legal nchar could not be larger than 16374/4
tdCom.cleanTb()
stb_name = tdCom.getLongName(7, "letters")
input_sql = f'{stb_name} 1626006833639000000ns L"{tdCom.getLongName(4093, "letters")}" t0=t'
self._conn.insert_telnet_lines([input_sql])
tdCom.cleanTb()
input_sql = f'{stb_name} 1626006833639000000ns L"{tdCom.getLongName(4094, "letters")}" t0=t'
try:
self._conn.insert_telnet_lines([input_sql])
......@@ -709,24 +710,21 @@ class TDTestCase:
tdSql.checkNotEqual(err.errno, 0)
# check binary and nchar blank
stb_name = tdCom.getLongName(7, "letters")
input_sql1 = f'{stb_name} 1626006833639000000ns "abc aaa" t0=t'
input_sql2 = f'{stb_name} 1626006833639000000ns L"abc aaa" t0=t'
input_sql3 = f'{stb_name} 1626006833639000000ns t t0="abc aaa"'
input_sql4 = f'{stb_name} 1626006833639000000ns t t0=L"abc aaa"'
input_sql1 = f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns "abc aaa" t0=t'
input_sql2 = f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns L"abc aaa" t0=t'
input_sql3 = f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns t t0="abc aaa"'
input_sql4 = f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns t t0=L"abc aaa"'
for input_sql in [input_sql1, input_sql2, input_sql3, input_sql4]:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
except LinesError as err:
tdSql.checkNotEqual(err.errno, 0)
pass
# check accepted binary and nchar symbols
# # * ~!@#$¥%^&*()-+={}|[]、「」:;
for symbol in list('~!@#$¥%^&*()-+={}|[]、「」:;'):
input_sql1 = f'{stb_name} 1626006833639000000ns "abc{symbol}aaa" t0=t'
input_sql2 = f'{stb_name} 1626006833639000000ns t t0=t,t1="abc{symbol}aaa"'
input_sql1 = f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns "abc{symbol}aaa" t0=t'
input_sql2 = f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns t t0=t,t1="abc{symbol}aaa"'
self._conn.insert_telnet_lines([input_sql1])
self._conn.insert_telnet_lines([input_sql2])
......@@ -756,6 +754,7 @@ class TDTestCase:
"""
case no id when stb exist
"""
print("noIdStbExistCheckCase")
tdCom.cleanTb()
input_sql, stb_name = self.genFullTypeSql(tb_name="sub_table_0123456", t0="f", value="f")
self.resCmp(input_sql, stb_name)
......@@ -779,22 +778,24 @@ class TDTestCase:
"""
check length increase
"""
print("tagColBinaryNcharLengthCheckCase")
tdCom.cleanTb()
input_sql, stb_name = self.genFullTypeSql()
self.resCmp(input_sql, stb_name)
tb_name = tdCom.getLongName(5, "letters")
input_sql, stb_name = self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name,t7="\"binaryTagValuebinaryTagValue\"", t8="L\"ncharTagValuencharTagValue\"", c7="\"binaryTagValuebinaryTagValue\"", c8="L\"ncharTagValuencharTagValue\"")
input_sql, stb_name = self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name,t7="\"binaryTagValuebinaryTagValue\"", t8="L\"ncharTagValuencharTagValue\"")
self.resCmp(input_sql, stb_name, condition=f'where tbname like "{tb_name}"')
def tagColAddDupIDCheckCase(self):
"""
check column and tag count add, stb and tb duplicate
check tag count add, stb and tb duplicate
* tag: alter table ...
* col: when update==0 and ts is same, unchange
* so this case tag&&value will be added,
* col is added without value when update==0
* col is added with value when update==1
"""
print("tagColAddDupIDCheckCase")
tdCom.cleanTb()
tb_name = tdCom.getLongName(7, "letters")
for db_update_tag in [0, 1]:
......@@ -802,25 +803,27 @@ class TDTestCase:
self.createDb("test_update", db_update_tag=db_update_tag)
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name, t0="f", value="f")
self.resCmp(input_sql, stb_name)
self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t0="f", value="f", ct_add_tag=True)
self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t0="f", value="f", t_add_tag=True)
if db_update_tag == 1 :
self.resCmp(input_sql, stb_name, condition=f'where tbname like "{tb_name}"')
else:
self.resCmp(input_sql, stb_name, condition=f'where tbname like "{tb_name}"', none_check_tag=True)
self.createDb()
def tagColAddCheckCase(self):
"""
check column and tag count add
check tag count add
"""
print("tagColAddCheckCase")
tdCom.cleanTb()
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name, t0="f", value="f")
self.resCmp(input_sql, stb_name)
tb_name_1 = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name_1, t0="f", value="f", ct_add_tag=True)
input_sql, stb_name = self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name_1, t0="f", value="f", t_add_tag=True)
self.resCmp(input_sql, stb_name, condition=f'where tbname like "{tb_name_1}"')
res_row_list = self.resHandle(f"select c10,c11,t10,t11 from {tb_name}", True)[0]
tdSql.checkEqual(res_row_list[0], ['None', 'None', 'None', 'None'])
res_row_list = self.resHandle(f"select t10,t11 from {tb_name}", True)[0]
tdSql.checkEqual(res_row_list[0], ['None', 'None'])
self.resCmp(input_sql, stb_name, condition=f'where tbname like "{tb_name}"', none_check_tag=True)
def tagMd5Check(self):
......@@ -838,7 +841,7 @@ class TDTestCase:
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(1)
tdSql.checkEqual(tb_name1, tb_name2)
input_sql, stb_name = self.genFullTypeSql(stb_name=stb_name, t0="f", value="f", id_noexist_tag=True, ct_add_tag=True)
input_sql, stb_name = self.genFullTypeSql(stb_name=stb_name, t0="f", value="f", id_noexist_tag=True, t_add_tag=True)
self._conn.insert_telnet_lines([input_sql])
tb_name3 = self.getNoIdTbName(stb_name)
tdSql.query(f"select * from {stb_name}")
......@@ -853,16 +856,17 @@ class TDTestCase:
tdCom.cleanTb()
stb_name = tdCom.getLongName(7, "letters")
tb_name = f'{stb_name}_1'
input_sql = f'{stb_name},id="{tb_name}",t0=t c0=f 1626006833639000000ns'
input_sql = f'{stb_name} 1626006833639000000ns f id="{tb_name}",t0=t'
self._conn.insert_telnet_lines([input_sql])
# * every binary and nchar must be length+2, so here is two tag, max length could not larger than 16384-2*2
input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(16374, "letters")}",t2="{tdCom.getLongName(5, "letters")}" c0=f 1626006833639000000ns'
input_sql = f'{stb_name} 1626006833639000000ns f t0=t,t1="{tdCom.getLongName(16374, "letters")}",t2="{tdCom.getLongName(5, "letters")}"'
self._conn.insert_telnet_lines([input_sql])
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(2)
input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(16374, "letters")}",t2="{tdCom.getLongName(6, "letters")}" c0=f 1626006833639000000ns'
input_sql = f'{stb_name} 1626006833639000000ns f t0=t,t1="{tdCom.getLongName(16374, "letters")}",t2="{tdCom.getLongName(6, "letters")}"'
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
......@@ -871,21 +875,6 @@ class TDTestCase:
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(2)
# # * check col,col+ts max in describe ---> 16143
input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(16374, "letters")}",c2="{tdCom.getLongName(16374, "letters")}",c3="{tdCom.getLongName(16374, "letters")}",c4="{tdCom.getLongName(12, "letters")}" 1626006833639000000ns'
self._conn.insert_telnet_lines([input_sql])
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(3)
input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(16374, "letters")}",c2="{tdCom.getLongName(16374, "letters")}",c3="{tdCom.getLongName(16374, "letters")}",c4="{tdCom.getLongName(13, "letters")}" 1626006833639000000ns'
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
except LinesError as err:
tdSql.checkNotEqual(err.errno, 0)
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(3)
# * tag nchar max is 16374/4, col+ts nchar max 49151
def tagColNcharMaxLengthCheckCase(self):
"""
......@@ -894,15 +883,15 @@ class TDTestCase:
tdCom.cleanTb()
stb_name = tdCom.getLongName(7, "letters")
tb_name = f'{stb_name}_1'
input_sql = f'{stb_name},id="{tb_name}",t0=t c0=f 1626006833639000000ns'
code = self._conn.insert_telnet_lines([input_sql])
input_sql = f'{stb_name} 1626006833639000000ns f id="{tb_name}",t0=t'
self._conn.insert_telnet_lines([input_sql])
# * legal nchar could not be larger than 16374/4
input_sql = f'{stb_name},t0=t,t1=L"{tdCom.getLongName(4093, "letters")}",t2=L"{tdCom.getLongName(1, "letters")}" c0=f 1626006833639000000ns'
input_sql = f'{stb_name} 1626006833639000000ns f t0=t,t1=L"{tdCom.getLongName(4093, "letters")}",t2=L"{tdCom.getLongName(1, "letters")}"'
self._conn.insert_telnet_lines([input_sql])
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(2)
input_sql = f'{stb_name},t0=t,t1=L"{tdCom.getLongName(4093, "letters")}",t2=L"{tdCom.getLongName(2, "letters")}" c0=f 1626006833639000000ns'
input_sql = f'{stb_name} 1626006833639000000ns f t0=t,t1=L"{tdCom.getLongName(4093, "letters")}",t2=L"{tdCom.getLongName(2, "letters")}"'
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
......@@ -911,19 +900,6 @@ class TDTestCase:
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(2)
input_sql = f'{stb_name},t0=t c0=f,c1=L"{tdCom.getLongName(4093, "letters")}",c2=L"{tdCom.getLongName(4093, "letters")}",c3=L"{tdCom.getLongName(4093, "letters")}",c4=L"{tdCom.getLongName(4, "letters")}" 1626006833639000000ns'
self._conn.insert_telnet_lines([input_sql])
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(3)
input_sql = f'{stb_name},t0=t c0=f,c1=L"{tdCom.getLongName(4093, "letters")}",c2=L"{tdCom.getLongName(4093, "letters")}",c3=L"{tdCom.getLongName(4093, "letters")}",c4=L"{tdCom.getLongName(5, "letters")}" 1626006833639000000ns'
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
except LinesError as err:
tdSql.checkNotEqual(err.errno, 0)
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(3)
def batchInsertCheckCase(self):
"""
test batch insert
......@@ -931,17 +907,24 @@ class TDTestCase:
tdCom.cleanTb()
stb_name = tdCom.getLongName(8, "letters")
tdSql.execute(f'create stable {stb_name}(ts timestamp, f int) tags(t1 bigint)')
lines = ["st123456,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000ns",
"st123456,t1=4i64,t3=\"t4\",t2=5f64,t4=5f64 c1=3i64,c3=L\"passitagin\",c2=true,c4=5f64,c5=5f64 1626006833640000000ns",
f"{stb_name},t2=5f64,t3=L\"ste\" c1=true,c2=4i64,c3=\"iam\" 1626056811823316532ns",
"stf567890,t1=4i64,t3=\"t4\",t2=5f64,t4=5f64 c1=3i64,c3=L\"passitagin\",c2=true,c4=5f64,c5=5f64,c6=7u64 1626006933640000000ns",
"st123456,t1=4i64,t2=5f64,t3=\"t4\" c1=3i64,c3=L\"passitagain\",c2=true,c4=5f64 1626006833642000000ns",
f"{stb_name},t2=5f64,t3=L\"ste2\" c3=\"iamszhou\",c4=false 1626056811843316532ns",
f"{stb_name},t2=5f64,t3=L\"ste2\" c3=\"iamszhou\",c4=false,c5=32i8,c6=64i16,c7=32i32,c8=88.88f32 1626056812843316532ns",
"st123456,t1=4i64,t3=\"t4\",t2=5f64,t4=5f64 c1=3i64,c3=L\"passitagin\",c2=true,c4=5f64,c5=5f64,c6=7u64 1626006933640000000ns",
"st123456,t1=4i64,t3=\"t4\",t2=5f64,t4=5f64 c1=3i64,c3=L\"passitagin_stf\",c2=false,c5=5f64,c6=7u64 1626006933641000000ns"
lines = ["st123456 1626006833639000000ns 1i64 t1=3i64,t2=4f64,t3=\"t3\"",
"st123456 1626006833640000000ns 2i64 t1=4i64,t3=\"t4\",t2=5f64,t4=5f64",
f'{stb_name} 1626056811823316532ns 3i64 t2=5f64,t3=L\"ste\"',
"stf567890 1626006933640000000ns 4i64 t1=4i64,t3=\"t4\",t2=5f64,t4=5f64",
"st123456 1626006833642000000ns 5i64 t1=4i64,t2=5f64,t3=\"t4\"",
f'{stb_name} 1626056811843316532ns 6i64 t2=5f64,t3=L\"ste2\"',
f'{stb_name} 1626056812843316532ns 7i64 t2=5f64,t3=L\"ste2\"',
"st123456 1626006933640000000ns 8i64 t1=4i64,t3=\"t4\",t2=5f64,t4=5f64",
"st123456 1626006933641000000ns 9i64 t1=4i64,t3=\"t4\",t2=5f64,t4=5f64"
]
self._conn.insert_telnet_lines(lines)
tdSql.query('show stables')
tdSql.checkRows(3)
tdSql.query('show tables')
tdSql.checkRows(6)
tdSql.query('select * from st123456')
tdSql.checkRows(5)
def multiInsertCheckCase(self, count):
"""
......@@ -952,9 +935,11 @@ class TDTestCase:
stb_name = tdCom.getLongName(8, "letters")
tdSql.execute(f'create stable {stb_name}(ts timestamp, f int) tags(t1 bigint)')
for i in range(count):
input_sql = self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True)[0]
input_sql = self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True)[0]
sql_list.append(input_sql)
self._conn.insert_telnet_lines(sql_list)
tdSql.query('show tables')
tdSql.checkRows(1000)
def batchErrorInsertCheckCase(self):
"""
......@@ -962,14 +947,86 @@ class TDTestCase:
"""
tdCom.cleanTb()
stb_name = tdCom.getLongName(8, "letters")
lines = ["st123456,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000ns",
f"{stb_name},t2=5f64,t3=L\"ste\" c1=tRue,c2=4i64,c3=\"iam\" 1626056811823316532ns"]
lines = ["st123456 1626006833639000000ns 3i64 t1=3i64,t2=4f64,t3=\"t3\"",
f"{stb_name} 1626056811823316532ns tRue t2=5f64,t3=L\"ste\""]
try:
self._conn.insert_telnet_lines(lines)
raise Exception("should not reach here")
except LinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def multiColsInsertCheckCase(self):
"""
test multi cols insert
"""
tdCom.cleanTb()
input_sql = self.genFullTypeSql(t_multi_tag=True)[0]
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
except LinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def blankColInsertCheckCase(self):
"""
test blank col insert
"""
tdCom.cleanTb()
input_sql = self.genFullTypeSql(c_blank_tag=True)[0]
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
except LinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def blankTagInsertCheckCase(self):
"""
test blank tag insert
"""
tdCom.cleanTb()
input_sql = self.genFullTypeSql(t_blank_tag=True)[0]
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
except LinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def chineseCheckCase(self):
"""
check nchar ---> chinese
"""
tdCom.cleanTb()
input_sql, stb_name = self.genFullTypeSql(chinese_tag=True)
self.resCmp(input_sql, stb_name)
def multiFieldCheckCase(self):
'''
multi_field
'''
tdCom.cleanTb()
input_sql = self.genFullTypeSql(multi_field_tag=True)[0]
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
except LinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def errorTypeCheckCase(self):
stb_name = tdCom.getLongName(8, "letters")
input_sql_list = [f'{stb_name} 0 "hkgjiwdj" t0=f,t1=127I8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"', \
f'{stb_name} 0 "hkgjiwdj" t0=f,t1=127i8,t2=32767I16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"', \
f'{stb_name} 0 "hkgjiwdj" t0=f,t1=127i8,t2=32767i16,t3=2147483647I32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"', \
f'{stb_name} 0 "hkgjiwdj" t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807I64,t5=11.12345f32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"', \
f'{stb_name} 0 "hkgjiwdj" t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345F32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"', \
f'{stb_name} 0 "hkgjiwdj" t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789F64,t7="vozamcts",t8=L"ncharTagValue"', \
f'{stb_name} 1626006833639000000NS "hkgjiwdj" t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"']
for input_sql in input_sql_list:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
except LinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def genSqlList(self, count=5, stb_name="", tb_name=""):
"""
stb --> supertable
......@@ -984,36 +1041,36 @@ class TDTestCase:
"""
d_stb_d_tb_list = list()
s_stb_s_tb_list = list()
s_stb_s_tb_a_col_a_tag_list = list()
s_stb_s_tb_m_col_m_tag_list = list()
s_stb_s_tb_a_tag_list = list()
s_stb_s_tb_m_tag_list = list()
s_stb_d_tb_list = list()
s_stb_d_tb_a_col_m_tag_list = list()
s_stb_d_tb_a_tag_m_col_list = list()
s_stb_d_tb_m_tag_list = list()
s_stb_d_tb_a_tag_list = list()
s_stb_s_tb_d_ts_list = list()
s_stb_s_tb_d_ts_a_col_m_tag_list = list()
s_stb_s_tb_d_ts_a_tag_m_col_list = list()
s_stb_s_tb_d_ts_m_tag_list = list()
s_stb_s_tb_d_ts_a_tag_list = list()
s_stb_d_tb_d_ts_list = list()
s_stb_d_tb_d_ts_a_col_m_tag_list = list()
s_stb_d_tb_d_ts_a_tag_m_col_list = list()
s_stb_d_tb_d_ts_m_tag_list = list()
s_stb_d_tb_d_ts_a_tag_list = list()
for i in range(count):
d_stb_d_tb_list.append(self.genFullTypeSql(t0="f", c0="f"))
s_stb_s_tb_list.append(self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"'))
s_stb_s_tb_a_col_a_tag_list.append(self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"', ct_add_tag=True))
s_stb_s_tb_m_col_m_tag_list.append(self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"', ct_min_tag=True))
s_stb_d_tb_list.append(self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True))
s_stb_d_tb_a_col_m_tag_list.append(self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True, ct_am_tag=True))
s_stb_d_tb_a_tag_m_col_list.append(self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True, ct_ma_tag=True))
s_stb_s_tb_d_ts_list.append(self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"', ts=0))
s_stb_s_tb_d_ts_a_col_m_tag_list.append(self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"', ts=0, ct_am_tag=True))
s_stb_s_tb_d_ts_a_tag_m_col_list.append(self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"', ts=0, ct_ma_tag=True))
s_stb_d_tb_d_ts_list.append(self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True, ts=0))
s_stb_d_tb_d_ts_a_col_m_tag_list.append(self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True, ts=0, ct_am_tag=True))
s_stb_d_tb_d_ts_a_tag_m_col_list.append(self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', c7=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True, ts=0, ct_ma_tag=True))
return d_stb_d_tb_list, s_stb_s_tb_list, s_stb_s_tb_a_col_a_tag_list, s_stb_s_tb_m_col_m_tag_list, \
s_stb_d_tb_list, s_stb_d_tb_a_col_m_tag_list, s_stb_d_tb_a_tag_m_col_list, s_stb_s_tb_d_ts_list, \
s_stb_s_tb_d_ts_a_col_m_tag_list, s_stb_s_tb_d_ts_a_tag_m_col_list, s_stb_d_tb_d_ts_list, \
s_stb_d_tb_d_ts_a_col_m_tag_list, s_stb_d_tb_d_ts_a_tag_m_col_list
d_stb_d_tb_list.append(self.genFullTypeSql(t0="f", value="f"))
s_stb_s_tb_list.append(self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"'))
s_stb_s_tb_a_tag_list.append(self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"', t_add_tag=True))
s_stb_s_tb_m_tag_list.append(self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"', t_mul_tag=True))
s_stb_d_tb_list.append(self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True))
s_stb_d_tb_m_tag_list.append(self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True, t_mul_tag=True))
s_stb_d_tb_a_tag_list.append(self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True, t_add_tag=True))
s_stb_s_tb_d_ts_list.append(self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"', ts=0))
s_stb_s_tb_d_ts_m_tag_list.append(self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"', ts=0, t_mul_tag=True))
s_stb_s_tb_d_ts_a_tag_list.append(self.genFullTypeSql(stb_name=stb_name, tb_name=tb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"', ts=0, t_add_tag=True))
s_stb_d_tb_d_ts_list.append(self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True, ts=0))
s_stb_d_tb_d_ts_m_tag_list.append(self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True, ts=0, t_mul_tag=True))
s_stb_d_tb_d_ts_a_tag_list.append(self.genFullTypeSql(stb_name=stb_name, t7=f'"{tdCom.getLongName(8, "letters")}"', value=f'"{tdCom.getLongName(8, "letters")}"', id_noexist_tag=True, ts=0, t_add_tag=True))
return d_stb_d_tb_list, s_stb_s_tb_list, s_stb_s_tb_a_tag_list, s_stb_s_tb_m_tag_list, \
s_stb_d_tb_list, s_stb_d_tb_m_tag_list, s_stb_d_tb_a_tag_list, s_stb_s_tb_d_ts_list, \
s_stb_s_tb_d_ts_m_tag_list, s_stb_s_tb_d_ts_a_tag_list, s_stb_d_tb_d_ts_list, \
s_stb_d_tb_d_ts_m_tag_list, s_stb_d_tb_d_ts_a_tag_list
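# A minimal sketch, assuming each generated (input_sql, stb_name) tuple is written on its own
# thread through the same telnet-line API used above; the real genMultiThreadSeq/multiThreadRun
# bodies are collapsed in the hunk below, so the two helpers here are hypothetical names used
# only for illustration.
def _genMultiThreadSeqSketch(self, sql_list):
    import threading
    # one hypothetical worker thread per generated line
    return [threading.Thread(target=self._conn.insert_telnet_lines, args=([sql],))
            for sql, _ in sql_list]
def _multiThreadRunSketch(self, tlist):
    # start every insert thread, then wait for all of them to finish
    for t in tlist:
        t.start()
    for t in tlist:
        t.join()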
def genMultiThreadSeq(self, sql_list):
......@@ -1045,7 +1102,7 @@ class TDTestCase:
"""
tdCom.cleanTb()
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name)
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name, value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
s_stb_s_tb_list = self.genSqlList(stb_name=stb_name, tb_name=tb_name)[1]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_s_tb_list))
......@@ -1056,16 +1113,16 @@ class TDTestCase:
tdSql.query(f"select * from {stb_name};")
tdSql.checkRows(1)
def sStbStbDdataAtcInsertMultiThreadCheckCase(self):
def sStbStbDdataAtInsertMultiThreadCheckCase(self):
"""
thread input same stb tb, different data, add columns and tags, result keep first data
"""
tdCom.cleanTb()
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name)
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name, value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
s_stb_s_tb_a_col_a_tag_list = self.genSqlList(stb_name=stb_name, tb_name=tb_name)[2]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_s_tb_a_col_a_tag_list))
s_stb_s_tb_a_tag_list = self.genSqlList(stb_name=stb_name, tb_name=tb_name)[2]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_s_tb_a_tag_list))
tdSql.query(f"show tables;")
tdSql.checkRows(1)
expected_tb_name = self.getNoIdTbName(stb_name)[0]
......@@ -1073,16 +1130,16 @@ class TDTestCase:
tdSql.query(f"select * from {stb_name};")
tdSql.checkRows(1)
def sStbStbDdataMtcInsertMultiThreadCheckCase(self):
def sStbStbDdataMtInsertMultiThreadCheckCase(self):
"""
thread input same stb tb, different data, minus columns and tags, result keep first data
"""
tdCom.cleanTb()
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name)
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name, value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
s_stb_s_tb_m_col_m_tag_list = self.genSqlList(stb_name=stb_name, tb_name=tb_name)[3]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_s_tb_m_col_m_tag_list))
s_stb_s_tb_m_tag_list = self.genSqlList(stb_name=stb_name, tb_name=tb_name)[3]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_s_tb_m_tag_list))
tdSql.query(f"show tables;")
tdSql.checkRows(1)
expected_tb_name = self.getNoIdTbName(stb_name)[0]
......@@ -1095,40 +1152,38 @@ class TDTestCase:
thread input same stb, different tb, different data
"""
tdCom.cleanTb()
input_sql, stb_name = self.genFullTypeSql()
input_sql, stb_name = self.genFullTypeSql(value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
s_stb_d_tb_list = self.genSqlList(stb_name=stb_name)[4]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_d_tb_list))
tdSql.query(f"show tables;")
tdSql.checkRows(6)
def sStbDtbDdataAcMtInsertMultiThreadCheckCase(self):
"""
#! concurrency conflict
"""
def sStbDtbDdataMtInsertMultiThreadCheckCase(self):
"""
thread input same stb, different tb, different data, add col, mul tag
"""
tdCom.cleanTb()
input_sql, stb_name = self.genFullTypeSql()
input_sql, stb_name = self.genFullTypeSql(value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
s_stb_d_tb_a_col_m_tag_list = self.genSqlList(stb_name=stb_name)[5]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_d_tb_a_col_m_tag_list))
s_stb_d_tb_m_tag_list = [(f'{stb_name} 1626006833639000000ns "omfdhyom" t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'yzwswz'), \
(f'{stb_name} 1626006833639000000ns "vqowydbc" t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'yzwswz'), \
(f'{stb_name} 1626006833639000000ns "plgkckpv" t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'yzwswz'), \
(f'{stb_name} 1626006833639000000ns "cujyqvlj" t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'yzwswz'), \
(f'{stb_name} 1626006833639000000ns "twjxisat" t0=T,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'yzwswz')]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_d_tb_m_tag_list))
tdSql.query(f"show tables;")
tdSql.checkRows(6)
tdSql.checkRows(3)
def sStbDtbDdataAtMcInsertMultiThreadCheckCase(self):
"""
#! concurrency conflict
"""
def sStbDtbDdataAtInsertMultiThreadCheckCase(self):
"""
thread input same stb, different tb, different data, add tag, mul col
"""
tdCom.cleanTb()
input_sql, stb_name = self.genFullTypeSql()
input_sql, stb_name = self.genFullTypeSql(value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
s_stb_d_tb_a_tag_m_col_list = self.genSqlList(stb_name=stb_name)[6]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_d_tb_a_tag_m_col_list))
s_stb_d_tb_a_tag_list = self.genSqlList(stb_name=stb_name)[6]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_d_tb_a_tag_list))
tdSql.query(f"show tables;")
tdSql.checkRows(6)
......@@ -1138,89 +1193,94 @@ class TDTestCase:
"""
tdCom.cleanTb()
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name)
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name, value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
s_stb_s_tb_d_ts_list = self.genSqlList(stb_name=stb_name, tb_name=tb_name)[7]
s_stb_s_tb_d_ts_list = [(f'{stb_name} 0 "hkgjiwdj" id="{tb_name}",t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"', 'dwpthv'), \
(f'{stb_name} 0 "rljjrrul" id="{tb_name}",t0=False,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="bmcanhbs",t8=L"ncharTagValue"', 'dwpthv'), \
(f'{stb_name} 0 "basanglx" id="{tb_name}",t0=False,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="enqkyvmb",t8=L"ncharTagValue"', 'dwpthv'), \
(f'{stb_name} 0 "clsajzpp" id="{tb_name}",t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="eivaegjk",t8=L"ncharTagValue"', 'dwpthv'), \
(f'{stb_name} 0 "jitwseso" id="{tb_name}",t0=T,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="yhlwkddq",t8=L"ncharTagValue"', 'dwpthv')]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_s_tb_d_ts_list))
tdSql.query(f"show tables;")
tdSql.checkRows(1)
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(6)
def sStbStbDdataDtsAcMtInsertMultiThreadCheckCase(self):
def sStbStbDdataDtsMtInsertMultiThreadCheckCase(self):
"""
thread input same stb tb, different ts, add col, mul tag
"""
tdCom.cleanTb()
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name)
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name, value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
s_stb_s_tb_d_ts_a_col_m_tag_list = self.genSqlList(stb_name=stb_name, tb_name=tb_name)[8]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_s_tb_d_ts_a_col_m_tag_list))
s_stb_s_tb_d_ts_m_tag_list = self.genSqlList(stb_name=stb_name, tb_name=tb_name)[8]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_s_tb_d_ts_m_tag_list))
tdSql.query(f"show tables;")
tdSql.checkRows(1)
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(6)
tdSql.query(f"select * from {stb_name} where t8 is not NULL")
tdSql.checkRows(6)
tdSql.query(f"select * from {tb_name} where c11 is not NULL;")
tdSql.checkRows(5)
def sStbStbDdataDtsAtMcInsertMultiThreadCheckCase(self):
def sStbStbDdataDtsAtInsertMultiThreadCheckCase(self):
"""
thread input same stb tb, different ts, add tag, mul col
"""
tdCom.cleanTb()
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name)
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name, value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
s_stb_s_tb_d_ts_a_tag_m_col_list = self.genSqlList(stb_name=stb_name, tb_name=tb_name)[9]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_s_tb_d_ts_a_tag_m_col_list))
s_stb_s_tb_d_ts_a_tag_list = [(f'{stb_name} 0 "clummqfy" id="{tb_name}",t0=False,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="hpxzrdiw",t8=L"ncharTagValue",t11=127i8,t10=L"ncharTagValue"', 'bokaxl'), \
(f'{stb_name} 0 "yqeztggb" id="{tb_name}",t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="gdtblmrc",t8=L"ncharTagValue",t11=127i8,t10=L"ncharTagValue"', 'bokaxl'), \
(f'{stb_name} 0 "gbkinqdk" id="{tb_name}",t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="iqniuvco",t8=L"ncharTagValue",t11=127i8,t10=L"ncharTagValue"', 'bokaxl'), \
(f'{stb_name} 0 "ldxxejbd" id="{tb_name}",t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="vxkipags",t8=L"ncharTagValue",t11=127i8,t10=L"ncharTagValue"', 'bokaxl'), \
(f'{stb_name} 0 "tlvzwjes" id="{tb_name}",t0=true,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="enwrlrtj",t8=L"ncharTagValue",t11=127i8,t10=L"ncharTagValue"', 'bokaxl')]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_s_tb_d_ts_a_tag_list))
tdSql.query(f"show tables;")
tdSql.checkRows(1)
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(6)
for c in ["c7", "c8", "c9"]:
tdSql.query(f"select * from {stb_name} where {c} is NULL")
tdSql.checkRows(5)
for t in ["t10", "t11"]:
tdSql.query(f"select * from {stb_name} where {t} is not NULL;")
tdSql.checkRows(6)
tdSql.checkRows(0)
def sStbDtbDdataDtsInsertMultiThreadCheckCase(self):
"""
thread input same stb, different tb, data, ts
"""
tdCom.cleanTb()
input_sql, stb_name = self.genFullTypeSql()
input_sql, stb_name = self.genFullTypeSql(value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
s_stb_d_tb_d_ts_list = self.genSqlList(stb_name=stb_name)[10]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_d_tb_d_ts_list))
tdSql.query(f"show tables;")
tdSql.checkRows(6)
def sStbDtbDdataDtsAcMtInsertMultiThreadCheckCase(self):
"""
# ! concurrency conflict
"""
def sStbDtbDdataDtsMtInsertMultiThreadCheckCase(self):
"""
thread input same stb, different tb, data, ts, add col, mul tag
"""
tdCom.cleanTb()
input_sql, stb_name = self.genFullTypeSql()
input_sql, stb_name = self.genFullTypeSql(value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
s_stb_d_tb_d_ts_a_col_m_tag_list = self.genSqlList(stb_name=stb_name)[11]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_d_tb_d_ts_a_col_m_tag_list))
s_stb_d_tb_d_ts_m_tag_list = [(f'{stb_name} 0 "mnpmtzul" t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'pcppkg'), \
(f'{stb_name} 0 "zbvwckcd" t0=True,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'pcppkg'), \
(f'{stb_name} 0 "vymcjfwc" t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'pcppkg'), \
(f'{stb_name} 0 "laumkwfn" t0=False,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'pcppkg'), \
(f'{stb_name} 0 "nyultzxr" t0=false,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'pcppkg')]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_d_tb_d_ts_m_tag_list))
tdSql.query(f"show tables;")
tdSql.checkRows(6)
tdSql.checkRows(3)
def test(self):
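# ad-hoc manual check (kept commented out in run()): writes a telnet line whose value
# and t1 tag are NCHAR literals (L"涛思数据") and prints the error code if it is rejected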
# input_sql1 = "stb2_5 1626006833610ms 3f64 host=\"host0\",host2=L\"host2\""
# input_sql2 = "rfasta,id=\"rfasta_1\",t0=true,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64 c0=True,c1=127i8,c2=32767i16,c3=2147483647i32,c4=9223372036854775807i64,c5=11.12345f32,c6=22.123456789f64 1626006933640000000ns"
try:
input_sql, stb_name = self.genFullTypeSql()
self.resCmp(input_sql, stb_name)
input_sql = f'test_nchar 0 L"涛思数据" t0=f,t1=L"涛思数据",t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64'
self._conn.insert_telnet_lines([input_sql])
# input_sql, stb_name = self.genFullTypeSql()
# self.resCmp(input_sql, stb_name)
except LinesError as err:
print(err.errno)
# self._conn.insert_telnet_lines([input_sql2])
......@@ -1232,68 +1292,64 @@ class TDTestCase:
# self._conn.insert_telnet_lines([input_sql4])
def runAll(self):
# self.initCheckCase()
# self.boolTypeCheckCase()
self.symbolsCheckCase()
# self.tsCheckCase()
# self.idSeqCheckCase()
# self.idUpperCheckCase()
# self.noIdCheckCase()
# self.maxColTagCheckCase()
# self.idIllegalNameCheckCase()
# self.idStartWithNumCheckCase()
# self.nowTsCheckCase()
# self.dateFormatTsCheckCase()
# self.illegalTsCheckCase()
# self.tagValueLengthCheckCase()
# self.colValueLengthCheckCase()
# self.tagColIllegalValueCheckCase()
# self.duplicateIdTagColInsertCheckCase()
# self.noIdStbExistCheckCase()
# self.duplicateInsertExistCheckCase()
# self.tagColBinaryNcharLengthCheckCase()
# self.tagColAddDupIDCheckCase()
# self.tagColAddCheckCase()
# self.tagMd5Check()
# self.tagColBinaryMaxLengthCheckCase()
# # self.tagColNcharMaxLengthCheckCase()
# self.batchInsertCheckCase()
# self.multiInsertCheckCase(1000)
# self.batchErrorInsertCheckCase()
# # MultiThreads
self.initCheckCase()
self.boolTypeCheckCase()
# ! leave a bug
#self.symbolsCheckCase()
self.tsCheckCase()
self.idSeqCheckCase()
self.idUpperCheckCase()
self.noIdCheckCase()
self.maxColTagCheckCase()
self.idIllegalNameCheckCase()
self.idStartWithNumCheckCase()
self.nowTsCheckCase()
self.dateFormatTsCheckCase()
self.illegalTsCheckCase()
self.tagValueLengthCheckCase()
self.colValueLengthCheckCase()
self.tagColIllegalValueCheckCase()
self.duplicateIdTagColInsertCheckCase()
self.noIdStbExistCheckCase()
self.duplicateInsertExistCheckCase()
self.tagColBinaryNcharLengthCheckCase()
self.tagColAddDupIDCheckCase()
self.tagColAddCheckCase()
self.tagMd5Check()
self.tagColBinaryMaxLengthCheckCase()
self.tagColNcharMaxLengthCheckCase()
self.batchInsertCheckCase()
self.multiInsertCheckCase(1000)
self.batchErrorInsertCheckCase()
self.multiColsInsertCheckCase()
self.blankColInsertCheckCase()
self.blankTagInsertCheckCase()
self.chineseCheckCase()
self.multiFieldCheckCase()
self.errorTypeCheckCase()
# MultiThreads
# self.stbInsertMultiThreadCheckCase()
# self.sStbStbDdataInsertMultiThreadCheckCase()
# self.sStbStbDdataAtcInsertMultiThreadCheckCase()
# self.sStbStbDdataMtcInsertMultiThreadCheckCase()
# self.sStbStbDdataAtInsertMultiThreadCheckCase()
# self.sStbStbDdataMtInsertMultiThreadCheckCase()
# self.sStbDtbDdataInsertMultiThreadCheckCase()
# # # ! concurrency conflict
# # self.sStbDtbDdataAcMtInsertMultiThreadCheckCase()
# # self.sStbDtbDdataAtMcInsertMultiThreadCheckCase()
# self.sStbDtbDdataMtInsertMultiThreadCheckCase()
# self.sStbDtbDdataAtInsertMultiThreadCheckCase()
# self.sStbStbDdataDtsInsertMultiThreadCheckCase()
# # # ! concurrency conflict
# # self.sStbStbDdataDtsAcMtInsertMultiThreadCheckCase()
# # self.sStbStbDdataDtsAtMcInsertMultiThreadCheckCase()
# self.sStbStbDdataDtsMtInsertMultiThreadCheckCase()
# self.sStbStbDdataDtsAtInsertMultiThreadCheckCase()
# self.sStbDtbDdataDtsInsertMultiThreadCheckCase()
# # ! concurrency conflict
# # self.sStbDtbDdataDtsAcMtInsertMultiThreadCheckCase()
# self.sStbDtbDdataDtsMtInsertMultiThreadCheckCase()
def run(self):
print("running {}".format(__file__))
self.createDb()
try:
# self.symbolsCheckCase()
self.runAll()
# self.test()
except Exception as err:
print(''.join(traceback.format_exception(None, err, err.__traceback__)))
raise err
......
......@@ -42,7 +42,7 @@ class TwoClients:
tdSql.execute("drop database if exists db3")
# insert data with taosc
# insert data with c connector
for i in range(10):
os.system("taosdemo -f manualTest/TD-5114/insertDataDb3Replica2.json -y ")
# # check data correct
......
......@@ -24,7 +24,7 @@ from random import choice
class TwoClients:
def initConnection(self):
self.host = "chenhaoran02"
self.host = "chenhaoran01"
self.user = "root"
self.password = "taosdata"
self.config = "/etc/taos/"
......@@ -116,8 +116,10 @@ class TwoClients:
sleep(3)
tdSql.execute(" drop dnode 'chenhaoran02:6030'; ")
sleep(20)
os.system("rm -rf /var/lib/taos/*")
# remove data file;
os.system("rm -rf /home/chr/data/data0/*")
print("clear dnode chenhaoran02'data files")
sleep(5)
os.system("nohup /usr/bin/taosd > /dev/null 2>&1 &")
print("start taosd")
sleep(10)
......