Commit 519b9331 authored by Shengliang Guan

Merge remote-tracking branch 'origin/develop' into feature/linux

@@ -258,10 +258,16 @@ There are also some very friendly third-party connectors in the TDengine community ecosystem that can
 TDengine's test framework and all of its test cases are fully open source.
 Click [here](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) to learn how to run the test cases and how to add new ones.
 # Become a community contributor
 Click [here](https://www.taosdata.com/cn/contributor/) to learn how to become a contributor to TDengine.
 # Join the technical discussion group
 The official TDengine community group, "IoT Big Data Group", is open to all. Search for the WeChat ID "tdengine" and add "Little T" as a friend to join the discussion.
+# [Who is using TDengine](https://github.com/taosdata/TDengine/issues/2432)
+All TDengine users and contributors are welcome to share stories of how they develop with or use TDengine in their work [here](https://github.com/taosdata/TDengine/issues/2432).
@@ -31,7 +31,7 @@ For user manual, system design and architecture, engineering blogs, refer to [TD
 # Building
 At the moment, TDengine only supports building and running on Linux systems. You can choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) or build from the source code. This quick guide covers installation from source only.
-To build TDengine, use [CMake](https://cmake.org/) 3.5 or higher versions in the project directory.
+To build TDengine, use [CMake](https://cmake.org/) 2.8.12.x or higher versions in the project directory.
 ## Install tools
@@ -250,3 +250,6 @@ Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to th
 Add WeChat “tdengine” to join the group, where you can communicate with other users.
+# [User List](https://github.com/taosdata/TDengine/issues/2432)
+If you are using TDengine and find it helpful, or if you would like to contribute, please add your company to the [user list](https://github.com/taosdata/TDengine/issues/2432) and let us know your needs.
@@ -451,7 +451,8 @@ Query OK, 1 row(s) in set (0.000141s)
 | taos-jdbcdriver version | TDengine version   | JDK version |
 | ----------------------- | ------------------ | ----------- |
-| 2.0.12 and above        | 2.0.8.0 and above  | 1.8.x       |
+| 2.0.22                  | 2.0.18.0 and above | 1.8.x       |
+| 2.0.12 - 2.0.21         | 2.0.8.0 - 2.0.17.0 | 1.8.x       |
 | 2.0.4 - 2.0.11          | 2.0.0.0 - 2.0.7.x  | 1.8.x       |
 | 1.0.3                   | 1.6.1.x and above  | 1.8.x       |
 | 1.0.2                   | 1.6.1.x and above  | 1.8.x       |
......
@@ -6,19 +6,27 @@
 ### Memory requirements
 Each Database can create a fixed number of vgroups, equal to the number of CPU cores by default and configurable via maxVgroupsPerDb. Each replica in a vgroup is a vnode; each vnode occupies a fixed amount of memory (determined by the database parameters blocks and cache); each table occupies memory related to the total length of its tags; in addition, the system carries some fixed overhead. The memory each DB needs can therefore be estimated with the following formula:
 ```
-Memory Size = maxVgroupsPerDb * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
+Database Memory Size = maxVgroupsPerDb * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
 ```
 Example: assume a 4-core machine with cache at its default of 16 MB and blocks at its default of 6, and a DB holding 100,000 tables with a total tag length of 256 bytes. The total memory this DB requires is: 4 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 499 MB.
+In day-to-day operations, what usually matters more is how much memory the TDengine server process (taosd) occupies:
+```
+total taosd memory = vnode memory + mnode memory + query memory
+```
+where:
+1. "vnode memory" is this taosd node's share of the storage of all Databases in the cluster. Estimate it by summing the "Database Memory Size" formula above over every DB, then averaging over the number of TDengine nodes in the cluster (multiplying by the replica count when replication is configured).
+2. "mnode memory" is the memory used by the cluster's management nodes. If an mnode is placed on a taosd node, that node additionally consumes "0.2 KB * total number of tables in the cluster".
+3. "query memory" is the memory the server needs to process query requests. A single query statement occupies at least "0.2 KB * number of tables involved in the query".
-Note: the memory capacity derived from this formula should be read as the system's minimum requirement, not as the total memory to provision; real deployments should keep headroom above it to keep system state and performance stable.
-In practice, running systems often spread data across different DBs according to its characteristics, so capacity planning also needs to consider
+Note: the estimate above describes the system's required memory, not an upper bound on total memory. In a real production environment, because of OS caching, resource scheduling, and similar factors, the memory plan should keep some headroom above the estimate so that system state and performance stay stable. Production environments also usually deploy resource-monitoring tools so that shortages of hardware resources are noticed in time.
+Finally, if memory is plentiful, consider increasing the blocks setting so that more data stays in memory, speeding up queries.
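The per-DB formula above can be turned into a quick back-of-the-envelope calculator. The sketch below is illustrative only and not part of TDengine; the function name and the units (cache and the 10 MB vnode overhead in MB, per-table costs in KB, as in the worked example) are assumptions taken from the text.

```python
def database_memory_mb(max_vgroups_per_db, blocks, cache_mb, num_tables, tag_size_kb):
    """Estimate per-DB memory (MB) per the formula in the text above."""
    vnode_mb = max_vgroups_per_db * (blocks * cache_mb + 10)   # fixed vnode cost
    table_mb = num_tables * (tag_size_kb + 0.5) / 1000         # KB -> MB, as in the example
    return vnode_mb + table_mb

# The worked example: 4 cores, cache = 16 MB, blocks = 6,
# 100,000 tables with 256 B (0.25 KB) of tags each
print(database_memory_mb(4, 6, 16, 100_000, 0.25))  # -> 499.0
```

Remember this is the minimum requirement of one DB; the taosd total adds the mnode and query terms described above.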
 ### CPU requirements
......
@@ -249,7 +249,7 @@ TDengine's default timestamp precision is milliseconds, but by modifying the configuration parameter enableMic
 3) A TAGS column name cannot be a reserved keyword;
-4) At most 128 TAGS are allowed, at least 1, with a total length not exceeding 16k characters
+4) At most 128 TAGS are allowed, at least 1, with a total length not exceeding 16 KB
 - **Drop a super table**
......
@@ -41,7 +41,7 @@
 "insert_mode": "taosc",
 "insert_rows": 1000,
 "multi_thread_write_one_tbl": "no",
-"rows_per_tbl": 20,
+"interlace_rows": 20,
 "max_sql_len": 1024000,
 "disorder_ratio": 0,
 "disorder_range": 1000,
......
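Several taosdemo configs in this commit rename `rows_per_tbl` to `interlace_rows`. The parameter governs interleaved writes: instead of finishing one child table before starting the next, rows are written `interlace_rows` at a time, round-robin across tables (0 disables interleaving). The batching logic below is a hypothetical sketch of that ordering for illustration, not taosdemo's actual implementation:

```python
def interleaved_batches(tables, rows_per_table, interlace_rows):
    """Yield (table, start_row, count) batches in round-robin order."""
    written = {t: 0 for t in tables}
    while any(w < rows_per_table for w in written.values()):
        for t in tables:
            if written[t] < rows_per_table:
                count = min(interlace_rows, rows_per_table - written[t])
                yield t, written[t], count
                written[t] += count

# With interlace_rows=20 and two tables of 50 rows each,
# writes alternate between the tables instead of draining t0 first:
for batch in interleaved_batches(["t0", "t1"], 50, 20):
    print(batch)
```

Interleaving keeps ingestion progressing across all child tables at once, which better matches workloads where many devices report concurrently.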
@@ -41,7 +41,7 @@
 "insert_mode": "taosc",
 "insert_rows": 100000,
 "multi_thread_write_one_tbl": "no",
-"rows_per_tbl": 0,
+"interlace_rows": 0,
 "max_sql_len": 1024000,
 "disorder_ratio": 0,
 "disorder_range": 1000,
......
This diff is collapsed.
@@ -27,7 +27,7 @@ SDisk *tfsNewDisk(int level, int id, const char *dir) {
   pDisk->level = level;
   pDisk->id = id;
-  strncpy(pDisk->dir, dir, TSDB_FILENAME_LEN);
+  tstrncpy(pDisk->dir, dir, TSDB_FILENAME_LEN);
   return pDisk;
 }
......
@@ -187,7 +187,7 @@ void tfsInitFile(TFILE *pf, int level, int id, const char *bname) {
   pf->level = level;
   pf->id = id;
-  strncpy(pf->rname, bname, TSDB_FILENAME_LEN);
+  tstrncpy(pf->rname, bname, TSDB_FILENAME_LEN);
   char tmpName[TMPNAME_LEN] = {0};
   snprintf(tmpName, TMPNAME_LEN, "%s/%s", DISK_DIR(pDisk), bname);
@@ -230,15 +230,15 @@ void *tfsDecodeFile(void *buf, TFILE *pf) {
 void tfsbasename(const TFILE *pf, char *dest) {
   char tname[TSDB_FILENAME_LEN] = "\0";
-  strncpy(tname, pf->aname, TSDB_FILENAME_LEN);
-  strncpy(dest, basename(tname), TSDB_FILENAME_LEN);
+  tstrncpy(tname, pf->aname, TSDB_FILENAME_LEN);
+  tstrncpy(dest, basename(tname), TSDB_FILENAME_LEN);
 }
 void tfsdirname(const TFILE *pf, char *dest) {
   char tname[TSDB_FILENAME_LEN] = "\0";
-  strncpy(tname, pf->aname, TSDB_FILENAME_LEN);
-  strncpy(dest, dirname(tname), TSDB_FILENAME_LEN);
+  tstrncpy(tname, pf->aname, TSDB_FILENAME_LEN);
+  tstrncpy(dest, dirname(tname), TSDB_FILENAME_LEN);
 }
 // DIR APIs ====================================
@@ -344,7 +344,7 @@ TDIR *tfsOpendir(const char *rname) {
   }
   tfsInitDiskIter(&(tdir->iter));
-  strncpy(tdir->dirname, rname, TSDB_FILENAME_LEN);
+  tstrncpy(tdir->dirname, rname, TSDB_FILENAME_LEN);
   if (tfsOpendirImpl(tdir) < 0) {
     free(tdir);
......
@@ -334,7 +334,7 @@ static FORCE_INLINE int tsdbOpenDFileSet(SDFileSet* pSet, int flags) {
 static FORCE_INLINE void tsdbRemoveDFileSet(SDFileSet* pSet) {
   for (TSDB_FILE_T ftype = 0; ftype < TSDB_FILE_MAX; ftype++) {
-    tsdbRemoveDFile(TSDB_DFILE_IN_SET(pSet, ftype));
+    (void)tsdbRemoveDFile(TSDB_DFILE_IN_SET(pSet, ftype));
   }
 }
......
@@ -164,7 +164,7 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
       tsdbError("vgId:%d failed to update META record, uid %" PRIu64 " since %s", REPO_ID(pRepo), pAct->uid,
                 tstrerror(terrno));
       tsdbCloseMFile(&mf);
-      tsdbApplyMFileChange(&mf, pOMFile);
+      (void)tsdbApplyMFileChange(&mf, pOMFile);
       // TODO: need to reload metaCache
       return -1;
     }
@@ -304,7 +304,7 @@ static int tsdbCommitTSData(STsdbRepo *pRepo) {
   SDFileSet *pSet = NULL;
   int        fid;
-  memset(&commith, 0, sizeof(SMemTable *));
+  memset(&commith, 0, sizeof(commith));
   if (pMem->numOfRows <= 0) {
     // No memory data, just apply retention on each file on disk
@@ -399,9 +399,9 @@ static void tsdbEndCommit(STsdbRepo *pRepo, int eno) {
   if (pRepo->appH.notifyStatus) pRepo->appH.notifyStatus(pRepo->appH.appH, TSDB_STATUS_COMMIT_OVER, eno);
   SMemTable *pIMem = pRepo->imem;
-  tsdbLockRepo(pRepo);
+  (void)tsdbLockRepo(pRepo);
   pRepo->imem = NULL;
-  tsdbUnlockRepo(pRepo);
+  (void)tsdbUnlockRepo(pRepo);
   tsdbUnRefMemTable(pRepo, pIMem);
   tsem_post(&(pRepo->readyToCommit));
 }
@@ -1136,12 +1136,12 @@ static int tsdbMoveBlock(SCommitH *pCommith, int bidx) {
 }
 static int tsdbCommitAddBlock(SCommitH *pCommith, const SBlock *pSupBlock, const SBlock *pSubBlocks, int nSubBlocks) {
-  if (taosArrayPush(pCommith->aSupBlk, pSupBlock) < 0) {
+  if (taosArrayPush(pCommith->aSupBlk, pSupBlock) == NULL) {
     terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
    return -1;
   }
-  if (pSubBlocks && taosArrayPushBatch(pCommith->aSubBlk, pSubBlocks, nSubBlocks) < 0) {
+  if (pSubBlocks && taosArrayPushBatch(pCommith->aSubBlk, pSubBlocks, nSubBlocks) == NULL) {
    terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
    return -1;
   }
@@ -1379,7 +1379,7 @@ static int tsdbSetAndOpenCommitFile(SCommitH *pCommith, SDFileSet *pSet, int fid
             tstrerror(terrno));
   tsdbCloseDFileSet(pWSet);
-  tsdbRemoveDFile(pWHeadf);
+  (void)tsdbRemoveDFile(pWHeadf);
   if (pCommith->isRFileSet) {
     tsdbCloseAndUnsetFSet(&(pCommith->readh));
   return -1;
......
@@ -380,7 +380,7 @@ static int tsdbSaveFSStatus(SFSStatus *pStatus, int vid) {
   if (taosWrite(fd, pBuf, fsheader.len) < fsheader.len) {
     terrno = TAOS_SYSTEM_ERROR(errno);
     close(fd);
-    remove(tfname);
+    (void)remove(tfname);
     taosTZfree(pBuf);
     return -1;
   }
@@ -413,7 +413,7 @@ static void tsdbApplyFSTxnOnDisk(SFSStatus *pFrom, SFSStatus *pTo) {
   sizeTo = taosArrayGetSize(pTo->df);
   // Apply meta file change
-  tsdbApplyMFileChange(pFrom->pmf, pTo->pmf);
+  (void)tsdbApplyMFileChange(pFrom->pmf, pTo->pmf);
   // Apply SDFileSet change
   if (ifrom >= sizeFrom) {
@@ -853,7 +853,7 @@ static int tsdbScanRootDir(STsdbRepo *pRepo) {
       continue;
     }
-    tfsremove(pf);
+    (void)tfsremove(pf);
     tsdbDebug("vgId:%d invalid file %s is removed", REPO_ID(pRepo), TFILE_NAME(pf));
   }
@@ -879,7 +879,7 @@ static int tsdbScanDataDir(STsdbRepo *pRepo) {
     tfsbasename(pf, bname);
     if (!tsdbIsTFileInFS(pfs, pf)) {
-      tfsremove(pf);
+      (void)tfsremove(pf);
       tsdbDebug("vgId:%d invalid file %s is removed", REPO_ID(pRepo), TFILE_NAME(pf));
     }
   }
@@ -939,7 +939,7 @@ static int tsdbRestoreMeta(STsdbRepo *pRepo) {
     if (strcmp(bname, tsdbTxnFname[TSDB_TXN_TEMP_FILE]) == 0) {
      // Skip current.t file
      tsdbInfo("vgId:%d file %s exists, remove it", REPO_ID(pRepo), TFILE_NAME(pf));
-      tfsremove(pf);
+      (void)tfsremove(pf);
       continue;
     }
@@ -1045,7 +1045,7 @@ static int tsdbRestoreDFileSet(STsdbRepo *pRepo) {
     int code = regexec(&regex, bname, 0, NULL, 0);
     if (code == 0) {
-      if (taosArrayPush(fArray, (void *)pf) < 0) {
+      if (taosArrayPush(fArray, (void *)pf) == NULL) {
         terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
         tfsClosedir(tdir);
         taosArrayDestroy(fArray);
@@ -1055,7 +1055,7 @@ static int tsdbRestoreDFileSet(STsdbRepo *pRepo) {
     } else if (code == REG_NOMATCH) {
       // Not match
       tsdbInfo("vgId:%d invalid file %s exists, remove it", REPO_ID(pRepo), TFILE_NAME(pf));
-      tfsremove(pf);
+      (void)tfsremove(pf);
       continue;
     } else {
       // Has other error
......
@@ -523,7 +523,7 @@ static int tsdbApplyDFileChange(SDFile *from, SDFile *to) {
       tsdbRollBackDFile(to);
     }
   } else {
-    tsdbRemoveDFile(from);
+    (void)tsdbRemoveDFile(from);
   }
 }
 }
......
@@ -139,7 +139,7 @@ int tsdbLoadBlockIdx(SReadH *pReadh) {
     ptr = tsdbDecodeSBlockIdx(ptr, &blkIdx);
     ASSERT(ptr != NULL);
-    if (taosArrayPush(pReadh->aBlkIdx, (void *)(&blkIdx)) < 0) {
+    if (taosArrayPush(pReadh->aBlkIdx, (void *)(&blkIdx)) == NULL) {
       terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
       return -1;
     }
......
@@ -47,7 +47,6 @@ static FORCE_INLINE int taosCalcChecksumAppend(TSCKSUM csi, uint8_t *stream, uin
 }
 static FORCE_INLINE int taosCheckChecksum(const uint8_t *stream, uint32_t ssize, TSCKSUM checksum) {
-  if (ssize < 0) return 0;
   return (checksum == (*crc32c)(0, stream, (size_t)ssize));
 }
......
@@ -38,7 +38,7 @@
 "insert_rows": 100,
 "multi_thread_write_one_tbl": "no",
 "number_of_tbl_in_one_sql": 0,
-"rows_per_tbl": 3,
+"interlace_rows": 3,
 "max_sql_len": 1024,
 "disorder_ratio": 0,
 "disorder_range": 1000,
......
@@ -28,6 +28,8 @@ RUN ulimit -c unlimited
 COPY --from=builder /root/bin/taosd /usr/bin
 COPY --from=builder /root/bin/tarbitrator /usr/bin
+COPY --from=builder /root/bin/taosdemo /usr/bin
+COPY --from=builder /root/bin/taosdump /usr/bin
 COPY --from=builder /root/bin/taos /usr/bin
 COPY --from=builder /root/cfg/taos.cfg /etc/taos/
 COPY --from=builder /root/lib/libtaos.so.* /usr/lib/libtaos.so.1
......
@@ -45,8 +45,8 @@ class BuildDockerCluser:
         os.system("docker exec -d $(docker ps|grep tdnode1|awk '{print $1}') tarbitrator")
     def run(self):
-        if self.numOfNodes < 2 or self.numOfNodes > 5:
-            print("the number of nodes must be between 2 and 5")
+        if self.numOfNodes < 2 or self.numOfNodes > 10:
+            print("the number of nodes must be between 2 and 10")
             exit(0)
         print("remove Flag value %s" % self.removeFlag)
         if self.removeFlag == False:
@@ -96,7 +96,7 @@ parser.add_argument(
     '-v',
     '--version',
     action='store',
-    default='2.0.17.1',
+    default='2.0.18.1',
     type=str,
     help='the version of the cluster to be build, Default is 2.0.17.1')
 parser.add_argument(
......
 #!/bin/bash
 echo "Executing buildClusterEnv.sh"
 CURR_DIR=`pwd`
+IN_TDINTERNAL="community"
 if [ $# != 6 ]; then
   echo "argument list need input : "
@@ -32,7 +33,7 @@ do
 done
 function addTaoscfg {
-  for i in {1..5}
+  for((i=1;i<=$NUM_OF_NODES;i++))
   do
     touch $DOCKER_DIR/node$i/cfg/taos.cfg
     echo 'firstEp tdnode1:6030' > $DOCKER_DIR/node$i/cfg/taos.cfg
@@ -42,7 +43,7 @@ function addTaoscfg {
 }
 function createDIR {
-  for i in {1..5}
+  for((i=1;i<=$NUM_OF_NODES;i++))
   do
     mkdir -p $DOCKER_DIR/node$i/data
     mkdir -p $DOCKER_DIR/node$i/log
@@ -53,7 +54,7 @@ function createDIR {
 function cleanEnv {
   echo "Clean up docker environment"
-  for i in {1..5}
+  for((i=1;i<=$NUM_OF_NODES;i++))
   do
     rm -rf $DOCKER_DIR/node$i/data/*
     rm -rf $DOCKER_DIR/node$i/log/*
@@ -68,23 +69,48 @@ function prepareBuild {
   fi
   if [ ! -e $DOCKER_DIR/TDengine-server-$VERSION-Linux-x64.tar.gz ] || [ ! -e $DOCKER_DIR/TDengine-arbitrator-$VERSION-Linux-x64.tar.gz ]; then
     cd $CURR_DIR/../../../../packaging
+    echo $CURR_DIR
+    echo $IN_TDINTERNAL
     echo "generating TDeninger packages"
-    ./release.sh -v edge -n $VERSION >> /dev/null
-    if [ ! -e $CURR_DIR/../../../../release/TDengine-server-$VERSION-Linux-x64.tar.gz ]; then
-      echo "no TDengine install package found"
-      exit 1
-    fi
-    if [ ! -e $CURR_DIR/../../../../release/TDengine-arbitrator-$VERSION-Linux-x64.tar.gz ]; then
-      echo "no arbitrator install package found"
-      exit 1
-    fi
+    if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
+      pwd
+      ./release.sh -v cluster -n $VERSION >> /dev/null 2>&1
+    else
+      pwd
+      ./release.sh -v edge -n $VERSION >> /dev/null 2>&1
+    fi
+    if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
+      if [ ! -e $CURR_DIR/../../../../release/TDengine-enterprise-server-$VERSION-Linux-x64.tar.gz ]; then
+        echo "no TDengine install package found"
+        exit 1
+      fi
+      if [ ! -e $CURR_DIR/../../../../release/TDengine-enterprise-arbitrator-$VERSION-Linux-x64.tar.gz ]; then
+        echo "no arbitrator install package found"
+        exit 1
+      fi
+    else
+      if [ ! -e $CURR_DIR/../../../../release/TDengine-server-$VERSION-Linux-x64.tar.gz ]; then
+        echo "no TDengine install package found"
+        exit 1
+      fi
+      if [ ! -e $CURR_DIR/../../../../release/TDengine-arbitrator-$VERSION-Linux-x64.tar.gz ]; then
+        echo "no arbitrator install package found"
+        exit 1
+      fi
+    fi
     cd $CURR_DIR/../../../../release
-    mv TDengine-server-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
-    mv TDengine-arbitrator-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
+    if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
+      mv TDengine-enterprise-server-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
+      mv TDengine-enterprise-arbitrator-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
+    else
+      mv TDengine-server-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
+      mv TDengine-arbitrator-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
+    fi
   fi
   rm -rf $DOCKER_DIR/*.yml
@@ -99,23 +125,31 @@ function clusterUp {
   cd $DOCKER_DIR
-  if [ $NUM_OF_NODES -eq 2 ]; then
-    echo "create 2 dnodes"
-    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose up -d
-  fi
-  if [ $NUM_OF_NODES -eq 3 ]; then
-    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml -f node3.yml up -d
-  fi
-  if [ $NUM_OF_NODES -eq 4 ]; then
-    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml -f node3.yml -f node4.yml up -d
-  fi
-  if [ $NUM_OF_NODES -eq 5 ]; then
-    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml -f node3.yml -f node4.yml -f node5.yml up -d
-  fi
+  if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
+    docker_run="PACKAGE=TDengine-enterprise-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-enterprise-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-enterprise-server-$VERSION DIR2=TDengine-enterprise-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml "
  else
+    docker_run="PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml "
+  fi
+  if [ $NUM_OF_NODES -ge 2 ];then
+    echo "create $NUM_OF_NODES dnodes"
+    for((i=3;i<=$NUM_OF_NODES;i++))
+    do
+      if [ ! -f node$i.yml ];then
+        echo "node$i.yml not exist"
+        cp node3.yml node$i.yml
+        sed -i "s/td2.0-node3/td2.0-node$i/g" node$i.yml
+        sed -i "s/'tdnode3'/'tdnode$i'/g" node$i.yml
+        sed -i "s#/node3/#/node$i/#g" node$i.yml
+        sed -i "s#hostname: tdnode3#hostname: tdnode$i#g" node$i.yml
+        sed -i "s#ipv4_address: 172.27.0.9#ipv4_address: 172.27.0.`expr $i + 6`#g" node$i.yml
+      fi
+      docker_run=$docker_run" -f node$i.yml "
    done
+    docker_run=$docker_run" up -d"
+  fi
+  echo $docker_run |sh
  echo "docker compose finish"
 }
......
@@ -28,7 +28,7 @@ function removeDockerContainers {
 function cleanEnv {
   echo "Clean up docker environment"
-  for i in {1..5}
+  for i in {1..10}
   do
     rm -rf $DOCKER_DIR/node$i/data/*
     rm -rf $DOCKER_DIR/node$i/log/*
......
@@ -30,6 +30,11 @@ services:
       - "tdnode3:172.27.0.9"
       - "tdnode4:172.27.0.10"
       - "tdnode5:172.27.0.11"
+      - "tdnode6:172.27.0.12"
+      - "tdnode7:172.27.0.13"
+      - "tdnode8:172.27.0.14"
+      - "tdnode9:172.27.0.15"
+      - "tdnode10:172.27.0.16"
     volumes:
       # bind data directory
       - type: bind
@@ -61,7 +66,9 @@ services:
       context: .
       args:
         - PACKAGE=${PACKAGE}
+        - TARBITRATORPKG=${TARBITRATORPKG}
         - EXTRACTDIR=${DIR}
+        - EXTRACTDIR2=${DIR2}
         - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode2'
......
@@ -39,7 +39,7 @@
 "insert_rows": 100000,
 "multi_thread_write_one_tbl": "no",
 "number_of_tbl_in_one_sql": 1,
-"rows_per_tbl": 100,
+"interlace_rows": 100,
 "max_sql_len": 1024000,
 "disorder_ratio": 0,
 "disorder_range": 1000,
......
@@ -6,7 +6,9 @@ services:
       context: .
       args:
         - PACKAGE=${PACKAGE}
+        - TARBITRATORPKG=${TARBITRATORPKG}
         - EXTRACTDIR=${DIR}
+        - EXTRACTDIR2=${DIR2}
         - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode3'
@@ -24,10 +26,15 @@ services:
         sysctl -p &&
         exec my-main-application"
     extra_hosts:
+      - "tdnode1:172.27.0.7"
       - "tdnode2:172.27.0.8"
+      - "tdnode3:172.27.0.9"
       - "tdnode4:172.27.0.10"
       - "tdnode5:172.27.0.11"
+      - "tdnode6:172.27.0.12"
+      - "tdnode7:172.27.0.13"
+      - "tdnode8:172.27.0.14"
+      - "tdnode9:172.27.0.15"
+      - "tdnode10:172.27.0.16"
     volumes:
       # bind data directory
       - type: bind
......
@@ -6,7 +6,9 @@ services:
       context: .
       args:
         - PACKAGE=${PACKAGE}
+        - TARBITRATORPKG=${TARBITRATORPKG}
         - EXTRACTDIR=${DIR}
+        - EXTRACTDIR2=${DIR2}
         - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode4'
@@ -28,6 +30,11 @@ services:
       - "tdnode3:172.27.0.9"
       - "tdnode4:172.27.0.10"
       - "tdnode5:172.27.0.11"
+      - "tdnode6:172.27.0.12"
+      - "tdnode7:172.27.0.13"
+      - "tdnode8:172.27.0.14"
+      - "tdnode9:172.27.0.15"
+      - "tdnode10:172.27.0.16"
     volumes:
       # bind data directory
       - type: bind
......
@@ -6,7 +6,9 @@ services:
       context: .
       args:
         - PACKAGE=${PACKAGE}
+        - TARBITRATORPKG=${TARBITRATORPKG}
         - EXTRACTDIR=${DIR}
+        - EXTRACTDIR2=${DIR2}
         - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode5'
@@ -28,6 +30,11 @@ services:
       - "tdnode3:172.27.0.9"
       - "tdnode4:172.27.0.10"
       - "tdnode5:172.27.0.11"
+      - "tdnode6:172.27.0.12"
+      - "tdnode7:172.27.0.13"
+      - "tdnode8:172.27.0.14"
+      - "tdnode9:172.27.0.15"
+      - "tdnode10:172.27.0.16"
     volumes:
       # bind data directory
       - type: bind
......
@@ -238,6 +238,7 @@ python3 test.py -f tools/taosdemoTestLimitOffset.py
 python3 test.py -f tools/taosdumpTest.py
 python3 test.py -f tools/taosdemoTest2.py
 python3 test.py -f tools/taosdemoTestSampleData.py
+python3 test.py -f tools/taosdemoTestInterlace.py
 # subscribe
 python3 test.py -f subscribe/singlemeter.py
......
@@ -10,7 +10,7 @@
 "result_file": "./insert_res.txt",
 "confirm_parameter_prompt": "no",
 "insert_interval": 5000,
-"rows_per_tbl": 50,
+"interlace_rows": 50,
 "num_of_records_per_req": 100,
 "max_sql_len": 1024000,
 "databases": [{
@@ -42,7 +42,7 @@
 "insert_mode": "taosc",
 "insert_rows": 250,
 "multi_thread_write_one_tbl": "no",
-"rows_per_tbl": 80,
+"interlace_rows": 80,
 "max_sql_len": 1024000,
 "disorder_ratio": 0,
 "disorder_range": 1000,
......
@@ -51,7 +51,7 @@ class TDTestCase:
         else:
             tdLog.info("taosd found in %s" % buildPath)
         binPath = buildPath + "/build/bin/"
-        os.system("%staosdemo -y -M -t %d -n %d -x" %
+        os.system("%staosdemo -y -t %d -n %d" %
                   (binPath, self.numberOfTables, self.numberOfRecords))
         tdSql.execute("use test")
......
@@ -31,7 +31,7 @@ class TDTestCase:
     def insertDataAndAlterTable(self, threadID):
         if(threadID == 0):
-            os.system("taosdemo -M -y -t %d -n %d -x" %
+            os.system("taosdemo -y -t %d -n %d" %
                       (self.numberOfTables, self.numberOfRecords))
         if(threadID == 1):
             time.sleep(2)
......
@@ -17,6 +17,7 @@ from util.log import *
 from util.cases import *
 from util.sql import *
 from util.dnodes import *
+import subprocess
 class TDTestCase:
@@ -39,7 +40,7 @@ class TDTestCase:
             if ("taosd" in files):
                 rootRealPath = os.path.dirname(os.path.realpath(root))
                 if ("packaging" not in rootRealPath):
-                    buildPath = root[:len(root)-len("/build/bin")]
+                    buildPath = root[:len(root) - len("/build/bin")]
                     break
         return buildPath
@@ -50,14 +51,23 @@ class TDTestCase:
             tdLog.exit("taosd not found!")
         else:
             tdLog.info("taosd found in %s" % buildPath)
-        binPath = buildPath+ "/build/bin/"
-        os.system("%staosdemo -f tools/insert-interlace.json" % binPath)
+        binPath = buildPath + "/build/bin/"
+        taosdemoCmd = "%staosdemo -f tools/insert-interlace.json -pp 2>&1 | grep sleep | wc -l" % binPath
+        sleepTimes = subprocess.check_output(
+            taosdemoCmd, shell=True).decode("utf-8")
+        print("sleep times: %d" % int(sleepTimes))
+        if (int(sleepTimes) != 16):
+            caller = inspect.getframeinfo(inspect.stack()[0][0])
+            tdLog.exit(
+                "%s(%d) failed: expected sleep times 16, actual %d" %
+                (caller.filename, caller.lineno, int(sleepTimes)))
         tdSql.execute("use db")
         tdSql.query("select count(tbname) from db.stb")
-        tdSql.checkData(0, 0, 100)
+        tdSql.checkData(0, 0, 9)
         tdSql.query("select count(*) from db.stb")
-        tdSql.checkData(0, 0, 33000)
+        tdSql.checkData(0, 0, 2250)
     def stop(self):
         tdSql.close()
......
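The updated test above counts how many times taosdemo reports a sleep by piping its output through `grep sleep | wc -l` and parsing the count. The same subprocess pattern, sketched with a trivial stand-in command (`printf` here replaces the real binary):

```python
import subprocess

# Stand-in for the real taosdemo invocation: a command whose output
# contains the word "sleep" a known number of times.
cmd = "printf 'sleep\\nsleep\\nother\\nsleep\\n' | grep sleep | wc -l"

# Same pattern as the updated test: run the pipeline through the shell,
# capture stdout, and turn the line count into an int.
sleep_times = int(subprocess.check_output(cmd, shell=True).decode("utf-8"))
print("sleep times: %d" % sleep_times)  # prints "sleep times: 3"
```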
@@ -50,7 +50,7 @@ class TDTestCase:
         else:
             tdLog.info("taosd found in %s" % buildPath)
         binPath = buildPath + "/build/bin/"
-        os.system("%staosdemo -y -t %d -n %d -x" %
+        os.system("%staosdemo -N -y -t %d -n %d" %
                   (binPath, self.numberOfTables, self.numberOfRecords))
         tdSql.query("show databases")
...
@@ -79,24 +79,26 @@ function runSimCaseOneByOnefq {
       date +%F\ %T | tee -a out.log
       if [[ "$tests_dir" == *"$IN_TDINTERNAL"* ]]; then
         echo -n $case
-        ./test.sh -f $case > /dev/null 2>&1 && \
+        ./test.sh -f $case > ../../../sim/case.log 2>&1 && \
         ( grep -q 'script.*'$case'.*failed.*, err.*lineNum' ../../../sim/tsim/log/taoslog0.0 && echo -e "${RED} failed${NC}" | tee -a out.log || echo -e "${GREEN} success${NC}" | tee -a out.log )|| \
         ( grep -q 'script.*success.*m$' ../../../sim/tsim/log/taoslog0.0 && echo -e "${GREEN} success${NC}" | tee -a out.log ) || \
-        echo -e "${RED} failed${NC}" | tee -a out.log
+        ( echo -e "${RED} failed${NC}" | tee -a out.log && echo '=====================log=====================' && cat ../../../sim/case.log )
       else
         echo -n $case
-        ./test.sh -f $case > /dev/null 2>&1 && \
+        ./test.sh -f $case > ../../sim/case.log 2>&1 && \
         ( grep -q 'script.*'$case'.*failed.*, err.*lineNum' ../../sim/tsim/log/taoslog0.0 && echo -e "${RED} failed${NC}" | tee -a out.log || echo -e "${GREEN} success${NC}" | tee -a out.log )|| \
         ( grep -q 'script.*success.*m$' ../../sim/tsim/log/taoslog0.0 && echo -e "${GREEN} success${NC}" | tee -a out.log ) || \
-        echo -e "${RED} failed${NC}" | tee -a out.log
+        ( echo -e "${RED} failed${NC}" | tee -a out.log && echo '=====================log=====================' && cat ../../sim/case.log )
       fi

       out_log=`tail -1 out.log `
       if [[ $out_log =~ 'failed' ]];then
         if [[ "$tests_dir" == *"$IN_TDINTERNAL"* ]]; then
           cp -r ../../../sim ~/sim_`date "+%Y_%m_%d_%H:%M:%S"`
+          rm -rf ../../../sim/case.log
         else
           cp -r ../../sim ~/sim_`date "+%Y_%m_%d_%H:%M:%S" `
+          rm -rf ../../sim/case.log
         fi
         exit 8
       fi
@@ -105,6 +107,8 @@ function runSimCaseOneByOnefq {
       dohavecore $2
     fi
   done
+  rm -rf ../../../sim/case.log
+  rm -rf ../../sim/case.log
}

function runPyCaseOneByOne {
@@ -158,13 +162,16 @@ function runPyCaseOneByOnefq() {
       start_time=`date +%s`
       date +%F\ %T | tee -a pytest-out.log
       echo -n $case
-      $line > /dev/null 2>&1 && \
+      $line > ../../sim/case.log 2>&1 && \
       echo -e "${GREEN} success${NC}" | tee -a pytest-out.log || \
       echo -e "${RED} failed${NC}" | tee -a pytest-out.log
       end_time=`date +%s`
       out_log=`tail -1 pytest-out.log `
       if [[ $out_log =~ 'failed' ]];then
         cp -r ../../sim ~/sim_`date "+%Y_%m_%d_%H:%M:%S" `
+        echo '=====================log====================='
+        cat ../../sim/case.log
+        rm -rf ../../sim/case.log
         exit 8
       fi
       echo execution time of $case was `expr $end_time - $start_time`s. | tee -a pytest-out.log
@@ -174,6 +181,7 @@ function runPyCaseOneByOnefq() {
       dohavecore $2
     fi
   done
+  rm -rf ../../sim/case.log
}

totalFailed=0
...
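The shell changes above redirect each test case's output into a `case.log`, dump that log when the case fails, and clean it up afterwards. The same capture-and-dump-on-failure idea, sketched in Python with a trivial stand-in command (`echo running; false` replaces a real test case):

```python
import os
import subprocess
import tempfile

def run_case(cmd, log_path):
    # Run one test case, capturing stdout+stderr to log_path.
    # On failure, print the captured log so CI output shows the cause,
    # then remove the log -- mirroring the case.log handling above.
    with open(log_path, "w") as log:
        rc = subprocess.call(cmd, shell=True, stdout=log, stderr=log)
    if rc != 0:
        print("=====================log=====================")
        with open(log_path) as log:
            print(log.read())
    os.remove(log_path)
    return rc

log_file = os.path.join(tempfile.gettempdir(), "case.log")
status = run_case("echo running; false", log_file)
print("exit status: %d" % status)  # non-zero: the log was dumped first
```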