Commit 9a74e725 authored by P plum-lihui

Merge branch 'main' into test_main/lihui

......@@ -2,7 +2,7 @@
IF (DEFINED VERNUMBER)
SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
SET(TD_VER_NUMBER "3.0.6.0.alpha")
SET(TD_VER_NUMBER "3.0.6.1.alpha")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)
......
......@@ -12,50 +12,25 @@ To use DBeaver to manage TDengine, you need to prepare the following:
- Install DBeaver. DBeaver supports mainstream operating systems including Windows, macOS, and Linux. Please make sure you download and install the correct version (23.1.1+) for your platform. Please refer to the [official DBeaver documentation](https://github.com/dbeaver/dbeaver/wiki/Installation) for detailed installation steps.
- If you use an on-premises TDengine cluster, please make sure that TDengine and taosAdapter are deployed and running properly. For detailed information, please refer to the taosAdapter User Manual.
- If you use TDengine Cloud, please [register](https://cloud.tdengine.com/) for an account.
## Usage
### Use DBeaver to access on-premises TDengine cluster
1. Start the DBeaver application, click the button or menu item to choose **New Database Connection**, and then select **TDengine** in the **Timeseries** category.
![Connect TDengine with DBeaver](./dbeaver/dbeaver-connect-tdengine-en.webp)
2. Configure the TDengine connection by filling in the host address, port number, username, and password. If TDengine is deployed on the local machine, you only need to fill in the username and password. The default username is root and the default password is taosdata. Click **Test Connection** to check whether the connection is working. If you do not have the TDengine Java connector installed on the local machine, DBeaver will prompt you to download and install it. (A scripted connectivity check using the same parameters is sketched after this list.)
![Configure the TDengine connection](./dbeaver/dbeaver-config-tdengine-en.webp)
3. If the connection is successful, it will be displayed as shown in the following figure. If the connection fails, please check whether the TDengine service and taosAdapter are running correctly and whether the host address, port number, username, and password are correct.
![Connection successful](./dbeaver/dbeaver-connect-tdengine-test-en.webp)
4. Use DBeaver to select databases and tables and browse your data stored in TDengine.
![Browse TDengine data with DBeaver](./dbeaver/dbeaver-browse-data-en.webp)
5. You can also manipulate TDengine data by executing SQL commands.
![Use SQL commands to manipulate TDengine data in DBeaver](./dbeaver/dbeaver-sql-execution-en.webp)
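If you want to verify the same connection outside DBeaver, a single REST call against taosAdapter is enough. This is a minimal sketch in Python, assuming taosAdapter listens on the default port 6041 of the local machine and the default root/taosdata account is still in place; the `requests` package is an extra dependency.

```python
# Minimal connectivity check against taosAdapter's REST endpoint (/rest/sql).
# Assumes taosAdapter on localhost:6041 and the default root/taosdata account.
import requests

resp = requests.post(
    "http://localhost:6041/rest/sql",   # REST SQL endpoint served by taosAdapter
    data="show databases",              # any simple SQL statement will do
    auth=("root", "taosdata"),          # same credentials as in the DBeaver dialog
    timeout=5,
)
resp.raise_for_status()
body = resp.json()
# In TDengine 3.x the response carries a "code" field; 0 means the statement succeeded.
if body.get("code") == 0:
    print("taosAdapter is reachable; databases:", [row[0] for row in body.get("data", [])])
else:
    print("taosAdapter returned an error:", body)
```

If this call succeeds, the **Test Connection** button in step 2 should succeed with the same host, port, and credentials.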
### Use DBeaver to access TDengine Cloud
1. Log in to the TDengine Cloud service, select **Programming** > **Java** in the management console, and then copy the string value of `TDENGINE_JDBC_URL` displayed in the **Config** section.
![Copy JDBC URL from TDengine Cloud](./dbeaver/tdengine-cloud-jdbc-dsn-en.webp)
2. Start the DBeaver application, click the button or menu item to choose **New Database Connection**, and then select **TDengine Cloud** in the **Timeseries** category.
![Connect TDengine Cloud with DBeaver](./dbeaver/dbeaver-connect-tdengine-cloud-en.webp)
3. Configure the TDengine Cloud connection by filling in the JDBC URL value. Click **Test Connection**. If you do not have the TDengine Java connector installed on the local machine, DBeaver will prompt you to download and install it. If the connection is successful, it will be displayed as shown in the following figure. If the connection fails, please check whether the TDengine Cloud service is running properly and whether the JDBC URL is correct.
![Configure the TDengine Cloud connection](./dbeaver/dbeaver-connect-tdengine-cloud-test-en.webp)
4. Use DBeaver to select databases and tables and browse your data stored in TDengine Cloud.
![Browse TDengine Cloud data with DBeaver](./dbeaver/dbeaver-browse-data-cloud-en.webp)
5. You can also manipulate TDengine Cloud data by executing SQL commands.
![Use SQL commands to manipulate TDengine Cloud data in DBeaver](./dbeaver/dbeaver-sql-execution-cloud-en.webp)
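As a quick cross-check of the same cloud instance outside DBeaver, the REST submodule of taospy can reuse the information that sits behind the JDBC URL. The sketch below is illustrative only: the endpoint and token are placeholders to be replaced with the values shown in the Cloud console, and it assumes `taosrest.connect` accepts `url` and `token` keyword arguments as in current taospy releases.

```python
# Hypothetical sketch: connect to a TDengine Cloud instance with taospy's REST submodule.
# The URL and token are placeholders; copy the real values from the Cloud console.
import taosrest

conn = taosrest.connect(
    url="https://<your-cloud-endpoint>",  # placeholder cloud endpoint
    token="<your-cloud-token>",           # placeholder access token
)
cur = conn.cursor()
cur.execute("show databases")             # same kind of SQL you would run in DBeaver
print(cur.fetchall())
cur.close()
conn.close()
```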
......@@ -10,6 +10,10 @@ For TDengine 2.x installation packages by version, please visit [here](https://w
import Release from "/components/ReleaseV3";
## 3.0.6.0
<Release type="tdengine" version="3.0.6.0" />
## 3.0.5.1
<Release type="tdengine" version="3.0.5.1" />
......
---
toc_max_heading_level: 4
sidebar_label: Python
title: TDengine Python Connector
description: "taospy 是 TDengine 的官方 Python 连接器。taospy 提供了丰富的 API, 使得 Python 应用可以很方便地使用 TDengine。tasopy 对 TDengine 的原生接口和 REST 接口都进行了封装, 分别对应 tasopy 的两个子模块:taos 和 taosrest。除了对原生接口和 REST 接口的封装,taospy 还提供了符合 Python 数据访问规范(PEP 249)的编程接口。这使得 taospy 和很多第三方工具集成变得简单,比如 SQLAlchemy 和 pandas"
......
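As a quick illustration of the PEP 249 style mentioned in the description above, the sketch below uses the taosrest submodule, which only needs a running taosAdapter rather than the native client library; the URL and credentials are the usual local defaults and are assumptions about your deployment.

```python
# Minimal PEP 249-style usage of taospy's REST submodule (local defaults assumed).
import taosrest

conn = taosrest.connect(url="http://localhost:6041", user="root", password="taosdata")
cur = conn.cursor()
cur.execute("show databases")   # execute() / fetchall() follow the PEP 249 cursor interface
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
```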
......@@ -8,21 +8,16 @@ DBeaver is a popular cross-platform database management tool, convenient for developers and database
## Prerequisites
### Install DBeaver
To use DBeaver to manage TDengine, you need to prepare the following:
- Install DBeaver. DBeaver supports mainstream operating systems including Windows, macOS, and Linux. Please make sure to [download](https://dbeaver.io/download/) the installation package for the correct platform and version (23.1.1+). For detailed installation steps, please refer to the [official DBeaver documentation](https://github.com/dbeaver/dbeaver/wiki/Installation).
- If you use an on-premises TDengine cluster, please make sure that TDengine is running properly and that taosAdapter has been installed and is running properly. For details, please refer to the [taosAdapter User Manual](/reference/taosadapter).
- If you use TDengine Cloud, please [register](https://cloud.taosdata.com/) for an account.
## Usage
### Use DBeaver to access on-premises TDengine
1. Start the DBeaver application, click the button or menu item to choose **Connect to a database**, and then select TDengine in the **Timeseries** category.
![Connect TDengine with DBeaver](./dbeaver/dbeaver-connect-tdengine-zh.webp)
2. Configure the TDengine connection by filling in the host address, port number, username, and password. If TDengine is deployed on the local machine, you can fill in only the username and password; the default username is root and the default password is taosdata. Click **Test Connection** to check whether the connection is working. If the TDengine Java connector is not installed on the local machine, DBeaver will prompt you to download and install it.
......@@ -31,37 +26,12 @@ DBeaver is a popular cross-platform database management tool, convenient for developers and database
3. If the connection is successful, it will be displayed as shown in the following figure. If the connection fails, please check whether the TDengine service and taosAdapter are running correctly and whether the host address, port number, username, and password are correct.
![Connection successful](./dbeaver/dbeaver-connect-tdengine-test-zh.webp)
4. Use DBeaver to select databases and tables and browse the data stored in TDengine.
![Browse TDengine data with DBeaver](./dbeaver/dbeaver-browse-data-zh.webp)
5. You can also manipulate TDengine data by executing SQL commands (a scripted equivalent is sketched after this list).
![Use SQL commands in DBeaver](./dbeaver/dbeaver-sql-execution-zh.webp)
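If you prefer to script the same statements instead of typing them in DBeaver's SQL editor, the native submodule of taospy exposes the same PEP 249 interface. This is a minimal sketch that assumes the TDengine client library is installed locally, that the default port 6030 and root/taosdata account apply, and where the `demo` database and `meters` table are purely illustrative names.

```python
# Minimal sketch using taospy's native submodule (requires the TDengine client library).
# Host, port, and credentials are the common defaults; adjust them for your deployment.
import taos

conn = taos.connect(host="localhost", port=6030, user="root", password="taosdata")
cur = conn.cursor()
# The statements below are hypothetical examples of manipulating data through SQL.
cur.execute("create database if not exists demo")
cur.execute("create table if not exists demo.meters (ts timestamp, current float)")
cur.execute("insert into demo.meters values (now, 10.3)")
cur.execute("select count(*) from demo.meters")
print(cur.fetchall())
cur.close()
conn.close()
```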
### Use DBeaver to access TDengine Cloud
1. Log in to the TDengine Cloud service, select **Programming** > **Java** in the management console, and then copy the string value of TDENGINE_JDBC_URL.
![Copy the TDengine Cloud DSN](./dbeaver/tdengine-cloud-jdbc-dsn-zh.webp)
2. Start the DBeaver application, click the button or menu item to choose **Connect to a database**, and then select TDengine Cloud in the **Timeseries** category.
![Connect TDengine Cloud with DBeaver](./dbeaver/dbeaver-connect-tdengine-cloud-zh.webp)
3. Configure the TDengine Cloud connection by filling in the JDBC_URL value. Click **Test Connection**; if the TDengine Java connector is not installed on the local machine, DBeaver will prompt you to download and install it. If the connection is successful, it will be displayed as shown in the following figure. If the connection fails, please check whether the TDengine Cloud service is running and whether the JDBC_URL is correct.
![Configure the TDengine Cloud connection](./dbeaver/dbeaver-connect-tdengine-cloud-test-zh.webp)
4. Use DBeaver to select databases and tables and browse the data stored in TDengine Cloud.
![Browse TDengine Cloud data with DBeaver](./dbeaver/dbeaver-browse-cloud-data-zh.webp)
5. You can also manipulate TDengine Cloud data by executing SQL commands.
![Use SQL commands to manipulate TDengine Cloud data in DBeaver](./dbeaver/dbeaver-sql-execution-cloud-zh.webp)
......@@ -10,6 +10,10 @@ For TDengine 2.x installation packages by version, please visit [here](https://www.taosdata.com/all-do
import Release from "/components/ReleaseV3";
## 3.0.6.0
<Release type="tdengine" version="3.0.6.0" />
## 3.0.5.1
<Release type="tdengine" version="3.0.5.1" />
......
......@@ -6982,8 +6982,11 @@ int32_t tDecodeSVAlterTbReqSetCtime(SDecoder* pDecoder, SVAlterTbReq* pReq, int6
if (tStartDecode(pDecoder) < 0) return -1;
if (tDecodeSVAlterTbReqCommon(pDecoder, pReq) < 0) return -1;
*(int64_t *)(pDecoder->data + pDecoder->pos) = ctimeMs;
if (tDecodeI64(pDecoder, &pReq->ctimeMs) < 0) return -1;
pReq->ctimeMs = 0;
if (!tDecodeIsEnd(pDecoder)) {
*(int64_t *)(pDecoder->data + pDecoder->pos) = ctimeMs;
if (tDecodeI64(pDecoder, &pReq->ctimeMs) < 0) return -1;
}
tEndDecode(pDecoder);
return 0;
......@@ -7541,8 +7544,11 @@ int32_t tDecodeSBatchDeleteReq(SDecoder *pDecoder, SBatchDeleteReq *pReq) {
int32_t tDecodeSBatchDeleteReqSetCtime(SDecoder *pDecoder, SBatchDeleteReq *pReq, int64_t ctimeMs) {
if (tDecodeSBatchDeleteReqCommon(pDecoder, pReq)) return -1;
*(int64_t *)(pDecoder->data + pDecoder->pos) = ctimeMs;
if (tDecodeI64(pDecoder, &pReq->ctimeMs) < 0) return -1;
pReq->ctimeMs = 0;
if (!tDecodeIsEnd(pDecoder)) {
*(int64_t *)(pDecoder->data + pDecoder->pos) = ctimeMs;
if (tDecodeI64(pDecoder, &pReq->ctimeMs) < 0) return -1;
}
return 0;
}
......
......@@ -382,6 +382,40 @@ static int32_t mndCheckDbCfg(SMnode *pMnode, SDbCfg *pCfg) {
return terrno;
}
static int32_t mndCheckInChangeDbCfg(SMnode *pMnode, SDbCfg *pCfg) {
terrno = TSDB_CODE_MND_INVALID_DB_OPTION;
if (pCfg->buffer < TSDB_MIN_BUFFER_PER_VNODE || pCfg->buffer > TSDB_MAX_BUFFER_PER_VNODE) return -1;
if (pCfg->pages < TSDB_MIN_PAGES_PER_VNODE || pCfg->pages > TSDB_MAX_PAGES_PER_VNODE) return -1;
if (pCfg->pageSize < TSDB_MIN_PAGESIZE_PER_VNODE || pCfg->pageSize > TSDB_MAX_PAGESIZE_PER_VNODE) return -1;
if (pCfg->daysPerFile < TSDB_MIN_DAYS_PER_FILE || pCfg->daysPerFile > TSDB_MAX_DAYS_PER_FILE) return -1;
if (pCfg->daysToKeep0 < TSDB_MIN_KEEP || pCfg->daysToKeep0 > TSDB_MAX_KEEP) return -1;
if (pCfg->daysToKeep1 < TSDB_MIN_KEEP || pCfg->daysToKeep1 > TSDB_MAX_KEEP) return -1;
if (pCfg->daysToKeep2 < TSDB_MIN_KEEP || pCfg->daysToKeep2 > TSDB_MAX_KEEP) return -1;
if (pCfg->daysToKeep0 < pCfg->daysPerFile) return -1;
if (pCfg->daysToKeep0 > pCfg->daysToKeep1) return -1;
if (pCfg->daysToKeep1 > pCfg->daysToKeep2) return -1;
if (pCfg->walFsyncPeriod < TSDB_MIN_FSYNC_PERIOD || pCfg->walFsyncPeriod > TSDB_MAX_FSYNC_PERIOD) return -1;
if (pCfg->walLevel < TSDB_MIN_WAL_LEVEL || pCfg->walLevel > TSDB_MAX_WAL_LEVEL) return -1;
if (pCfg->cacheLast < TSDB_CACHE_MODEL_NONE || pCfg->cacheLast > TSDB_CACHE_MODEL_BOTH) return -1;
if (pCfg->cacheLastSize < TSDB_MIN_DB_CACHE_SIZE || pCfg->cacheLastSize > TSDB_MAX_DB_CACHE_SIZE) return -1;
if (pCfg->replications < TSDB_MIN_DB_REPLICA || pCfg->replications > TSDB_MAX_DB_REPLICA) return -1;
if (pCfg->replications != 1 && pCfg->replications != 3) return -1;
if (pCfg->sstTrigger < TSDB_MIN_STT_TRIGGER || pCfg->sstTrigger > TSDB_MAX_STT_TRIGGER) return -1;
if (pCfg->minRows < TSDB_MIN_MINROWS_FBLOCK || pCfg->minRows > TSDB_MAX_MINROWS_FBLOCK) return -1;
if (pCfg->maxRows < TSDB_MIN_MAXROWS_FBLOCK || pCfg->maxRows > TSDB_MAX_MAXROWS_FBLOCK) return -1;
if (pCfg->minRows > pCfg->maxRows) return -1;
if (pCfg->walRetentionPeriod < TSDB_DB_MIN_WAL_RETENTION_PERIOD) return -1;
if (pCfg->walRetentionSize < TSDB_DB_MIN_WAL_RETENTION_SIZE) return -1;
if (pCfg->strict < TSDB_DB_STRICT_OFF || pCfg->strict > TSDB_DB_STRICT_ON) return -1;
if (pCfg->replications > mndGetDnodeSize(pMnode)) {
terrno = TSDB_CODE_MND_NO_ENOUGH_DNODES;
return -1;
}
terrno = 0;
return terrno;
}
static void mndSetDefaultDbCfg(SDbCfg *pCfg) {
if (pCfg->numOfVgroups < 0) pCfg->numOfVgroups = TSDB_DEFAULT_VN_PER_DB;
if (pCfg->numOfStables < 0) pCfg->numOfStables = TSDB_DEFAULT_DB_SINGLE_STABLE;
......@@ -897,7 +931,7 @@ static int32_t mndProcessAlterDbReq(SRpcMsg *pReq) {
code = mndSetDbCfgFromAlterDbReq(&dbObj, &alterReq);
if (code != 0) goto _OVER;
code = mndCheckDbCfg(pMnode, &dbObj.cfg);
code = mndCheckInChangeDbCfg(pMnode, &dbObj.cfg);
if (code != 0) goto _OVER;
dbObj.cfgVersion++;
......
......@@ -801,7 +801,8 @@ static int32_t mndProcessAlterUserReq(SRpcMsg *pReq) {
goto _OVER;
}
if (TSDB_ALTER_USER_PASSWD == alterReq.alterType && alterReq.pass[0] == 0) {
if (TSDB_ALTER_USER_PASSWD == alterReq.alterType &&
(alterReq.pass[0] == 0 || strlen(alterReq.pass) > TSDB_PASSWORD_LEN)) {
terrno = TSDB_CODE_MND_INVALID_PASS_FORMAT;
goto _OVER;
}
......
......@@ -1980,6 +1980,11 @@ static int metaUpdateTtl(SMeta *pMeta, const SMetaEntry *pME) {
int metaUpdateChangeTime(SMeta *pMeta, tb_uid_t uid, int64_t changeTimeMs) {
if (!tsTtlChangeOnWrite) return 0;
if (changeTimeMs <= 0) {
metaWarn("Skip to change ttl deletetion time on write, uid: %" PRId64, uid);
return TSDB_CODE_VERSION_NOT_COMPATIBLE;
}
STtlUpdCtimeCtx ctx = {.uid = uid, .changeTimeMs = changeTimeMs};
return ttlMgrUpdateChangeTime(pMeta->pTtlMgr, &ctx);
......
......@@ -358,7 +358,8 @@ int ttlMgrFlush(STtlManger *pTtlMgr, TXN *pTxn) {
STtlCacheEntry *cacheEntry = taosHashGet(pTtlMgr->pTtlCache, pUid, sizeof(*pUid));
if (cacheEntry == NULL) {
metaError("ttlMgr flush failed to get ttl cache since %s", tstrerror(terrno));
metaError("ttlMgr flush failed to get ttl cache since %s, uid: %" PRId64 ", type: %d", tstrerror(terrno), *pUid,
pEntry->type);
goto _out;
}
......
......@@ -234,8 +234,10 @@ static int32_t vnodePreProcessSubmitTbData(SVnode *pVnode, SDecoder *pCoder, int
}
}
*(int64_t *)(pCoder->data + pCoder->pos) = ctimeMs;
pCoder->pos += sizeof(int64_t);
if (!tDecodeIsEnd(pCoder)) {
*(int64_t *)(pCoder->data + pCoder->pos) = ctimeMs;
pCoder->pos += sizeof(int64_t);
}
tEndDecode(pCoder);
......
......@@ -320,6 +320,7 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/mode.py -R
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/Now.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/Now.py -R
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/orderBy.py -N 5
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/percentile.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/percentile.py -R
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/pow.py
......
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import random
import time
import copy
import taos
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
# get col value and total max min ...
def getColsValue(self, i, j):
# c1 value
if random.randint(1, 10) == 5:
c1 = None
else:
c1 = 1
# c2 value
if j % 3200 == 0:
c2 = 8764231
elif random.randint(1, 10) == 5:
c2 = None
else:
c2 = random.randint(-87654297, 98765321)
# c3 is order
c3 = i * self.childRow + j
value = f"({self.ts}, "
# c1
if c1 is None:
value += "null,"
else:
self.c1Cnt += 1
value += f"{c1},"
# c2
if c2 is None:
value += "null,"
else:
value += f"{c2},"
# total count
self.c2Cnt += 1
# max
if self.c2Max is None:
self.c2Max = c2
else:
if c2 > self.c2Max:
self.c2Max = c2
# min
if self.c2Min is None:
self.c2Min = c2
else:
if c2 < self.c2Min:
self.c2Min = c2
# sum
if self.c2Sum is None:
self.c2Sum = c2
else:
self.c2Sum += c2
# c3
value += f"{c3},"
# ts1 same with ts
value += f"{self.ts})"
# move next
self.ts += 1
return value
# insert data
def insertData(self):
tdLog.info("insert data ....")
sqls = ""
for i in range(self.childCnt):
# insert child table
values = ""
pre_insert = f"insert into t{i} values "
for j in range(self.childRow):
if values == "":
values = self.getColsValue(i, j)
else:
values += "," + self.getColsValue(i, j)
# batch insert
if j % self.batchSize == 0 and values != "":
sql = pre_insert + values
tdSql.execute(sql)
values = ""
# append last
if values != "":
sql = pre_insert + values
tdSql.execute(sql)
values = ""
sql = "flush database db;"
tdLog.info(sql)
tdSql.execute(sql)
# insert finished
tdLog.info(f"insert data successfully.\n"
f" inserted child table = {self.childCnt}\n"
f" inserted child rows = {self.childRow}\n"
f" total inserted rows = {self.childCnt*self.childRow}\n")
return
# prepareEnv
def prepareEnv(self):
# init
self.ts = 1680000000000*1000
self.childCnt = 10
self.childRow = 100000
self.batchSize = 5000
# total
self.c1Cnt = 0
self.c2Cnt = 0
self.c2Max = None
self.c2Min = None
self.c2Sum = None
# create database db
sql = f"create database db vgroups 2 precision 'us' "
tdLog.info(sql)
tdSql.execute(sql)
sql = f"use db"
tdSql.execute(sql)
# alter config
sql = "alter local 'querySmaOptimize 1';"
tdLog.info(sql)
tdSql.execute(sql)
# create super table st
sql = f"create table st(ts timestamp, c1 int, c2 bigint, c3 bigint, ts1 timestamp) tags(area int)"
tdLog.info(sql)
tdSql.execute(sql)
# create child table
for i in range(self.childCnt):
sql = f"create table t{i} using st tags({i}) "
tdSql.execute(sql)
# insert data
self.insertData()
# check data correct
def checkExpect(self, sql, expectVal):
tdSql.query(sql)
rowCnt = tdSql.getRows()
for i in range(rowCnt):
val = tdSql.getData(i,0)
if val != expectVal:
tdLog.exit(f"Not expect . query={val} expect={expectVal} i={i} sql={sql}")
return False
tdLog.info(f"check expect ok. sql={sql} expect ={expectVal} rowCnt={rowCnt}")
return True
# check query
def queryResultSame(self, sql1, sql2):
# sql
tdLog.info(sql1)
start1 = time.time()
rows1 = tdSql.query(sql1)
spend1 = time.time() - start1
res1 = copy.copy(tdSql.queryResult)
tdLog.info(sql2)
start2 = time.time()
tdSql.query(sql2)
spend2 = time.time() - start2
res2 = tdSql.queryResult
rowlen1 = len(res1)
rowlen2 = len(res2)
if rowlen1 != rowlen2:
tdLog.exit(f"rowlen1={rowlen1} rowlen2={rowlen2} both not equal.")
return False
for i in range(rowlen1):
row1 = res1[i]
row2 = res2[i]
collen1 = len(row1)
collen2 = len(row2)
if collen1 != collen2:
tdLog.exit(f"collen1={collen1} collen2={collen2} both not equal.")
return False
for j in range(collen1):
if row1[j] != row2[j]:
tdLog.exit(f"col={j} col1={row1[j]} col2={row2[j]} both col not equal.")
return False
# warning performance
diff = (spend2 - spend1)*100/spend1
tdLog.info("spend1=%.6fs spend2=%.6fs diff=%.1f%%"%(spend1, spend2, diff))
if spend2 > spend1 and diff > 50:
tdLog.info("warning: the diff for performance after spliting is over 20%")
return True
# init
def init(self, conn, logSql, replicaVar=1):
seed = time.clock_gettime(time.CLOCK_REALTIME)
random.seed(seed)
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor(), True)
# check time macro
def queryBasic(self):
# check count
expectVal = self.childCnt * self.childRow
sql = f"select count(ts) from st "
self.checkExpect(sql, expectVal)
# check diff
sql = f"select count(*) from (select diff(ts) as dif from st order by ts)"
self.checkExpect(sql, expectVal - 1)
# check ts order count
sql = f"select count(*) from (select diff(ts) as dif from st order by ts) where dif!=1"
self.checkExpect(sql, 0)
# check ts1 order count
sql = f"select count(*) from (select diff(ts1) as dif from st order by ts1) where dif!=1"
self.checkExpect(sql, 0)
# check c3 order asc
sql = f"select count(*) from (select diff(c3) as dif from st order by c3) where dif!=1"
self.checkExpect(sql, 0)
# check c3 order desc todo FIX
#sql = f"select count(*) from (select diff(c3) as dif from st order by c3 desc) where dif!=-1"
#self.checkExpect(sql, 0)
# advance
def queryAdvance(self):
# interval order todo FIX
#sql = f"select _wstart,count(ts),max(c2),min(c2) from st interval(100u) sliding(50u) order by _wstart limit 10"
#tdSql.query(sql)
#tdSql.checkRows(10)
# simulate crash sql
sql = f"select _wstart,count(ts),max(c2),min(c2) from st interval(100a) sliding(10a) order by _wstart limit 10"
tdSql.query(sql)
tdSql.checkRows(10)
# extent
sql = f"select _wstart,count(ts),max(c2),min(c2) from st interval(100a) sliding(10a) order by _wstart desc limit 5"
tdSql.query(sql)
tdSql.checkRows(5)
# data correct checked
sql1 = "select sum(a),sum(b), max(c), min(d),sum(e) from (select _wstart,count(ts) as a,count(c2) as b ,max(c2) as c, min(c2) as d, sum(c2) as e from st interval(100a) sliding(100a) order by _wstart desc);"
sql2 = "select count(*) as a, count(c2) as b, max(c2) as c, min(c2) as d, sum(c2) as e from st;"
self.queryResultSame(sql1, sql2)
# run
def run(self):
# prepare env
self.prepareEnv()
# basic
self.queryBasic()
# advance
self.queryAdvance()
# stop
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())