Unverified commit 43feda05, authored by sangshuduo and committed by GitHub

[TD-13408]<test>: move tests in for3.0 (#10598)

* restore .gitmodules

* Revert "[TD-13408]<test>: move tests out"

This reverts commit f80a4ca49ff37431bc0bc0dafb5ccf5858b00beb.

* revert f80a4ca49ff37431bc0bc0dafb5ccf5858b00beb

* migrate file changes from stand-alone repo to TDengine

for 3.0

* remove tests repository for Jenkinsfile2
Co-authored-by: tangfangzhi <fztang@taosdata.com>
Parent 8f108bfb

......@@ -10,10 +10,3 @@
[submodule "deps/TSZ"]
path = deps/TSZ
url = https://github.com/taosdata/TSZ.git
[submodule "tests"]
path = tests
url = https://github.com/taosdata/tests
branch = 3.0
[submodule "examples/rust"]
path = examples/rust
url = https://github.com/songtianyi/tdengine-rust-bindings.git
......@@ -74,37 +74,9 @@ def pre_test(){
git pull >/dev/null
git fetch origin +refs/pull/${CHANGE_ID}/merge
git checkout -qf FETCH_HEAD
git submodule update --init --recursive --remote
git submodule update --init --recursive
'''
script {
if (env.CHANGE_TARGET == 'master') {
sh '''
cd ${WKCT}
git checkout master
'''
}
else if(env.CHANGE_TARGET == '2.0'){
sh '''
cd ${WKCT}
git checkout 2.0
'''
}
else if(env.CHANGE_TARGET == '3.0'){
sh '''
cd ${WKCT}
git checkout 3.0
'''
}
else{
sh '''
cd ${WKCT}
git checkout develop
'''
}
}
sh'''
cd ${WKCT}
git pull >/dev/null
cd ${WKC}
export TZ=Asia/Harbin
date
......@@ -123,7 +95,6 @@ pipeline {
environment{
WK = '/var/lib/jenkins/workspace/TDinternal'
WKC= '/var/lib/jenkins/workspace/TDengine'
WKCT= '/var/lib/jenkins/workspace/TDengine/tests'
}
stages {
stage('pre_build'){
......
Subproject commit 1c8924dc668e6aa848214c2fc54e3ace3f5bf8df
Subproject commit 904e6f0e152e8fe61edfe0a0a9ae497cfde2a72c
#ADD_SUBDIRECTORY(examples/c)
ADD_SUBDIRECTORY(tsim)
ADD_SUBDIRECTORY(test/c)
#ADD_SUBDIRECTORY(comparisonTest/tdengine)
### Prepare development environment
1. sudo apt install build-essential cmake net-tools python-pip python-setuptools python3-pip python3-setuptools valgrind psmisc curl
2. git clone <https://github.com/taosdata/TDengine>; cd TDengine
3. mkdir debug; cd debug; cmake ..; make; sudo make install
4. pip install ../src/connector/python; pip3 install ../src/connector/python
5. pip install numpy; pip3 install numpy (numpy is required only if you need to run querySort.py)
> Note: Both Python2 and Python3 are currently supported by the Python test
> framework. Since Python2 has not been officially supported by the Python
> Software Foundation since January 1, 2020, it is recommended that new test
> cases be guaranteed to run correctly on Python3. For Python2, please keep
> compatibility where it is appropriate and does not add extra burden.
>
> If you use a newer Linux distribution such as Ubuntu 20.04 that no longer
> includes Python2, please do not install Python2-related packages.
>
> <https://nakedsecurity.sophos.com/2020/01/03/python-is-dead-long-live-python/>
### How to run Python test suite
1. cd <TDengine>/tests/pytest
2. ./smoketest.sh # for smoke test
3. ./smoketest.sh -g # for memory leak detection test with valgrind
4. ./fulltest.sh # for full test
> Note1: The TDengine daemon's configuration and data files are stored in the
> <TDengine>/sim directory. For historical reasons, this is the same location
> used by the TSIM scripts, so once a TSIM script has been run with sudo
> privileges the directory is owned by root and the Python scripts cannot
> write to it as a normal user. In that case, remove the directory completely
> before running the Python test cases. We should consider using two different
> locations for TSIM and the Python scripts.
> Note2: If you need to debug a crash with a core dump, manually edit
> smoketest.sh or fulltest.sh and add "ulimit -c unlimited" before the script
> line. You can then find the core file in <TDengine>/tests/pytest after the
> program crashes.
### How to add a new test case
**1. TSIM test cases:**
TSIM is the testing framework that has been used internally. It is still used to run the test cases developed in the past, as a legacy system. We are switching to Python for new test cases and are gradually phasing out TSIM.
**2. Python test cases:**
**2.1 Please refer to \<TDengine\>/tests/pytest/insert/basic.py to add a new
test case.** A new test case must implement three methods: self.init() and
self.stop() can simply be copied from insert/basic.py, and the test logic is
implemented in self.run(). You can refer to the code in the util directory
for more information. A minimal skeleton is sketched below.
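The following skeleton is illustrative only and follows the pattern described above; the table name, the inserted value, and the checks are made up for the example, while init() and stop() mirror insert/basic.py.

# -*- coding: utf-8 -*-
# Illustrative skeleton of a Python test case; the table and value are
# invented for this example.
from util.log import tdLog
from util.cases import tdCases
from util.sql import tdSql


class TDTestCase:
    def init(self, conn, logSql):
        # copied from insert/basic.py: bind the connection to the shared tdSql helper
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def run(self):
        # the actual test logic lives here
        tdSql.prepare()  # creates and switches to database db
        tdSql.execute("create table tb (ts timestamp, speed int)")
        tdSql.execute("insert into tb values (now, 1)")
        tdSql.query("select * from tb")
        tdSql.checkRows(1)  # exactly one row expected

    def stop(self):
        # copied from insert/basic.py: clean up and report success
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())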
**2.2 Edit smoketest.sh to add the path and filename of the new test case**
Note: The Python test framework may continue to be improved in the future,
hopefully, to provide more functionality and ease of writing test cases. The
method of writing the test case above does not exclude that it will also be
affected.
**2.3 What test.py does in detail:**
test.py is the entry program for test case execution and monitoring.
test.py supports the following options (an illustrative option-parsing sketch follows the list):
-f --file, specifies the test case file name to be executed
-p --path, specifies the deployment path
-m --master, specifies the master server IP for cluster deployment
-c --cluster, tests cluster functionality
-s --stop, terminates all running nodes
-g --valgrind, loads valgrind for the memory leak detection test
-h --help, displays help
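As a minimal, hedged sketch (this is not the actual test.py implementation; the option handling and defaults are assumptions for illustration), flags like these could be parsed with getopt:

# Illustrative sketch only: how test.py-style options might be parsed.
import getopt
import sys

def parse_args(argv):
    opts, _ = getopt.getopt(
        argv, "f:p:m:csgh",
        ["file=", "path=", "master=", "cluster", "stop", "valgrind", "help"])
    config = {"file": "", "path": "", "master": "",
              "cluster": False, "stop": False, "valgrind": False}
    for key, value in opts:
        if key in ("-f", "--file"):
            config["file"] = value
        elif key in ("-p", "--path"):
            config["path"] = value
        elif key in ("-m", "--master"):
            config["master"] = value
        elif key in ("-c", "--cluster"):
            config["cluster"] = True
        elif key in ("-s", "--stop"):
            config["stop"] = True
        elif key in ("-g", "--valgrind"):
            config["valgrind"] = True
        elif key in ("-h", "--help"):
            print("usage: test.py -f <case file> [-p path] [-m master] [-c] [-s] [-g]")
            sys.exit(0)
    return config

# example invocation: python3 test.py -f insert/basic.py
if __name__ == "__main__":
    print(parse_args(sys.argv[1:]))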
**2.4 What util/log.py does in detail:**
log.py is quite simple; its main purpose is to print output in different
colors as needed. success() should be called when a test case passes and
prints green text. exit() prints red text and terminates the program, and
should be called when a test case fails.
**util/log.py**
...
    def info(self, info):
        print("%s %s" % (datetime.datetime.now(), info))

    def sleep(self, sec):
        print("%s sleep %d seconds" % (datetime.datetime.now(), sec))
        time.sleep(sec)

    def debug(self, err):
        print("\033[1;36m%s %s\033[0m" % (datetime.datetime.now(), err))

    def success(self, info):
        print("\033[1;32m%s %s\033[0m" % (datetime.datetime.now(), info))

    def notice(self, err):
        print("\033[1;33m%s %s\033[0m" % (datetime.datetime.now(), err))

    def exit(self, err):
        print("\033[1;31m%s %s\033[0m" % (datetime.datetime.now(), err))
        sys.exit(1)

    def printNoPrefix(self, info):
        print("\033[1;36m%s\033[0m" % info)
...
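For reference, test cases call these helpers through the shared tdLog instance. A short, illustrative usage sketch follows (the messages are made up):

from util.log import tdLog

tdLog.info("plain timestamped message")               # no color
tdLog.debug("cyan diagnostic output")                 # cyan
tdLog.notice("yellow warning-style message")          # yellow
tdLog.success("green message for a passing case")     # green; call on success
# tdLog.exit("red message, then sys.exit(1)")         # call on failure; commented out so the sketch runs through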
**2.5 What util/sql.py does in detail:**
sql.py is mainly used to execute SQL statements that manipulate the database;
the relevant code is extracted and commented as follows:
**util/sql.py**
# prepare() is mainly used to set up the environment for test tables and
# data: it creates the database db used for testing. Do not call prepare() if
# you need to test the database operation commands themselves.
def prepare(self):
    tdLog.info("prepare database:db")
    self.cursor.execute('reset query cache')
    self.cursor.execute('drop database if exists db')
    self.cursor.execute('create database db')
    self.cursor.execute('use db')
...
# query() is mainly used to execute select statements with valid syntax
def query(self, sql):
...
# error() is mainly used to execute select statements with invalid syntax;
# the resulting error is caught and treated as the expected behavior, and if
# no error is raised the test is considered failed
def error(self, sql):
...
# checkRows() checks the number of rows returned after calling
# query(select ...)
def checkRows(self, expectRows):
...
# checkData() checks the returned result data after calling query(select ...);
# a mismatch with the expected value is treated as a test failure
def checkData(self, row, col, data):
...
# getData() returns the result data after calling query(select ...)
def getData(self, row, col):
...
# execute() executes a SQL statement and returns the number of affected rows
def execute(self, sql):
...
# executeTimes() executes the same SQL statement multiple times
def executeTimes(self, sql, times):
...
# checkAffectedRows() checks whether the number of affected rows matches the
# expected value
def checkAffectedRows(self, expectAffectedRows):
...
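Taken together, a run() method typically drives these tdSql helpers as in the following illustrative sketch (the table name and values are invented for the example):

def run(self):
    tdSql.prepare()                                     # reset and switch to database db
    tdSql.execute("create table tb (ts timestamp, speed int)")
    tdSql.execute("insert into tb values (now, 10)")    # execute() returns affected rows
    tdSql.query("select * from tb")
    tdSql.checkRows(1)                                  # one row expected
    tdSql.checkData(0, 1, 10)                           # row 0, column 1 should be 10
    tdSql.error("select speed from not_exist_tb")       # invalid query must raise an error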
### CI submission acceptance principles
- Every commit / PR must compile successfully. Warnings are currently treated
as errors, so warnings must also be resolved.
- Test cases that already exist must pass.
- Because CI is critical to the build and automated test process, a new test
case must be tested manually, with as many iterations as possible, before it
is added, to ensure that it provides stable and reliable results.
> Note: In the future, stress testing, performance testing, code style checks,
> and other features will be added on top of functional testing, according to
> requirements and the progress of test development.
def pre_test(){
sh '''
sudo rmtaos||echo 'no taosd installed'
'''
sh '''
cd ${WKC}
git reset --hard
git checkout $BRANCH_NAME
git pull
git submodule update
cd ${WK}
git reset --hard
git checkout $BRANCH_NAME
git pull
export TZ=Asia/Harbin
date
rm -rf ${WK}/debug
mkdir debug
cd debug
cmake .. > /dev/null
make > /dev/null
make install > /dev/null
pip3 install ${WKC}/src/connector/python
'''
return 1
}
def pre_test_p(){
sh '''
sudo rmtaos||echo 'no taosd installed'
'''
sh '''
cd ${WKC}
git reset --hard
git checkout $BRANCH_NAME
git pull
git submodule update
cd ${WK}
git reset --hard
git checkout $BRANCH_NAME
git pull
export TZ=Asia/Harbin
date
rm -rf ${WK}/debug
mkdir debug
cd debug
cmake .. > /dev/null
make > /dev/null
make install > /dev/null
pip3 install ${WKC}/src/connector/python
'''
return 1
}
pipeline {
agent none
environment{
WK = '/data/lib/jenkins/workspace/TDinternal'
WKC= '/data/lib/jenkins/workspace/TDinternal/community'
}
stages {
stage('Parallel test stage') {
parallel {
stage('pytest') {
agent{label 'slad1'}
steps {
pre_test_p()
sh '''
cd ${WKC}/tests
find pytest -name '*'sql|xargs rm -rf
./test-all.sh pytest
date'''
}
}
stage('test_b1') {
agent{label 'slad2'}
steps {
pre_test()
sh '''
cd ${WKC}/tests
./test-all.sh b1
date'''
}
}
stage('test_crash_gen') {
agent{label "slad3"}
steps {
pre_test()
sh '''
cd ${WKC}/tests/pytest
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/pytest
./crash_gen.sh -a -p -t 4 -s 2000
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/pytest
rm -rf /var/lib/taos/*
rm -rf /var/log/taos/*
./handle_crash_gen_val_log.sh
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/pytest
rm -rf /var/lib/taos/*
rm -rf /var/log/taos/*
./handle_taosd_val_log.sh
'''
}
sh'''
nohup taosd >/dev/null &
sleep 10
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/gotest
bash batchtest.sh
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/examples/python/PYTHONConnectorChecker
python3 PythonChecker.py
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/examples/JDBC/JDBCDemo/
mvn clean package >/dev/null
java -jar target/JdbcRestfulDemo-jar-with-dependencies.jar
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cp -rf ${WKC}/tests/examples/nodejs ${JENKINS_HOME}/workspace/
cd ${JENKINS_HOME}/workspace/nodejs
node nodejsChecker.js host=localhost
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${JENKINS_HOME}/workspace/C#NET/src/CheckC#
dotnet run
'''
}
sh '''
pkill -9 taosd || echo 1
cd ${WKC}/tests
./test-all.sh b2
date
'''
sh '''
cd ${WKC}/tests
./test-all.sh full unit
date'''
}
}
stage('test_valgrind') {
agent{label "slad4"}
steps {
pre_test()
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/pytest
nohup taosd >/dev/null &
sleep 10
python3 concurrent_inquiry.py -c 1
'''
}
sh '''
cd ${WKC}/tests
./test-all.sh full jdbc
date'''
sh '''
cd ${WKC}/tests/pytest
./valgrind-test.sh 2>&1 > mem-error-out.log
./handle_val_log.sh
date
cd ${WKC}/tests
./test-all.sh b3
date'''
sh '''
date
cd ${WKC}/tests
./test-all.sh full example
date'''
}
}
stage('arm64_build'){
agent{label 'arm64'}
steps{
sh '''
cd ${WK}
git fetch
git checkout develop
git pull
cd ${WKC}
git fetch
git checkout develop
git pull
git submodule update
cd ${WKC}/packaging
./release.sh -v cluster -c aarch64 -n 2.0.0.0 -m 2.0.0.0
'''
}
}
stage('arm32_build'){
agent{label 'arm32'}
steps{
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WK}
git fetch
git checkout develop
git pull
cd ${WKC}
git fetch
git checkout develop
git pull
git submodule update
cd ${WKC}/packaging
./release.sh -v cluster -c aarch32 -n 2.0.0.0 -m 2.0.0.0
'''
}
}
}
}
}
}
post {
success {
emailext (
subject: "PR-result: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' SUCCESS",
body: """<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
<table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
<tr>
<td><br />
<b><font color="#0B610B"><font size="6">构建信息</font></font></b>
<hr size="2" width="100%" align="center" /></td>
</tr>
<tr>
<td>
<ul>
<div style="font-size:18px">
<li>构建名称>>分支:${env.BRANCH_NAME}</li>
<li>构建结果:<span style="color:green"> Successful </span></li>
<li>构建编号:${BUILD_NUMBER}</li>
<li>触发用户:${env.CHANGE_AUTHOR}</li>
<li>提交信息:${env.CHANGE_TITLE}</li>
<li>构建地址:<a href=${BUILD_URL}>${BUILD_URL}</a></li>
<li>构建日志:<a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
</div>
</ul>
</td>
</tr>
</table></font>
</body>
</html>""",
to: "yqliu@taosdata.com,pxiao@taosdata.com",
from: "support@taosdata.com"
)
}
failure {
emailext (
subject: "PR-result: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' FAIL",
body: """<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
<table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
<tr>
<td><br />
<b><font color="#0B610B"><font size="6">构建信息</font></font></b>
<hr size="2" width="100%" align="center" /></td>
</tr>
<tr>
<td>
<ul>
<div style="font-size:18px">
<li>构建名称>>分支:${env.BRANCH_NAME}</li>
<li>构建结果:<span style="color:red"> Failure </span></li>
<li>构建编号:${BUILD_NUMBER}</li>
<li>触发用户:${env.CHANGE_AUTHOR}</li>
<li>提交信息:${env.CHANGE_TITLE}</li>
<li>构建地址:<a href=${BUILD_URL}>${BUILD_URL}</a></li>
<li>构建日志:<a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
</div>
</ul>
</td>
</tr>
</table></font>
</body>
</html>""",
to: "yqliu@taosdata.com,pxiao@taosdata.com",
from: "support@taosdata.com"
)
}
}
}
\ No newline at end of file
def pre_test(){
sh '''
sudo rmtaos||echo 'no taosd installed'
'''
sh '''
cd ${WKC}
git reset --hard
git checkout $BRANCH_NAME
git pull
git submodule update
cd ${WK}
git reset --hard
git checkout $BRANCH_NAME
git pull
export TZ=Asia/Harbin
date
rm -rf ${WK}/debug
mkdir debug
cd debug
cmake .. > /dev/null
make > /dev/null
make install > /dev/null
pip3 install ${WKC}/src/connector/python/ || echo 0
'''
return 1
}
def pre_test_p(){
sh '''
sudo rmtaos||echo 'no taosd installed'
'''
sh '''
cd ${WKC}
git reset --hard
git checkout $BRANCH_NAME
git pull
git submodule update
cd ${WK}
git reset --hard
git checkout $BRANCH_NAME
git pull
export TZ=Asia/Harbin
date
rm -rf ${WK}/debug
mkdir debug
cd debug
cmake .. > /dev/null
make > /dev/null
make install > /dev/null
pip3 install ${WKC}/src/connector/python/ || echo 0
'''
return 1
}
pipeline {
agent none
environment{
WK = '/data/lib/jenkins/workspace/TDinternal'
WKC= '/data/lib/jenkins/workspace/TDinternal/community'
}
stages {
stage('Parallel test stage') {
parallel {
stage('pytest') {
agent{label 'slam1'}
steps {
pre_test_p()
sh '''
cd ${WKC}/tests
find pytest -name '*'sql|xargs rm -rf
./test-all.sh pytest
date'''
}
}
stage('test_b1') {
agent{label 'slam2'}
steps {
pre_test()
sh '''
cd ${WKC}/tests
./test-all.sh b1
date'''
}
}
stage('test_crash_gen') {
agent{label "slam3"}
steps {
pre_test()
sh '''
cd ${WKC}/tests/pytest
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/pytest
./crash_gen.sh -a -p -t 4 -s 2000
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/pytest
rm -rf /var/lib/taos/*
rm -rf /var/log/taos/*
./handle_crash_gen_val_log.sh
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/pytest
rm -rf /var/lib/taos/*
rm -rf /var/log/taos/*
./handle_taosd_val_log.sh
'''
}
sh'''
nohup taosd >/dev/null &
sleep 10
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/gotest
bash batchtest.sh
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/examples/python/PYTHONConnectorChecker
python3 PythonChecker.py
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/examples/JDBC/JDBCDemo/
mvn clean package assembly:single -DskipTests >/dev/null
java -jar target/JDBCDemo-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/src/connector/jdbc
mvn clean package -Dmaven.test.skip=true >/dev/null
cd ${WKC}/tests/examples/JDBC/JDBCDemo/
java --class-path=../../../../src/connector/jdbc/target:$JAVA_HOME/jre/lib/ext -jar target/JDBCDemo-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cp -rf ${WKC}/tests/examples/nodejs ${JENKINS_HOME}/workspace/
cd ${JENKINS_HOME}/workspace/nodejs
node nodejsChecker.js host=localhost
'''
}
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${JENKINS_HOME}/workspace/C#NET/src/CheckC#
dotnet run
'''
}
sh '''
pkill -9 taosd || echo 1
cd ${WKC}/tests
./test-all.sh b2
date
'''
sh '''
cd ${WKC}/tests
./test-all.sh full unit
date'''
}
}
stage('test_valgrind') {
agent{label "slam4"}
steps {
pre_test()
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/pytest
nohup taosd >/dev/null &
sleep 10
python3 concurrent_inquiry.py -c 1
'''
}
sh '''
cd ${WKC}/tests
./test-all.sh full jdbc
date'''
sh '''
cd ${WKC}/tests/pytest
./valgrind-test.sh 2>&1 > mem-error-out.log
./handle_val_log.sh
date
cd ${WKC}/tests
./test-all.sh b3
date'''
sh '''
date
cd ${WKC}/tests
./test-all.sh full example
date'''
}
}
stage('arm64_build'){
agent{label 'arm64'}
steps{
sh '''
cd ${WK}
git fetch
git checkout develop
git pull
cd ${WKC}
git fetch
git checkout develop
git pull
git submodule update
cd ${WKC}/packaging
./release.sh -v cluster -c aarch64 -n 2.0.0.0 -m 2.0.0.0
'''
}
}
stage('arm32_build'){
agent{label 'arm32'}
steps{
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WK}
git fetch
git checkout develop
git pull
cd ${WKC}
git fetch
git checkout develop
git pull
git submodule update
cd ${WKC}/packaging
./release.sh -v cluster -c aarch32 -n 2.0.0.0 -m 2.0.0.0
'''
}
}
}
}
}
}
post {
success {
emailext (
subject: "SUCCESSFUL: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
body: '''<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
<table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
<tr>
<td><br />
<b><font color="#0B610B"><font size="6">构建信息</font></font></b>
<hr size="2" width="100%" align="center" /></td>
</tr>
<tr>
<td>
<ul>
<div style="font-size:18px">
<li>构建名称>>分支:${PROJECT_NAME}</li>
<li>构建结果:<span style="color:green"> Successful </span></li>
<li>构建编号:${BUILD_NUMBER}</li>
<li>触发用户:${CAUSE}</li>
<li>变更概要:${CHANGES}</li>
<li>构建地址:<a href=${BUILD_URL}>${BUILD_URL}</a></li>
<li>构建日志:<a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
<li>变更集:${JELLY_SCRIPT}</li>
</div>
</ul>
</td>
</tr>
</table></font>
</body>
</html>''',
to: "yqliu@taosdata.com,pxiao@taosdata.com",
from: "support@taosdata.com"
)
}
failure {
emailext (
subject: "FAILED: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
body: '''<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
<table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
<tr>
<td><br />
<b><font color="#0B610B"><font size="6">构建信息</font></font></b>
<hr size="2" width="100%" align="center" /></td>
</tr>
<tr>
<td>
<ul>
<div style="font-size:18px">
<li>构建名称>>分支:${PROJECT_NAME}</li>
<li>构建结果:<span style="color:red"> Failure </span></li>
<li>构建编号:${BUILD_NUMBER}</li>
<li>触发用户:${CAUSE}</li>
<li>变更概要:${CHANGES}</li>
<li>构建地址:<a href=${BUILD_URL}>${BUILD_URL}</a></li>
<li>构建日志:<a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
<li>变更集:${JELLY_SCRIPT}</li>
</div>
</ul>
</td>
</tr>
</table></font>
</body>
</html>''',
to: "yqliu@taosdata.com,pxiao@taosdata.com",
from: "support@taosdata.com"
)
}
}
}
\ No newline at end of file
import hudson.model.Result
import hudson.model.*;
import jenkins.model.CauseOfInterruption
node {
}
def skipbuild=0
def win_stop=0
def abortPreviousBuilds() {
def currentJobName = env.JOB_NAME
def currentBuildNumber = env.BUILD_NUMBER.toInteger()
def jobs = Jenkins.instance.getItemByFullName(currentJobName)
def builds = jobs.getBuilds()
for (build in builds) {
if (!build.isBuilding()) {
continue;
}
if (currentBuildNumber == build.getNumber().toInteger()) {
continue;
}
build.doKill() //doTerm(),doKill(),doTerm()
}
}
// abort previous build
abortPreviousBuilds()
def abort_previous(){
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
}
def pre_test(){
sh'hostname'
sh '''
sudo rmtaos || echo "taosd has not installed"
'''
sh '''
killall -9 taosd ||echo "no taosd running"
killall -9 gdb || echo "no gdb running"
killall -9 python3.8 || echo "no python program running"
cd ${WKC}
'''
script {
if (env.CHANGE_TARGET == 'master') {
sh '''
cd ${WKC}
git checkout master
'''
}
else if(env.CHANGE_TARGET == '2.0'){
sh '''
cd ${WKC}
git checkout 2.0
'''
}
else if(env.CHANGE_TARGET == '3.0'){
sh '''
cd ${WKC}
git checkout 3.0
'''
}
else{
sh '''
cd ${WKC}
git checkout develop
'''
}
}
sh'''
cd ${WKC}
git pull >/dev/null
git fetch origin +refs/pull/${CHANGE_ID}/merge
git checkout -qf FETCH_HEAD
export TZ=Asia/Harbin
date
rm -rf debug
mkdir debug
cd debug
cmake .. > /dev/null
make -j4> /dev/null
'''
return 1
}
pipeline {
agent none
options { skipDefaultCheckout() }
environment{
WK = '/var/lib/jenkins/workspace/TDinternal'
WKC= '/var/lib/jenkins/workspace/TDengine'
}
stages {
stage('pre_build'){
agent{label 'slave3_0'}
options { skipDefaultCheckout() }
when {
changeRequest()
}
steps {
script{
abort_previous()
abortPreviousBuilds()
}
timeout(time: 45, unit: 'MINUTES'){
pre_test()
sh'''
cd ${WKC}/tests
./test-all.sh b1fq
'''
sh'''
cd ${WKC}/debug
ctest
'''
}
}
}
}
post {
success {
emailext (
subject: "PR-result: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' SUCCESS",
body: """<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
<table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
<tr>
<td><br />
<b><font color="#0B610B"><font size="6">构建信息</font></font></b>
<hr size="2" width="100%" align="center" /></td>
</tr>
<tr>
<td>
<ul>
<div style="font-size:18px">
<li>构建名称>>分支:${env.BRANCH_NAME}</li>
<li>构建结果:<span style="color:green"> Successful </span></li>
<li>构建编号:${BUILD_NUMBER}</li>
<li>触发用户:${env.CHANGE_AUTHOR}</li>
<li>提交信息:${env.CHANGE_TITLE}</li>
<li>构建地址:<a href=${BUILD_URL}>${BUILD_URL}</a></li>
<li>构建日志:<a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
</div>
</ul>
</td>
</tr>
</table></font>
</body>
</html>""",
to: "${env.CHANGE_AUTHOR_EMAIL}",
from: "support@taosdata.com"
)
}
failure {
emailext (
subject: "PR-result: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]' FAIL",
body: """<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0">
<table width="95%" cellpadding="0" cellspacing="0" style="font-size: 16pt; font-family: Tahoma, Arial, Helvetica, sans-serif">
<tr>
<td><br />
<b><font color="#0B610B"><font size="6">构建信息</font></font></b>
<hr size="2" width="100%" align="center" /></td>
</tr>
<tr>
<td>
<ul>
<div style="font-size:18px">
<li>构建名称>>分支:${env.BRANCH_NAME}</li>
<li>构建结果:<span style="color:red"> Failure </span></li>
<li>构建编号:${BUILD_NUMBER}</li>
<li>触发用户:${env.CHANGE_AUTHOR}</li>
<li>提交信息:${env.CHANGE_TITLE}</li>
<li>构建地址:<a href=${BUILD_URL}>${BUILD_URL}</a></li>
<li>构建日志:<a href=${BUILD_URL}console>${BUILD_URL}console</a></li>
</div>
</ul>
</td>
</tr>
</table></font>
</body>
</html>""",
to: "${env.CHANGE_AUTHOR_EMAIL}",
from: "support@taosdata.com"
)
}
}
}
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import os
import subprocess
import time
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
import datetime
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root)-len("/build/bin")]
break
return buildPath
def run(self):
tdSql.prepare()
buildPath = self.getBuildPath()
if (buildPath == ""):
tdLog.exit("taosd not found!")
else:
tdLog.info("taosd found in %s" % buildPath)
binPath = buildPath+ "/build/bin/"
tdSql.execute("create database timezone")
tdSql.execute("use timezone")
tdSql.execute("create stable st (ts timestamp, id int ) tags (index int)")
tdSql.execute("insert into tb0 using st tags (1) values ('2021-07-01 00:00:00.000',0)")
tdSql.query("select ts from tb0")
tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")
tdSql.execute("insert into tb1 using st tags (1) values ('2021-07-01T00:00:00.000+07:50',1)")
tdSql.query("select ts from tb1")
tdSql.checkData(0, 0, "2021-07-01 00:10:00.000")
tdSql.execute("insert into tb2 using st tags (1) values ('2021-07-01T00:00:00.000+08:00',2)")
tdSql.query("select ts from tb2")
tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")
tdSql.execute("insert into tb3 using st tags (1) values ('2021-07-01T00:00:00.000Z',3)")
tdSql.query("select ts from tb3")
tdSql.checkData(0, 0, "2021-07-01 08:00:00.000")
tdSql.execute("insert into tb4 using st tags (1) values ('2021-07-01 00:00:00.000+07:50',4)")
tdSql.query("select ts from tb4")
tdSql.checkData(0, 0, "2021-07-01 00:10:00.000")
tdSql.execute("insert into tb5 using st tags (1) values ('2021-07-01 00:00:00.000Z',5)")
tdSql.query("select ts from tb5")
tdSql.checkData(0, 0, "2021-07-01 08:00:00.000")
tdSql.execute("insert into tb6 using st tags (1) values ('2021-07-01T00:00:00.000+0800',6)")
tdSql.query("select ts from tb6")
tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")
tdSql.execute("insert into tb7 using st tags (1) values ('2021-07-01 00:00:00.000+0800',7)")
tdSql.query("select ts from tb7")
tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")
tdSql.execute("insert into tb8 using st tags (1) values ('2021-07-0100:00:00.000',8)")
tdSql.query("select ts from tb8")
tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")
tdSql.execute("insert into tb9 using st tags (1) values ('2021-07-0100:00:00.000+0800',9)")
tdSql.query("select ts from tb9")
tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")
tdSql.execute("insert into tb10 using st tags (1) values ('2021-07-0100:00:00.000+08:00',10)")
tdSql.query("select ts from tb10")
tdSql.checkData(0, 0, "2021-07-01 00:00:00.000")
tdSql.execute("insert into tb11 using st tags (1) values ('2021-07-0100:00:00.000+07:00',11)")
tdSql.query("select ts from tb11")
tdSql.checkData(0, 0, "2021-07-01 01:00:00.000")
tdSql.execute("insert into tb12 using st tags (1) values ('2021-07-0100:00:00.000+0700',12)")
tdSql.query("select ts from tb12")
tdSql.checkData(0, 0, "2021-07-01 01:00:00.000")
tdSql.execute("insert into tb13 using st tags (1) values ('2021-07-0100:00:00.000+07:12',13)")
tdSql.query("select ts from tb13")
tdSql.checkData(0, 0, "2021-07-01 00:48:00.000")
tdSql.execute("insert into tb14 using st tags (1) values ('2021-07-0100:00:00.000+712',14)")
tdSql.query("select ts from tb14")
tdSql.checkData(0, 0, "2021-06-28 08:58:00.000")
tdSql.execute("insert into tb15 using st tags (1) values ('2021-07-0100:00:00.000Z',15)")
tdSql.query("select ts from tb15")
tdSql.checkData(0, 0, "2021-07-01 08:00:00.000")
tdSql.execute("insert into tb16 using st tags (1) values ('2021-7-1 00:00:00.000Z',16)")
tdSql.query("select ts from tb16")
tdSql.checkData(0, 0, "2021-07-01 08:00:00.000")
tdSql.execute("insert into tb17 using st tags (1) values ('2021-07-0100:00:00.000+0750',17)")
tdSql.query("select ts from tb17")
tdSql.checkData(0, 0, "2021-07-01 00:10:00.000")
tdSql.execute("insert into tb18 using st tags (1) values ('2021-07-0100:00:00.000+0752',18)")
tdSql.query("select ts from tb18")
tdSql.checkData(0, 0, "2021-07-01 00:08:00.000")
tdSql.execute("insert into tb19 using st tags (1) values ('2021-07-0100:00:00.000+075',19)")
tdSql.query("select ts from tb19")
tdSql.checkData(0, 0, "2021-07-01 00:55:00.000")
tdSql.execute("insert into tb20 using st tags (1) values ('2021-07-0100:00:00.000+75',20)")
tdSql.query("select ts from tb20")
tdSql.checkData(0, 0, "2021-06-28 05:00:00.000")
tdSql.execute("insert into tb21 using st tags (1) values ('2021-7-1 1:1:1.234+075',21)")
tdSql.query("select ts from tb21")
tdSql.checkData(0, 0, "2021-07-01 01:56:01.234")
tdSql.execute("insert into tb22 using st tags (1) values ('2021-7-1T1:1:1.234+075',22)")
tdSql.query("select ts from tb22")
tdSql.checkData(0, 0, "2021-07-01 01:56:01.234")
tdSql.execute("insert into tb23 using st tags (1) values ('2021-7-131:1:1.234+075',22)")
tdSql.query("select ts from tb23")
tdSql.checkData(0, 0, "2021-07-13 01:56:01.234")
tdSql.error("insert into tberror using st tags (1) values ('20210701 00:00:00.000+0800',0)")
tdSql.error("insert into tberror using st tags (1) values ('2021070100:00:00.000+0800',0)")
tdSql.error("insert into tberror using st tags (1) values ('202171 00:00:00.000+0800',0)")
tdSql.error("insert into tberror using st tags (1) values ('2021 07 01 00:00:00.000+0800',0)")
tdSql.error("insert into tberror using st tags (1) values ('2021 -07-0100:00:00.000+0800',0)")
tdSql.error("insert into tberror using st tags (1) values ('2021-7-11:1:1.234+075',0)")
os.system("rm -rf ./TimeZone/*.py.sql")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
import datetime
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def checkCommunity(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
return False
else:
return True
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosdump" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]
break
return buildPath
def run(self):
# clear envs
tdSql.execute(" create database ZoneTime precision 'us' ")
tdSql.execute(" use ZoneTime ")
tdSql.execute(" create stable st (ts timestamp , id int , val float) tags (tag1 timestamp ,tag2 int) ")
# standard case for Timestamp
tdSql.execute(" insert into tb1 using st tags (\"2021-07-01 00:00:00.000\" , 2) values( \"2021-07-01 00:00:00.000\" , 1 , 1.0 ) ")
case1 = (tdSql.getResult("select * from tb1"))
print(case1)
if case1 == [(datetime.datetime(2021, 7, 1, 0, 0), 1, 1.0)]:
print ("check pass! ")
else:
print ("check failed about timestamp '2021-07-01 00:00:00.000' ")
# RFC 3339: it allows "T" to be replaced by " "
tdSql.execute(" insert into tb2 using st tags (\"2021-07-01T00:00:00.000+07:50\" , 2) values( \"2021-07-01T00:00:00.000+07:50\" , 2 , 2.0 ) ")
case2 = (tdSql.getResult("select * from tb2"))
print(case2)
if case2 == [(datetime.datetime(2021, 7, 1, 0, 10), 2, 2.0)]:
print ("check pass! ")
else:
print ("check failed about timestamp '2021-07-01T00:00:00.000+07:50'! ")
tdSql.execute(" insert into tb3 using st tags (\"2021-07-01T00:00:00.000+08:00\" , 3) values( \"2021-07-01T00:00:00.000+08:00\" , 3 , 3.0 ) ")
case3 = (tdSql.getResult("select * from tb3"))
print(case3)
if case3 == [(datetime.datetime(2021, 7, 1, 0, 0), 3, 3.0)]:
print ("check pass! ")
else:
print ("check failed about timestamp '2021-07-01T00:00:00.000+08:00'! ")
tdSql.execute(" insert into tb4 using st tags (\"2021-07-01T00:00:00.000Z\" , 4) values( \"2021-07-01T00:00:00.000Z\" , 4 , 4.0 ) ")
case4 = (tdSql.getResult("select * from tb4"))
print(case4)
if case4 == [(datetime.datetime(2021, 7, 1, 8, 0), 4, 4.0)]:
print ("check pass! ")
else:
print ("check failed about timestamp '2021-07-01T00:00:00.000Z'! ")
tdSql.execute(" insert into tb5 using st tags (\"2021-07-01 00:00:00.000+07:50\" , 5) values( \"2021-07-01 00:00:00.000+07:50\" , 5 , 5.0 ) ")
case5 = (tdSql.getResult("select * from tb5"))
print(case5)
if case5 == [(datetime.datetime(2021, 7, 1, 0, 10), 5, 5.0)]:
print ("check pass! ")
else:
print ("check failed about timestamp '2021-07-01 00:00:00.000+08:00 ")
tdSql.execute(" insert into tb6 using st tags (\"2021-07-01 00:00:00.000Z\" , 6) values( \"2021-07-01 00:00:00.000Z\" , 6 , 6.0 ) ")
case6 = (tdSql.getResult("select * from tb6"))
print(case6)
if case6 == [(datetime.datetime(2021, 7, 1, 8, 0), 6, 6.0)]:
print ("check pass! ")
else:
print ("check failed about timestamp '2021-07-01 00:00:00.000Z'! ")
# ISO 8601 timestamp format: the date and the time of day must be separated by "T"
tdSql.execute(" insert into tb7 using st tags (\"2021-07-01T00:00:00.000+0800\" , 7) values( \"2021-07-01T00:00:00.000+0800\" , 7 , 7.0 ) ")
case7 = (tdSql.getResult("select * from tb7"))
print(case7)
if case7 == [(datetime.datetime(2021, 7, 1, 0, 0), 7, 7.0)]:
print ("check pass! ")
else:
print ("check failed about timestamp '2021-07-01T00:00:00.000+0800'! ")
tdSql.execute(" insert into tb8 using st tags (\"2021-07-01T00:00:00.000+08\" , 8) values( \"2021-07-01T00:00:00.000+08\" , 8 , 8.0 ) ")
case8 = (tdSql.getResult("select * from tb8"))
print(case8)
if case8 == [(datetime.datetime(2021, 7, 1, 0, 0), 8, 8.0)]:
print ("check pass! ")
else:
print ("check failed about timestamp '2021-07-01T00:00:00.000+08'! ")
# Non-standard case for Timestamp
tdSql.execute(" insert into tb9 using st tags (\"2021-07-01 00:00:00.000+0800\" , 9) values( \"2021-07-01 00:00:00.000+0800\" , 9 , 9.0 ) ")
case9 = (tdSql.getResult("select * from tb9"))
print(case9)
tdSql.execute(" insert into tb10 using st tags (\"2021-07-0100:00:00.000\" , 10) values( \"2021-07-0100:00:00.000\" , 10 , 10.0 ) ")
case10 = (tdSql.getResult("select * from tb10"))
print(case10)
tdSql.execute(" insert into tb11 using st tags (\"2021-07-0100:00:00.000+0800\" , 11) values( \"2021-07-0100:00:00.000+0800\" , 11 , 11.0 ) ")
case11 = (tdSql.getResult("select * from tb11"))
print(case11)
tdSql.execute(" insert into tb12 using st tags (\"2021-07-0100:00:00.000+08:00\" , 12) values( \"2021-07-0100:00:00.000+08:00\" , 12 , 12.0 ) ")
case12 = (tdSql.getResult("select * from tb12"))
print(case12)
tdSql.execute(" insert into tb13 using st tags (\"2021-07-0100:00:00.000Z\" , 13) values( \"2021-07-0100:00:00.000Z\" , 13 , 13.0 ) ")
case13 = (tdSql.getResult("select * from tb13"))
print(case13)
tdSql.execute(" insert into tb14 using st tags (\"2021-07-0100:00:00.000Z\" , 14) values( \"2021-07-0100:00:00.000Z\" , 14 , 14.0 ) ")
case14 = (tdSql.getResult("select * from tb14"))
print(case14)
tdSql.execute(" insert into tb15 using st tags (\"2021-07-0100:00:00.000+08\" , 15) values( \"2021-07-0100:00:00.000+08\" , 15 , 15.0 ) ")
case15 = (tdSql.getResult("select * from tb15"))
print(case15)
tdSql.execute(" insert into tb16 using st tags (\"2021-07-0100:00:00.000+07:50\" , 16) values( \"2021-07-0100:00:00.000+07:50\" , 16 , 16.0 ) ")
case16 = (tdSql.getResult("select * from tb16"))
print(case16)
os.system("rm -rf *.py.sql")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import tdLog
from util.cases import tdCases
from util.sql import tdSql
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.query("show users")
rows = tdSql.queryRows
tdSql.execute("create user test PASS 'test' ")
tdSql.query("show users")
tdSql.checkRows(rows + 1)
tdSql.error("create user tdenginetdenginetdengine PASS 'test' ")
tdSql.error("create user tdenginet PASS '1234512345123456' ")
try:
tdSql.execute("create account a&cc PASS 'pass123'")
except Exception as e:
print("create account a&cc PASS 'pass123'")
return
tdLog.exit("drop built-in user is error.")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import tdLog
from util.cases import tdCases
from util.sql import tdSql
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
print("==========step1")
print("drop built-in account")
try:
tdSql.execute("drop account root")
except Exception as e:
if len(e.args) > 0 and 'no rights' != e.args[0]:
tdLog.exit(e)
print("==========step2")
print("drop built-in user")
try:
tdSql.execute("drop user root")
except Exception as e:
if len(e.args) > 0 and 'no rights' != e.args[0]:
tdLog.exit(e)
return
tdLog.exit("drop built-in user is error.")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import random
import string
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def genColList(self):
'''
generate column list
'''
col_list = list()
for i in range(1, 18):
col_list.append(f'c{i}')
return col_list
def genIncreaseValue(self, input_value):
'''
add ', 1' to end of value every loop
'''
value_list = list(input_value)
value_list.insert(-1, ", 1")
return ''.join(value_list)
def insertAlter(self):
'''
after each alter and insert, when execute 'select * from {tbname};' taosd will coredump
'''
tbname = ''.join(random.choice(string.ascii_letters.lower()) for i in range(7))
input_value = '(now, 1)'
tdSql.execute(f'create table {tbname} (ts timestamp, c0 int);')
tdSql.execute(f'insert into {tbname} values {input_value};')
for col in self.genColList():
input_value = self.genIncreaseValue(input_value)
tdSql.execute(f'alter table {tbname} add column {col} int;')
tdSql.execute(f'insert into {tbname} values {input_value};')
tdSql.query(f'select * from {tbname};')
tdSql.checkRows(18)
def run(self):
tdSql.prepare()
self.insertAlter()
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug(f"start to execute {__file__}")
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.execute("drop database if exists db")
tdSql.execute("create database if not exists db keep 36500")
tdSql.execute("use db")
tdLog.printNoPrefix("==========step1:create table && insert data")
tdSql.execute("create table stbtag (ts timestamp, c1 int) TAGS(t1 int)")
tdSql.execute("create table tag1 using stbtag tags(1)")
tdLog.printNoPrefix("==========step2:alter stb add tag create new chiltable")
tdSql.execute("alter table stbtag add tag t2 int")
tdSql.execute("alter table stbtag add tag t3 tinyint")
tdSql.execute("alter table stbtag add tag t4 smallint ")
tdSql.execute("alter table stbtag add tag t5 bigint")
tdSql.execute("alter table stbtag add tag t6 float ")
tdSql.execute("alter table stbtag add tag t7 double ")
tdSql.execute("alter table stbtag add tag t8 bool ")
tdSql.execute("alter table stbtag add tag t9 binary(10) ")
tdSql.execute("alter table stbtag add tag t10 nchar(10)")
tdSql.execute("create table tag2 using stbtag tags(2, 22, 23, 24, 25, 26.1, 27.1, 1, 'binary9', 'nchar10')")
tdSql.query( "select tbname, t1, t2, t3, t4, t5, t6, t7, t8, t9, t10 from stbtag" )
tdSql.checkData(1, 0, "tag2")
tdSql.checkData(1, 1, 2)
tdSql.checkData(1, 2, 22)
tdSql.checkData(1, 3, 23)
tdSql.checkData(1, 4, 24)
tdSql.checkData(1, 5, 25)
tdSql.checkData(1, 6, 26.1)
tdSql.checkData(1, 7, 27.1)
tdSql.checkData(1, 8, 1)
tdSql.checkData(1, 9, "binary9")
tdSql.checkData(1, 10, "nchar10")
tdLog.printNoPrefix("==========step3:alter stb drop tag create new chiltable")
tdSql.execute("alter table stbtag drop tag t2 ")
tdSql.execute("alter table stbtag drop tag t3 ")
tdSql.execute("alter table stbtag drop tag t4 ")
tdSql.execute("alter table stbtag drop tag t5 ")
tdSql.execute("alter table stbtag drop tag t6 ")
tdSql.execute("alter table stbtag drop tag t7 ")
tdSql.execute("alter table stbtag drop tag t8 ")
tdSql.execute("alter table stbtag drop tag t9 ")
tdSql.execute("alter table stbtag drop tag t10 ")
tdSql.execute("create table tag3 using stbtag tags(3)")
tdSql.query("select * from stbtag where tbname like 'tag3' ")
tdSql.checkCols(3)
tdSql.query("select tbname, t1 from stbtag where tbname like 'tag3' ")
tdSql.checkData(0, 1, 3)
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
\ No newline at end of file
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug(f"start to execute {__file__}")
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.execute("drop database if exists db")
tdSql.execute("create database if not exists db keep 36500")
tdSql.execute("use db")
tdLog.printNoPrefix("==========step1:create table && insert data")
# timestamp list:
# 0 -> "1970-01-01 08:00:00" | -28800000 -> "1970-01-01 00:00:00" | -946800000000 -> "1940-01-01 00:00:00"
# -631180800000 -> "1950-01-01 00:00:00"
ts1 = 0
ts2 = -28800000
ts3 = -946800000000
ts4 = "1950-01-01 00:00:00"
tdSql.execute(
"create table stb2ts (ts timestamp, ts1 timestamp, ts2 timestamp, c1 int, ts3 timestamp) TAGS(t1 int)"
)
tdSql.execute("create table t2ts1 using stb2ts tags(1)")
tdSql.execute(f"insert into t2ts1 values ({ts1}, {ts1}, {ts1}, 1, {ts1})")
tdSql.execute(f"insert into t2ts1 values ({ts2}, {ts2}, {ts2}, 2, {ts2})")
tdSql.execute(f"insert into t2ts1 values ({ts3}, {ts3}, {ts3}, 4, {ts3})")
tdSql.execute(f"insert into t2ts1 values ('{ts4}', '{ts4}', '{ts4}', 3, '{ts4}')")
tdLog.printNoPrefix("==========step2:check inserted data")
tdSql.query("select * from stb2ts where ts1=0 and ts2='1970-01-01 08:00:00' ")
tdSql.checkRows(1)
tdSql.checkData(0, 4,'1970-01-01 08:00:00')
tdSql.query("select * from stb2ts where ts1=-28800000 and ts2='1970-01-01 00:00:00' ")
tdSql.checkRows(1)
tdSql.checkData(0, 4, '1970-01-01 00:00:00')
tdSql.query("select * from stb2ts where ts1=-946800000000 and ts2='1940-01-01 00:00:00' ")
tdSql.checkRows(1)
tdSql.checkData(0, 4, '1940-01-01 00:00:00')
tdSql.query("select * from stb2ts where ts1=-631180800000 and ts2='1950-01-01 00:00:00' ")
tdSql.checkRows(1)
tdSql.checkData(0, 4, '1950-01-01 00:00:00')
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
\ No newline at end of file
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import tdDnodes
from datetime import datetime
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]
break
return buildPath
def run(self):
tdSql.prepare()
tdSql.query('show databases')
tdSql.checkData(0,15,0)
buildPath = self.getBuildPath()
if (buildPath == ""):
tdLog.exit("taosd not found!")
else:
tdLog.info("taosd found in %s" % buildPath)
binPath = buildPath + "/build/bin/"
#write 5M rows into db, then restart to force the data move into disk.
#create 500 tables
os.system("%staosdemo -f tools/taosdemoAllTest/insert_5M_rows.json -y " % binPath)
tdDnodes.stop(1)
tdDnodes.start(1)
tdSql.execute('use db')
#prepare to query 500 tables last_row()
tableName = []
for i in range(500):
tableName.append(f"stb_{i}")
tdSql.execute('use db')
lastRow_Off_start = datetime.now()
slow = 0 #count time where lastRow on is slower
for i in range(5):
#switch lastRow to off and check
tdSql.execute('alter database db cachelast 0')
tdSql.query('show databases')
tdSql.checkData(0,15,0)
#run last_row(*) query 500 times
for i in range(500):
tdSql.execute(f'SELECT LAST_ROW(*) FROM {tableName[i]}')
lastRow_Off_end = datetime.now()
tdLog.debug(f'time used:{lastRow_Off_end-lastRow_Off_start}')
#switch lastRow to on and check
tdSql.execute('alter database db cachelast 1')
tdSql.query('show databases')
tdSql.checkData(0,15,1)
#run last_row(*) query 500 times
tdSql.execute('use db')
lastRow_On_start = datetime.now()
for i in range(500):
tdSql.execute(f'SELECT LAST_ROW(*) FROM {tableName[i]}')
lastRow_On_end = datetime.now()
tdLog.debug(f'time used:{lastRow_On_end-lastRow_On_start}')
#check which one used more time
if (lastRow_Off_end-lastRow_Off_start > lastRow_On_end-lastRow_On_start):
pass
else:
slow += 1
tdLog.debug(slow)
if slow > 1: #tolerance for the first time
tdLog.exit('lastRow hot alter failed')
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
#TODO: after TD-4518 and TD-4510 is resolved, add the exception test case for these situations
import sys
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.prepare()
#checking string input exception for alter
tdSql.error("alter database db keep '10'")
tdSql.error('alter database db keep "10"')
tdSql.error("alter database db keep '\t'")
tdSql.error("alter database db keep \'\t\'")
tdSql.error('alter database db keep "a"')
tdSql.error('alter database db keep "1.4"')
tdSql.error("alter database db blocks '10'")
tdSql.error('alter database db comp "0"')
tdSql.execute('drop database if exists db')
#checking string input exception for create
tdSql.error("create database db comp '0'")
tdSql.error('create database db comp "1"')
tdSql.error("create database db comp '\t'")
tdSql.error("alter database db keep \'\t\'")
tdSql.error('create database db comp "a"')
tdSql.error('create database db comp "1.4"')
tdSql.error("create database db blocks '10'")
tdSql.error('create database db keep "3650"')
tdSql.error('create database db fsync "3650"')
tdSql.execute('create database db precision "us"')
tdSql.query('show databases')
tdSql.checkData(0,16,'us')
tdSql.execute('drop database if exists db')
#checking float input exception for create
tdSql.error("create database db fsync 7.3")
tdSql.error("create database db fsync 0.0")
tdSql.error("create database db fsync -5.32")
tdSql.error('create database db comp 7.2')
tdSql.error("create database db blocks 5.87")
tdSql.error('create database db keep 15.4')
#checking float input exception for insert
tdSql.execute('create database db')
tdSql.error('alter database db blocks 5.9')
tdSql.error('alter database db blocks -4.7')
tdSql.error('alter database db blocks 0.0')
tdSql.error('alter database db keep 15.4')
tdSql.error('alter database db comp 2.67')
#checking additional exception param for alter keep
tdSql.error('alter database db keep 365001')
tdSql.error('alter database db keep 364999,365000,365001')
tdSql.error('alter database db keep -10')
tdSql.error('alter database db keep 5')
tdSql.error('alter database db keep ')
tdSql.error('alter database db keep 40,a,60')
tdSql.error('alter database db keep ,,60,')
tdSql.error('alter database db keep \t')
tdSql.execute('alter database db keep \t50')
tdSql.query('show databases')
tdSql.checkData(0,7,'50,50,50')
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
\ No newline at end of file
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import random
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import tdDnodes
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.prepare()
flagList=["debugflag", "cdebugflag", "tmrDebugFlag", "uDebugFlag", "rpcDebugFlag"]
for flag in flagList:
tdSql.execute("alter local %s 131" % flag)
tdSql.execute("alter local %s 135" % flag)
tdSql.execute("alter local %s 143" % flag)
randomFlag = random.randint(100, 250)
if randomFlag != 131 and randomFlag != 135 and randomFlag != 143:
tdSql.error("alter local %s %d" % (flag, randomFlag))
tdSql.query("show dnodes")
dnodeId = tdSql.getData(0, 0)
for flag in flagList:
tdSql.execute("alter dnode %d %s 131" % (dnodeId, flag))
tdSql.execute("alter dnode %d %s 135" % (dnodeId, flag))
tdSql.execute("alter dnode %d %s 143" % (dnodeId, flag))
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from util.log import *
from util.cases import *
from util.sql import *
import time
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def alterKeepCommunity(self):
tdLog.notice('running Keep Test, Community Version')
tdLog.notice('running parameter test for keep during create')
#testing keep parameter during create
tdSql.query('show databases')
tdSql.checkData(0,7,'3650')
tdSql.execute('drop database db')
tdSql.execute('create database db keep 100')
tdSql.query('show databases')
tdSql.checkData(0,7,'100')
tdSql.execute('drop database db')
tdSql.error('create database db keep ')
tdSql.error('create database db keep 0')
tdSql.error('create database db keep 10,20')
tdSql.error('create database db keep 10,20,30')
tdSql.error('create database db keep 20,30,40,50')
#testing keep parameter during alter
tdSql.execute('create database db')
tdLog.notice('running parameter test for keep during alter')
tdSql.execute('alter database db keep 100')
tdSql.query('show databases')
tdSql.checkData(0,7,'100')
tdSql.error('alter database db keep ')
tdSql.error('alter database db keep 0')
tdSql.error('alter database db keep 10,20')
tdSql.error('alter database db keep 10,20,30')
tdSql.error('alter database db keep 20,30,40,50')
tdSql.query('show databases')
tdSql.checkData(0,7,'100')
def alterKeepEnterprise(self):
tdLog.notice('running Keep Test, Enterprise Version')
#testing keep parameter during create
tdLog.notice('running parameter test for keep during create')
tdSql.query('show databases')
tdSql.checkData(0,7,'3650,3650,3650')
tdSql.execute('drop database db')
tdSql.execute('create database db keep 100')
tdSql.query('show databases')
tdSql.checkData(0,7,'100,100,100')
tdSql.execute('drop database db')
tdSql.execute('create database db keep 20, 30')
tdSql.query('show databases')
tdSql.checkData(0,7,'20,30,30')
tdSql.execute('drop database db')
tdSql.execute('create database db keep 30,40,50')
tdSql.query('show databases')
tdSql.checkData(0,7,'30,40,50')
tdSql.execute('drop database db')
tdSql.error('create database db keep ')
tdSql.error('create database db keep 20,30,40,50')
tdSql.error('create database db keep 0')
tdSql.error('create database db keep 100,50')
tdSql.error('create database db keep 100,40,50')
tdSql.error('create database db keep 20,100,50')
tdSql.error('create database db keep 50,60,20')
#testing keep parameter during alter
tdSql.execute('create database db')
tdLog.notice('running parameter test for keep during alter')
tdSql.execute('alter database db keep 10')
tdSql.query('show databases')
tdSql.checkData(0,7,'10,10,10')
tdSql.execute('alter database db keep 20,30')
tdSql.query('show databases')
tdSql.checkData(0,7,'20,30,30')
tdSql.execute('alter database db keep 100,200,300')
tdSql.query('show databases')
tdSql.checkData(0,7,'100,200,300')
tdSql.error('alter database db keep ')
tdSql.error('alter database db keep 20,30,40,50')
tdSql.error('alter database db keep 0')
tdSql.error('alter database db keep 100,50')
tdSql.error('alter database db keep 100,40,50')
tdSql.error('alter database db keep 20,100,50')
tdSql.error('alter database db keep 50,60,20')
tdSql.query('show databases')
tdSql.checkData(0,7,'100,200,300')
def run(self):
tdSql.prepare()
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
tdLog.debug('running enterprise test')
self.alterKeepEnterprise()
else:
tdLog.debug('running community test')
self.alterKeepCommunity()
tdSql.prepare()
## preset the keep
tdSql.prepare()
tdLog.notice('testing if alter will cause any error')
tdSql.execute('create table tb (ts timestamp, speed int)')
tdSql.execute('alter database db keep 10,10,10')
tdSql.execute('insert into tb values (now, 10)')
tdSql.execute('insert into tb values (now + 10m, 10)')
tdSql.query('select * from tb')
tdSql.checkRows(2)
#after alter from small to large, check if the alter if functioning
#test if change through test.py is consistent with change from taos client
#test case for TD-4459 and TD-4445
tdLog.notice('testing keep will be altered changing from small to big')
tdSql.execute('alter database db keep 40,40,40')
tdSql.query('show databases')
tdSql.checkData(0,7,'40,40,40')
tdSql.error('insert into tb values (now-60d, 10)')
tdSql.execute('insert into tb values (now-30d, 10)')
tdSql.query('select * from tb')
tdSql.checkRows(3)
rowNum = 3
for i in range(30):
rowNum += 1
tdSql.execute('alter database db keep 20,20,20')
tdSql.execute('alter database db keep 40,40,40')
tdSql.query('show databases')
tdSql.checkData(0,7,'40,40,40')
tdSql.error('insert into tb values (now-60d, 10)')
tdSql.execute('insert into tb values (now-30d, 10)')
tdSql.query('select * from tb')
tdSql.checkRows(rowNum)
tdLog.notice('testing keep will be altered changing from big to small')
tdSql.execute('alter database db keep 10,10,10')
tdSql.query('show databases')
tdSql.checkData(0,7,'10,10,10')
tdSql.error('insert into tb values (now-15d, 10)')
tdSql.query('select * from tb')
tdSql.checkRows(2)
rowNum = 2
tdLog.notice('testing keep will be altered if sudden change from small to big')
for i in range(30):
tdSql.execute('alter database db keep 14,14,14')
tdSql.execute('alter database db keep 16,16,16')
tdSql.execute('insert into tb values (now-15d, 10)')
tdSql.query('select * from tb')
rowNum += 1
tdSql.checkRows(rowNum)
tdLog.notice('testing keep will be altered if sudden change from big to small')
tdSql.execute('alter database db keep 16,16,16')
tdSql.execute('alter database db keep 14,14,14')
tdSql.error('insert into tb values (now-15d, 10)')
tdSql.query('select * from tb')
tdSql.checkRows(2)
tdLog.notice('testing data will show up again when keep is being changed to large value')
tdSql.execute('alter database db keep 40,40,40')
tdSql.query('select * from tb')
tdSql.checkRows(63)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
class TDTestCase:
def init(self):
tdLog.debug("start to execute %s" % __file__)
tdLog.info("prepare cluster")
tdDnodes.stopAll()
tdDnodes.deploy(1)
tdDnodes.start(1)
self.conn = taos.connect(config=tdDnodes.getSimCfgPath())
tdSql.init(self.conn.cursor())
tdSql.execute('reset query cache')
tdSql.execute('create dnode 192.168.0.2')
tdDnodes.deploy(2)
tdDnodes.start(2)
self.conn = taos.connect(config=tdDnodes.getSimCfgPath())
tdSql.init(self.conn.cursor())
tdSql.execute('reset query cache')
tdSql.execute('create dnode 192.168.0.3')
tdDnodes.deploy(3)
tdDnodes.start(3)
def run(self):
tdSql.execute('create database db replica 3 days 7')
tdSql.execute('use db')
for tid in range(1, 11):
tdSql.execute('create table tb%d(ts timestamp, i int)' % tid)
tdLog.sleep(10)
tdLog.info("================= step1")
startTime = 1520000010000
for rid in range(1, 11):
for tid in range(1, 11):
tdSql.execute(
'insert into tb%d values(%ld, %d)' %
(tid, startTime, rid))
startTime += 1
tdSql.query('select * from tb1')
tdSql.checkRows(10)
tdLog.sleep(5)
tdLog.info("================= step2")
tdSql.execute('alter database db replica 2')
tdLog.sleep(10)
tdLog.info("================= step3")
for rid in range(1, 11):
for tid in range(1, 11):
tdSql.execute(
'insert into tb%d values(%ld, %d)' %
(tid, startTime, rid))
startTime += 1
tdSql.query('select * from tb1')
tdSql.checkRows(20)
tdLog.sleep(5)
tdLog.info("================= step4")
tdSql.execute('alter database db replica 1')
tdLog.sleep(10)
tdLog.info("================= step5")
for rid in range(1, 11):
for tid in range(1, 11):
tdSql.execute(
'insert into tb%d values(%ld, %d)' %
(tid, startTime, rid))
startTime += 1
tdSql.query('select * from tb1')
tdSql.checkRows(30)
tdLog.sleep(5)
tdLog.info("================= step6")
tdSql.execute('alter database db replica 2')
tdLog.sleep(10)
tdLog.info("================= step7")
for rid in range(1, 11):
for tid in range(1, 11):
tdSql.execute(
'insert into tb%d values(%ld, %d)' %
(tid, startTime, rid))
startTime += 1
tdSql.query('select * from tb1')
tdSql.checkRows(40)
tdLog.sleep(5)
tdLog.info("================= step8")
tdSql.execute('alter database db replica 3')
tdLog.sleep(10)
tdLog.info("================= step9")
for rid in range(1, 11):
for tid in range(1, 11):
tdSql.execute(
'insert into tb%d values(%ld, %d)' %
(tid, startTime, rid))
startTime += 1
tdSql.query('select * from tb1')
tdSql.checkRows(50)
tdLog.sleep(5)
def stop(self):
tdSql.close()
self.conn.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addCluster(__file__, TDTestCase())
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
self.types = [
"int",
"bigint",
"float",
"double",
"smallint",
"tinyint",
"int unsigned",
"bigint unsigned",
"smallint unsigned",
"tinyint unsigned",
"binary(10)",
"nchar(10)",
"timestamp"]
self.rowNum = 300
self.ts = 1537146000000
self.step = 1000
self.sqlHead = "select count(*), count(c1) "
self.sqlTail = " from stb"
def addColumnAndCount(self):
for colIdx in range(len(self.types)):
tdSql.execute(
"alter table stb add column c%d %s" %
(colIdx + 2, self.types[colIdx]))
self.sqlHead = self.sqlHead + ",count(c%d) " % (colIdx + 2)
tdSql.query(self.sqlHead + self.sqlTail)
# count non-NULL values in each column
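# expected counts: each iteration adds one column and then inserts rowNum more rows, so count(*)
# equals rowNum*(colIdx+1) at this point; an earlier column c(i) only has non-NULL values in the
# batches inserted after it was added, which gives rowNum*(colIdx-i+2)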
tdSql.checkData(0, 0, self.rowNum * (colIdx + 1))
tdSql.checkData(0, 1, self.rowNum * (colIdx + 1))
for i in range(2, colIdx + 2):
print("check1: i=%d colIdx=%d" % (i, colIdx))
tdSql.checkData(0, i, self.rowNum * (colIdx - i + 2))
# insert more rows
for k in range(self.rowNum):
self.ts += self.step
sql = "insert into tb values (%d, %d" % (self.ts, colIdx + 2)
for j in range(colIdx + 1):
sql += ", %d" % (colIdx + 2)
sql += ")"
tdSql.execute(sql)
# count non-NULL values in each column
tdSql.query(self.sqlHead + self.sqlTail)
tdSql.checkData(0, 0, self.rowNum * (colIdx + 2))
tdSql.checkData(0, 1, self.rowNum * (colIdx + 2))
for i in range(2, colIdx + 2):
print("check2: i=%d colIdx=%d" % (i, colIdx))
tdSql.checkData(0, i, self.rowNum * (colIdx - i + 3))
def dropColumnAndCount(self):
tdSql.query(self.sqlHead + self.sqlTail)
res = []
for i in range(len(self.types)):
res.append(tdSql.getData(0, i + 2))
print(res)
for colIdx in range(len(self.types), 0, -1):
tdSql.execute("alter table stb drop column c%d" % (colIdx + 2))
# self.sqlHead = self.sqlHead + ",count(c%d) " %(colIdx + 2)
tdSql.query(self.sqlHead + self.sqlTail)
# count non-NULL values in each column
tdSql.checkData(0, 0, self.rowNum * (colIdx + 1))
tdSql.checkData(0, 1, self.rowNum * (colIdx + 1))
for i in range(2, colIdx + 2):
print("check1: i=%d colIdx=%d" % (i, colIdx))
tdSql.checkData(0, i, self.rowNum * (colIdx - i + 2))
# insert more rows
for k in range(self.rowNum):
self.ts += self.step
sql = "insert into tb values (%d, %d" % (self.ts, colIdx + 2)
for j in range(colIdx + 1):
sql += ", %d" % (colIdx + 2)
sql += ")"
tdSql.execute(sql)
# count non-NULL values in each column
tdSql.query(self.sqlHead + self.sqlTail)
tdSql.checkData(0, 0, self.rowNum * (colIdx + 2))
tdSql.checkData(0, 1, self.rowNum * (colIdx + 2))
for i in range(2, colIdx + 2):
print("check2: i=%d colIdx=%d" % (i, colIdx))
tdSql.checkData(0, i, self.rowNum * (colIdx - i + 3))
def run(self):
# Setup params
db = "db"
# Create db
tdSql.execute("drop database if exists %s" % (db))
tdSql.execute("reset query cache")
tdSql.execute("create database %s maxrows 200 maxtables 4" % (db))
tdSql.execute("use %s" % (db))
# Create a table with one column of int type and insert 300 rows
tdLog.info("Create stb and tb")
tdSql.execute("create table stb (ts timestamp, c1 int) tags (tg1 int)")
tdSql.execute("create table tb using stb tags (0)")
tdLog.info("Insert %d rows into tb" % (self.rowNum))
for k in range(1, self.rowNum + 1):
self.ts += self.step
tdSql.execute("insert into tb values (%d, 1)" % (self.ts))
# Alter tb by adding a column of each listed type, then query tb to see if
# the added columns are NULL for the existing rows
self.addColumnAndCount()
tdDnodes.stop(1)
time.sleep(5)
tdDnodes.start(1)
time.sleep(5)
tdSql.query(self.sqlHead + self.sqlTail)
for i in range(2, len(self.types) + 2):
tdSql.checkData(0, i, self.rowNum * (len(self.types) + 2 - i))
self.dropColumnAndCount()
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
#tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
self.types = [
"int",
"bigint",
"float",
"double",
"smallint",
"tinyint",
"int unsigned",
"bigint unsigned",
"smallint unsigned",
"tinyint unsigned",
"binary(10)",
"nchar(10)",
"timestamp"]
self.rowNum = 300
self.ts = 1537146000000
self.step = 1000
self.sqlHead = "select count(*), count(c1) "
self.sqlTail = " from tb"
def addColumnAndCount(self):
for colIdx in range(len(self.types)):
tdSql.execute(
"alter table tb add column c%d %s" %
(colIdx + 2, self.types[colIdx]))
self.sqlHead = self.sqlHead + ",count(c%d) " % (colIdx + 2)
tdSql.query(self.sqlHead + self.sqlTail)
# count non-NULL values in each column
tdSql.checkData(0, 0, self.rowNum * (colIdx + 1))
tdSql.checkData(0, 1, self.rowNum * (colIdx + 1))
for i in range(2, colIdx + 2):
print("check1: i=%d colIdx=%d" % (i, colIdx))
tdSql.checkData(0, i, self.rowNum * (colIdx - i + 2))
# insert more rows
for k in range(self.rowNum):
self.ts += self.step
sql = "insert into tb values (%d, %d" % (self.ts, colIdx + 2)
for j in range(colIdx + 1):
sql += ", %d" % (colIdx + 2)
sql += ")"
tdSql.execute(sql)
# count non-NULL values in each column
tdSql.query(self.sqlHead + self.sqlTail)
tdSql.checkData(0, 0, self.rowNum * (colIdx + 2))
tdSql.checkData(0, 1, self.rowNum * (colIdx + 2))
for i in range(2, colIdx + 2):
print("check2: i=%d colIdx=%d" % (i, colIdx))
tdSql.checkData(0, i, self.rowNum * (colIdx - i + 3))
def dropColumnAndCount(self):
tdSql.query(self.sqlHead + self.sqlTail)
res = []
for i in range(len(self.types)):
res.append(tdSql.getData(0, i + 2))
print(res)
for colIdx in range(len(self.types), 0, -1):
tdSql.execute("alter table tb drop column c%d" % (colIdx + 2))
# self.sqlHead = self.sqlHead + ",count(c%d) " %(colIdx + 2)
tdSql.query(self.sqlHead + self.sqlTail)
# count non-NULL values in each column
tdSql.checkData(0, 0, self.rowNum * (colIdx + 1))
tdSql.checkData(0, 1, self.rowNum * (colIdx + 1))
for i in range(2, colIdx + 2):
print("check1: i=%d colIdx=%d" % (i, colIdx))
tdSql.checkData(0, i, self.rowNum * (colIdx - i + 2))
# insert more rows
for k in range(self.rowNum):
self.ts += self.step
sql = "insert into tb values (%d, %d" % (self.ts, colIdx + 2)
for j in range(colIdx + 1):
sql += ", %d" % (colIdx + 2)
sql += ")"
tdSql.execute(sql)
# count non-NULL values in each column
tdSql.query(self.sqlHead + self.sqlTail)
tdSql.checkData(0, 0, self.rowNum * (colIdx + 2))
tdSql.checkData(0, 1, self.rowNum * (colIdx + 2))
for i in range(2, colIdx + 2):
print("check2: i=%d colIdx=%d" % (i, colIdx))
tdSql.checkData(0, i, self.rowNum * (colIdx - i + 3))
def alter_table_255_times(self): # add case for TD-6207
for i in range(255):
tdLog.info("alter table st add column cb%d int"%i)
tdSql.execute("alter table st add column cb%d int"%i)
tdSql.execute("insert into t0 (ts,c1) values(now,1)")
tdSql.execute("reset query cache")
tdSql.query("select * from st")
tdSql.execute("create table mt(ts timestamp, i int)")
tdSql.execute("insert into mt values(now,11)")
tdSql.query("select * from mt")
tdDnodes.stop(1)
tdDnodes.start(1)
tdSql.query("describe db.st")
def run(self):
# Setup params
db = "db"
# Create db
tdSql.execute("drop database if exists %s" % (db))
tdSql.execute("reset query cache")
tdSql.execute("create database %s maxrows 200" % (db))
tdSql.execute("use %s" % (db))
# Create a table with one column of int type and insert 300 rows
tdLog.info("create table tb")
tdSql.execute("create table tb (ts timestamp, c1 int)")
tdLog.info("Insert %d rows into tb" % (self.rowNum))
for k in range(1, self.rowNum + 1):
self.ts += self.step
tdSql.execute("insert into tb values (%d, 1)" % (self.ts))
# Alter tb by adding a column of each listed type, then query tb to see if
# the added columns are NULL for the existing rows
self.addColumnAndCount()
tdDnodes.stop(1)
tdDnodes.start(1)
tdSql.query(self.sqlHead + self.sqlTail)
size = len(self.types) + 2
for i in range(2, size):
tdSql.checkData(0, i, self.rowNum * (size - i))
tdSql.execute("create table st(ts timestamp, c1 int) tags(t1 float,t2 int,t3 double)")
tdSql.execute("create table t0 using st tags(null,1,2.3)")
tdSql.execute("alter table t0 set tag t1=2.1")
tdSql.query("show tables")
tdSql.checkRows(2)
self.alter_table_255_times()
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor())
def run(self):
tdSql.prepare()
print("==============Case 1: add column, restart taosd, drop the same colum then add it back")
tdSql.execute(
"create table st(ts timestamp, speed int) tags(loc nchar(20))")
tdSql.execute(
"insert into t1 using st tags('beijing') values(now, 1)")
tdSql.execute(
"alter table st add column tbcol binary(20)")
# restart taosd
tdDnodes.forcestop(1)
tdDnodes.start(1)
tdSql.execute(
"alter table st drop column tbcol")
tdSql.execute(
"alter table st add column tbcol binary(20)")
tdSql.query("select * from st")
tdSql.checkRows(1)
print("==============Case 2: keep adding columns, restart taosd")
tdSql.execute(
"create table dt(ts timestamp, tbcol1 tinyint) tags(tgcol1 tinyint)")
tdSql.execute(
"alter table dt add column tbcol2 int")
tdSql.execute(
"alter table dt add column tbcol3 smallint")
tdSql.execute(
"alter table dt add column tbcol4 bigint")
tdSql.execute(
"alter table dt add column tbcol5 float")
tdSql.execute(
"alter table dt add column tbcol6 double")
tdSql.execute(
"alter table dt add column tbcol7 bool")
tdSql.execute(
"alter table dt add column tbcol8 nchar(20)")
tdSql.execute(
"alter table dt add column tbcol9 binary(20)")
tdSql.execute(
"alter table dt add column tbcol10 tinyint unsigned")
tdSql.execute(
"alter table dt add column tbcol11 int unsigned")
tdSql.execute(
"alter table dt add column tbcol12 smallint unsigned")
tdSql.execute(
"alter table dt add column tbcol13 bigint unsigned")
# restart taosd
tdDnodes.forcestop(1)
tdDnodes.start(1)
tdSql.query("select * from dt")
tdSql.checkRows(0)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
# -*- coding: utf-8 -*-
import random
import string
import subprocess
import sys
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
tdLog.debug("check database")
tdSql.prepare()
# check default update value
sql = "create database if not exists db"
tdSql.execute(sql)
tdSql.query('show databases')
tdSql.checkRows(1)
tdSql.checkData(0,16,0)
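# the default update flag is 0; the statements below toggle it between 1 and 0 and expect
# out-of-range values (-1 and 100) to be rejected on both alter and create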
sql = "alter database db update 1"
# check update value
tdSql.execute(sql)
tdSql.query('show databases')
tdSql.checkRows(1)
tdSql.checkData(0,16,1)
sql = "alter database db update 0"
tdSql.execute(sql)
tdSql.query('show databases')
tdSql.checkRows(1)
tdSql.checkData(0,16,0)
sql = "alter database db update -1"
tdSql.error(sql)
sql = "alter database db update 100"
tdSql.error(sql)
tdSql.query('show databases')
tdSql.checkRows(1)
tdSql.checkData(0,16,0)
tdSql.execute('drop database db')
tdSql.error('create database db update 100')
tdSql.error('create database db update -1')
tdSql.execute('create database db update 1')
tdSql.query('show databases')
tdSql.checkRows(1)
tdSql.checkData(0,16,1)
tdSql.execute('drop database db')
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.prepare()
tdSql.execute(
'create table st (ts timestamp, v1 int, v2 int, v3 int, v4 int, v5 int) tags (t int)')
totalTables = 100
batchSize = 500
totalBatch = 60
tdLog.info(
"create %d tables, insert %d rows per table" %
(totalTables, batchSize * totalBatch))
for t in range(0, totalTables):
tdSql.execute('create table t%d using st tags(%d)' % (t, t))
# 2019-06-10 00:00:00
beginTs = 1560096000000
interval = 10000
for r in range(0, totalBatch):
sql = 'insert into t%d values ' % (t)
for b in range(0, batchSize):
ts = beginTs + (r * batchSize + b) * interval
sql += '(%d, 1, 2, 3, 4, 5)' % (ts)
tdSql.execute(sql)
tdLog.info("insert data finished")
tdSql.execute('alter table st add column v6 int')
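# existing rows get NULL for the new column v6, so the insert/import below supply a value for v6 as well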
tdLog.sleep(5)
tdLog.info("alter table finished")
tdSql.query("select count(*) from t50")
tdSql.checkData(0, 0, (int)(batchSize * totalBatch))
tdLog.info("insert")
tdSql.execute(
"insert into t50 values ('2019-06-13 07:59:55.000', 1, 2, 3, 4, 5, 6)")
tdLog.info("import")
tdSql.execute(
"import into t50 values ('2019-06-13 07:59:55.000', 1, 2, 3, 4, 5, 6)")
tdLog.info("query")
tdSql.query("select count(*) from t50")
tdSql.checkData(0, 0, batchSize * totalBatch + 1)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
import taos
if __name__ == "__main__":
logSql = True
deployPath = ""
testCluster = False
valgrind = 0
print("start to execute %s" % __file__)
tdDnodes.init(deployPath)
tdDnodes.setTestCluster(testCluster)
tdDnodes.setValgrind(valgrind)
tdDnodes.stopAll()
tdDnodes.addSimExtraCfg("maxSQLLength", "1048576")
tdDnodes.deploy(1)
tdDnodes.start(1)
host = '127.0.0.1'
tdLog.info("Procedures for tdengine deployed in %s" % (host))
tdCases.logSql(logSql)
print('1')
conn = taos.connect(
host,
config=tdDnodes.getSimCfgPath())
tdSql.init(conn.cursor(), True)
print("==========step1")
print("create table ")
tdSql.execute("create database db")
tdSql.execute("use db")
tdSql.execute("create table t1 (ts timestamp, c1 int,c2 int ,c3 int)")
print("==========step2")
print("insert maxSQLLength data ")
data = 'insert into t1 values'
ts = 1604298064000
i = 0
while (len(data) < 1024 * 1024) and (i < 32767 - 1):
data += '(%s,%d,%d,%d)'%(ts+i,i%1000,i%1000,i%1000)
i+=1
tdSql.execute(data)
print("==========step4")
print("insert data batch larger than 32767 ")
i = 0
while (len(data) < 1024 * 1024) and (i < 32767):
data += '(%s,%d,%d,%d)'%(ts+i,i%1000,i%1000,i%1000)
i+=1
tdSql.error(data)
print("==========step4")
print("insert data larger than maxSQLLength ")
tdSql.execute("create table t2 (ts timestamp, c1 binary(50))")
data = 'insert into t2 values'
i = 0
while (len(data) < 1024 * 1024) and (i < 32767 - 1):
data += '(%s,%s)'%(ts+i,'a'*50)
i+=1
tdSql.error(data)
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.prepare()
tdSql.query('select database()')
tdSql.checkData(0, 0, "db")
tdSql.execute("alter database db comp 2")
tdSql.query("show databases")
tdSql.checkData(0, 14, 2)
tdSql.execute("alter database db keep 365,365,365")
tdSql.query("show databases")
tdSql.checkData(0, 7, "365,365,365")
tdSql.error("alter database db quorum 2")
tdSql.execute("alter database db blocks 100")
tdSql.query("show databases")
tdSql.checkData(0, 9, 100)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from util.log import *
from util.cases import *
from util.sql import *
from util.pathFinding import *
from util.dnodes import tdDnodes
from datetime import datetime
import subprocess
import time
##TODO: this is now automatic, but not sure if this will run through jenkins
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
tdFindPath.init(__file__)
def run(self):
tdSql.prepare()
binPath = tdFindPath.getTaosdemoPath()
TDenginePath = tdFindPath.getTDenginePath()
## change system time to 2020/10/20
os.system('sudo timedatectl set-ntp off')
tdLog.sleep(10)
os.system('sudo timedatectl set-time 2020-10-20')
#run taosdemo to insert data. one row per second from 2020/10/11 to 2020/10/20
#11 data files should be generated
#vnode at TDinternal/community/sim/dnode1/data/vnode
try:
os.system(f"{binPath}taosdemo -f tools/taosdemoAllTest/manual_change_time_1_1_A.json")
commandArray = ['ls', '-l', f'{TDenginePath}/sim/dnode1/data/vnode/vnode2/tsdb/data']
result = subprocess.run(commandArray, stdout=subprocess.PIPE).stdout.decode('utf-8')
except BaseException:
os.system('sudo timedatectl set-ntp on')
tdLog.sleep(10)
if result.count('data') != 11:
os.system('sudo timedatectl set-ntp on')
tdLog.sleep(10)
tdLog.exit('wrong number of files')
else:
tdLog.debug("data file number correct")
#move 5 days ahead to 2020/10/25. 4 oldest files should be removed during the new write
#leaving 7 data files.
try:
os.system('sudo timedatectl set-time 2020-10-25')
os.system(f"{binPath}taosdemo -f tools/taosdemoAllTest/manual_change_time_1_1_B.json")
except BaseException:
os.system('sudo timedatectl set-ntp on')
tdLog.sleep(10)
os.system('sudo timedatectl set-ntp on')
tdLog.sleep(10)
commandArray = ['ls', '-l', f'{TDenginePath}/sim/dnode1/data/vnode/vnode2/tsdb/data']
result = subprocess.run(commandArray, stdout=subprocess.PIPE).stdout.decode('utf-8')
print(result.count('data'))
if result.count('data') != 7:
tdLog.exit('wrong number of files')
else:
tdLog.debug("data file number correct")
tdSql.query('select first(ts) from stb_0')
tdSql.checkData(0,0,datetime(2020,10,14,8,0,0,0)) #check the earliest timestamp still kept in the database
os.system('sudo timedatectl set-ntp on')
tdLog.sleep(10)
def stop(self):
os.system('sudo timedatectl set-ntp on')
tdLog.sleep(10)
tdSql.close()
tdLog.success("alter block manual check finish")
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import tdDnodes
from util.pathFinding import *
from datetime import datetime
import subprocess
##TODO: this is now automatic, but not sure if this will run through jenkins
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
tdFindPath.init(__file__)
def run(self):
tdSql.prepare()
binPath = tdFindPath.getTaosdemoPath()
TDenginePath = tdFindPath.getTDenginePath()
## change system time to 2020/10/20
os.system ('timedatectl set-ntp off')
tdLog.sleep(10)
os.system ('timedatectl set-time 2020-10-20')
#run taosdemo to insert data. one row per second from 2020/10/11 to 2020/10/20
#11 data files should be generated
#vnode at TDinternal/community/sim/dnode1/data/vnode
try:
os.system(f"{binPath}taosdemo -f tools/taosdemoAllTest/manual_change_time_1_1_A.json")
commandArray = ['ls', '-l', f'{TDenginePath}/sim/dnode1/data/vnode/vnode2/tsdb/data']
result = subprocess.run(commandArray, stdout=subprocess.PIPE).stdout.decode('utf-8')
except BaseException:
os.system('sudo timedatectl set-ntp on')
tdLog.sleep(10)
if result.count('data') != 11:
os.system('sudo timedatectl set-ntp on')
tdLog.sleep(10)
tdLog.exit('wrong number of files')
else:
tdLog.debug("data file number correct")
try:
tdSql.query('select first(ts) from stb_0') #check the first timestamp in the database
tdSql.checkData(0,0,datetime(2020,10,11,0,0,0,0))
except BaseException:
os.system('sudo timedatectl set-ntp on')
tdLog.sleep(10)
#moves 5 days ahead to 2020/10/25 and restart taosd
#4 oldest data file should be removed from tsdb/data
#7 data file should be found
#vnode at TDinternal/community/sim/dnode1/data/vnode
try:
os.system ('timedatectl set-time 2020-10-25')
tdDnodes.stop(1)
tdDnodes.start(1)
tdSql.query('select first(ts) from stb_0')
tdSql.checkData(0,0,datetime(2020,10,14,8,0,0,0)) #check the earliest timestamp still kept after old files are dropped
except BaseException:
os.system('sudo timedatectl set-ntp on')
tdLog.sleep(10)
os.system('sudo timedatectl set-ntp on')
tdLog.sleep(10)
commandArray = ['ls', '-l', f'{TDenginePath}/sim/dnode1/data/vnode/vnode2/tsdb/data']
result = subprocess.run(commandArray, stdout=subprocess.PIPE).stdout.decode('utf-8')
print(result.count('data'))
if result.count('data') != 7:
tdLog.exit('wrong number of files')
else:
tdLog.debug("data file number correct")
os.system('sudo timedatectl set-ntp on')
tdLog.sleep(10)
def stop(self):
os.system('sudo timedatectl set-ntp on')
tdSql.close()
tdLog.success("alter block manual check finish")
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from util.log import *
from util.cases import *
from util.sql import *
from datetime import timedelta
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.prepare()
ret = tdSql.query('select database()')
tdSql.checkData(0, 0, "db")
ret = tdSql.query('select server_status()')
tdSql.checkData(0, 0, 1)
ret = tdSql.query('select server_status() as result')
tdSql.checkData(0, 0, 1)
time.sleep(1)
ret = tdSql.query('show dnodes')
dnodeId = tdSql.getData(0, 0)
dnodeEndpoint = tdSql.getData(0, 1)
ret = tdSql.execute('alter dnode "%s" debugFlag 135' % dnodeId)
tdLog.info('alter dnode "%s" debugFlag 135 -> ret: %d' % (dnodeId, ret))
time.sleep(1)
ret = tdSql.query('show mnodes')
tdSql.checkRows(1)
tdSql.checkData(0, 2, "master")
role_time = tdSql.getData(0, 3)
create_time = tdSql.getData(0, 4)
time_delta = timedelta(milliseconds=100)
if create_time-time_delta < role_time < create_time+time_delta:
tdLog.info("role_time {} and create_time {} expected within range".format(role_time, create_time))
else:
tdLog.exit("role_time {} and create_time {} not expected within range".format(role_time, create_time))
ret = tdSql.query('show vgroups')
tdSql.checkRows(0)
tdSql.execute('create stable st (ts timestamp, f int) tags(t int)')
tdSql.execute('create table ct1 using st tags(1)')
tdSql.execute('create table ct2 using st tags(2)')
time.sleep(3)
ret = tdSql.query('show vnodes "{}"'.format(dnodeEndpoint))
tdSql.checkRows(1)
tdSql.checkData(0, 0, 2)
tdSql.checkData(0, 1, "master")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import inspect
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import tdDnodes
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.prepare()
tdDnodes.stop(1)
sql = "use db"
try:
tdSql.execute(sql)
except Exception as e:
expectError = 'Unable to establish connection'
if expectError in str(e):
pass
else:
caller = inspect.getframeinfo(inspect.stack()[1][1])
tdLog.exit("%s(%d) failed: sql:%s, expect error not occured" % (caller.filename, caller.lineno, sql))
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
import threading
class TwoClients:
def initConnection(self):
self.host = "127.0.0.1"
self.user = "root"
self.password = "taosdata"
self.config = "/home/chr/taosdata/TDengine/sim/dnode1/cfg "
def newCloseCon(self, times):
newConList = []
for i in range(times):
newConList.append(taos.connect(self.host, self.user, self.password, self.config))
for con in newConList:
con.close()
def run(self):
tdDnodes.init("")
tdDnodes.setTestCluster(False)
tdDnodes.setValgrind(False)
tdDnodes.stopAll()
tdDnodes.deploy(1)
tdDnodes.start(1)
# repeatedly open and close batches of connections
for m in range(1, 101):
t = threading.Thread(target=self.newCloseCon, args=(10,))
t.start()
clients = TwoClients()
clients.initConnection()
clients.run()
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
class TwoClients:
def initConnection(self):
self.host = "127.0.0.1"
self.user = "root"
self.password = "taosdata"
self.config = "/home/xp/git/TDengine/sim/dnode1/cfg"
def run(self):
tdDnodes.init("")
tdDnodes.setTestCluster(False)
tdDnodes.setValgrind(False)
tdDnodes.stopAll()
tdDnodes.deploy(1)
tdDnodes.start(1)
# first client create a stable and insert data
conn1 = taos.connect(self.host, self.user, self.password, self.config)
cursor1 = conn1.cursor()
cursor1.execute("drop database if exists db")
cursor1.execute("create database db")
cursor1.execute("use db")
cursor1.execute("create table tb (ts timestamp, id int) tags(loc nchar(30))")
cursor1.execute("insert into t0 using tb tags('beijing') values(now, 1)")
# second client alters the table created by the first client
conn2 = taos.connect(self.host, self.user, self.password, self.config)
cursor2 = conn2.cursor()
cursor2.execute("use db")
cursor2.execute("alter table tb add column name nchar(30)")
# first client should not be able to insert using its original (stale) metadata
tdSql.init(cursor1, True)
tdSql.error("insert into t0 values(now, 2)")
# first client should be able to insert data with the updated metadata
tdSql.execute("insert into t0 values(now, 2, 'test')")
tdSql.query("select * from tb")
tdSql.checkRows(2)
# second client drop the table
cursor2.execute("drop table t0")
cursor2.execute("create table t0 using tb tags('beijing')")
tdSql.execute("insert into t0 values(now, 2, 'test')")
tdSql.query("select * from tb")
tdSql.checkRows(1)
# errors expected when the two clients drop or add the same column
cursor2.execute("alter table tb drop column name")
tdSql.error("alter table tb drop column name")
cursor2.execute("alter table tb add column speed int")
tdSql.error("alter table tb add column speed int")
tdSql.execute("alter table tb add column size int")
tdSql.query("describe tb")
tdSql.checkRows(5)
tdSql.checkData(0, 0, "ts")
tdSql.checkData(1, 0, "id")
tdSql.checkData(2, 0, "speed")
tdSql.checkData(3, 0, "size")
tdSql.checkData(4, 0, "loc")
cursor1.close()
cursor2.close()
conn1.close()
conn2.close()
clients = TwoClients()
clients.initConnection()
clients.run()
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from util.log import *
from util.cases import *
from util.sql import *
from math import floor
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.prepare()
sql = "select server_version()"
ret = tdSql.query(sql)
version = floor(float(tdSql.getData(0, 0)[0:3]))
expectedVersion = 2
if(version == expectedVersion):
tdLog.info("sql:%s, row:%d col:%d data:%d == expect" % (sql, 0, 0, version))
else:
tdLog.exit("sql:%s, row:%d col:%d data:%d != expect:%d " % (sql, 0, 0, version, expectedVersion))
sql = "select client_version()"
ret = tdSql.query(sql)
version = floor(float(tdSql.getData(0, 0)[0:3]))
expectedVersion = 2
if(version == expectedVersion):
tdLog.info("sql:%s, row:%d col:%d data:%d == expect" % (sql, 0, 0, version))
else:
tdLog.exit("sql:%s, row:%d col:%d data:%d != expect:%d " % (sql, 0, 0, version, expectedVersion))
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
execute:
cd TDengine/tests/pytest && python3 ./test.py -f cluster/TD-3693/multClient.py && python3 cluster/TD-3693/multQuery.py
1. Use the test cluster with three nodes: fc1, fct2 and fct4.
2. Use taosdemo to create two databases, db1 and db2, each with replica 1, and insert some data.
3. db1 resides on the mnode master (fct2); db2 resides on the mnode slave (fct4).
4. Use a modified taosdemo (renamed taosdemoMul) that runs multi-threaded queries, and continuously query the data in db2 over multiple connections (a sketch of this load is given below).
5. Keep the queries from step 4 running in the background while again creating tables, inserting and querying on db2; run the query loop 10 times with a 91-second interval between rounds.
6. Then check the taosd log to see whether the "send auth msg to mnodes" issue described above still appears.
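A minimal sketch of the background query load described in step 4, assuming the taos Python connector is installed and db2 is reachable on host fct4 with the default root/taosdata account; the host, the table name stb0 and the thread/round counts are illustrative assumptions, not part of the original procedure:

import threading
import taos

HOST = "fct4"      # assumed address of the mnode slave hosting db2
THREADS = 10       # number of concurrent connections
ROUNDS = 100       # queries issued per connection

def query_worker():
    # each thread opens its own connection, mirroring the multi-connection load of taosdemoMul
    conn = taos.connect(host=HOST, user="root", password="taosdata")
    cur = conn.cursor()
    for _ in range(ROUNDS):
        cur.execute("select count(*) from db2.stb0")
        cur.fetchall()
    cur.close()
    conn.close()

if __name__ == "__main__":
    workers = [threading.Thread(target=query_worker) for _ in range(THREADS)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()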
{
"filetype": "insert",
"cfgdir": "/etc/taos",
"host": "192.168.1.104",
"port": 6030,
"user": "root",
"password": "taosdata",
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 10,
"num_of_records_per_req": 1000,
"max_sql_len": 1024000,
"databases": [{
"dbinfo": {
"name": "db1",
"drop": "yes",
"replica": 1,
"days": 10,
"cache": 50,
"blocks": 8,
"precision": "ms",
"keep": 3650,
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
"update": 0
},
"super_tables": [{
"name": "stb0",
"child_table_exists":"no",
"childtable_count": 10,
"childtable_prefix": "stb00_",
"auto_create_table": "no",
"batch_create_tbl_num": 10,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 10000,
"childtable_limit": 0,
"childtable_offset":0,
"multi_thread_write_one_tbl": "no",
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":10}, {"type": "BINARY", "len": 16, "count":3}, {"type": "BINARY", "len": 32, "count":6}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":5}]
},
{
"name": "stb1",
"child_table_exists":"no",
"childtable_count": 20,
"childtable_prefix": "stb01_",
"auto_create_table": "no",
"batch_create_tbl_num": 10,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 20000,
"childtable_limit": 0,
"childtable_offset":0,
"multi_thread_write_one_tbl": "no",
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":10}, {"type": "BINARY", "len": 16, "count":3}, {"type": "BINARY", "len": 32, "count":6}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":5}]
}]
}]
}
{
"filetype": "insert",
"cfgdir": "/etc/taos",
"host": "192.168.1.104",
"port": 6030,
"user": "root",
"password": "taosdata",
"thread_count": 4,
"thread_count_create_tbl": 4,
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 10,
"num_of_records_per_req": 1000,
"max_sql_len": 1024000,
"databases": [{
"dbinfo": {
"name": "db2",
"drop": "yes",
"replica": 1,
"days": 10,
"cache": 50,
"blocks": 8,
"precision": "ms",
"keep": 3650,
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
"update": 0
},
"super_tables": [{
"name": "stb0",
"child_table_exists":"no",
"childtable_count": 10,
"childtable_prefix": "stb00_",
"auto_create_table": "no",
"batch_create_tbl_num": 10,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 10000,
"childtable_limit": 0,
"childtable_offset":0,
"multi_thread_write_one_tbl": "no",
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":10}, {"type": "BINARY", "len": 16, "count":3}, {"type": "BINARY", "len": 32, "count":6}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":5}]
},
{
"name": "stb1",
"child_table_exists":"no",
"childtable_count": 20,
"childtable_prefix": "stb01_",
"auto_create_table": "no",
"batch_create_tbl_num": 10,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 20000,
"childtable_limit": 0,
"childtable_offset":0,
"multi_thread_write_one_tbl": "no",
"interlace_rows": 0,
"insert_interval":0,
"max_sql_len": 1024000,
"disorder_ratio": 0,
"disorder_range": 1000,
"timestamp_step": 1,
"start_timestamp": "2020-10-01 00:00:00.000",
"sample_format": "csv",
"sample_file": "./sample.csv",
"tags_file": "",
"columns": [{"type": "INT"}, {"type": "DOUBLE", "count":10}, {"type": "BINARY", "len": 16, "count":3}, {"type": "BINARY", "len": 32, "count":6}],
"tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":5}]
}]
}]
}
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
self.rowNum = 100000
self.ts = 1537146000000
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root)-len("/build/bin")]
break
return buildPath
def run(self):
buildPath = self.getBuildPath()
if (buildPath == ""):
tdLog.exit("taosd not found!")
else:
tdLog.info("taosd found in %s" % buildPath)
binPath = buildPath+ "/build/bin/"
# insert data into the cluster's db
os.system("%staosdemo -f cluster/TD-3693/insert1Data.json -y " % binPath)
# open and close multiple connections while querying data
os.system("%staosdemo -f cluster/TD-3693/insert2Data.json -y " % binPath)
os.system("nohup %staosdemoMul -f cluster/TD-3693/queryCount.json -y & " % binPath)
# delete useless files
os.system("rm -rf ./insert_res.txt")
os.system("rm -rf ./querySystemInfo*")
os.system("rm -rf cluster/TD-3693/multClient.py.sql")
os.system("rm -rf ./querySystemInfo*")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
import threading
class TwoClients:
def initConnection(self):
self.host = "fct4"
self.user = "root"
self.password = "taosdata"
self.config = "/etc/taos/"
self.rowNum = 10
self.ts = 1537146000000
def run(self):
# query data from cluster'db
conn = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config)
cur = conn.cursor()
tdSql.init(cur, True)
tdSql.execute("use db2")
cur.execute("select count (tbname) from stb0")
tdSql.query("select count (tbname) from stb0")
tdSql.checkData(0, 0, 10)
tdSql.query("select count (tbname) from stb1")
tdSql.checkData(0, 0, 20)
tdSql.query("select count(*) from stb00_0")
tdSql.checkData(0, 0, 10000)
tdSql.query("select count(*) from stb0")
tdSql.checkData(0, 0, 100000)
tdSql.query("select count(*) from stb01_0")
tdSql.checkData(0, 0, 20000)
tdSql.query("select count(*) from stb1")
tdSql.checkData(0, 0, 400000)
tdSql.execute("drop table if exists squerytest")
tdSql.execute("drop table if exists querytest")
tdSql.execute('''create stable squerytest(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double, col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
tdSql.execute("create table querytest using squerytest tags('beijing')")
tdSql.execute("insert into querytest(ts) values(%d)" % (self.ts - 1))
for i in range(self.rowNum):
tdSql.execute("insert into querytest values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)" % (self.ts + i, i + 1, 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
for j in range(10):
tdSql.execute("use db2")
tdSql.query("select count(*),last(*) from querytest group by col1")
tdSql.checkRows(10)
tdSql.checkData(0, 0, 1)
tdSql.checkData(1, 2, 2)
tdSql.checkData(1, 3, 1)
sleep(88)
tdSql.execute("drop table if exists squerytest")
tdSql.execute("drop table if exists querytest")
clients = TwoClients()
clients.initConnection()
clients.run()
{
"filetype":"query",
"cfgdir": "/etc/taos",
"host": "192.168.1.104",
"port": 6030,
"user": "root",
"password": "taosdata",
"confirm_parameter_prompt": "no",
"databases": "db2",
"query_times": 1000000,
"specified_table_query":
{"query_interval":1, "concurrent":100,
"sqls": [{"sql": "select count(*) from db.stb0", "result": ""}]
}
}
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
import time
class ClusterTestcase:
## test case 32 ##
def run(self):
nodes = Nodes()
nodes.addConfigs("maxVgroupsPerDb", "10")
nodes.addConfigs("maxTablesPerVnode", "1000")
nodes.restartAllTaosd()
ctest = ClusterTest(nodes.node1.hostName)
ctest.connectDB()
ctest.createSTable(1)
ctest.run()
tdSql.init(ctest.conn.cursor(), False)
tdSql.execute("use %s" % ctest.dbName)
tdSql.query("show vgroups")
dnodes = []
for i in range(10):
dnodes.append(int(tdSql.getData(i, 4)))
s = set(dnodes)
if len(s) < 3:
tdLog.exit("cluster is not balanced")
tdLog.info("cluster is balanced")
nodes.removeConfigs("maxVgroupsPerDb", "10")
nodes.removeConfigs("maxTablesPerVnode", "1000")
nodes.restartAllTaosd()
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
ct = ClusterTestcase()
ct.run()
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
class ClusterTestcase:
## test case 1, 33 ##
def run(self):
nodes = Nodes()
ctest = ClusterTest(nodes.node1.hostName)
ctest.connectDB()
tdSql.init(ctest.conn.cursor(), False)
## Test case 1 ##
tdLog.info("Test case 1 repeat %d times" % ctest.repeat)
for i in range(ctest.repeat):
tdLog.info("Start Round %d" % (i + 1))
replica = random.randint(1,3)
ctest.createSTable(replica)
ctest.run()
tdLog.sleep(10)
tdSql.query("select count(*) from %s.%s" %(ctest.dbName, ctest.stbName))
tdSql.checkData(0, 0, ctest.numberOfRecords * ctest.numberOfTables)
tdLog.info("Round %d completed" % (i + 1))
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
ct = ClusterTestcase()
ct.run()
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
class ClusterTestcase:
## test case 7, ##
def run(self):
nodes = Nodes()
ctest = ClusterTest(nodes.node1.hostName)
ctest.connectDB()
tdSql.init(ctest.conn.cursor(), False)
tdSql.execute("use %s" % ctest.dbName)
tdSql.query("show vgroups")
for i in range(10):
tdSql.checkData(i, 5, "master")
tdSql.execute("alter database %s replica 2" % ctest.dbName)
tdLog.sleep(30)
tdSql.query("show vgroups")
for i in range(10):
tdSql.checkData(i, 5, "master")
tdSql.checkData(i, 7, "slave")
tdSql.execute("alter database %s replica 3" % ctest.dbName)
tdLog.sleep(30)
tdSql.query("show vgroups")
for i in range(10):
tdSql.checkData(i, 5, "master")
tdSql.checkData(i, 7, "slave")
tdSql.checkData(i, 9, "slave")
ct = ClusterTestcase()
ct.run()
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
from fabric import Connection
import random
import time
import logging
class Node:
def __init__(self, index, username, hostIP, hostName, password, homeDir):
self.index = index
self.username = username
self.hostIP = hostIP
self.hostName = hostName
self.homeDir = homeDir
self.corePath = '/coredump'
self.conn = Connection("{}@{}".format(username, hostName), connect_kwargs={"password": "{}".format(password)})
    def buildTaosd(self):
        try:
            # fabric's Connection.cd() is a context manager; calling it as a plain
            # statement has no effect, so wrap each directory change around the
            # commands it is meant to scope.
            with self.conn.cd("/root/TDinternal/community"):
                self.conn.run("git checkout develop")
                self.conn.run("git pull")
            with self.conn.cd("/root/TDinternal"):
                self.conn.run("git checkout develop")
                self.conn.run("git pull")
            with self.conn.cd("/root/TDinternal/debug"):
                self.conn.run("cmake ..")
                self.conn.run("make")
                self.conn.run("make install")
        except Exception as e:
            print("Build Taosd error for node %d " % self.index)
            logging.exception(e)
def startTaosd(self):
try:
self.conn.run("sudo systemctl start taosd")
except Exception as e:
print("Start Taosd error for node %d " % self.index)
logging.exception(e)
def stopTaosd(self):
try:
self.conn.run("sudo systemctl stop taosd")
except Exception as e:
print("Stop Taosd error for node %d " % self.index)
logging.exception(e)
def restartTaosd(self):
try:
self.conn.run("sudo systemctl restart taosd")
except Exception as e:
            print("Restart Taosd error for node %d " % self.index)
logging.exception(e)
def removeTaosd(self):
try:
self.conn.run("rmtaos")
except Exception as e:
print("remove taosd error for node %d " % self.index)
logging.exception(e)
def forceStopOneTaosd(self):
try:
self.conn.run("kill -9 $(ps -ax|grep taosd|awk '{print $1}')")
except Exception as e:
print("kill taosd error on node%d " % self.index)
def startOneTaosd(self):
try:
self.conn.run("nohup taosd -c /etc/taos/ > /dev/null 2>&1 &")
except Exception as e:
print("start taosd error on node%d " % self.index)
logging.exception(e)
def installTaosd(self, packagePath):
self.conn.put(packagePath, self.homeDir)
self.conn.cd(self.homeDir)
self.conn.run("tar -zxf $(basename '%s')" % packagePath)
with self.conn.cd("TDengine-enterprise-server"):
self.conn.run("yes|./install.sh")
def configTaosd(self, taosConfigKey, taosConfigValue):
self.conn.run("sudo echo '%s %s' >> %s" % (taosConfigKey, taosConfigValue, "/etc/taos/taos.cfg"))
def removeTaosConfig(self, taosConfigKey, taosConfigValue):
        self.conn.run("sudo sed -i -e '/%s %s/d' %s" % (taosConfigKey, taosConfigValue, "/etc/taos/taos.cfg"))
def configHosts(self, ip, name):
self.conn.run("echo '%s %s' >> %s" % (ip, name, '/etc/hosts'))
def removeData(self):
try:
self.conn.run("sudo rm -rf /var/lib/taos/*")
except Exception as e:
print("remove taosd data error for node %d " % self.index)
logging.exception(e)
def removeLog(self):
try:
self.conn.run("sudo rm -rf /var/log/taos/*")
except Exception as e:
            print("remove taosd log error for node %d " % self.index)
logging.exception(e)
def removeDataForMnode(self):
try:
self.conn.run("sudo rm -rf /var/lib/taos/*")
except Exception as e:
            print("remove mnode data error for node %d " % self.index)
logging.exception(e)
def removeDataForVnode(self, id):
try:
self.conn.run("sudo rm -rf /var/lib/taos/vnode%d/*.data" % id)
except Exception as e:
            print("remove vnode data error for node %d " % self.index)
logging.exception(e)
def detectCoredumpFile(self):
try:
result = self.conn.run("find /coredump -name 'core_*' ", hide=True)
output = result.stdout
print("output: %s" % output)
return output
except Exception as e:
print("find coredump file error on node %d " % self.index)
logging.exception(e)
class Nodes:
def __init__(self):
self.tdnodes = []
self.tdnodes.append(Node(0, 'root', '192.168.17.194', 'taosdata', 'r', '/root/'))
# self.tdnodes.append(Node(1, 'root', '52.250.48.222', 'node2', 'a', '/root/'))
# self.tdnodes.append(Node(2, 'root', '51.141.167.23', 'node3', 'a', '/root/'))
# self.tdnodes.append(Node(3, 'root', '52.247.207.173', 'node4', 'a', '/root/'))
# self.tdnodes.append(Node(4, 'root', '51.141.166.100', 'node5', 'a', '/root/'))
def stopOneNode(self, index):
self.tdnodes[index].stopTaosd()
self.tdnodes[index].forceStopOneTaosd()
def startOneNode(self, index):
self.tdnodes[index].startOneTaosd()
def detectCoredumpFile(self, index):
return self.tdnodes[index].detectCoredumpFile()
def stopAllTaosd(self):
for i in range(len(self.tdnodes)):
self.tdnodes[i].stopTaosd()
def startAllTaosd(self):
for i in range(len(self.tdnodes)):
self.tdnodes[i].startTaosd()
def restartAllTaosd(self):
for i in range(len(self.tdnodes)):
self.tdnodes[i].restartTaosd()
def addConfigs(self, configKey, configValue):
for i in range(len(self.tdnodes)):
self.tdnodes[i].configTaosd(configKey, configValue)
def removeConfigs(self, configKey, configValue):
for i in range(len(self.tdnodes)):
self.tdnodes[i].removeTaosConfig(configKey, configValue)
def removeAllDataFiles(self):
for i in range(len(self.tdnodes)):
self.tdnodes[i].removeData()
class Test:
def __init__(self):
self.nodes = Nodes()
# kill taosd randomly every 10 mins
def randomlyKillDnode(self):
loop = 0
while True:
            index = random.randint(0, len(self.nodes.tdnodes) - 1)
print("loop: %d, kill taosd on node%d" %(loop, index))
self.nodes.stopOneNode(index)
time.sleep(60)
self.nodes.startOneNode(index)
time.sleep(600)
loop = loop + 1
def detectCoredump(self):
loop = 0
while True:
for i in range(len(self.nodes.tdnodes)):
result = self.nodes.detectCoredumpFile(i)
print("core file path is %s" % result)
if result and not result.isspace():
self.nodes.stopAllTaosd()
print("sleep for 10 mins")
time.sleep(600)
test = Test()
test.detectCoredump()
\ No newline at end of file
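For orientation, the cluster test cases that follow all drive the Nodes helper defined above. A minimal usage sketch, assuming this module is importable as clusterSetup (as the test cases do) and that the node list in Nodes.__init__ has been filled in with reachable hosts; it uses only methods shown in this file:

from clusterSetup import Nodes

nodes = Nodes()
# Push a config override to every dnode, restart the cluster, then revert it;
# this add/restart/remove pattern is exactly what the test cases below use.
nodes.addConfigs("offlineThreshold", "10")
nodes.restartAllTaosd()
nodes.removeConfigs("offlineThreshold", "10")
nodes.restartAllTaosd()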
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
class ClusterTestcase:
## test case 20, 21, 22 ##
def run(self):
nodes = Nodes()
ctest = ClusterTest(nodes.node1.hostName)
ctest.connectDB()
ctest.createSTable(3)
ctest.run()
tdSql.init(ctest.conn.cursor(), False)
nodes.node2.stopTaosd()
tdSql.execute("use %s" % ctest.dbName)
tdSql.query("show vgroups")
vnodeID = tdSql.getData(0, 0)
nodes.node2.removeDataForVnode(vnodeID)
nodes.node2.startTaosd()
# Wait for vnode file to recover
for i in range(10):
tdSql.query("select count(*) from t0")
tdLog.sleep(10)
for i in range(10):
tdSql.query("select count(*) from t0")
tdSql.checkData(0, 0, 1000)
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
ct = ClusterTestcase()
ct.run()
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
class ClusterTestcase:
##Cover test case 5 ##
def run(self):
# cluster environment set up
nodes = Nodes()
nodes.addConfigs("maxVgroupsPerDb", "10")
nodes.addConfigs("maxTablesPerVnode", "1000")
nodes.restartAllTaosd()
ctest = ClusterTest(nodes.node1.hostName)
ctest.connectDB()
ctest.createSTable(1)
ctest.run()
tdSql.init(ctest.conn.cursor(), False)
tdSql.execute("use %s" % ctest.dbName)
tdSql.error("create table tt1 using %s tags(1)" % ctest.stbName)
nodes.removeConfigs("maxVgroupsPerDb", "10")
nodes.removeConfigs("maxTablesPerVnode", "1000")
nodes.restartAllTaosd()
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
ct = ClusterTestcase()
ct.run()
\ No newline at end of file
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
class ClusterTestcase:
## test case 7, 10 ##
def run(self):
# cluster environment set up
tdLog.info("Test case 7, 10")
nodes = Nodes()
ctest = ClusterTest(nodes.node1.hostName)
ctest.connectDB()
tdSql.init(ctest.conn.cursor(), False)
nodes.node1.stopTaosd()
tdSql.query("show dnodes")
tdSql.checkRows(3)
tdSql.checkData(0, 4, "offline")
tdSql.checkData(1, 4, "ready")
tdSql.checkData(2, 4, "ready")
        nodes.node1.startTaosd()
        tdLog.sleep(10)
        tdSql.query("show dnodes")
        tdSql.checkRows(3)
tdSql.checkData(0, 4, "ready")
tdSql.checkData(1, 4, "ready")
tdSql.checkData(2, 4, "ready")
nodes.node2.stopTaosd()
tdSql.query("show dnodes")
tdSql.checkRows(3)
tdSql.checkData(0, 4, "ready")
tdSql.checkData(1, 4, "offline")
tdSql.checkData(2, 4, "ready")
        nodes.node2.startTaosd()
        tdLog.sleep(10)
        tdSql.query("show dnodes")
        tdSql.checkRows(3)
tdSql.checkData(0, 4, "ready")
tdSql.checkData(1, 4, "ready")
tdSql.checkData(2, 4, "ready")
nodes.node3.stopTaosd()
tdSql.query("show dnodes")
tdSql.checkRows(3)
tdSql.checkData(0, 4, "ready")
tdSql.checkData(1, 4, "ready")
tdSql.checkData(2, 4, "offline")
        nodes.node3.startTaosd()
        tdLog.sleep(10)
        tdSql.query("show dnodes")
        tdSql.checkRows(3)
tdSql.checkData(0, 4, "ready")
tdSql.checkData(1, 4, "ready")
tdSql.checkData(2, 4, "ready")
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
ct = ClusterTestcase()
ct.run()
\ No newline at end of file
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
class ClusterTestcase:
## cover test case 6, 8, 9, 11 ##
def run(self):
# cluster environment set up
nodes = Nodes()
ctest = ClusterTest(nodes.node1.hostName)
ctest.connectDB()
tdSql.init(ctest.conn.cursor(), False)
nodes.addConfigs("offlineThreshold", "10")
nodes.removeAllDataFiles()
nodes.restartAllTaosd()
nodes.node3.stopTaosd()
tdLog.sleep(10)
tdSql.query("show dnodes")
tdSql.checkRows(3)
tdSql.checkData(2, 4, "offline")
        tdLog.sleep(60)
        tdSql.query("show dnodes")
        tdSql.checkRows(3)
        tdSql.checkData(2, 4, "dropping")
        tdLog.sleep(300)
        tdSql.query("show dnodes")
        tdSql.checkRows(2)
nodes.removeConfigs("offlineThreshold", "10")
nodes.restartAllTaosd()
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
ct = ClusterTestcase()
ct.run()
\ No newline at end of file
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
class ClusterTestcase:
## test case 28, 29, 30, 31 ##
def run(self):
nodes = Nodes()
ctest = ClusterTest(nodes.node1.hostName)
ctest.connectDB()
ctest.createSTable(3)
ctest.run()
tdSql.init(ctest.conn.cursor(), False)
tdSql.execute("use %s" % ctest.dbName)
nodes.node2.stopTaosd()
for i in range(100):
tdSql.execute("drop table t%d" % i)
nodes.node2.startTaosd()
tdSql.query("show tables")
tdSql.checkRows(9900)
nodes.node2.stopTaosd()
for i in range(10):
tdSql.execute("create table a%d using meters tags(2)" % i)
nodes.node2.startTaosd()
tdSql.query("show tables")
tdSql.checkRows(9910)
nodes.node2.stopTaosd()
        tdSql.execute("alter table meters add column col6 int")
nodes.node2.startTaosd()
nodes.node2.stopTaosd()
tdSql.execute("drop database %s" % ctest.dbName)
nodes.node2.startTaosd()
tdSql.query("show databases")
tdSql.checkRows(0)
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
ct = ClusterTestcase()
ct.run()
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
import time
class ClusterTestcase:
## test case 32 ##
def run(self):
nodes = Nodes()
ctest = ClusterTest(nodes.node1.hostName)
ctest.connectDB()
ctest.createSTable(1)
ctest.run()
tdSql.init(ctest.conn.cursor(), False)
tdSql.execute("use %s" % ctest.dbName)
totalTime = 0
for i in range(10):
startTime = time.time()
tdSql.query("select * from %s" % ctest.stbName)
totalTime += time.time() - startTime
        print("replica 1: average query time for %d records: %f seconds" % (ctest.numberOfTables * ctest.numberOfRecords, totalTime / 10))
tdSql.execute("alter database %s replica 3" % ctest.dbName)
tdLog.sleep(60)
totalTime = 0
for i in range(10):
startTime = time.time()
tdSql.query("select * from %s" % ctest.stbName)
totalTime += time.time() - startTime
        print("replica 3: average query time for %d records: %f seconds" % (ctest.numberOfTables * ctest.numberOfRecords, totalTime / 10))
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
ct = ClusterTestcase()
ct.run()
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
class ClusterTestcase:
## test case 19 ##
def run(self):
nodes = Nodes()
ctest = ClusterTest(nodes.node1.hostName)
        ctest.connectDB()
        tdSql.init(ctest.conn.cursor(), False)
tdSql.query("show databases")
        count = tdSql.queryRows
nodes.stopAllTaosd()
nodes.node1.startTaosd()
tdSql.error("show databases")
nodes.node2.startTaosd()
tdSql.error("show databases")
nodes.node3.startTaosd()
tdLog.sleep(10)
tdSql.query("show databases")
tdSql.checkRows(count)
ct = ClusterTestcase()
ct.run()
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
class ClusterTestcase:
## test case 17, 18 ##
def run(self):
nodes = Nodes()
ctest = ClusterTest(nodes.node1.hostName)
ctest.connectDB()
ctest.createSTable(1)
ctest.run()
tdSql.init(ctest.conn.cursor(), False)
tdSql.query("show databases")
        count = tdSql.queryRows
tdSql.execute("use %s" % ctest.dbName)
tdSql.execute("alter database %s replica 3" % ctest.dbName)
nodes.node2.stopTaosd()
nodes.node3.stopTaosd()
tdSql.error("show databases")
nodes.node2.startTaosd()
tdSql.error("show databases")
nodes.node3.startTaosd()
tdSql.query("show databases")
tdSql.checkRows(count)
ct = ClusterTestcase()
ct.run()
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
from clusterSetup import *
from util.sql import tdSql
from util.log import tdLog
import random
class ClusterTestcase:
## test case 24, 25, 26, 27 ##
def run(self):
nodes = Nodes()
ctest = ClusterTest(nodes.node1.hostName)
ctest.connectDB()
ctest.createSTable(1)
ctest.run()
tdSql.init(ctest.conn.cursor(), False)
tdSql.execute("use %s" % ctest.dbName)
tdSql.execute("alter database %s replica 3" % ctest.dbName)
for i in range(100):
tdSql.execute("drop table t%d" % i)
for i in range(100):
tdSql.execute("create table a%d using meters tags(1)" % i)
        tdSql.execute("alter table meters add column col5 int")
        tdSql.execute("alter table meters drop column col5")
tdSql.execute("drop database %s" % ctest.dbName)
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
ct = ClusterTestcase()
ct.run()
python3 basicTest.py
python3 bananceTest.py
python3 changeReplicaTest.py
python3 dataFileRecoveryTest.py
python3 fullDnodesTest.py
python3 killAndRestartDnodesTest.py
python3 offlineThresholdTest.py
python3 oneReplicaOfflineTest.py
python3 queryTimeTest.py
python3 stopAllDnodesTest.py
python3 stopTwoDnodesTest.py
python3 syncingTest.py
\ No newline at end of file
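The twelve invocations above are meant to run as a batch. As an illustrative sketch only (not a file in this change set), a small Python wrapper could run the same list in order and stop at the first failing case:

import subprocess
import sys

# Case list copied from the script above; the files are assumed to live in the
# current directory and to exit non-zero on failure.
CASES = [
    "basicTest.py", "bananceTest.py", "changeReplicaTest.py",
    "dataFileRecoveryTest.py", "fullDnodesTest.py", "killAndRestartDnodesTest.py",
    "offlineThresholdTest.py", "oneReplicaOfflineTest.py", "queryTimeTest.py",
    "stopAllDnodesTest.py", "stopTwoDnodesTest.py", "syncingTest.py",
]

for case in CASES:
    print("running %s" % case)
    ret = subprocess.call([sys.executable, case])
    if ret != 0:
        sys.exit("%s failed with exit code %d" % (case, ret))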
#!/bin/bash
# This is the script for us to try to cause the TDengine server or client to crash
#
# PREPARATION
#
# 1. Build and compile the TDengine source code that comes with this script, in the same directory tree
# 2. Please follow the direction in our README.md, and build TDengine in the build/ directory
# 3. Adjust the configuration file if needed under build/test/cfg/taos.cfg
# 4. Run the TDengine server instance: cd build; ./build/bin/taosd -c test/cfg
# 5. Make sure you have a working Python3 environment: run /usr/bin/python3 --version, and you should get 3.6 or above
# 6. Make sure you have the proper Python packages: # sudo apt install python3-setuptools python3-pip python3-distutils
#
# RUNNING THIS SCRIPT
#
# This script assumes the source code directory is intact, and that the binaries have been built in the
# build/ directory. As such, we will load the Python libraries in the directory tree, and also load
# the TDengine client shared library (.so) file from the build/ directory, as evidenced in the env
# variables below.
#
# Running the script is simple, no parameter is needed (for now, but will change in the future).
#
# Happy Crashing...
# Due to the heavy path name assumptions/usage below, we require that this script be run from its own directory
EXEC_DIR=`dirname "$0"`
if [[ $EXEC_DIR != "." ]]
then
echo "ERROR: Please execute `basename "$0"` in its own directory (for now anyway, pardon the dust)"
exit -1
fi
CURR_DIR=`pwd`
IN_TDINTERNAL="community"
if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
TAOS_DIR=$CURR_DIR/../../..
TAOSD_DIR=`find $TAOS_DIR -name "taosd"|grep bin|head -n1`
LIB_DIR=`echo $TAOSD_DIR|rev|cut -d '/' -f 3,4,5,6,7|rev`/lib
else
TAOS_DIR=$CURR_DIR/../..
TAOSD_DIR=`find $TAOS_DIR -name "taosd"|grep bin|head -n1`
LIB_DIR=`echo $TAOSD_DIR|rev|cut -d '/' -f 3,4,5,6|rev`/lib
fi
# Now getting ready to execute Python
# The following is the default of our standard dev env (Ubuntu 20.04), modify/adjust at your own risk
PYTHON_EXEC=python3.8
# First we need to set up a path for Python to find our own TAOS modules, so that "import" can work.
export PYTHONPATH=$(pwd)/../../src/connector/python:$(pwd)
# Then let us set up the library path so that our compiled SO file can be loaded by Python
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$LIB_DIR
# Now we are all set, and let's see if we can find a crash. Note we pass all params
CONCURRENT_INQUIRY=concurrent_inquiry.py
if [[ $1 == '--valgrind' ]]; then
shift
export PYTHONMALLOC=malloc
VALGRIND_OUT=valgrind.out
VALGRIND_ERR=valgrind.err
# How to generate valgrind suppression file: https://stackoverflow.com/questions/17159578/generating-suppressions-for-memory-leaks
# valgrind --leak-check=full --gen-suppressions=all --log-fd=9 python3.8 ./concurrent_inquiry.py $@ 9>>memcheck.log
echo Executing under VALGRIND, with STDOUT/ERR going to $VALGRIND_OUT and $VALGRIND_ERR, please watch them from a different terminal.
valgrind \
--leak-check=yes \
--suppressions=crash_gen/valgrind_taos.supp \
$PYTHON_EXEC \
$CONCURRENT_INQUIRY $@ > $VALGRIND_OUT 2> $VALGRIND_ERR
elif [[ $1 == '--helgrind' ]]; then
shift
HELGRIND_OUT=helgrind.out
HELGRIND_ERR=helgrind.err
valgrind \
--tool=helgrind \
$PYTHON_EXEC \
$CONCURRENT_INQUIRY $@ > $HELGRIND_OUT 2> $HELGRIND_ERR
else
$PYTHON_EXEC $CONCURRENT_INQUIRY $@
fi
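The PREPARATION notes and the two exports above boil down to a few environment assumptions: Python 3.6+, the in-tree connector on the module path, and the TDengine client shared library on LD_LIBRARY_PATH. A minimal sketch for checking them from Python, reusing the same relative connector path the script assumes:

import os
import sys

# Mirror the PYTHONPATH export above: the in-tree connector lives two levels
# up from this directory (an assumption taken from the script, not discovered).
sys.path.insert(0, os.path.join(os.getcwd(), "../../src/connector/python"))

assert sys.version_info >= (3, 6), "Python 3.6 or above is required"

try:
    import taos  # needs the TDengine client shared library on LD_LIBRARY_PATH
    print("taos connector import OK")
except ImportError as err:
    print("TDengine Python connector not importable: %s" % err)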
# Helpful Ref: https://stackoverflow.com/questions/24100558/how-can-i-split-a-module-into-multiple-files-without-breaking-a-backwards-compa/24100645
from crash_gen.service_manager import ServiceManager, TdeInstance, TdeSubProcess
from typing import Any, BinaryIO, List, Dict, NewType
from enum import Enum
DirPath = NewType('DirPath', str)
QueryResult = NewType('QueryResult', List[List[Any]])
class TdDataType(Enum):
'''
    Use a Python Enum type to represent all the data types in TDengine.
Ref: https://www.taosdata.com/cn/documentation/taos-sql#data-type
'''
TIMESTAMP = 'TIMESTAMP'
INT = 'INT'
BIGINT = 'BIGINT'
FLOAT = 'FLOAT'
DOUBLE = 'DOUBLE'
BINARY = 'BINARY'
BINARY16 = 'BINARY(16)' # TODO: get rid of this hack
BINARY200 = 'BINARY(200)'
SMALLINT = 'SMALLINT'
TINYINT = 'TINYINT'
BOOL = 'BOOL'
NCHAR = 'NCHAR'
TdColumns = Dict[str, TdDataType]
TdTags = Dict[str, TdDataType]
IpcStream = NewType('IpcStream', BinaryIO)
\ No newline at end of file
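The aliases above (TdColumns, TdTags) describe table schemas as plain dictionaries. A small illustrative sketch of how they might be used to assemble a super-table definition; the build_create_stable helper is hypothetical and not part of the crash_gen code, and the import path assumes the module is importable as crash_gen.types:

from crash_gen.types import TdColumns, TdDataType, TdTags

def build_create_stable(name: str, cols: TdColumns, tags: TdTags) -> str:
    # Render "name TYPE" pairs from the column and tag dictionaries.
    col_sql = ", ".join("%s %s" % (c, t.value) for c, t in cols.items())
    tag_sql = ", ".join("%s %s" % (c, t.value) for c, t in tags.items())
    return "CREATE TABLE %s (%s) TAGS (%s)" % (name, col_sql, tag_sql)

cols: TdColumns = {"ts": TdDataType.TIMESTAMP, "speed": TdDataType.INT}
tags: TdTags = {"location": TdDataType.BINARY16}
print(build_create_stable("meters", cols, tags))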