+
+
+
+
+TDengine 社区致力于让更多的开发者理解和使用它。
+请填写**贡献者提交表**以选择您想收到的礼物。
+
+- [贡献者提交表](https://page.ma.scrmtech.com/form/index?pf_uid=27715_2095&id=12100)
+
+## 联系我们
+
+如果您有什么问题需要解决,或者有什么问题需要解答,可以添加微信:TDengineECO
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 3b1a66839d3d4779f00090a84e6895bd0d660d0d..5be84bec3483ac2f79f43941465df3b50047e661 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,15 +1,64 @@
# Contributing
-We appreciate contributions from all developers. Feel free to follow us, fork the repository, report bugs and even submit your code on GitHub. However, we would like developers to follow our guides to contribute for better corporation.
+We appreciate contributions from all developers. Feel free to follow us, fork the repository, report bugs, and even submit your code on GitHub. However, we would like developers to follow the guidelines in this document to ensure effective cooperation.
-## Report bugs
+## Reporting a bug
-Any users can report bugs to us through the [github issue tracker](https://github.com/taosdata/TDengine/issues). We appreciate a detailed description of the problem you met. It is better to provide the detailed steps on reproducing the bug. Otherwise, an appendix with log files generated by the bug is welcome.
+- Any user can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. We would appreciate it if you could provide **a detailed description** of the problem you encountered, including the steps to reproduce it.
-## Read the contributor license agreement
+- Attaching the log files generated when the bug occurred is greatly appreciated.
-It is required to agree the Contributor Licence Agreement(CLA) before a user submitting his/her code patch. Follow the [TaosData CLA](https://www.taosdata.com/en/contributor/) link to read through the agreement.
+## Guidelines for committing code
-## Submit your code
+- You must agree to the **Contributor License Agreement (CLA)** before submitting your code patch. Follow the **[TAOSData CLA](https://cla-assistant.io/taosdata/TDengine)** link to read through and sign the agreement. If you do not accept the agreement, your contributions cannot be accepted.
-Before submitting your code, make sure to [read the contributor license agreement](#read-the-contributor-license-agreement) beforehand. If you don't accept the aggreement, please stop submitting. Your submission means you have accepted the agreement. Your submission should solve an issue or add a feature registered in the [github issue tracker](https://github.com/taosdata/TDengine/issues). If no corresponding issue or feature is found in the issue tracker, please create one. When submitting your code to our repository, please create a pull request with the issue number included.
+- Please solve an issue or add a feature registered in the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**.
+- If no corresponding issue or feature is found in the issue tracker, please **create one**.
+- When submitting your code to our repository, please create a pull request with the **issue number** included.
+
+## Guidelines for communicating
+
+1. Please be **nice and polite** in your descriptions.
+2. **In general, prefer the active voice over the passive voice.** The active voice highlights who performs the action, whereas the passive voice highlights the recipient of the action.
+3. Documentation writing advice:
+
+- Spell the product name "TDengine" correctly. "TD" is written in capital letters, and there is no space between "TD" and "engine" (**Correct spelling: TDengine**).
+- Please **capitalize the first letter** of every sentence.
+- Leave **only one space** after periods or other punctuation marks.
+- Use **American spelling**.
+- When possible, **use the second person** rather than the first person (e.g. “You are recommended to use a reverse proxy such as Nginx.” rather than “We recommend using a reverse proxy such as Nginx.”).
+
+4. Use **simple sentences** rather than complex sentences.
+
+## Gifts for the contributors
+
+As long as you contribute to TDengine, whether it is code that fixes a bug or implements a feature request, or a documentation change, **you are eligible for a very special Contributor Souvenir Gift!**
+
+**You can choose one of the following gifts:**
+
+
+
+
+
+
+The TDengine community is committed to making TDengine accepted and used by more developers.
+
+Just fill out the **Contributor Submission Form** to choose your desired gift.
+
+- [Contributor Submission Form](https://page.ma.scrmtech.com/form/index?pf_uid=27715_2095&id=12100)
+
+## Contact us
+
+If you have any problems or questions that need help from us, please feel free to add our WeChat account: TDengineECO.
diff --git a/Jenkinsfile2 b/Jenkinsfile2
index 12e806c87a8ea7e9c446a03d543a8561b3740a55..98d7a5a7312ff966faec5da9c898fa006f0f4ebf 100644
--- a/Jenkinsfile2
+++ b/Jenkinsfile2
@@ -1,6 +1,7 @@
import hudson.model.Result
import hudson.model.*;
import jenkins.model.CauseOfInterruption
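+// Global flag set by check_docs(): 1 means the PR only touches documentation, so the test stages can be skipped.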
+docs_only=0
node {
}
@@ -29,6 +30,48 @@ def abort_previous(){
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
}
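+// check_docs() fetches the PR, diffs it against the merge base of the target branch, and
+// sets docs_only=1 when every changed file is under docs/en/ or docs/zh/.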
+def check_docs() {
+ if (env.CHANGE_URL =~ /\/TDengine\//) {
+ sh '''
+ hostname
+ date
+ env
+ '''
+ sh '''
+ cd ${WKC}
+ git reset --hard
+ git clean -fxd
+ rm -rf examples/rust/
+ git remote prune origin
+ git fetch
+ '''
+ script {
+ sh '''
+ cd ${WKC}
+ git checkout ''' + env.CHANGE_TARGET + '''
+ '''
+ }
+ sh '''
+ cd ${WKC}
+ git pull >/dev/null
+ git fetch origin +refs/pull/${CHANGE_ID}/merge
+ git checkout -qf FETCH_HEAD
+ '''
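+        // Compare the PR head (FETCH_HEAD) with its merge base on the target branch and drop
+        // anything under docs/en/ or docs/zh/; an empty result means a docs-only PR.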
+ def file_changed = sh (
+ script: '''
+ cd ${WKC}
+ git --no-pager diff --name-only FETCH_HEAD `git merge-base FETCH_HEAD ${CHANGE_TARGET}`|grep -v "^docs/en/"|grep -v "^docs/zh/"
+ ''',
+ returnStdout: true
+ ).trim()
+ if (file_changed == '') {
+ echo "docs PR"
+ docs_only=1
+ } else {
+ echo file_changed
+ }
+ }
+}
def pre_test(){
sh '''
hostname
@@ -307,10 +350,27 @@ pipeline {
WKPY = '/var/lib/jenkins/workspace/taos-connector-python'
}
stages {
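+        // The 'check' stage runs first to decide whether this is a docs-only PR; the
+        // 'run test' stage below is skipped when docs_only == 1.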
+ stage('check') {
+ when {
+ allOf {
+ not { expression { env.CHANGE_BRANCH =~ /docs\// }}
+ not { expression { env.CHANGE_URL =~ /\/TDinternal\// }}
+ }
+ }
+ parallel {
+ stage('check docs') {
+ agent{label " worker03 || slave215 || slave217 || slave219 || Mac_catalina "}
+ steps {
+ check_docs()
+ }
+ }
+ }
+ }
stage('run test') {
when {
allOf {
not { expression { env.CHANGE_BRANCH =~ /docs\// }}
+ expression { docs_only == 0 }
}
}
parallel {
diff --git a/README.md b/README.md
index 6baabed7be32fff97c4809f76666f0becf62040b..7f03089abd99c7da7b5879f61913ee7f28314dad 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,6 @@
[](https://coveralls.io/github/taosdata/TDengine?branch=develop)
[](https://bestpractices.coreinfrastructure.org/projects/4201)
-
English | [简体中文](README-CN.md) | We are hiring, check [here](https://tdengine.com/careers)
# What is TDengine?
@@ -42,7 +41,7 @@ For user manual, system design and architecture, please refer to [TDengine Docum
At the moment, TDengine server supports running on Linux, Windows systems.Any OS application can also choose the RESTful interface of taosAdapter to connect the taosd service . TDengine supports X64/ARM64 CPU , and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
-You can choose to install through source code according to your needs, [container](https://docs.taosdata.com/get-started/docker/), [installation package](https://docs.taosdata.com/get-started/package/) or [Kubenetes](https://docs.taosdata.com/deployment/k8s/) to install. This quick guide only applies to installing from source.
+According to your needs, you can choose to install TDengine from source code, from a [container](https://docs.taosdata.com/get-started/docker/), from an [installation package](https://docs.taosdata.com/get-started/package/), or with [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide only applies to installing from source.
TDengine provide a few useful tools such as taosBenchmark (was named taosdemo) and taosdump. They were part of TDengine. By default, TDengine compiling does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to make them be compiled with TDengine.
@@ -58,7 +57,6 @@ sudo apt-get install -y gcc cmake build-essential git libssl-dev
#### Install build dependencies for taosTools
-
To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
```bash
@@ -82,14 +80,13 @@ sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
#### Install build dependencies for taosTools on CentOS
-
#### CentOS 7.9
```
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
```
-#### CentOS 8/Rocky Linux
+#### CentOS 8/Rocky Linux
```
sudo yum install -y epel-release
@@ -100,14 +97,14 @@ sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgco
Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), it leads a cmake prompt libsnappy not found. But snappy still works well.
-If the powertools installation fails, you can try to use:
+If the PowerTools installation fails, you can try to use:
+
```
-sudo yum config-manager --set-enabled Powertools
+sudo yum config-manager --set-enabled powertools
```
### Setup golang environment
-
TDengine includes a few components like taosAdapter developed by Go language. Please refer to golang.org official documentation for golang environment setup.
Please use version 1.14+. For the user in China, we recommend using a proxy to accelerate package downloading.
@@ -125,7 +122,7 @@ cmake .. -DBUILD_HTTP=false
### Setup rust environment
-TDengine includes a few compoments developed by Rust language. Please refer to rust-lang.org official documentation for rust environment setup.
+TDengine includes a few components developed in the Rust language. Please refer to the official rust-lang.org documentation for Rust environment setup.
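+
+For example, on Linux the Rust toolchain can typically be installed via rustup (command taken from the rustup project; please verify against the official instructions):
+
+```bash
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+```
+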
## Get the source codes
@@ -136,7 +133,6 @@ git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
-
You can modify the file ~/.gitconfig to use ssh protocol instead of https for better download speed. You will need to upload ssh public key to GitHub first. Please refer to GitHub official documentation for detail.
```
@@ -146,14 +142,12 @@ You can modify the file ~/.gitconfig to use ssh protocol instead of https for be
## Special Note
-
[JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go),[Python Connector](https://github.com/taosdata/taos-connector-python),[Node.js Connector](https://github.com/taosdata/taos-connector-node),[C# Connector](https://github.com/taosdata/taos-connector-dotnet) ,[Rust Connector](https://github.com/taosdata/taos-connector-rust) and [Grafana plugin](https://github.com/taosdata/grafanaplugin) has been moved to standalone repository.
## Build TDengine
### On Linux platform
-
You can run the bash script `build.sh` to build both TDengine and taosTools including taosBenchmark and taosdump as below:
```bash
@@ -169,7 +163,6 @@ cmake .. -DBUILD_TOOLS=true
make
```
-
You can use Jemalloc as memory allocator instead of glibc:
```
@@ -237,7 +230,7 @@ After building successfully, TDengine can be installed by
sudo make install
```
-Users can find more information about directories installed on the system in the [directory and files](https://docs.taosdata.com/reference/directory/) section.
+Users can find more information about directories installed on the system in the [directory and files](https://docs.taosdata.com/reference/directory/) section.
Installing from source code will also configure service management for TDengine.Users can also choose to [install from packages](https://docs.taosdata.com/get-started/package/) for it.
@@ -309,7 +302,7 @@ Query OK, 2 row(s) in set (0.001700s)
## Official Connectors
-TDengine provides abundant developing tools for users to develop on TDengine. include C/C++、Java、Python、Go、Node.js、C# 、RESTful ,Follow the links below to find your desired connectors and relevant documentation.
+TDengine provides a rich set of connectors for users to develop applications on TDengine. Follow the links below to find your desired connectors and relevant documentation.
- [Java](https://docs.taosdata.com/reference/connector/java/)
- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
diff --git a/docs/assets/contributing-cup.jpg b/docs/assets/contributing-cup.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..2bf935132a9c2395a06efd92ff51ecb7244caac5
Binary files /dev/null and b/docs/assets/contributing-cup.jpg differ
diff --git a/docs/assets/contributing-notebook.jpg b/docs/assets/contributing-notebook.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..de32051cad6f659f6bf104290189076086bcb3a5
Binary files /dev/null and b/docs/assets/contributing-notebook.jpg differ
diff --git a/docs/assets/contributing-shirt.jpg b/docs/assets/contributing-shirt.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..bffe3aff1ac9bacbd008c997edbaf793af1e2de9
Binary files /dev/null and b/docs/assets/contributing-shirt.jpg differ
diff --git a/docs/en/07-develop/01-connect/index.md b/docs/en/07-develop/01-connect/index.md
index 017a1a0ee40a242c198994ae6bb432203061ac18..20537064216f812990414ffd7260dbda64c56251 100644
--- a/docs/en/07-develop/01-connect/index.md
+++ b/docs/en/07-develop/01-connect/index.md
@@ -223,7 +223,7 @@ phpize && ./configure && make -j && make install
**Specify TDengine Location:**
```shell
-phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/2.4.0.0 && make -j && make install
+phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 && make -j && make install
```
> `--with-tdengine-dir=` is followed by the TDengine installation location.
diff --git a/docs/en/07-develop/04-query-data/index.mdx b/docs/en/07-develop/04-query-data/index.mdx
index d530c59185acb481c1324bd073b0939635b1c9ff..38dc98d1ff262c7f8ec4951297e6f42e436682c8 100644
--- a/docs/en/07-develop/04-query-data/index.mdx
+++ b/docs/en/07-develop/04-query-data/index.mdx
@@ -43,7 +43,7 @@ Query OK, 2 row(s) in set (0.001100s)
To meet the requirements of varied use cases, some special functions have been added in TDengine. Some examples are `twa` (Time Weighted Average), `spread` (The difference between the maximum and the minimum), and `last_row` (the last row).
-For detailed query syntax, see [Select](../../taos-sql././select).
+For detailed query syntax, see [Select](../../taos-sql/select).
## Aggregation among Tables
@@ -74,7 +74,7 @@ taos> SELECT count(*), max(current) FROM meters where groupId = 2;
Query OK, 1 row(s) in set (0.002136s)
```
-In [Select](../../taos-sql././select), all query operations are marked as to whether they support STables or not.
+In [Select](../../taos-sql/select), all query operations are marked as to whether they support STables or not.
## Down Sampling and Interpolation
@@ -122,7 +122,7 @@ In many use cases, it's hard to align the timestamp of the data collected by eac
Interpolation can be performed in TDengine if there is no data in a time range.
-For more information, see [Aggregate by Window](../../taos-sql/interval).
+For more information, see [Aggregate by Window](../../taos-sql/distinguished).
## Examples
diff --git a/docs/en/07-develop/09-udf.md b/docs/en/07-develop/09-udf.md
index f8170d0d6324cb29fe35dd2160fcc1ae785a1679..deb9c4cdb5b50edf7b48537f607ac47edc1246fd 100644
--- a/docs/en/07-develop/09-udf.md
+++ b/docs/en/07-develop/09-udf.md
@@ -102,7 +102,7 @@ Replace `aggfn` with the name of your function.
## Interface Functions
-There are strict naming conventions for interface functions. The names of the start, finish, init, and destroy interfaces must be _start, _finish, _init, and _destroy, respectively. Replace `scalarfn`, `aggfn`, and `udf` with the name of your user-defined function.
+There are strict naming conventions for interface functions. The names of the start, finish, init, and destroy interfaces must be _start, _finish, _init, and _destroy, respectively. Replace `scalarfn`, `aggfn`, and `udf` with the name of your user-defined function.
Interface functions return a value that indicates whether the operation was successful. If an operation fails, the interface function returns an error code. Otherwise, it returns TSDB_CODE_SUCCESS. The error codes are defined in `taoserror.h` and in the common API error codes in `taos.h`. For example, TSDB_CODE_UDF_INVALID_INPUT indicates invalid input. TSDB_CODE_OUT_OF_MEMORY indicates insufficient memory.
diff --git a/docs/en/12-taos-sql/18-escape.md b/docs/en/12-taos-sql/18-escape.md
index 46ab35a276ee15b6540b0eaf096482aed210b79c..a2ae40de98be677e599e83a634952a39faeaafbf 100644
--- a/docs/en/12-taos-sql/18-escape.md
+++ b/docs/en/12-taos-sql/18-escape.md
@@ -15,11 +15,6 @@ title: Escape Characters
| `\%` | % see below for details |
| `\_` | \_ see below for details |
-:::note
-Escape characters are available from version 2.4.0.4 .
-
-:::
-
## Restrictions
1. If there are escape characters in identifiers (database name, table name, column name)
diff --git a/docs/en/12-taos-sql/index.md b/docs/en/12-taos-sql/index.md
index f63de6308d3d7296dc8dc603caa1f5d431c61b16..e243cd23186a6b9286d3297e467567c26c316112 100644
--- a/docs/en/12-taos-sql/index.md
+++ b/docs/en/12-taos-sql/index.md
@@ -3,7 +3,7 @@ title: TDengine SQL
description: "The syntax supported by TDengine SQL "
---
-This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, select data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL. TDengine SQL has been enhanced in version 3.0, and the query engine has been rearchitected. For information about how TDengine SQL has changed, see [Changes in TDengine 3.0](/taos-sql/changes).
+This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, select data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL. TDengine SQL has been enhanced in version 3.0, and the query engine has been rearchitected. For information about how TDengine SQL has changed, see [Changes in TDengine 3.0](../taos-sql/changes).
TDengine SQL is the major interface for users to write data into or query from TDengine. It uses standard SQL syntax and includes extensions and optimizations for time-series data and services. The maximum length of a TDengine SQL statement is 1 MB. Note that keyword abbreviations are not supported. For example, DELETE cannot be entered as DEL.
diff --git a/docs/en/13-operation/01-pkg-install.md b/docs/en/13-operation/01-pkg-install.md
index c098002962d62aa0acc7a94462c052303cb2ed90..b0f607170d335b198ad5c29af733038982a843e6 100644
--- a/docs/en/13-operation/01-pkg-install.md
+++ b/docs/en/13-operation/01-pkg-install.md
@@ -13,16 +13,16 @@ TDengine community version provides deb and rpm packages for users to choose fro
-1. Download deb package from official website, for example TDengine-server-2.4.0.7-Linux-x64.deb
+1. Download the deb package from the official website, for example TDengine-server-3.0.0.0-Linux-x64.deb
2. In the directory where the package is located, execute the command below
```bash
-$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb
+$ sudo dpkg -i TDengine-server-3.0.0.0-Linux-x64.deb
(Reading database ... 137504 files and directories currently installed.)
-Preparing to unpack TDengine-server-2.4.0.7-Linux-x64.deb ...
+Preparing to unpack TDengine-server-3.0.0.0-Linux-x64.deb ...
TDengine is removed successfully!
-Unpacking tdengine (2.4.0.7) over (2.4.0.7) ...
-Setting up tdengine (2.4.0.7) ...
+Unpacking tdengine (3.0.0.0) over (3.0.0.0) ...
+Setting up tdengine (3.0.0.0) ...
Start to install TDengine...
System hostname is: ubuntu-1804
@@ -45,14 +45,14 @@ TDengine is installed successfully!
-1. Download rpm package from official website, for example TDengine-server-2.4.0.7-Linux-x64.rpm;
+1. Download the rpm package from the official website, for example TDengine-server-3.0.0.0-Linux-x64.rpm;
2. In the directory where the package is located, execute the command below
```
-$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm
+$ sudo rpm -ivh TDengine-server-3.0.0.0-Linux-x64.rpm
Preparing... ################################# [100%]
Updating / installing...
- 1:tdengine-2.4.0.7-3 ################################# [100%]
+ 1:tdengine-3.0.0.0-3 ################################# [100%]
Start to install TDengine...
System hostname is: centos7
@@ -76,27 +76,27 @@ TDengine is installed successfully!
-1. Download the tar.gz package, for example TDengine-server-2.4.0.7-Linux-x64.tar.gz;
-2. In the directory where the package is located, first decompress the file, then switch to the sub-directory generated in decompressing, i.e. "TDengine-enterprise-server-2.4.0.7/" in this example, and execute the `install.sh` script.
+1. Download the tar.gz package, for example TDengine-server-3.0.0.0-Linux-x64.tar.gz;
+2. In the directory where the package is located, first decompress the file, then switch to the sub-directory generated by decompressing, i.e. "TDengine-enterprise-server-3.0.0.0/" in this example, and execute the `install.sh` script.
```bash
-$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
-TDengine-enterprise-server-2.4.0.7/
-TDengine-enterprise-server-2.4.0.7/driver/
-TDengine-enterprise-server-2.4.0.7/driver/vercomp.txt
-TDengine-enterprise-server-2.4.0.7/driver/libtaos.so.2.4.0.7
-TDengine-enterprise-server-2.4.0.7/install.sh
-TDengine-enterprise-server-2.4.0.7/examples/
+$ tar xvzf TDengine-enterprise-server-3.0.0.0-Linux-x64.tar.gz
+TDengine-enterprise-server-3.0.0.0/
+TDengine-enterprise-server-3.0.0.0/driver/
+TDengine-enterprise-server-3.0.0.0/driver/vercomp.txt
+TDengine-enterprise-server-3.0.0.0/driver/libtaos.so.3.0.0.0
+TDengine-enterprise-server-3.0.0.0/install.sh
+TDengine-enterprise-server-3.0.0.0/examples/
...
$ ll
total 43816
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./
drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../
-drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-2.4.0.7/
--rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
+drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-3.0.0.0/
+-rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-3.0.0.0-Linux-x64.tar.gz
-$ cd TDengine-enterprise-server-2.4.0.7/
+$ cd TDengine-enterprise-server-3.0.0.0/
$ ll
total 40784
@@ -146,7 +146,7 @@ Deb package of TDengine can be uninstalled as below:
```bash
$ sudo dpkg -r tdengine
(Reading database ... 137504 files and directories currently installed.)
-Removing tdengine (2.4.0.7) ...
+Removing tdengine (3.0.0.0) ...
TDengine is removed successfully!
```
@@ -245,7 +245,7 @@ For example, if using `systemctl` , the commands to start, stop, restart and che
- Check server status:`systemctl status taosd`
-From version 2.4.0.0, a new independent component named as `taosAdapter` has been included in TDengine. `taosAdapter` should be started and stopped using `systemctl`.
+Another component, `taosAdapter`, provides the HTTP service for TDengine. It should also be started and stopped using `systemctl`.
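+
+For example, assuming the default service name `taosadapter`, it can be managed with commands such as:
+
+```bash
+systemctl start taosadapter
+systemctl stop taosadapter
+systemctl status taosadapter
+```
+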
If the server process is OK, the output of `systemctl status` is like below:
diff --git a/docs/en/14-reference/03-connector/php.mdx b/docs/en/14-reference/03-connector/php.mdx
index 69dcce91e80fa05face1ffb35effe1ce1efa2631..9ee89d468a2fd86381b3521796886813c0fe6b06 100644
--- a/docs/en/14-reference/03-connector/php.mdx
+++ b/docs/en/14-reference/03-connector/php.mdx
@@ -61,7 +61,7 @@ phpize && ./configure && make -j && make install
**Specify TDengine location:**
```shell
-phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/2.4.0.0 && make -j && make install
+phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 && make -j && make install
```
> `--with-tdengine-dir=` is followed by TDengine location.
diff --git a/docs/en/14-reference/04-taosadapter.md b/docs/en/14-reference/04-taosadapter.md
index dc47246e2088deb33cda94d5b2bd72e11fa2b52c..31310b0f3e4f6fae7e65328baf4f9ad5d8b7b22f 100644
--- a/docs/en/14-reference/04-taosadapter.md
+++ b/docs/en/14-reference/04-taosadapter.md
@@ -30,7 +30,7 @@ taosAdapter provides the following features.
### Install taosAdapter
-taosAdapter has been part of TDengine server software since TDengine v2.4.0.0. If you use the TDengine server, you don't need additional steps to install taosAdapter. You can download taosAdapter from [TDengine official website](https://tdengine.com/all-downloads/) to download the TDengine server installation package (taosAdapter is included in v2.4.0.0 and later version). If you need to deploy taosAdapter separately on another server other than the TDengine server, you should install the full TDengine server package on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter]( https://github.com/taosdata/taosadapter/blob/3.0/BUILD.md) documentation.
+taosAdapter is part of the TDengine server software; if you use the TDengine server, you don't need additional steps to install taosAdapter. You can download the TDengine server installation package from [TDengine 3.0 released versions](../../releases). If you need to deploy taosAdapter separately on a server other than the TDengine server, you should install the full TDengine server package on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/3.0/BUILD.md) documentation.
### Start/Stop taosAdapter
diff --git a/docs/en/14-reference/12-config/index.md b/docs/en/14-reference/12-config/index.md
index b6b535429b00796b5d2636c467153415a4281e59..cb7daf3c476b2117b5de53c683e76ce07de97bc5 100644
--- a/docs/en/14-reference/12-config/index.md
+++ b/docs/en/14-reference/12-config/index.md
@@ -75,7 +75,6 @@ taos --dump-config
| Applicable | Server Only |
| Meaning | The port for external access after `taosd` is started |
| Default Value | 6030 |
-| Note | REST service is provided by `taosd` before 2.4.0.0 but by `taosAdapter` after 2.4.0.0, the default port of REST service is 6041 |
:::note
TDengine uses 13 continuous ports, both TCP and UDP, starting with the port specified by `serverPort`. You should ensure, in your firewall rules, that these ports are kept open. Below table describes the ports used by TDengine in details.
@@ -87,11 +86,11 @@ TDengine uses 13 continuous ports, both TCP and UDP, starting with the port spec
| TCP | 6030 | Communication between client and server | serverPort |
| TCP | 6035 | Communication among server nodes in cluster | serverPort+5 |
| TCP | 6040 | Data syncup among server nodes in cluster | serverPort+10 |
-| TCP | 6041 | REST connection between client and server | Prior to 2.4.0.0: serverPort+11; After 2.4.0.0 refer to [taosAdapter](/reference/taosadapter/) |
+| TCP | 6041 | REST connection between client and server | Please refer to [taosAdapter](../taosadapter/) |
| TCP | 6042 | Service Port of Arbitrator | The parameter of Arbitrator |
| TCP | 6043 | Service Port of TaosKeeper | The parameter of TaosKeeper |
-| TCP | 6044 | Data access port for StatsD | refer to [taosAdapter](/reference/taosadapter/) |
-| UDP | 6045 | Data access for statsd | refer to [taosAdapter](/reference/taosadapter/) |
+| TCP | 6044 | Data access port for StatsD | refer to [taosAdapter](../taosadapter/) |
+| UDP | 6045 | Data access for statsd | refer to [taosAdapter](../taosadapter/) |
| TCP | 6060 | Port of Monitoring Service in Enterprise version | |
| UDP | 6030-6034 | Communication between client and server | serverPort |
| UDP | 6035-6039 | Communication among server nodes in cluster | serverPort |
@@ -777,12 +776,6 @@ To prevent system resource from being exhausted by multiple concurrent streams,
## HTTP Parameters
-:::note
-HTTP service was provided by `taosd` prior to version 2.4.0.0 and is provided by `taosAdapter` after version 2.4.0.0.
-The parameters described in this section are only application in versions prior to 2.4.0.0. If you are using any version from 2.4.0.0, please refer to [taosAdapter](/reference/taosadapter/).
-
-:::
-
### http
| Attribute | Description |
@@ -980,16 +973,7 @@ The parameters described in this section are only application in versions prior
| Applicable | Server and Client |
| Meaning | Log level of common module |
| Value Range | Same as debugFlag |
-| Default Value | |
-
-### httpDebugFlag
-
-| Attribute | Description |
-| ------------- | ------------------------------------------- |
-| Applicable | Server Only |
-| Meaning | Log level of http module (prior to 2.4.0.0) |
-| Value Range | Same as debugFlag |
-| Default Value | |
+| Default Value | |
### mqttDebugFlag
diff --git a/docs/en/14-reference/12-directory.md b/docs/en/14-reference/12-directory.md
index 0eaa7843ecdc284f2478cf5e43e9ce118bb69b3d..19b036418f18637bfd21fa286f24528c649d146d 100644
--- a/docs/en/14-reference/12-directory.md
+++ b/docs/en/14-reference/12-directory.md
@@ -29,11 +29,6 @@ All executable files of TDengine are in the _/usr/local/taos/bin_ directory by d
- _set_core.sh_: script for setting up the system to generate core dump files for easy debugging
- _taosd-dump-cfg.gdb_: script to facilitate debugging of taosd's gdb execution.
-:::note
-taosdump after version 2.4.0.0 require taosTools as a standalone installation. A new version of taosBenchmark is include in taosTools too.
-
-:::
-
:::tip
You can configure different data directories and log directories by modifying the system configuration file `taos.cfg`.
diff --git a/docs/en/27-train-faq/03-docker.md b/docs/en/27-train-faq/03-docker.md
deleted file mode 100644
index 0378fffb8bdbc4cae8d4d2176ec3d745a548c2fe..0000000000000000000000000000000000000000
--- a/docs/en/27-train-faq/03-docker.md
+++ /dev/null
@@ -1,285 +0,0 @@
----
-sidebar_label: TDengine in Docker
-title: Deploy TDengine in Docker
----
-
-We do not recommend deploying TDengine using Docker in a production system. However, Docker is still very useful in a development environment, especially when your host is not Linux. From version 2.0.14.0, the official image of TDengine can support X86-64, X86, arm64, and rm32 .
-
-In this chapter we introduce a simple step by step guide to use TDengine in Docker.
-
-## Install Docker
-
-To install Docker please refer to [Get Docker](https://docs.docker.com/get-docker/).
-
-After Docker is installed, you can check whether Docker is installed properly by displaying Docker version.
-
-```bash
-$ docker -v
-Docker version 20.10.3, build 48d30b5
-```
-
-## Launch TDengine in Docker
-
-### Launch TDengine Server
-
-```bash
-$ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
-526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
-```
-
-In the above command, a docker container is started to run TDengine server, the port range 6030-6049 of the container is mapped to host port range 6030-6049. If port range 6030-6049 has been occupied on the host, please change to an available host port range. For port requirements on the host, please refer to [Port Configuration](/reference/config/#serverport).
-
-- **docker run**: Launch a docker container
-- **-d**: the container will run in background mode
-- **-p**: port mapping
-- **tdengine/tdengine**: The image from which to launch the container
-- **526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd**: the container ID if successfully launched.
-
-Furthermore, `--name` can be used with `docker run` to specify name for the container, `--hostname` can be used to specify hostname for the container, `-v` can be used to mount local volumes to the container so that the data generated inside the container can be persisted to disk on the host.
-
-```bash
-docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
-```
-
-- **--name tdengine**: specify the name of the container, the name can be used to specify the container later
-- **--hostname=tdengine-server**: specify the hostname inside the container, the hostname can be used inside the container without worrying the container IP may vary
-- **-v**: volume mapping between host and container
-
-### Check the container
-
-```bash
-docker ps
-```
-
-The output is like below:
-
-```
-CONTAINER ID IMAGE COMMAND CREATED STATUS ···
-c452519b0f9b tdengine/tdengine "taosd" 14 minutes ago Up 14 minutes ···
-```
-
-- **docker ps**: List all the containers
-- **CONTAINER ID**: Container ID
-- **IMAGE**: The image used for the container
-- **COMMAND**: The command used when launching the container
-- **CREATED**: When the container was created
-- **STATUS**: Status of the container
-
-### Access TDengine inside container
-
-```bash
-$ docker exec -it tdengine /bin/bash
-root@tdengine-server:~/TDengine-server-2.4.0.4#
-```
-
-- **docker exec**: Attach to the container
-- **-i**: Interactive mode
-- **-t**: Use terminal
-- **tdengine**: Container name, up to the output of `docker ps`
-- **/bin/bash**: The command to execute once the container is attached
-
-Inside the container, start TDengine CLI `taos`
-
-```bash
-root@tdengine-server:~/TDengine-server-2.4.0.4# taos
-
-Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
-Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
-
-taos>
-```
-
-The above example is for a successful connection. If `taos` fails to connect to the server side, error information would be shown.
-
-In TDengine CLI, SQL commands can be executed to create/drop databases, tables, STables, and insert or query data. For details please refer to [TAOS SQL](/taos-sql/).
-
-### Access TDengine from host
-
-If option `-p` used to map ports properly between host and container, it's also able to access TDengine in container from the host as long as `firstEp` is configured correctly for the client on host.
-
-```
-$ taos
-
-Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
-Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
-
-taos>
-```
-
-It's also able to access the REST interface provided by TDengine in container from the host.
-
-```
-curl -L -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
-```
-
-Output is like below:
-
-```
-{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
-```
-
-For details of REST API please refer to [REST API](/reference/rest-api/).
-
-### Run TDengine server and taosAdapter inside container
-
-From version 2.4.0.0, in the TDengine Docker image, `taosAdapter` is enabled by default, but can be disabled using environment variable `TAOS_DISABLE_ADAPTER=true` . `taosAdapter` can also be run alone without `taosd` when launching a container.
-
-For the port mapping of `taosAdapter`, please refer to [taosAdapter](/reference/taosadapter/).
-
-- Run both `taosd` and `taosAdapter` (by default) in docker container:
-
-```bash
-docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
-```
-
-- Run `taosAdapter` only in docker container, `TAOS_FIRST_EP` environment variable needs to be used to specify the container name in which `taosd` is running:
-
-```bash
-docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:2.4.0.4 taosadapter
-```
-
-- Run `taosd` only in docker container:
-
-```bash
-docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:2.4.0.4
-```
-
-- Verify the REST interface:
-
-```bash
-curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "show databases;" 127.0.0.1:6041/rest/sql
-```
-
-Below is an example output:
-
-```
-{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
-```
-
-### Use taosBenchmark on host to access TDengine server in container
-
-1. Run `taosBenchmark`, named as `taosdemo` previously, on the host:
-
- ```bash
- $ taosBenchmark
-
- taosBenchmark is simulating data generated by power equipments monitoring...
-
- host: 127.0.0.1:6030
- user: root
- password: taosdata
- configDir:
- resultFile: ./output.txt
- thread num of insert data: 10
- thread num of create table: 10
- top insert interval: 0
- number of records per req: 30000
- max sql length: 1048576
- database count: 1
- database[0]:
- database[0] name: test
- drop: yes
- replica: 1
- precision: ms
- super table count: 1
- super table[0]:
- stbName: meters
- autoCreateTable: no
- childTblExists: no
- childTblCount: 10000
- childTblPrefix: d
- dataSource: rand
- iface: taosc
- insertRows: 10000
- interlaceRows: 0
- disorderRange: 1000
- disorderRatio: 0
- maxSqlLen: 1048576
- timeStampStep: 1
- startTimestamp: 2017-07-14 10:40:00.000
- sampleFormat:
- sampleFile:
- tagsFile:
- columnCount: 3
- column[0]:FLOAT column[1]:INT column[2]:FLOAT
- tagCount: 2
- tag[0]:INT tag[1]:BINARY(16)
-
- Press enter key to continue or Ctrl-C to stop
- ```
-
- Once the execution is finished, a database `test` is created, a STable `meters` is created in database `test`, 10,000 sub tables are created using `meters` as template, named as "d0" to "d9999", while 10,000 rows are inserted into each table, so totally 100,000,000 rows are inserted.
-
-2. Check the data
-
- - **Check database**
-
- ```bash
- $ taos> show databases;
- name | created_time | ntables | vgroups | ···
- test | 2021-08-18 06:01:11.021 | 10000 | 6 | ···
- log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
-
- ```
-
- - **Check STable**
-
- ```bash
- $ taos> use test;
- Database changed.
-
- $ taos> show stables;
- name | created_time | columns | tags | tables |
- ============================================================================================
- meters | 2021-08-18 06:01:11.116 | 4 | 2 | 10000 |
- Query OK, 1 row(s) in set (0.003259s)
-
- ```
-
- - **Check Tables**
-
- ```bash
- $ taos> select * from test.t0 limit 10;
-
- DB error: Table does not exist (0.002857s)
- taos> select * from test.d0 limit 10;
- ts | current | voltage | phase |
- ======================================================================================
- 2017-07-14 10:40:00.000 | 10.12072 | 223 | 0.34167 |
- 2017-07-14 10:40:00.001 | 10.16103 | 224 | 0.34445 |
- 2017-07-14 10:40:00.002 | 10.00204 | 220 | 0.33334 |
- 2017-07-14 10:40:00.003 | 10.00030 | 220 | 0.33333 |
- 2017-07-14 10:40:00.004 | 9.84029 | 216 | 0.32222 |
- 2017-07-14 10:40:00.005 | 9.88028 | 217 | 0.32500 |
- 2017-07-14 10:40:00.006 | 9.88110 | 217 | 0.32500 |
- 2017-07-14 10:40:00.007 | 10.08137 | 222 | 0.33889 |
- 2017-07-14 10:40:00.008 | 10.12063 | 223 | 0.34167 |
- 2017-07-14 10:40:00.009 | 10.16086 | 224 | 0.34445 |
- Query OK, 10 row(s) in set (0.016791s)
-
- ```
-
- - **Check tag values of table d0**
-
- ```bash
- $ taos> select groupid, location from test.d0;
- groupid | location |
- =================================
- 0 | California.SanDiego |
- Query OK, 1 row(s) in set (0.003490s)
- ```
-
-### Access TDengine from 3rd party tools
-
-A lot of 3rd party tools can be used to write data into TDengine through `taosAdapter`, for details please refer to [3rd party tools](/third-party/).
-
-There is nothing different from the 3rd party side to access TDengine server inside a container, as long as the end point is specified correctly, the end point should be the FQDN and the mapped port of the host.
-
-## Stop TDengine inside container
-
-```bash
-docker stop tdengine
-```
-
-- **docker stop**: stop a container
-- **tdengine**: container name
diff --git a/docs/en/28-releases.md b/docs/en/28-releases.md
new file mode 100644
index 0000000000000000000000000000000000000000..a0c9eb119999571fb973b5e2243f237b8833b167
--- /dev/null
+++ b/docs/en/28-releases.md
@@ -0,0 +1,9 @@
+---
+sidebar_label: Releases
+title: Released Versions
+---
+
+import Release from "/components/ReleaseV3";
+
+
+
diff --git a/docs/zh/27-train-faq/01-faq.md b/docs/zh/27-train-faq/01-faq.md
index 59e0d7cae0b625a56ced79d1e4daad611ab5b7d0..04ee011b9368eb5fd60cc25fd675a5b276a8ab2b 100644
--- a/docs/zh/27-train-faq/01-faq.md
+++ b/docs/zh/27-train-faq/01-faq.md
@@ -187,7 +187,7 @@ TDengine 中时间戳的时区总是由客户端进行处理,而与服务端
### 17. 为什么 RESTful 接口无响应、Grafana 无法添加 TDengine 为数据源、TDengineGUI 选了 6041 端口还是无法连接成功?
-taosAdapter 从 TDengine 2.4.0.0 版本开始成为 TDengine 服务端软件的组成部分,是 TDengine 集群和应用程序之间的桥梁和适配器。在此之前 RESTful 接口等功能是由 taosd 内置的 HTTP 服务提供的,而如今要实现上述功能需要执行:```systemctl start taosadapter``` 命令来启动 taosAdapter 服务。
+这个现象可能是因为 taosAdapter 没有被正确启动引起的,需要执行:```systemctl start taosadapter``` 命令来启动 taosAdapter 服务。
需要说明的是,taosAdapter 的日志路径 path 需要单独配置,默认路径是 /var/log/taos ;日志等级 logLevel 有 8 个等级,默认等级是 info ,配置成 panic 可关闭日志输出。请注意操作系统 / 目录的空间大小,可通过命令行参数、环境变量或配置文件来修改配置,默认配置文件是 /etc/taos/taosadapter.toml 。
diff --git a/examples/JDBC/JDBCDemo/pom.xml b/examples/JDBC/JDBCDemo/pom.xml
index 8cf0356721f8ffd568e87fa4a77c86eb0f90a62b..807ceb0f24644d3978274faee1bc8b47c9d7af47 100644
--- a/examples/JDBC/JDBCDemo/pom.xml
+++ b/examples/JDBC/JDBCDemo/pom.xml
@@ -17,7 +17,7 @@
         <groupId>com.taosdata.jdbc</groupId>
         <artifactId>taos-jdbcdriver</artifactId>
-        <version>2.0.34</version>
+        <version>3.0.0</version>
diff --git a/examples/JDBC/SpringJdbcTemplate/pom.xml b/examples/JDBC/SpringJdbcTemplate/pom.xml
index eac3dec0a92a4c8aa519cd426b9c8d3895047be6..6e4941b4f1c5bb5557109d06496bff02744a3092 100644
--- a/examples/JDBC/SpringJdbcTemplate/pom.xml
+++ b/examples/JDBC/SpringJdbcTemplate/pom.xml
@@ -47,7 +47,7 @@
         <groupId>com.taosdata.jdbc</groupId>
         <artifactId>taos-jdbcdriver</artifactId>
-        <version>2.0.18</version>
+        <version>3.0.0</version>
diff --git a/examples/JDBC/SpringJdbcTemplate/readme.md b/examples/JDBC/SpringJdbcTemplate/readme.md
index b70a6565f88d0a08b8a26a60676e729ecdb39e2e..f59bcdbeb547b0c0576b43abe4e1f2cef2175913 100644
--- a/examples/JDBC/SpringJdbcTemplate/readme.md
+++ b/examples/JDBC/SpringJdbcTemplate/readme.md
@@ -10,7 +10,7 @@
```xml
 <dependency>
     <groupId>com.taosdata.jdbc</groupId>
     <artifactId>taos-jdbcdriver</artifactId>
-    <version>2.0.18</version>
+    <version>3.0.0</version>
 </dependency>
 ```
@@ -28,5 +28,5 @@ mvn clean package
```
打包成功之后,进入 `target/` 目录下,执行以下命令就可运行测试:
```shell
-java -jar SpringJdbcTemplate-1.0-SNAPSHOT-jar-with-dependencies.jar
+java -jar target/SpringJdbcTemplate-1.0-SNAPSHOT-jar-with-dependencies.jar
```
\ No newline at end of file
diff --git a/examples/JDBC/SpringJdbcTemplate/src/main/java/com/taosdata/example/jdbcTemplate/App.java b/examples/JDBC/SpringJdbcTemplate/src/main/java/com/taosdata/example/jdbcTemplate/App.java
index 6942d62a83adafb85496a81ce93866cd0d53611d..ce26b7504ae41644032c1f59579efc310f58d527 100644
--- a/examples/JDBC/SpringJdbcTemplate/src/main/java/com/taosdata/example/jdbcTemplate/App.java
+++ b/examples/JDBC/SpringJdbcTemplate/src/main/java/com/taosdata/example/jdbcTemplate/App.java
@@ -28,7 +28,7 @@ public class App {
//use database
executor.doExecute("use test");
// create table
- executor.doExecute("create table if not exists test.weather (ts timestamp, temperature int, humidity float)");
+ executor.doExecute("create table if not exists test.weather (ts timestamp, temperature float, humidity int)");
WeatherDao weatherDao = ctx.getBean(WeatherDao.class);
Weather weather = new Weather(new Timestamp(new Date().getTime()), random.nextFloat() * 50.0f, random.nextInt(100));
diff --git a/examples/JDBC/SpringJdbcTemplate/src/test/java/com/taosdata/example/jdbcTemplate/BatcherInsertTest.java b/examples/JDBC/SpringJdbcTemplate/src/test/java/com/taosdata/example/jdbcTemplate/BatcherInsertTest.java
index 29d0f79fd4982d43078e590b4320c0df457ee44c..782fcbe0eb2020c8bcbafecb0b2d61185b139477 100644
--- a/examples/JDBC/SpringJdbcTemplate/src/test/java/com/taosdata/example/jdbcTemplate/BatcherInsertTest.java
+++ b/examples/JDBC/SpringJdbcTemplate/src/test/java/com/taosdata/example/jdbcTemplate/BatcherInsertTest.java
@@ -41,7 +41,7 @@ public class BatcherInsertTest {
//use database
executor.doExecute("use test");
// create table
- executor.doExecute("create table if not exists test.weather (ts timestamp, temperature int, humidity float)");
+ executor.doExecute("create table if not exists test.weather (ts timestamp, temperature float, humidity int)");
}
@Test
diff --git a/examples/JDBC/connectionPools/README-cn.md b/examples/JDBC/connectionPools/README-cn.md
index 9b26df3c2eb2c23171a673643891a292af4c920c..6e589418b11a3d6c8c64d24b28a0ea7c65ad0830 100644
--- a/examples/JDBC/connectionPools/README-cn.md
+++ b/examples/JDBC/connectionPools/README-cn.md
@@ -13,13 +13,13 @@ ConnectionPoolDemo的程序逻辑:
### 如何运行这个例子:
```shell script
-mvn clean package assembly:single
-java -jar target/connectionPools-1.0-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
+mvn clean package
+java -jar target/ConnectionPoolDemo-jar-with-dependencies.jar -host 127.0.0.1
```
使用mvn运行ConnectionPoolDemo的main方法,可以指定参数
```shell script
Usage:
-java -jar target/connectionPools-1.0-SNAPSHOT-jar-with-dependencies.jar
+java -jar target/ConnectionPoolDemo-jar-with-dependencies.jar
-host : hostname
-poolType
-poolSize
diff --git a/examples/JDBC/connectionPools/pom.xml b/examples/JDBC/connectionPools/pom.xml
index 99a7892a250bd656479b0901682d6a86c2b27d14..61717cf1121696a97d867b5d43af75231ddd0472 100644
--- a/examples/JDBC/connectionPools/pom.xml
+++ b/examples/JDBC/connectionPools/pom.xml
@@ -18,7 +18,7 @@
         <groupId>com.taosdata.jdbc</groupId>
         <artifactId>taos-jdbcdriver</artifactId>
-        <version>2.0.18</version>
+        <version>3.0.0</version>
diff --git a/examples/JDBC/mybatisplus-demo/pom.xml b/examples/JDBC/mybatisplus-demo/pom.xml
index ad6a63e800fb73dd3c768a8aca941f70cec235b3..5555145958de67fdf03eb744426afcfc13b6fcb3 100644
--- a/examples/JDBC/mybatisplus-demo/pom.xml
+++ b/examples/JDBC/mybatisplus-demo/pom.xml
@@ -47,7 +47,7 @@
         <groupId>com.taosdata.jdbc</groupId>
         <artifactId>taos-jdbcdriver</artifactId>
-        <version>2.0.18</version>
+        <version>3.0.0</version>
diff --git a/examples/JDBC/mybatisplus-demo/readme b/examples/JDBC/mybatisplus-demo/readme
new file mode 100644
index 0000000000000000000000000000000000000000..b31b6c34bf1c2bd661d88fff066eb4632d456a1c
--- /dev/null
+++ b/examples/JDBC/mybatisplus-demo/readme
@@ -0,0 +1,14 @@
+# 使用说明
+
+## 创建使用db
+```shell
+$ taos
+
+> create database mp_test;
+```
+
+## 执行测试用例
+
+```shell
+$ mvn clean test
+```
\ No newline at end of file
diff --git a/examples/JDBC/mybatisplus-demo/src/main/java/com/taosdata/example/mybatisplusdemo/mapper/WeatherMapper.java b/examples/JDBC/mybatisplus-demo/src/main/java/com/taosdata/example/mybatisplusdemo/mapper/WeatherMapper.java
index 6733cbded9d1d180408eccaad9e8badad7d39a3d..1f0338db34019661a2d7c4a0716d953195d059a2 100644
--- a/examples/JDBC/mybatisplus-demo/src/main/java/com/taosdata/example/mybatisplusdemo/mapper/WeatherMapper.java
+++ b/examples/JDBC/mybatisplus-demo/src/main/java/com/taosdata/example/mybatisplusdemo/mapper/WeatherMapper.java
@@ -2,7 +2,17 @@ package com.taosdata.example.mybatisplusdemo.mapper;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.taosdata.example.mybatisplusdemo.domain.Weather;
+import org.apache.ibatis.annotations.Insert;
+import org.apache.ibatis.annotations.Update;
public interface WeatherMapper extends BaseMapper<Weather> {
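+    // Raw-SQL helpers used by the test setup to create, seed, and drop the weather table.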
+ @Update("CREATE TABLE if not exists weather(ts timestamp, temperature float, humidity int, location nchar(100))")
+ int createTable();
+
+ @Insert("insert into weather (ts, temperature, humidity, location) values(#{ts}, #{temperature}, #{humidity}, #{location})")
+ int insertOne(Weather one);
+
+ @Update("drop table if exists weather")
+ void dropTable();
}
diff --git a/examples/JDBC/mybatisplus-demo/src/main/resources/application.yml b/examples/JDBC/mybatisplus-demo/src/main/resources/application.yml
index 38180c6d75a620a63bcaab9ec350d97e65f9dd16..985ed1675ee408bad346dff2a1b7e03c5138f4df 100644
--- a/examples/JDBC/mybatisplus-demo/src/main/resources/application.yml
+++ b/examples/JDBC/mybatisplus-demo/src/main/resources/application.yml
@@ -2,7 +2,7 @@ spring:
datasource:
driver-class-name: com.taosdata.jdbc.TSDBDriver
url: jdbc:TAOS://localhost:6030/mp_test?charset=UTF-8&locale=en_US.UTF-8&timezone=UTC-8
- user: root
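+      # Spring Boot's spring.datasource configuration expects the key 'username' rather than 'user'.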
+ username: root
password: taosdata
druid:
diff --git a/examples/JDBC/mybatisplus-demo/src/test/java/com/taosdata/example/mybatisplusdemo/mapper/TemperatureMapperTest.java b/examples/JDBC/mybatisplus-demo/src/test/java/com/taosdata/example/mybatisplusdemo/mapper/TemperatureMapperTest.java
index 4331d15d3476d3428e72a186664ed77cc59aad3e..4d9dbf8d2fb909ef46dbe23a2bb5192d4971195e 100644
--- a/examples/JDBC/mybatisplus-demo/src/test/java/com/taosdata/example/mybatisplusdemo/mapper/TemperatureMapperTest.java
+++ b/examples/JDBC/mybatisplus-demo/src/test/java/com/taosdata/example/mybatisplusdemo/mapper/TemperatureMapperTest.java
@@ -82,27 +82,15 @@ public class TemperatureMapperTest {
Assert.assertEquals(1, affectRows);
}
- /***
- * test SelectOne
- * **/
- @Test
- public void testSelectOne() {
-        QueryWrapper<Temperature> wrapper = new QueryWrapper<>();
- wrapper.eq("location", "beijing");
- Temperature one = mapper.selectOne(wrapper);
- System.out.println(one);
- Assert.assertNotNull(one);
- }
-
/***
* test select By map
* ***/
@Test
public void testSelectByMap() {
        Map<String, Object> map = new HashMap<>();
- map.put("location", "beijing");
+ map.put("location", "北京");
        List<Temperature> temperatures = mapper.selectByMap(map);
- Assert.assertEquals(1, temperatures.size());
+ Assert.assertTrue(temperatures.size() > 1);
}
/***
@@ -120,7 +108,7 @@ public class TemperatureMapperTest {
@Test
public void testSelectCount() {
int count = mapper.selectCount(null);
- Assert.assertEquals(5, count);
+ Assert.assertEquals(10, count);
}
/****
diff --git a/examples/JDBC/mybatisplus-demo/src/test/java/com/taosdata/example/mybatisplusdemo/mapper/WeatherMapperTest.java b/examples/JDBC/mybatisplus-demo/src/test/java/com/taosdata/example/mybatisplusdemo/mapper/WeatherMapperTest.java
index 1699344552f89e1595d1317019c992dcd3820e77..dba8abd1ed006e81cf8240e66cfcc0b525af9b79 100644
--- a/examples/JDBC/mybatisplus-demo/src/test/java/com/taosdata/example/mybatisplusdemo/mapper/WeatherMapperTest.java
+++ b/examples/JDBC/mybatisplus-demo/src/test/java/com/taosdata/example/mybatisplusdemo/mapper/WeatherMapperTest.java
@@ -6,6 +6,7 @@ import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
import com.taosdata.example.mybatisplusdemo.domain.Weather;
import org.junit.Assert;
import org.junit.Test;
+import org.junit.Before;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
@@ -26,6 +27,18 @@ public class WeatherMapperTest {
@Autowired
private WeatherMapper mapper;
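+    // Recreate the weather table and insert one known row before each test so that the
+    // assertions below start from a predictable state.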
+ @Before
+ public void createTable(){
+ mapper.dropTable();
+ mapper.createTable();
+ Weather one = new Weather();
+        one.setTs(new Timestamp(1605024000000L));
+ one.setTemperature(12.22f);
+ one.setLocation("望京");
+ one.setHumidity(100);
+ mapper.insertOne(one);
+ }
+
@Test
public void testSelectList() {
        List<Weather> weathers = mapper.selectList(null);
@@ -46,20 +59,20 @@ public class WeatherMapperTest {
@Test
public void testSelectOne() {
        QueryWrapper<Weather> wrapper = new QueryWrapper<>();
- wrapper.eq("location", "beijing");
+ wrapper.eq("location", "望京");
Weather one = mapper.selectOne(wrapper);
System.out.println(one);
Assert.assertEquals(12.22f, one.getTemperature(), 0.00f);
- Assert.assertEquals("beijing", one.getLocation());
+ Assert.assertEquals("望京", one.getLocation());
}
- @Test
- public void testSelectByMap() {
-        Map<String, Object> map = new HashMap<>();
-        map.put("location", "beijing");
-        List<Weather> weathers = mapper.selectByMap(map);
- Assert.assertEquals(1, weathers.size());
- }
+ // @Test
+ // public void testSelectByMap() {
+    // Map<String, Object> map = new HashMap<>();
+    // map.put("location", "beijing");
+    // List<Weather> weathers = mapper.selectByMap(map);
+ // Assert.assertEquals(1, weathers.size());
+ // }
@Test
public void testSelectObjs() {
diff --git a/examples/JDBC/readme.md b/examples/JDBC/readme.md
index 9a017f4feab148cb7c3fd4132360c3075c6573cb..c7d7875308d248c1abef8d47bc69a69e91374dbb 100644
--- a/examples/JDBC/readme.md
+++ b/examples/JDBC/readme.md
@@ -10,4 +10,4 @@
| 6 | taosdemo | This is an internal tool for testing Our JDBC-JNI, JDBC-RESTful, RESTful interfaces |
-more detail: https://www.taosdata.com/cn//documentation20/connector-java/
\ No newline at end of file
+more detail: https://docs.taosdata.com/reference/connector/java/
\ No newline at end of file
diff --git a/examples/JDBC/springbootdemo/pom.xml b/examples/JDBC/springbootdemo/pom.xml
index 9126813b67e71691692109920f891a6fb4cc5ab5..ee15f6013e4fd35bf30fb5af00b226e7c4d3d8c7 100644
--- a/examples/JDBC/springbootdemo/pom.xml
+++ b/examples/JDBC/springbootdemo/pom.xml
@@ -68,7 +68,7 @@
         <groupId>com.taosdata.jdbc</groupId>
         <artifactId>taos-jdbcdriver</artifactId>
-        <version>2.0.34</version>
+        <version>3.0.0</version>
diff --git a/examples/JDBC/springbootdemo/readme.md b/examples/JDBC/springbootdemo/readme.md
index 67a28947d2dfb8fc069bf94fd139a7006d35a22b..a3942a6a512501b7dee1f4f4ff5ccc93da0babbb 100644
--- a/examples/JDBC/springbootdemo/readme.md
+++ b/examples/JDBC/springbootdemo/readme.md
@@ -1,10 +1,11 @@
## TDengine SpringBoot + Mybatis Demo
+## 需要提前创建 test 数据库
### 配置 application.properties
```properties
# datasource config
spring.datasource.driver-class-name=com.taosdata.jdbc.TSDBDriver
-spring.datasource.url=jdbc:TAOS://127.0.0.1:6030/log
+spring.datasource.url=jdbc:TAOS://127.0.0.1:6030/test
spring.datasource.username=root
spring.datasource.password=taosdata
diff --git a/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/controller/WeatherController.java b/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/controller/WeatherController.java
index ed720fe6c02dd3a7eba6e645ea1e76d704c04d0c..3ee5b597ab08c945f6494d9a8a31da9cd3e01f25 100644
--- a/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/controller/WeatherController.java
+++ b/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/controller/WeatherController.java
@@ -6,7 +6,6 @@ import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;
import java.util.List;
-import java.util.Map;
@RequestMapping("/weather")
@RestController
diff --git a/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.xml b/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.xml
index 91938ca24e3cf9c3e0f2895cf40f214d484c55d5..99d5893ec198535d9e8ef1cc6c443625d0a64ec1 100644
--- a/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.xml
+++ b/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.xml
@@ -10,8 +10,7 @@
diff --git a/examples/JDBC/springbootdemo/src/main/resources/application.properties b/examples/JDBC/springbootdemo/src/main/resources/application.properties
index 06daa81bbb06450d99ab3f6e640c9795c0ad5d2e..bf21047395ed534e4c7d9db919bb371fab45ec16 100644
--- a/examples/JDBC/springbootdemo/src/main/resources/application.properties
+++ b/examples/JDBC/springbootdemo/src/main/resources/application.properties
@@ -5,7 +5,7 @@
#spring.datasource.password=taosdata
# datasource config - JDBC-RESTful
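+# The JDBC-RESTful driver connects through taosAdapter, which listens on port 6041 by default.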
spring.datasource.driver-class-name=com.taosdata.jdbc.rs.RestfulDriver
-spring.datasource.url=jdbc:TAOS-RS://localhsot:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
+spring.datasource.url=jdbc:TAOS-RS://localhost:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
spring.datasource.username=root
spring.datasource.password=taosdata
spring.datasource.druid.initial-size=5
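For reference, the datasource settings shown in the hunks above (native `jdbc:TAOS://…:6030` with `com.taosdata.jdbc.TSDBDriver`, RESTful `jdbc:TAOS-RS://…:6041` with `com.taosdata.jdbc.rs.RestfulDriver`) can also be exercised outside Spring. Below is a minimal plain-JDBC sketch, assuming taos-jdbcdriver 3.0.0 is on the classpath, a local TDengine server is running, and the `test` database already exists; the `weather` table is hypothetical and used only to show a round trip.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TaosJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Native connection, mirroring the demo's datasource properties.
        String url = "jdbc:TAOS://127.0.0.1:6030/test";
        try (Connection conn = DriverManager.getConnection(url, "root", "taosdata");
             Statement stmt = conn.createStatement()) {
            // `weather` is a made-up table used only for this sketch.
            stmt.executeUpdate("create table if not exists weather(ts timestamp, temperature float)");
            stmt.executeUpdate("insert into weather values(now, 23.5)");
            try (ResultSet rs = stmt.executeQuery("select ts, temperature from weather")) {
                while (rs.next()) {
                    System.out.println(rs.getTimestamp(1) + " " + rs.getFloat(2));
                }
            }
        }
    }
}
```

Switching to the RESTful driver would only change the URL to the `jdbc:TAOS-RS://localhost:6041/test` form shown above.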
diff --git a/examples/JDBC/taosdemo/pom.xml b/examples/JDBC/taosdemo/pom.xml
index 07fd4a3576243b8950ccd25515f2512226e313d6..724ecc74077c4080269c695ca50a1cf300e39d0b 100644
--- a/examples/JDBC/taosdemo/pom.xml
+++ b/examples/JDBC/taosdemo/pom.xml
@@ -67,7 +67,7 @@
      <groupId>com.taosdata.jdbc</groupId>
      <artifactId>taos-jdbcdriver</artifactId>
-     <version>2.0.20</version>
+     <version>3.0.0</version>
diff --git a/examples/JDBC/taosdemo/readme.md b/examples/JDBC/taosdemo/readme.md
index 451fa2960adb98e2deb8499732aefde11f4810a1..e5f4eb132b2262990b8fa32fe3c40a617d16d247 100644
--- a/examples/JDBC/taosdemo/readme.md
+++ b/examples/JDBC/taosdemo/readme.md
@@ -2,9 +2,9 @@
cd tests/examples/JDBC/taosdemo
mvn clean package -Dmaven.test.skip=true
# Create the tables first, then insert
-java -jar target/taosdemo-2.0-jar-with-dependencies.jar -host [hostname] -database [database] -doCreateTable true -superTableSQL "create table weather(ts timestamp, f1 int) tags(t1 nchar(4))" -numOfTables 1000 -numOfRowsPerTable 100000000 -numOfThreadsForInsert 10 -numOfTablesPerSQL 10 -numOfValuesPerSQL 100
+java -jar target/taosdemo-2.0.1-jar-with-dependencies.jar -host [hostname] -database [database] -doCreateTable true -superTableSQL "create table weather(ts timestamp, f1 int) tags(t1 nchar(4))" -numOfTables 1000 -numOfRowsPerTable 100000000 -numOfThreadsForInsert 10 -numOfTablesPerSQL 10 -numOfValuesPerSQL 100
# Do not create tables; insert directly
-java -jar target/taosdemo-2.0-jar-with-dependencies.jar -host [hostname] -database [database] -doCreateTable false -superTableSQL "create table weather(ts timestamp, f1 int) tags(t1 nchar(4))" -numOfTables 1000 -numOfRowsPerTable 100000000 -numOfThreadsForInsert 10 -numOfTablesPerSQL 10 -numOfValuesPerSQL 100
+java -jar target/taosdemo-2.0.1-jar-with-dependencies.jar -host [hostname] -database [database] -doCreateTable false -superTableSQL "create table weather(ts timestamp, f1 int) tags(t1 nchar(4))" -numOfTables 1000 -numOfRowsPerTable 100000000 -numOfThreadsForInsert 10 -numOfTablesPerSQL 10 -numOfValuesPerSQL 100
```
Requirements:
diff --git a/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/TaosDemoApplication.java b/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/TaosDemoApplication.java
index d4f5ff26886b9f90a4235d47bfd004dae9de93f6..6854054703776da46abdbff593724bef179f5b6d 100644
--- a/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/TaosDemoApplication.java
+++ b/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/TaosDemoApplication.java
@@ -32,8 +32,10 @@ public class TaosDemoApplication {
System.exit(0);
}
// initialization
- final DataSource dataSource = DataSourceFactory.getInstance(config.host, config.port, config.user, config.password);
- if (config.executeSql != null && !config.executeSql.isEmpty() && !config.executeSql.replaceAll("\\s", "").isEmpty()) {
+ final DataSource dataSource = DataSourceFactory.getInstance(config.host, config.port, config.user,
+ config.password);
+ if (config.executeSql != null && !config.executeSql.isEmpty()
+ && !config.executeSql.replaceAll("\\s", "").isEmpty()) {
Thread task = new Thread(new SqlExecuteTask(dataSource, config.executeSql));
task.start();
try {
@@ -55,7 +57,7 @@ public class TaosDemoApplication {
databaseParam.put("keep", Integer.toString(config.keep));
databaseParam.put("days", Integer.toString(config.days));
databaseParam.put("replica", Integer.toString(config.replica));
- //TODO: other database parameters
+ // TODO: other database parameters
databaseService.createDatabase(databaseParam);
databaseService.useDatabase(config.database);
long end = System.currentTimeMillis();
@@ -70,11 +72,13 @@ public class TaosDemoApplication {
if (config.database != null && !config.database.isEmpty())
superTableMeta.setDatabase(config.database);
} else if (config.numOfFields == 0) {
- String sql = "create table " + config.database + "." + config.superTable + " (ts timestamp, temperature float, humidity int) tags(location nchar(64), groupId int)";
+ String sql = "create table " + config.database + "." + config.superTable
+ + " (ts timestamp, temperature float, humidity int) tags(location nchar(64), groupId int)";
superTableMeta = SuperTableMetaGenerator.generate(sql);
} else {
// create super table with specified field size and tag size
- superTableMeta = SuperTableMetaGenerator.generate(config.database, config.superTable, config.numOfFields, config.prefixOfFields, config.numOfTags, config.prefixOfTags);
+ superTableMeta = SuperTableMetaGenerator.generate(config.database, config.superTable, config.numOfFields,
+ config.prefixOfFields, config.numOfTags, config.prefixOfTags);
}
/**********************************************************************************/
// create tables
@@ -84,7 +88,8 @@ public class TaosDemoApplication {
superTableService.create(superTableMeta);
if (!config.autoCreateTable) {
// create sub-tables in batches
- subTableService.createSubTable(superTableMeta, config.numOfTables, config.prefixOfTable, config.numOfThreadsForCreate);
+ subTableService.createSubTable(superTableMeta, config.numOfTables, config.prefixOfTable,
+ config.numOfThreadsForCreate);
}
}
end = System.currentTimeMillis();
@@ -93,7 +98,7 @@ public class TaosDemoApplication {
// insert
long tableSize = config.numOfTables;
int threadSize = config.numOfThreadsForInsert;
- long startTime = getProperStartTime(config.startTime, config.keep);
+ long startTime = getProperStartTime(config.startTime, config.days);
if (tableSize < threadSize)
threadSize = (int) tableSize;
@@ -101,13 +106,13 @@ public class TaosDemoApplication {
start = System.currentTimeMillis();
// multi threads to insert
- int affectedRows = subTableService.insertMultiThreads(superTableMeta, threadSize, tableSize, startTime, gap, config);
+ int affectedRows = subTableService.insertMultiThreads(superTableMeta, threadSize, tableSize, startTime, gap,
+ config);
end = System.currentTimeMillis();
logger.info("insert " + affectedRows + " rows, time cost: " + (end - start) + " ms");
/**********************************************************************************/
// query
-
/**********************************************************************************/
// drop tables
if (config.dropTable) {
diff --git a/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/service/QueryService.java b/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/service/QueryService.java
index efabff6afe904516ad9682cd7197412dc02765ef..ab0a1125d2b879d7e889e4c76cdb021ec46292f7 100644
--- a/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/service/QueryService.java
+++ b/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/service/QueryService.java
@@ -1,7 +1,5 @@
package com.taosdata.taosdemo.service;
-import com.taosdata.jdbc.utils.SqlSyntaxValidator;
-
import javax.sql.DataSource;
import java.sql.*;
import java.util.ArrayList;
@@ -23,10 +21,6 @@ public class QueryService {
Boolean[] ret = new Boolean[sqls.length];
for (int i = 0; i < sqls.length; i++) {
ret[i] = true;
- if (!SqlSyntaxValidator.isValidForExecuteQuery(sqls[i])) {
- ret[i] = false;
- continue;
- }
try (Connection conn = dataSource.getConnection(); Statement stmt = conn.createStatement()) {
stmt.executeQuery(sqls[i]);
} catch (SQLException e) {
diff --git a/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/utils/SqlSpeller.java b/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/utils/SqlSpeller.java
index a60f0641d3a4441195c3a60639fbe3a197115dc3..7651d1e31814981499eb69d669b9176c73f33acd 100644
--- a/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/utils/SqlSpeller.java
+++ b/examples/JDBC/taosdemo/src/main/java/com/taosdata/taosdemo/utils/SqlSpeller.java
@@ -15,9 +15,12 @@ public class SqlSpeller {
StringBuilder sb = new StringBuilder();
sb.append("create database if not exists ").append(map.get("database")).append(" ");
if (map.containsKey("keep"))
- sb.append("keep ").append(map.get("keep")).append(" ");
- if (map.containsKey("days"))
- sb.append("days ").append(map.get("days")).append(" ");
+ sb.append("keep ");
+ if (map.containsKey("days")) {
+ sb.append(map.get("days")).append("d ");
+ } else {
+ sb.append(" ");
+ }
if (map.containsKey("replica"))
sb.append("replica ").append(map.get("replica")).append(" ");
if (map.containsKey("cache"))
@@ -29,7 +32,7 @@ public class SqlSpeller {
if (map.containsKey("maxrows"))
sb.append("maxrows ").append(map.get("maxrows")).append(" ");
if (map.containsKey("precision"))
- sb.append("precision ").append(map.get("precision")).append(" ");
+ sb.append("precision '").append(map.get("precision")).append("' ");
if (map.containsKey("comp"))
sb.append("comp ").append(map.get("comp")).append(" ");
if (map.containsKey("walLevel"))
@@ -46,11 +49,13 @@ public class SqlSpeller {
// create table if not exists xx.xx using xx.xx tags(x,x,x)
public static String createTableUsingSuperTable(SubTableMeta subTableMeta) {
StringBuilder sb = new StringBuilder();
- sb.append("create table if not exists ").append(subTableMeta.getDatabase()).append(".").append(subTableMeta.getName()).append(" ");
- sb.append("using ").append(subTableMeta.getDatabase()).append(".").append(subTableMeta.getSupertable()).append(" ");
-// String tagStr = subTableMeta.getTags().stream().filter(Objects::nonNull)
-// .map(tagValue -> tagValue.getName() + " '" + tagValue.getValue() + "' ")
-// .collect(Collectors.joining(",", "(", ")"));
+ sb.append("create table if not exists ").append(subTableMeta.getDatabase()).append(".")
+ .append(subTableMeta.getName()).append(" ");
+ sb.append("using ").append(subTableMeta.getDatabase()).append(".").append(subTableMeta.getSupertable())
+ .append(" ");
+ // String tagStr = subTableMeta.getTags().stream().filter(Objects::nonNull)
+ // .map(tagValue -> tagValue.getName() + " '" + tagValue.getValue() + "' ")
+ // .collect(Collectors.joining(",", "(", ")"));
sb.append("tags ").append(tagValues(subTableMeta.getTags()));
return sb.toString();
}
@@ -63,7 +68,7 @@ public class SqlSpeller {
return sb.toString();
}
- //f1, f2, f3
+ // f1, f2, f3
private static String fieldValues(List<FieldValue> fields) {
return IntStream.range(0, fields.size()).mapToObj(i -> {
if (i == 0) {
@@ -73,13 +78,13 @@ public class SqlSpeller {
}
}).collect(Collectors.joining(",", "(", ")"));
-// return fields.stream()
-// .filter(Objects::nonNull)
-// .map(fieldValue -> "'" + fieldValue.getValue() + "'")
-// .collect(Collectors.joining(",", "(", ")"));
+ // return fields.stream()
+ // .filter(Objects::nonNull)
+ // .map(fieldValue -> "'" + fieldValue.getValue() + "'")
+ // .collect(Collectors.joining(",", "(", ")"));
}
- //(f1, f2, f3),(f1, f2, f3)
+ // (f1, f2, f3),(f1, f2, f3)
private static String rowValues(List<RowValue> rowValues) {
return rowValues.stream().filter(Objects::nonNull)
.map(rowValue -> fieldValues(rowValue.getFields()))
@@ -89,8 +94,10 @@ public class SqlSpeller {
// insert into xx.xxx using xx.xx tags(x,x,x) values(x,x,x),(x,x,x)...
public static String insertOneTableMultiValuesUsingSuperTable(SubTableValue subTableValue) {
StringBuilder sb = new StringBuilder();
- sb.append("insert into ").append(subTableValue.getDatabase()).append(".").append(subTableValue.getName()).append(" ");
- sb.append("using ").append(subTableValue.getDatabase()).append(".").append(subTableValue.getSupertable()).append(" ");
+ sb.append("insert into ").append(subTableValue.getDatabase()).append(".").append(subTableValue.getName())
+ .append(" ");
+ sb.append("using ").append(subTableValue.getDatabase()).append(".").append(subTableValue.getSupertable())
+ .append(" ");
sb.append("tags ").append(tagValues(subTableValue.getTags()) + " ");
sb.append("values ").append(rowValues(subTableValue.getValues()));
return sb.toString();
@@ -126,7 +133,8 @@ public class SqlSpeller {
// create table if not exists xx.xx (f1 xx,f2 xx...) tags(t1 xx, t2 xx...)
public static String createSuperTable(SuperTableMeta tableMetadata) {
StringBuilder sb = new StringBuilder();
- sb.append("create table if not exists ").append(tableMetadata.getDatabase()).append(".").append(tableMetadata.getName());
+ sb.append("create table if not exists ").append(tableMetadata.getDatabase()).append(".")
+ .append(tableMetadata.getName());
String fields = tableMetadata.getFields().stream()
.filter(Objects::nonNull).map(field -> field.getName() + " " + field.getType() + " ")
.collect(Collectors.joining(",", "(", ")"));
@@ -139,10 +147,10 @@ public class SqlSpeller {
return sb.toString();
}
-
public static String createTable(TableMeta tableMeta) {
StringBuilder sb = new StringBuilder();
- sb.append("create table if not exists ").append(tableMeta.getDatabase()).append(".").append(tableMeta.getName()).append(" ");
+ sb.append("create table if not exists ").append(tableMeta.getDatabase()).append(".").append(tableMeta.getName())
+ .append(" ");
String fields = tableMeta.getFields().stream()
.filter(Objects::nonNull).map(field -> field.getName() + " " + field.getType() + " ")
.collect(Collectors.joining(",", "(", ")"));
@@ -179,16 +187,17 @@ public class SqlSpeller {
public static String insertMultiTableMultiValuesWithColumns(List<TableValue> tables) {
StringBuilder sb = new StringBuilder();
sb.append("insert into ").append(tables.stream().filter(Objects::nonNull)
- .map(table -> table.getDatabase() + "." + table.getName() + " " + columnNames(table.getColumns()) + " values " + rowValues(table.getValues()))
+ .map(table -> table.getDatabase() + "." + table.getName() + " " + columnNames(table.getColumns())
+ + " values " + rowValues(table.getValues()))
.collect(Collectors.joining(" ")));
return sb.toString();
}
public static String insertMultiTableMultiValues(List<TableValue> tables) {
StringBuilder sb = new StringBuilder();
- sb.append("insert into ").append(tables.stream().filter(Objects::nonNull).map(table ->
- table.getDatabase() + "." + table.getName() + " values " + rowValues(table.getValues())
- ).collect(Collectors.joining(" ")));
+ sb.append("insert into ").append(tables.stream().filter(Objects::nonNull)
+ .map(table -> table.getDatabase() + "." + table.getName() + " values " + rowValues(table.getValues()))
+ .collect(Collectors.joining(" ")));
return sb.toString();
}
}
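The `createDatabase` changes earlier in this file's diff follow the 3.0-style syntax: the retention period is written as a duration with a `d` suffix after `keep`, and the `precision` value is quoted. Below is a minimal standalone sketch (not the project's `SqlSpeller`, and with made-up parameter values) of the statement shape the patched builder emits.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CreateDatabaseSketch {
    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("database", "test");
        params.put("days", "30");      // becomes the duration appended after "keep"
        params.put("replica", "1");
        params.put("precision", "ms");

        StringBuilder sb = new StringBuilder("create database if not exists ")
                .append(params.get("database")).append(" ");
        if (params.containsKey("days")) {
            sb.append("keep ").append(params.get("days")).append("d ");
        }
        if (params.containsKey("replica")) {
            sb.append("replica ").append(params.get("replica")).append(" ");
        }
        if (params.containsKey("precision")) {
            sb.append("precision '").append(params.get("precision")).append("' ");
        }
        // prints: create database if not exists test keep 30d replica 1 precision 'ms'
        System.out.println(sb.toString().trim());
    }
}
```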
diff --git a/examples/JDBC/taosdemo/src/main/resources/application.properties b/examples/JDBC/taosdemo/src/main/resources/application.properties
index 488185196f1d2325fd9896d30068cbb202180a3f..4f550f6523587c060bbb2ed889024e1653fb0cb6 100644
--- a/examples/JDBC/taosdemo/src/main/resources/application.properties
+++ b/examples/JDBC/taosdemo/src/main/resources/application.properties
@@ -1,5 +1,5 @@
-jdbc.driver=com.taosdata.jdbc.rs.RestfulDriver
-#jdbc.driver=com.taosdata.jdbc.TSDBDriver
+# jdbc.driver=com.taosdata.jdbc.rs.RestfulDriver
+jdbc.driver=com.taosdata.jdbc.TSDBDriver
hikari.maximum-pool-size=20
hikari.minimum-idle=20
hikari.max-lifetime=0
\ No newline at end of file
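The `hikari.*` settings in this properties file map directly onto HikariCP's programmatic configuration. Below is a small sketch under the assumption that HikariCP and taos-jdbcdriver are on the classpath and a TDengine server is reachable; the JDBC URL is not part of this hunk, so the native `jdbc:TAOS://127.0.0.1:6030/test` form is assumed.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class TaosHikariSketch {
    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        config.setDriverClassName("com.taosdata.jdbc.TSDBDriver");
        // Assumed URL; this hunk only switches the driver class.
        config.setJdbcUrl("jdbc:TAOS://127.0.0.1:6030/test");
        config.setUsername("root");
        config.setPassword("taosdata");
        config.setMaximumPoolSize(20); // hikari.maximum-pool-size=20
        config.setMinimumIdle(20);     // hikari.minimum-idle=20
        config.setMaxLifetime(0);      // hikari.max-lifetime=0 (no maximum lifetime)

        try (HikariDataSource dataSource = new HikariDataSource(config)) {
            // The pooled DataSource would be handed to the DAO layer from here.
            System.out.println("pool started: " + dataSource.getPoolName());
        }
    }
}
```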
diff --git a/examples/JDBC/taosdemo/src/test/java/com/taosdata/taosdemo/service/TableServiceTest.java b/examples/JDBC/taosdemo/src/test/java/com/taosdata/taosdemo/service/TableServiceTest.java
deleted file mode 100644
index 1f52198d68823326dd81d8c419fc02d89e15ef2d..0000000000000000000000000000000000000000
--- a/examples/JDBC/taosdemo/src/test/java/com/taosdata/taosdemo/service/TableServiceTest.java
+++ /dev/null
@@ -1,31 +0,0 @@
-package com.taosdata.taosdemo.service;
-
-import com.taosdata.taosdemo.domain.TableMeta;
-import org.junit.Before;
-import org.junit.Test;
-
-import java.util.ArrayList;
-import java.util.List;
-
-public class TableServiceTest {
- private TableService tableService;
-
- private List tables;
-
- @Before
- public void before() {
- tables = new ArrayList<>();
- for (int i = 0; i < 1; i++) {
- TableMeta tableMeta = new TableMeta();
- tableMeta.setDatabase("test");
- tableMeta.setName("weather" + (i + 1));
- tables.add(tableMeta);
- }
- }
-
- @Test
- public void testCreate() {
- tableService.create(tables);
- }
-
-}
\ No newline at end of file
diff --git a/include/libs/function/function.h b/include/libs/function/function.h
index e708a2c42d237e1f911ef8db994e94965f9877dd..d5da306fd297dd49f4753aa01c6423cb9dd82e9c 100644
--- a/include/libs/function/function.h
+++ b/include/libs/function/function.h
@@ -142,6 +142,7 @@ typedef struct SqlFunctionCtx {
struct SSDataBlock *pDstBlock; // used by indifinite rows function to set selectivity
int32_t curBufPage;
bool increase;
+ bool isStream;
char udfName[TSDB_FUNC_NAME_LEN];
} SqlFunctionCtx;
diff --git a/source/client/src/clientImpl.c b/source/client/src/clientImpl.c
index 9c086fc83e155b40505c42c8096e57b7e03a9bca..5f0af55d13c3e3c79f796f5f34f31dff121f1281 100644
--- a/source/client/src/clientImpl.c
+++ b/source/client/src/clientImpl.c
@@ -238,6 +238,9 @@ int32_t parseSql(SRequestObj* pRequest, bool topicQuery, SQuery** pQuery, SStmtC
TSWAP(pRequest->targetTableList, (*pQuery)->pTargetTableList);
}
+ taosArrayDestroy(cxt.pTableMetaPos);
+ taosArrayDestroy(cxt.pTableVgroupPos);
+
return code;
}
diff --git a/source/client/src/clientMain.c b/source/client/src/clientMain.c
index 0e95cd4d999f30343a66996d07409b01bdde097a..f449641f1008e79a58e02786a855711dbaeb6b9c 100644
--- a/source/client/src/clientMain.c
+++ b/source/client/src/clientMain.c
@@ -674,6 +674,8 @@ static void destorySqlParseWrapper(SqlParseWrapper *pWrapper) {
taosArrayDestroy(pWrapper->catalogReq.pIndex);
taosArrayDestroy(pWrapper->catalogReq.pUser);
taosArrayDestroy(pWrapper->catalogReq.pTableIndex);
+ taosArrayDestroy(pWrapper->pCtx->pTableMetaPos);
+ taosArrayDestroy(pWrapper->pCtx->pTableVgroupPos);
taosMemoryFree(pWrapper->pCtx);
taosMemoryFree(pWrapper);
}
diff --git a/source/dnode/vnode/src/tsdb/tsdbRead.c b/source/dnode/vnode/src/tsdb/tsdbRead.c
index 336053911eefaced6aba25145b63c16f1d915d33..2e66cac21e8a3030e3589fd4c3e23e0980a15358 100644
--- a/source/dnode/vnode/src/tsdb/tsdbRead.c
+++ b/source/dnode/vnode/src/tsdb/tsdbRead.c
@@ -69,8 +69,10 @@ typedef struct SIOCostSummary {
double buildmemBlock;
int64_t headFileLoad;
double headFileLoadTime;
- int64_t smaData;
+ int64_t smaDataLoad;
double smaLoadTime;
+ int64_t lastBlockLoad;
+ double lastBlockLoadTime;
} SIOCostSummary;
typedef struct SBlockLoadSuppInfo {
@@ -98,10 +100,10 @@ typedef struct SLastBlockReader {
} SLastBlockReader;
typedef struct SFilesetIter {
- int32_t numOfFiles; // number of total files
- int32_t index; // current accessed index in the list
- SArray* pFileList; // data file list
- int32_t order;
+ int32_t numOfFiles; // number of total files
+ int32_t index; // current accessed index in the list
+ SArray* pFileList; // data file list
+ int32_t order;
SLastBlockReader* pLastBlockReader; // last file block reader
} SFilesetIter;
@@ -728,7 +730,7 @@ static int32_t doLoadFileBlock(STsdbReader* pReader, SArray* pIndexList, SArray*
double el = (taosGetTimestampUs() - st) / 1000.0;
tsdbDebug("load block of %d tables completed, blocks:%d in %d tables, lastBlock:%d, size:%.2f Kb, elapsed time:%.2f ms %s",
- numOfTables, total, numOfQTable, pBlockNum->numOfLastBlocks, sizeInDisk
+ numOfTables, pBlockNum->numOfBlocks, numOfQTable, pBlockNum->numOfLastBlocks, sizeInDisk
/ 1000.0, el, pReader->idStr);
pReader->cost.numOfBlocks += total;
@@ -857,7 +859,7 @@ static int32_t copyBlockDataToSDataBlock(STsdbReader* pReader, STableBlockScanIn
static int32_t doLoadFileBlockData(STsdbReader* pReader, SDataBlockIter* pBlockIter, SBlockData* pBlockData) {
int64_t st = taosGetTimestampUs();
- double elapsedTime = 0;
+ double elapsedTime = 0;
int32_t code = 0;
SFileDataBlockInfo* pBlockInfo = getCurrentBlockInfo(pBlockIter);
@@ -1303,9 +1305,23 @@ static bool fileBlockShouldLoad(STsdbReader* pReader, SFileDataBlockInfo* pFBloc
overlapWithlastBlock = !(pBlock->maxKey.ts < pBlockL->minKey || pBlock->minKey.ts > pBlockL->maxKey);
}
- return (overlapWithNeighbor || hasDup || dataBlockPartiallyRequired(&pReader->window, &pReader->verRange, pBlock) ||
- keyOverlapFileBlock(key, pBlock, &pReader->verRange) || (pBlock->nRow > pReader->capacity) ||
- overlapWithDel || overlapWithlastBlock);
+ bool moreThanOutputCapacity = pBlock->nRow > pReader->capacity;
+ bool partiallyRequired = dataBlockPartiallyRequired(&pReader->window, &pReader->verRange, pBlock);
+ bool overlapWithKey = keyOverlapFileBlock(key, pBlock, &pReader->verRange);
+
+ bool loadDataBlock = (overlapWithNeighbor || hasDup || partiallyRequired || overlapWithKey ||
+ moreThanOutputCapacity || overlapWithDel || overlapWithlastBlock);
+
+ // log the reason why the data block is loaded, for profiling
+ if (loadDataBlock) {
+ tsdbDebug("%p uid:%" PRIu64
+ " need to load the datablock, overlapwithneighborblock:%d, hasDup:%d, partiallyRequired:%d, "
+ "overlapWithKey:%d, greaterThanBuf:%d, overlapWithDel:%d, overlapWithlastBlock:%d, %s",
+ pReader, pFBlock->uid, overlapWithNeighbor, hasDup, partiallyRequired, overlapWithKey,
+ moreThanOutputCapacity, overlapWithDel, overlapWithlastBlock, pReader->idStr);
+ }
+
+ return loadDataBlock;
}
static int32_t buildDataBlockFromBuf(STsdbReader* pReader, STableBlockScanInfo* pBlockScanInfo, int64_t endKey) {
@@ -1991,8 +2007,8 @@ static int32_t buildComposedDataBlockImpl(STsdbReader* pReader, STableBlockScanI
if (pBlockData->nRow > 0) {
TSDBROW fRow = tsdbRowFromBlockData(pBlockData, pDumpInfo->rowIndex);
- // no last block
- if (pLastBlockReader->lastBlockData.nRow == 0) {
+ // no last block available, only data block exists
+ if (pLastBlockReader->lastBlockData.nRow == 0 || (!hasDataInLastBlock(pLastBlockReader))) {
if (tryCopyDistinctRowFromFileBlock(pReader, pBlockData, key, pDumpInfo)) {
return TSDB_CODE_SUCCESS;
} else {
@@ -2012,54 +2028,63 @@ static int32_t buildComposedDataBlockImpl(STsdbReader* pReader, STableBlockScanI
// row in last file block
int64_t ts = getCurrentKeyInLastBlock(pLastBlockReader);
- if (ts < key) { // save rows in last block
- SBlockData* pLastBlockData = &pLastBlockReader->lastBlockData;
-
- STSRow* pTSRow = NULL;
- SRowMerger merge = {0};
-
- TSDBROW fRow1 = tsdbRowFromBlockData(pLastBlockData, *pLastBlockReader->rowIndex);
+ ASSERT(ts >= key);
- tRowMergerInit(&merge, &fRow1, pReader->pSchema);
- doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, ts, &merge);
- tRowMergerGetRow(&merge, &pTSRow);
-
- doAppendRowFromTSRow(pReader->pResBlock, pReader, pTSRow, pBlockScanInfo->uid);
-
- taosMemoryFree(pTSRow);
- tRowMergerClear(&merge);
- return TSDB_CODE_SUCCESS;
- } else if (ts == key) {
- STSRow* pTSRow = NULL;
- SRowMerger merge = {0};
-
- tRowMergerInit(&merge, &fRow, pReader->pSchema);
- doMergeRowsInFileBlocks(pBlockData, pBlockScanInfo, pReader, &merge);
- doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, ts, &merge);
+ if (ASCENDING_TRAVERSE(pReader->order)) {
+ if (key < ts) {
+ // imem & mem are all empty, only file exist
+ if (tryCopyDistinctRowFromFileBlock(pReader, pBlockData, key, pDumpInfo)) {
+ return TSDB_CODE_SUCCESS;
+ } else {
+ STSRow* pTSRow = NULL;
+ SRowMerger merge = {0};
- tRowMergerGetRow(&merge, &pTSRow);
- doAppendRowFromTSRow(pReader->pResBlock, pReader, pTSRow, pBlockScanInfo->uid);
+ tRowMergerInit(&merge, &fRow, pReader->pSchema);
+ doMergeRowsInFileBlocks(pBlockData, pBlockScanInfo, pReader, &merge);
+ tRowMergerGetRow(&merge, &pTSRow);
+ doAppendRowFromTSRow(pReader->pResBlock, pReader, pTSRow, pBlockScanInfo->uid);
- taosMemoryFree(pTSRow);
- tRowMergerClear(&merge);
- return TSDB_CODE_SUCCESS;
- } else { // ts > key, asc; todo handle desc
- // imem & mem are all empty, only file exist
- if (tryCopyDistinctRowFromFileBlock(pReader, pBlockData, key, pDumpInfo)) {
- return TSDB_CODE_SUCCESS;
- } else {
+ taosMemoryFree(pTSRow);
+ tRowMergerClear(&merge);
+ return TSDB_CODE_SUCCESS;
+ }
+ } else if (key == ts) {
STSRow* pTSRow = NULL;
SRowMerger merge = {0};
tRowMergerInit(&merge, &fRow, pReader->pSchema);
doMergeRowsInFileBlocks(pBlockData, pBlockScanInfo, pReader, &merge);
+ doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, ts, &merge);
+
tRowMergerGetRow(&merge, &pTSRow);
doAppendRowFromTSRow(pReader->pResBlock, pReader, pTSRow, pBlockScanInfo->uid);
taosMemoryFree(pTSRow);
tRowMergerClear(&merge);
return TSDB_CODE_SUCCESS;
+ } else {
+ ASSERT(0);
+ return TSDB_CODE_SUCCESS;
+ }
+ } else { // desc order
+ SBlockData* pLastBlockData = &pLastBlockReader->lastBlockData;
+ TSDBROW fRow1 = tsdbRowFromBlockData(pLastBlockData, *pLastBlockReader->rowIndex);
+
+ STSRow* pTSRow = NULL;
+ SRowMerger merge = {0};
+ tRowMergerInit(&merge, &fRow1, pReader->pSchema);
+ doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, ts, &merge);
+
+ if (ts == key) {
+ doMergeRowsInFileBlocks(pBlockData, pBlockScanInfo, pReader, &merge);
}
+
+ tRowMergerGetRow(&merge, &pTSRow);
+ doAppendRowFromTSRow(pReader->pResBlock, pReader, pTSRow, pBlockScanInfo->uid);
+
+ taosMemoryFree(pTSRow);
+ tRowMergerClear(&merge);
+ return TSDB_CODE_SUCCESS;
}
} else { // only last block exists
SBlockData* pLastBlockData = &pLastBlockReader->lastBlockData;
@@ -2383,7 +2408,6 @@ static int32_t moveToNextFile(STsdbReader* pReader, SBlockNumber* pBlockNum) {
return TSDB_CODE_SUCCESS;
}
-// todo add elapsed time results
static int32_t doLoadRelatedLastBlock(SLastBlockReader* pLastBlockReader, STableBlockScanInfo *pBlockScanInfo, STsdbReader* pReader) {
SArray* pBlocks = pLastBlockReader->pBlockL;
SBlockL* pBlock = NULL;
@@ -2415,6 +2439,7 @@ static int32_t doLoadRelatedLastBlock(SLastBlockReader* pLastBlockReader, STable
return TSDB_CODE_SUCCESS;
}
+ int64_t st = taosGetTimestampUs();
int32_t code = tBlockDataInit(&pLastBlockReader->lastBlockData, pReader->suid, pReader->suid ? 0 : uid, pReader->pSchema);
if (code != TSDB_CODE_SUCCESS) {
tsdbError("%p init block data failed, code:%s %s", pReader, tstrerror(code), pReader->idStr);
@@ -2422,17 +2447,23 @@ static int32_t doLoadRelatedLastBlock(SLastBlockReader* pLastBlockReader, STable
}
code = tsdbReadLastBlock(pReader->pFileReader, pBlock, &pLastBlockReader->lastBlockData);
+
+ double el = (taosGetTimestampUs() - st) / 1000.0;
if (code != TSDB_CODE_SUCCESS) {
tsdbError("%p error occurs in loading last block into buffer, last block index:%d, total:%d code:%s %s", pReader,
pLastBlockReader->currentBlockIndex, totalLastBlocks, tstrerror(code), pReader->idStr);
} else {
tsdbDebug("%p load last block completed, uid:%" PRIu64
- " last block index:%d, total:%d rows:%d, minVer:%d, maxVer:%d, brange:%" PRId64 " - %" PRId64 " %s",
- pReader, uid, pLastBlockReader->currentBlockIndex, totalLastBlocks, pBlock->nRow, pBlock->minVer,
- pBlock->maxVer, pBlock->minKey, pBlock->maxKey, pReader->idStr);
+ " last block index:%d, total:%d rows:%d, minVer:%d, maxVer:%d, brange:%" PRId64 "-%" PRId64
+ " elapsed time:%.2f ms, %s",
+ pReader, uid, index, totalLastBlocks, pBlock->nRow, pBlock->minVer, pBlock->maxVer, pBlock->minKey,
+ pBlock->maxKey, el, pReader->idStr);
}
pLastBlockReader->currentBlockIndex = index;
+ pReader->cost.lastBlockLoad += 1;
+ pReader->cost.lastBlockLoadTime += el;
+
return TSDB_CODE_SUCCESS;
}
@@ -2495,13 +2526,12 @@ static int32_t doLoadLastBlockSequentially(STsdbReader* pReader) {
}
static int32_t doBuildDataBlock(STsdbReader* pReader) {
- int32_t code = TSDB_CODE_SUCCESS;
-
- SReaderStatus* pStatus = &pReader->status;
- SDataBlockIter* pBlockIter = &pStatus->blockIter;
-
TSDBKEY key = {0};
+ int32_t code = TSDB_CODE_SUCCESS;
SBlock* pBlock = NULL;
+
+ SReaderStatus* pStatus = &pReader->status;
+ SDataBlockIter* pBlockIter = &pStatus->blockIter;
STableBlockScanInfo* pScanInfo = NULL;
SFileDataBlockInfo* pBlockInfo = getCurrentBlockInfo(pBlockIter);
SLastBlockReader* pLastBlockReader = pReader->status.fileIter.pLastBlockReader;
@@ -2554,13 +2584,22 @@ static int32_t doBuildDataBlock(STsdbReader* pReader) {
// todo rows in buffer should be less than the file block in asc, greater than file block in desc
int64_t endKey = (ASCENDING_TRAVERSE(pReader->order)) ? pBlock->minKey.ts : pBlock->maxKey.ts;
code = buildDataBlockFromBuf(pReader, pScanInfo, endKey);
- } else { // whole block is required, return it directly
- SDataBlockInfo* pInfo = &pReader->pResBlock->info;
- pInfo->rows = pBlock->nRow;
- pInfo->uid = pScanInfo->uid;
- pInfo->window = (STimeWindow){.skey = pBlock->minKey.ts, .ekey = pBlock->maxKey.ts};
- setComposedBlockFlag(pReader, false);
- setBlockAllDumped(&pStatus->fBlockDumpInfo, pBlock->maxKey.ts, pReader->order);
+ } else {
+ if (hasDataInLastBlock(pLastBlockReader) && !ASCENDING_TRAVERSE(pReader->order)) {
+ // only return the rows in last block
+ int64_t tsLast = getCurrentKeyInLastBlock(pLastBlockReader);
+ ASSERT (tsLast >= pBlock->maxKey.ts);
+ tBlockDataReset(&pReader->status.fileBlockData);
+
+ code = buildComposedDataBlock(pReader);
+ } else { // whole block is required, return it directly
+ SDataBlockInfo* pInfo = &pReader->pResBlock->info;
+ pInfo->rows = pBlock->nRow;
+ pInfo->uid = pScanInfo->uid;
+ pInfo->window = (STimeWindow){.skey = pBlock->minKey.ts, .ekey = pBlock->maxKey.ts};
+ setComposedBlockFlag(pReader, false);
+ setBlockAllDumped(&pStatus->fBlockDumpInfo, pBlock->maxKey.ts, pReader->order);
+ }
}
return code;
@@ -2628,7 +2667,7 @@ static int32_t initForFirstBlockInFile(STsdbReader* pReader, SDataBlockIter* pBl
// initialize the block iterator for a new fileset
if (num.numOfBlocks > 0) {
code = initBlockIterator(pReader, pBlockIter, num.numOfBlocks);
- } else {
+ } else { // no block data, only last block exists
tBlockDataReset(&pReader->status.fileBlockData);
resetDataBlockIterator(pBlockIter, pReader->order, pReader->status.pTableMap);
}
@@ -2701,7 +2740,6 @@ static int32_t buildBlockFromFiles(STsdbReader* pReader) {
if (hasNext) { // check for the next block in the block accessed order list
initBlockDumpInfo(pReader, pBlockIter);
} else if (taosArrayGetSize(pReader->status.fileIter.pLastBlockReader->pBlockL) > 0) { // data blocks in current file are exhausted, let's try the next file now
- // todo dump all data in last block if exists.
tBlockDataReset(&pReader->status.fileBlockData);
resetDataBlockIterator(pBlockIter, pReader->order, pReader->status.pTableMap);
goto _begin;
@@ -3498,10 +3536,11 @@ void tsdbReaderClose(STsdbReader* pReader) {
tsdbDebug("%p :io-cost summary: head-file:%" PRIu64 ", head-file time:%.2f ms, SMA:%" PRId64
" SMA-time:%.2f ms, fileBlocks:%" PRId64
", fileBlocks-time:%.2f ms, "
- "build in-memory-block-time:%.2f ms, STableBlockScanInfo size:%.2f Kb %s",
- pReader, pCost->headFileLoad, pCost->headFileLoadTime, pCost->smaData, pCost->smaLoadTime,
- pCost->numOfBlocks, pCost->blockLoadTime, pCost->buildmemBlock,
- numOfTables * sizeof(STableBlockScanInfo) / 1000.0, pReader->idStr);
+ "build in-memory-block-time:%.2f ms, lastBlocks:%" PRId64
+ ", lastBlocks-time:%.2f ms, STableBlockScanInfo size:%.2f Kb %s",
+ pReader, pCost->headFileLoad, pCost->headFileLoadTime, pCost->smaDataLoad, pCost->smaLoadTime,
+ pCost->numOfBlocks, pCost->blockLoadTime, pCost->buildmemBlock, pCost->lastBlockLoad,
+ pCost->lastBlockLoadTime, numOfTables * sizeof(STableBlockScanInfo) / 1000.0, pReader->idStr);
taosMemoryFree(pReader->idStr);
taosMemoryFree(pReader->pSchema);
@@ -3663,7 +3702,7 @@ int32_t tsdbRetrieveDatablockSMA(STsdbReader* pReader, SColumnDataAgg*** pBlockS
double elapsed = (taosGetTimestampUs() - stime) / 1000.0;
pReader->cost.smaLoadTime += elapsed;
- pReader->cost.smaData += 1;
+ pReader->cost.smaDataLoad += 1;
*pBlockStatis = pSup->plist;
diff --git a/source/libs/catalog/src/catalog.c b/source/libs/catalog/src/catalog.c
index 933e65e582274711ad194d6a74ca5cbec682ef49..b6e958e1929cc71dfa43ad018728e1f1844cb472 100644
--- a/source/libs/catalog/src/catalog.c
+++ b/source/libs/catalog/src/catalog.c
@@ -893,7 +893,7 @@ int32_t catalogChkTbMetaVersion(SCatalog* pCtg, SRequestConnInfo *pConn, SArray*
CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
}
- SName name;
+ SName name = {0};
int32_t sver = 0;
int32_t tver = 0;
int32_t tbNum = taosArrayGetSize(pTables);
diff --git a/source/libs/executor/inc/executorimpl.h b/source/libs/executor/inc/executorimpl.h
index 5e339eb1137c28bedbaaf8172521adfd05f8abe7..fb4eac991f4d64e2c5477e3b102395ad6c83550b 100644
--- a/source/libs/executor/inc/executorimpl.h
+++ b/source/libs/executor/inc/executorimpl.h
@@ -860,8 +860,8 @@ int32_t handleLimitOffset(SOperatorInfo *pOperator, SLimitInfo* pLimitInfo, SSDa
bool hasLimitOffsetInfo(SLimitInfo* pLimitInfo);
void initLimitInfo(const SNode* pLimit, const SNode* pSLimit, SLimitInfo* pLimitInfo);
-void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, STimeWindow* pWin, SColumnInfoData* pTimeWindowData, int32_t offset,
- int32_t forwardStep, TSKEY* tsCol, int32_t numOfTotal, int32_t numOfOutput, int32_t order);
+void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, SColumnInfoData* pTimeWindowData, int32_t offset,
+ int32_t forwardStep, int32_t numOfTotal, int32_t numOfOutput);
int32_t extractDataBlockFromFetchRsp(SSDataBlock* pRes, char* pData, int32_t numOfOutput, SArray* pColList, char** pNextStart);
void updateLoadRemoteInfo(SLoadRemoteDataInfo *pInfo, int32_t numOfRows, int32_t dataLen, int64_t startTs,
diff --git a/source/libs/executor/src/executil.c b/source/libs/executor/src/executil.c
index bf969bf2e4855a12a9abfbe5e67301c5df5f3702..f3b395cc7c1811ccb2383c19386bcec91bcc689d 100644
--- a/source/libs/executor/src/executil.c
+++ b/source/libs/executor/src/executil.c
@@ -987,6 +987,7 @@ SqlFunctionCtx* createSqlFunctionCtx(SExprInfo* pExprInfo, int32_t numOfOutput,
pCtx->end.key = INT64_MIN;
pCtx->numOfParams = pExpr->base.numOfParams;
pCtx->increase = false;
+ pCtx->isStream = false;
pCtx->param = pFunct->pParam;
}
diff --git a/source/libs/executor/src/executorimpl.c b/source/libs/executor/src/executorimpl.c
index f7fb6cd4059a4a3e78fa5fd3751bfd423d1e2e1f..16a1cb898fae0874141aaf8913ee090647e0756d 100644
--- a/source/libs/executor/src/executorimpl.c
+++ b/source/libs/executor/src/executorimpl.c
@@ -378,15 +378,30 @@ void initExecTimeWindowInfo(SColumnInfoData* pColData, STimeWindow* pQueryWindow
void cleanupExecTimeWindowInfo(SColumnInfoData* pColData) { colDataDestroy(pColData); }
-void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, STimeWindow* pWin,
- SColumnInfoData* pTimeWindowData, int32_t offset, int32_t forwardStep, TSKEY* tsCol,
- int32_t numOfTotal, int32_t numOfOutput, int32_t order) {
+typedef struct {
+ bool hasAgg;
+ int32_t numOfRows;
+ int32_t startOffset;
+} SFunctionCtxStatus;
+
+static void functionCtxSave(SqlFunctionCtx* pCtx, SFunctionCtxStatus* pStatus) {
+ pStatus->hasAgg = pCtx->input.colDataAggIsSet;
+ pStatus->numOfRows = pCtx->input.numOfRows;
+ pStatus->startOffset = pCtx->input.startRowIndex;
+}
+
+static void functionCtxRestore(SqlFunctionCtx* pCtx, SFunctionCtxStatus* pStatus) {
+ pCtx->input.colDataAggIsSet = pStatus->hasAgg;
+ pCtx->input.numOfRows = pStatus->numOfRows;
+ pCtx->input.startRowIndex = pStatus->startOffset;
+}
+
+void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, SColumnInfoData* pTimeWindowData, int32_t offset,
+ int32_t forwardStep, int32_t numOfTotal, int32_t numOfOutput) {
for (int32_t k = 0; k < numOfOutput; ++k) {
// keep it temporarily
- // todo no need this??
- bool hasAgg = pCtx[k].input.colDataAggIsSet;
- int32_t numOfRows = pCtx[k].input.numOfRows;
- int32_t startOffset = pCtx[k].input.startRowIndex;
+ SFunctionCtxStatus status = {0};
+ functionCtxSave(&pCtx[k], &status);
pCtx[k].input.startRowIndex = offset;
pCtx[k].input.numOfRows = forwardStep;
@@ -424,9 +439,7 @@ void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, STimeWindow
}
// restore it
- pCtx[k].input.colDataAggIsSet = hasAgg;
- pCtx[k].input.startRowIndex = startOffset;
- pCtx[k].input.numOfRows = numOfRows;
+ functionCtxRestore(&pCtx[k], &status);
}
}
}
diff --git a/source/libs/executor/src/groupoperator.c b/source/libs/executor/src/groupoperator.c
index 507719e0aac3d8a8224828433cc5c66445dea0c1..05dffc658b29bb5eb6675edae62d04bb6442cc48 100644
--- a/source/libs/executor/src/groupoperator.c
+++ b/source/libs/executor/src/groupoperator.c
@@ -277,7 +277,7 @@ static void doHashGroupbyAgg(SOperatorInfo* pOperator, SSDataBlock* pBlock) {
}
int32_t rowIndex = j - num;
- doApplyFunctions(pTaskInfo, pCtx, &w, NULL, rowIndex, num, NULL, pBlock->info.rows, pOperator->exprSupp.numOfExprs, TSDB_ORDER_ASC);
+ doApplyFunctions(pTaskInfo, pCtx, NULL, rowIndex, num, pBlock->info.rows, pOperator->exprSupp.numOfExprs);
// assign the group keys or user input constant values if required
doAssignGroupKeys(pCtx, pOperator->exprSupp.numOfExprs, pBlock->info.rows, rowIndex);
@@ -295,7 +295,7 @@ static void doHashGroupbyAgg(SOperatorInfo* pOperator, SSDataBlock* pBlock) {
}
int32_t rowIndex = pBlock->info.rows - num;
- doApplyFunctions(pTaskInfo, pCtx, &w, NULL, rowIndex, num, NULL, pBlock->info.rows, pOperator->exprSupp.numOfExprs, TSDB_ORDER_ASC);
+ doApplyFunctions(pTaskInfo, pCtx, NULL, rowIndex, num, pBlock->info.rows, pOperator->exprSupp.numOfExprs);
doAssignGroupKeys(pCtx, pOperator->exprSupp.numOfExprs, pBlock->info.rows, rowIndex);
}
}
diff --git a/source/libs/executor/src/timewindowoperator.c b/source/libs/executor/src/timewindowoperator.c
index 9b9a38c7eaaceffb174f0530bee1d02809933120..0594a727fcaaf8f41a55703e64d7247a2dca6d15 100644
--- a/source/libs/executor/src/timewindowoperator.c
+++ b/source/libs/executor/src/timewindowoperator.c
@@ -641,8 +641,7 @@ static void doInterpUnclosedTimeWindow(SOperatorInfo* pOperatorInfo, int32_t num
setResultRowInterpo(pResult, RESULT_ROW_END_INTERP);
setNotInterpoWindowKey(pSup->pCtx, numOfExprs, RESULT_ROW_START_INTERP);
- doApplyFunctions(pTaskInfo, pSup->pCtx, &w, &pInfo->twAggSup.timeWindowData, startPos, 0, tsCols, pBlock->info.rows,
- numOfExprs, pInfo->inputOrder);
+ doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, 0, pBlock->info.rows, numOfExprs);
if (isResultRowInterpolated(pResult, RESULT_ROW_END_INTERP)) {
closeResultRow(pr);
@@ -986,8 +985,8 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
if ((!pInfo->ignoreExpiredData || !isCloseWindow(&win, &pInfo->twAggSup)) &&
inSlidingWindow(&pInfo->interval, &win, &pBlock->info)) {
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &win, true);
- doApplyFunctions(pTaskInfo, pSup->pCtx, &win, &pInfo->twAggSup.timeWindowData, startPos, forwardRows, tsCols,
- pBlock->info.rows, numOfOutput, pInfo->inputOrder);
+ doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, forwardRows,
+ pBlock->info.rows, numOfOutput);
}
doCloseWindow(pResultRowInfo, pInfo, pResult);
@@ -1026,8 +1025,8 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
doWindowBorderInterpolation(pInfo, pBlock, pResult, &nextWin, startPos, forwardRows, pSup);
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &nextWin, true);
- doApplyFunctions(pTaskInfo, pSup->pCtx, &nextWin, &pInfo->twAggSup.timeWindowData, startPos, forwardRows, tsCols,
- pBlock->info.rows, numOfOutput, pInfo->inputOrder);
+ doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, forwardRows,
+ pBlock->info.rows, numOfOutput);
doCloseWindow(pResultRowInfo, pInfo, pResult);
}
@@ -1190,8 +1189,8 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI
}
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &window, false);
- doApplyFunctions(pTaskInfo, pSup->pCtx, &window, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
- pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
+ doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
+ pRowSup->numOfRows, pBlock->info.rows, numOfOutput);
// here we start a new session window
doKeepNewWindowStartInfo(pRowSup, tsList, j, gid);
@@ -1215,8 +1214,8 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI
}
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pRowSup->win, false);
- doApplyFunctions(pTaskInfo, pSup->pCtx, &pRowSup->win, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
- pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
+ doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
+ pRowSup->numOfRows, pBlock->info.rows, numOfOutput);
}
static SSDataBlock* doStateWindowAgg(SOperatorInfo* pOperator) {
@@ -1794,6 +1793,12 @@ void initIntervalDownStream(SOperatorInfo* downstream, uint16_t type, SAggSuppor
pScanInfo->sessionSup.pIntervalAggSup = pSup;
}
+void initStreamFunciton(SqlFunctionCtx* pCtx, int32_t numOfExpr) {
+ for (int32_t i = 0; i < numOfExpr; i++) {
+ pCtx[i].isStream = true;
+ }
+}
+
SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* pExprInfo, int32_t numOfCols,
SSDataBlock* pResBlock, SInterval* pInterval, int32_t primaryTsSlotId,
STimeWindowAggSupp* pTwAggSupp, SIntervalPhysiNode* pPhyNode,
@@ -1836,6 +1841,7 @@ SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo*
if (isStream) {
ASSERT(numOfCols > 0);
increaseTs(pSup->pCtx);
+ initStreamFunciton(pSup->pCtx, pSup->numOfExprs);
}
initExecTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pInfo->win);
@@ -1934,8 +1940,8 @@ static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperator
// pInfo->numOfRows data belong to the current session window
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &window, false);
- doApplyFunctions(pTaskInfo, pSup->pCtx, &window, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
- pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
+ doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
+ pRowSup->numOfRows, pBlock->info.rows, numOfOutput);
// here we start a new session window
doKeepNewWindowStartInfo(pRowSup, tsList, j, gid);
@@ -1952,8 +1958,8 @@ static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperator
}
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pRowSup->win, false);
- doApplyFunctions(pTaskInfo, pSup->pCtx, &pRowSup->win, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
- pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
+ doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
+ pRowSup->numOfRows, pBlock->info.rows, numOfOutput);
}
static SSDataBlock* doSessionWindowAgg(SOperatorInfo* pOperator) {
@@ -2952,8 +2958,8 @@ static void doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBloc
setResultBufPageDirty(pInfo->aggSup.pResultBuf, &pResultRowInfo->cur);
}
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &nextWin, true);
- doApplyFunctions(pTaskInfo, pSup->pCtx, &nextWin, &pInfo->twAggSup.timeWindowData, startPos, forwardRows, tsCols,
- pSDataBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
+ doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, forwardRows,
+ pSDataBlock->info.rows, numOfOutput);
int32_t prevEndPos = (forwardRows - 1) * step + startPos;
ASSERT(pSDataBlock->info.window.skey > 0 && pSDataBlock->info.window.ekey > 0);
startPos = getNextQualifiedWindow(&pInfo->interval, &nextWin, &pSDataBlock->info, tsCols, prevEndPos, pInfo->order);
@@ -3330,6 +3336,7 @@ SOperatorInfo* createStreamFinalIntervalOperatorInfo(SOperatorInfo* downstream,
SSDataBlock* pResBlock = createResDataBlock(pPhyNode->pOutputDataBlockDesc);
int32_t code = initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str);
+ initStreamFunciton(pOperator->exprSupp.pCtx, pOperator->exprSupp.numOfExprs);
initBasicInfo(&pInfo->binfo, pResBlock);
ASSERT(numOfCols > 0);
@@ -3471,6 +3478,7 @@ int32_t initBasicInfoEx(SOptrBasicInfo* pBasicInfo, SExprSupp* pSup, SExprInfo*
if (code != TSDB_CODE_SUCCESS) {
return code;
}
+ initStreamFunciton(pSup->pCtx, pSup->numOfExprs);
initBasicInfo(pBasicInfo, pResultBlock);
@@ -3776,8 +3784,7 @@ static int32_t doOneWindowAggImpl(int32_t tsColId, SOptrBasicInfo* pBinfo, SStre
return TSDB_CODE_QRY_OUT_OF_MEMORY;
}
updateTimeWindowInfo(pTimeWindowData, &pCurWin->win, false);
- doApplyFunctions(pTaskInfo, pSup->pCtx, &pCurWin->win, pTimeWindowData, startIndex, winRows, tsCols,
- pSDataBlock->info.rows, numOutput, TSDB_ORDER_ASC);
+ doApplyFunctions(pTaskInfo, pSup->pCtx, pTimeWindowData, startIndex, winRows, pSDataBlock->info.rows, numOutput);
SFilePage* bufPage = getBufPage(pAggSup->pResultBuf, pCurWin->pos.pageId);
setBufPageDirty(bufPage, true);
releaseBufPage(pAggSup->pResultBuf, bufPage);
@@ -4571,8 +4578,8 @@ SStateWindowInfo* getStateWindow(SStreamAggSupporter* pAggSup, TSKEY ts, uint64_
return insertNewStateWindow(pWinInfos, ts, pKeyData, index + 1, pCol);
}
-int32_t updateStateWindowInfo(SArray* pWinInfos, int32_t winIndex, TSKEY* pTs, SColumnInfoData* pKeyCol, int32_t rows,
- int32_t start, bool* allEqual, SHashObj* pSeDelete) {
+int32_t updateStateWindowInfo(SArray* pWinInfos, int32_t winIndex, TSKEY* pTs, uint64_t groupId,
+ SColumnInfoData* pKeyCol, int32_t rows, int32_t start, bool* allEqual, SHashObj* pSeDeleted) {
*allEqual = true;
SStateWindowInfo* pWinInfo = taosArrayGet(pWinInfos, winIndex);
for (int32_t i = start; i < rows; ++i) {
@@ -4592,9 +4599,10 @@ int32_t updateStateWindowInfo(SArray* pWinInfos, int32_t winIndex, TSKEY* pTs, S
}
}
if (pWinInfo->winInfo.win.skey > pTs[i]) {
- if (pSeDelete && pWinInfo->winInfo.isOutput) {
- taosHashPut(pSeDelete, &pWinInfo->winInfo.pos, sizeof(SResultRowPosition), &pWinInfo->winInfo.win.skey,
- sizeof(TSKEY));
+ if (pSeDeleted && pWinInfo->winInfo.isOutput) {
+ SWinRes res = {.ts = pWinInfo->winInfo.win.skey, .groupId = groupId};
+ taosHashPut(pSeDeleted, &pWinInfo->winInfo.pos, sizeof(SResultRowPosition), &res,
+ sizeof(SWinRes));
pWinInfo->winInfo.isOutput = false;
}
pWinInfo->winInfo.win.skey = pTs[i];
@@ -4607,22 +4615,23 @@ int32_t updateStateWindowInfo(SArray* pWinInfos, int32_t winIndex, TSKEY* pTs, S
return rows - start;
}
-static void doClearStateWindows(SStreamAggSupporter* pAggSup, SSDataBlock* pBlock, int32_t tsIndex, SColumn* pCol,
- int32_t keyIndex, SHashObj* pSeUpdated, SHashObj* pSeDeleted) {
+static void doClearStateWindows(SStreamAggSupporter* pAggSup, SSDataBlock* pBlock,
+ int32_t tsIndex, SColumn* pCol, int32_t keyIndex, SHashObj* pSeUpdated, SHashObj* pSeDeleted) {
SColumnInfoData* pTsColInfo = taosArrayGet(pBlock->pDataBlock, tsIndex);
SColumnInfoData* pKeyColInfo = taosArrayGet(pBlock->pDataBlock, keyIndex);
TSKEY* tsCol = (TSKEY*)pTsColInfo->pData;
bool allEqual = false;
int32_t step = 1;
+ uint64_t groupId = pBlock->info.groupId;
for (int32_t i = 0; i < pBlock->info.rows; i += step) {
char* pKeyData = colDataGetData(pKeyColInfo, i);
int32_t winIndex = 0;
- SStateWindowInfo* pCurWin = getStateWindowByTs(pAggSup, tsCol[i], pBlock->info.groupId, &winIndex);
+ SStateWindowInfo* pCurWin = getStateWindowByTs(pAggSup, tsCol[i], groupId, &winIndex);
if (!pCurWin) {
continue;
}
- step = updateStateWindowInfo(pAggSup->pCurWins, winIndex, tsCol, pKeyColInfo, pBlock->info.rows, i, &allEqual,
- pSeDeleted);
+ step = updateStateWindowInfo(pAggSup->pCurWins, winIndex, tsCol, groupId, pKeyColInfo,
+ pBlock->info.rows, i, &allEqual, pSeDeleted);
ASSERT(isTsInWindow(pCurWin, tsCol[i]) || isEqualStateKey(pCurWin, pKeyData));
taosHashRemove(pSeUpdated, &pCurWin->winInfo.pos, sizeof(SResultRowPosition));
deleteWindow(pAggSup->pCurWins, winIndex, destroyStateWinInfo);
@@ -4661,12 +4670,12 @@ static void doStreamStateAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSDataBl
int32_t winIndex = 0;
bool allEqual = true;
SStateWindowInfo* pCurWin =
- getStateWindow(pAggSup, tsCols[i], pSDataBlock->info.groupId, pKeyData, &pInfo->stateCol, &winIndex);
- winRows = updateStateWindowInfo(pAggSup->pCurWins, winIndex, tsCols, pKeyColInfo, pSDataBlock->info.rows, i,
- &allEqual, pInfo->pSeDeleted);
+ getStateWindow(pAggSup, tsCols[i], groupId, pKeyData, &pInfo->stateCol, &winIndex);
+ winRows = updateStateWindowInfo(pAggSup->pCurWins, winIndex, tsCols, groupId, pKeyColInfo,
+ pSDataBlock->info.rows, i, &allEqual, pStDeleted);
if (!allEqual) {
appendOneRow(pAggSup->pScanBlock, &pCurWin->winInfo.win.skey, &pCurWin->winInfo.win.ekey,
- &pSDataBlock->info.groupId);
+ &groupId);
taosHashRemove(pSeUpdated, &pCurWin->winInfo.pos, sizeof(SResultRowPosition));
deleteWindow(pAggSup->pCurWins, winIndex, destroyStateWinInfo);
continue;
@@ -4830,9 +4839,7 @@ SOperatorInfo* createStreamStateAggOperatorInfo(SOperatorInfo* downstream, SPhys
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
pInfo->pSeDeleted = taosHashInit(64, hashFn, true, HASH_NO_LOCK);
pInfo->pDelIterator = NULL;
- // pInfo->pDelRes = createSpecialDataBlock(STREAM_DELETE_RESULT);
- pInfo->pDelRes = createOneDataBlock(pInfo->binfo.pRes, false); // todo(liuyao) for delete
- pInfo->pDelRes->info.type = STREAM_DELETE_RESULT; // todo(liuyao) for delete
+ pInfo->pDelRes = createSpecialDataBlock(STREAM_DELETE_RESULT);
pInfo->pChildren = NULL;
pInfo->ignoreExpiredData = pStateNode->window.igExpired;
@@ -4938,8 +4945,8 @@ static void doMergeAlignedIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultR
}
updateTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &currWin, true);
- doApplyFunctions(pTaskInfo, pSup->pCtx, &currWin, &iaInfo->twAggSup.timeWindowData, startPos, currPos - startPos,
- tsCols, pBlock->info.rows, numOfOutput, iaInfo->inputOrder);
+ doApplyFunctions(pTaskInfo, pSup->pCtx, &iaInfo->twAggSup.timeWindowData, startPos, currPos - startPos,
+ pBlock->info.rows, numOfOutput);
outputMergeAlignedIntervalResult(pOperatorInfo, tableGroupId, pResultBlock, miaInfo->curTs);
miaInfo->curTs = tsCols[currPos];
@@ -4960,8 +4967,8 @@ static void doMergeAlignedIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultR
}
updateTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &currWin, true);
- doApplyFunctions(pTaskInfo, pSup->pCtx, &currWin, &iaInfo->twAggSup.timeWindowData, startPos, currPos - startPos,
- tsCols, pBlock->info.rows, numOfOutput, iaInfo->inputOrder);
+ doApplyFunctions(pTaskInfo, pSup->pCtx, &iaInfo->twAggSup.timeWindowData, startPos, currPos - startPos,
+ pBlock->info.rows, numOfOutput);
}
static void doMergeAlignedIntervalAgg(SOperatorInfo* pOperator) {
@@ -5253,8 +5260,8 @@ static void doMergeIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultRowInfo*
}
updateTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &win, true);
- doApplyFunctions(pTaskInfo, pExprSup->pCtx, &win, &iaInfo->twAggSup.timeWindowData, startPos, forwardRows, tsCols,
- pBlock->info.rows, numOfOutput, iaInfo->inputOrder);
+ doApplyFunctions(pTaskInfo, pExprSup->pCtx, &iaInfo->twAggSup.timeWindowData, startPos, forwardRows,
+ pBlock->info.rows, numOfOutput);
doCloseWindow(pResultRowInfo, iaInfo, pResult);
// output previous interval results after this interval (&win) is closed
@@ -5285,8 +5292,8 @@ static void doMergeIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultRowInfo*
doWindowBorderInterpolation(iaInfo, pBlock, pResult, &nextWin, startPos, forwardRows, pExprSup);
updateTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &nextWin, true);
- doApplyFunctions(pTaskInfo, pExprSup->pCtx, &nextWin, &iaInfo->twAggSup.timeWindowData, startPos, forwardRows,
- tsCols, pBlock->info.rows, numOfOutput, iaInfo->inputOrder);
+ doApplyFunctions(pTaskInfo, pExprSup->pCtx, &iaInfo->twAggSup.timeWindowData, startPos, forwardRows,
+ pBlock->info.rows, numOfOutput);
doCloseWindow(pResultRowInfo, iaInfo, pResult);
// output previous interval results after this interval (&nextWin) is closed
diff --git a/source/libs/nodes/src/nodesUtilFuncs.c b/source/libs/nodes/src/nodesUtilFuncs.c
index f8ba6e69019eb229164f77933dac11b27bd1c2b3..d13057a93e824c2b94d94a006664b4cbc4c2f870 100644
--- a/source/libs/nodes/src/nodesUtilFuncs.c
+++ b/source/libs/nodes/src/nodesUtilFuncs.c
@@ -817,6 +817,7 @@ void nodesDestroyNode(SNode* pNode) {
destroyLogicNode((SLogicNode*)pLogicNode);
nodesDestroyNode(pLogicNode->pWStartTs);
nodesDestroyNode(pLogicNode->pValues);
+ nodesDestroyList(pLogicNode->pFillExprs);
break;
}
case QUERY_NODE_LOGIC_PLAN_SORT: {
diff --git a/source/libs/parser/src/parUtil.c b/source/libs/parser/src/parUtil.c
index 17e78e78061b69c9eff64ad6a5802369fefaf62d..32513fd0b6f56097b2b7f08ae03725ce39498a37 100644
--- a/source/libs/parser/src/parUtil.c
+++ b/source/libs/parser/src/parUtil.c
@@ -1159,6 +1159,16 @@ void destoryParseMetaCache(SParseMetaCache* pMetaCache, bool request) {
taosHashCleanup(pMetaCache->pTableMeta);
taosHashCleanup(pMetaCache->pTableVgroup);
}
+ SInsertTablesMetaReq* p = taosHashIterate(pMetaCache->pInsertTables, NULL);
+ while (NULL != p) {
+ taosArrayDestroy(p->pTableMetaPos);
+ taosArrayDestroy(p->pTableMetaReq);
+ taosArrayDestroy(p->pTableVgroupPos);
+ taosArrayDestroy(p->pTableVgroupReq);
+
+ p = taosHashIterate(pMetaCache->pInsertTables, p);
+ }
+ taosHashCleanup(pMetaCache->pInsertTables);
taosHashCleanup(pMetaCache->pDbVgroup);
taosHashCleanup(pMetaCache->pDbCfg);
taosHashCleanup(pMetaCache->pDbInfo);
diff --git a/source/libs/qworker/src/qworker.c b/source/libs/qworker/src/qworker.c
index 862d142100575b3af1f2551922056556d97156cf..f006096ce20a45e18a5b9d990c9c63b621638ac5 100644
--- a/source/libs/qworker/src/qworker.c
+++ b/source/libs/qworker/src/qworker.c
@@ -149,13 +149,10 @@ int32_t qwExecTask(QW_FPARAMS_DEF, SQWTaskCtx *ctx, bool *queryStop) {
}
}
- taosArrayDestroy(pResList);
- QW_RET(code);
-
_return:
- taosArrayDestroy(pResList);
- return code;
+ taosArrayDestroy(pResList);
+ QW_RET(code);
}
int32_t qwGenerateSchHbRsp(SQWorker *mgmt, SQWSchStatus *sch, SQWHbInfo *hbInfo) {
diff --git a/source/libs/sync/src/syncRaftCfg.c b/source/libs/sync/src/syncRaftCfg.c
index 5de21bceca99e13b3c2c33b72cd96f0ca6f86fa8..ab404d1b9af744b51b508cd1f870482c79756ea1 100644
--- a/source/libs/sync/src/syncRaftCfg.c
+++ b/source/libs/sync/src/syncRaftCfg.c
@@ -171,7 +171,7 @@ SRaftCfg *raftCfgOpen(const char *path) {
taosLSeekFile(pCfg->pFile, 0, SEEK_SET);
- char buf[1024] = {0};
+ char buf[CONFIG_FILE_LEN] = {0};
int len = taosReadFile(pCfg->pFile, buf, sizeof(buf));
ASSERT(len > 0);
diff --git a/tests/script/tsim/stream/state0.sim b/tests/script/tsim/stream/state0.sim
index 4fa883b8137d43521155df8682251d9147599277..877a2877b9378d2101217080906a68a27ae9fee7 100644
--- a/tests/script/tsim/stream/state0.sim
+++ b/tests/script/tsim/stream/state0.sim
@@ -5,15 +5,15 @@ sleep 50
sql connect
print =============== create database
-sql create database test vgroups 1
-sql select * from information_schema.ins_databases
+sql create database test vgroups 1;
+sql select * from information_schema.ins_databases;
if $rows != 3 then
return -1
endi
print $data00 $data01 $data02
-sql use test
+sql use test;
sql create table t1(ts timestamp, a int, b int , c int, d double, id int);
sql create stream streams1 trigger at_once into streamt1 as select _wstart, count(*) c1, count(d) c2 , sum(a) c3 , max(a) c4, min(c) c5, max(id) c from t1 state_window(a);
diff --git a/tests/script/tsim/valgrind/checkError6.sim b/tests/script/tsim/valgrind/checkError6.sim
index 00de00f71d06810e9d2a72f2b8d06bad5aa42266..fcc5b04c907852f87c469a3dc9d32c5ba1295327 100644
--- a/tests/script/tsim/valgrind/checkError6.sim
+++ b/tests/script/tsim/valgrind/checkError6.sim
@@ -114,8 +114,8 @@ sql select tbcol5 - tbcol3 from stb
sql select spread( tbcol2 )/44, spread(tbcol2), 0.204545455 * 44 from stb;
sql select min(tbcol) * max(tbcol) /4, sum(tbcol2) * apercentile(tbcol2, 20), apercentile(tbcol2, 33) + 52/9 from stb;
sql select distinct(tbname), tgcol from stb;
-#sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 1 soffset 1;
-#sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 2 soffset 4 limit 10 offset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 1 soffset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 2 soffset 4 limit 10 offset 1;
print =============== step5: explain
sql explain analyze select ts from stb where -2;
diff --git a/tests/script/tsim/valgrind/checkError7.sim b/tests/script/tsim/valgrind/checkError7.sim
index a66ddb30df063416e0f04da0dd58de9bebed186d..af42d1e76b50bc07e9c7a484bd24c9544517f980 100644
--- a/tests/script/tsim/valgrind/checkError7.sim
+++ b/tests/script/tsim/valgrind/checkError7.sim
@@ -66,7 +66,7 @@ $null=
system_content sh/checkValgrind.sh -n dnode1
print cmd return result ----> [ $system_content ]
-if $system_content > 2 then
+if $system_content > 0 then
return -1
endi
diff --git a/tests/script/tsim/valgrind/checkError8.sim b/tests/script/tsim/valgrind/checkError8.sim
index 7ca01bc3d04a489e389939221b88b9aa9432e939..2f204768eb1fc8922a07853155edfc29e97c3975 100644
--- a/tests/script/tsim/valgrind/checkError8.sim
+++ b/tests/script/tsim/valgrind/checkError8.sim
@@ -143,7 +143,7 @@ $null=
system_content sh/checkValgrind.sh -n dnode1
print cmd return result ----> [ $system_content ]
-if $system_content > 2 then
+if $system_content > 0 then
return -1
endi