Commit 3d403dca, authored by Haojun Liao

other: merge main.

...@@ -39,7 +39,7 @@ TDengine is an open-source, high-performance, cloud-native time-series database (Time-Series
# Building
TDengine can currently be installed and run on Linux, Windows, and macOS. Applications on any OS can also use the RESTful interface provided by taosAdapter to connect to the taosd server. TDengine supports X64/ARM64 CPUs, and support for MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures is planned. Building with a cross-compiler is not currently supported.
You can choose to install from source code, a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide applies only to installing from source.
...@@ -352,4 +352,4 @@ TDengine provides a rich set of application development interfaces, including C/C++, Java
# Join the Technical Discussion Group
The official TDengine community group, the "IoT Big Data Group", is open to everyone; you are welcome to join the discussion. Search for the WeChat ID "tdengine" and add Little T as a friend to be invited into the group.
...@@ -47,7 +47,7 @@ For user manual, system design and architecture, please refer to [TDengine Docum
# Building
At the moment, TDengine server supports running on Linux, Windows, and macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. Building with a cross-compiling environment is not currently supported.
You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.
......
...@@ -18,7 +18,7 @@ The full package of TDengine includes the TDengine Server (`taosd`), TDengine Cl
The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download the Lite package that includes only `taosd` and the C/C++ connector.
TDengine OSS is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on x64 Windows and x64/M1 macOS.
## Operating environment requirements
In the Linux system, the minimum requirements for the operating environment are as follows:
...@@ -201,7 +201,7 @@ You can use the TDengine CLI to monitor your TDengine deployment and execute ad
<TabItem label="Windows" value="windows">
After the installation is complete, please run `sc start taosd` or run `C:\TDengine\taosd.exe` with administrator privilege to start TDengine Server. Please run `sc start taosadapter` or run `C:\TDengine\taosadapter.exe` with administrator privilege to start taosAdapter, which provides the http/REST service.
## Command Line Interface (CLI)
......
...@@ -21,17 +21,6 @@ import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
## Study TDengine Knowledge Map
The TDengine Knowledge Map covers the various knowledge points of TDengine, revealing the invocation relationships and data flow between various conceptual entities. Learning and understanding the TDengine Knowledge Map will help you quickly master the TDengine knowledge system.
<figure>
<center>
<a href="pathname:///img/tdengine-map.svg" target="_blank"><img src="/img/tdengine-map.svg" width="80%" /></a>
<figcaption>Diagram 1. TDengine Knowledge Map</figcaption>
</center>
</figure>
## Join TDengine Community
<table width="100%">
......
...@@ -298,7 +298,7 @@ You configure the following parameters when creating a consumer:
| `td.connect.port` | string | Port of the server side | |
| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. Each topic can create up to 100 consumer groups. |
| `client.id` | string | Client ID | Maximum length: 192. |
| `auto.offset.reset` | enum | Initial offset for the consumer group | `earliest`: subscribe from the earliest data (default behavior); `latest`: subscribe from the latest data; `none`: the subscription fails if no offset has been committed |
| `enable.auto.commit` | boolean | Commit automatically; true: the application does not need to commit explicitly; false: the application must handle commits itself | Default value is true |
| `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds |
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages | default value: false |
......
...@@ -403,7 +403,7 @@ In this section we will demonstrate 5 examples of developing UDF in Python langu
The guide also explains some debugging techniques for using Python UDFs.
We assume you are using a Linux system and already have TDengine 3.0.4.0+ and Python 3.7+ installed.
Note: **You can't use the print() function to output logs inside a UDF; you have to write logs to a specific file or use the Python logging module.**
......
This diff is collapsed.
...@@ -58,7 +58,7 @@ database_option: {
- WAL_FSYNC_PERIOD: specifies the interval (in milliseconds) at which data is written from the WAL to disk. This parameter takes effect only when the WAL parameter is set to 2. The default value is 3000. Enter a value between 0 and 180000. The value 0 indicates that incoming data is immediately written to disk.
- MAXROWS: specifies the maximum number of rows recorded in a block. The default value is 4096.
- MINROWS: specifies the minimum number of rows recorded in a block. The default value is 100.
- KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by default. TDengine Enterprise supports the [Tiered Storage](https://docs.tdengine.com/tdinternal/arch/#tiered-storage) function, so multiple KEEP values are supported (comma separated, up to 3 values, satisfying keep 0 <= keep 1 <= keep 2, e.g. KEEP 100h,100d,3650d); TDengine OSS does not support Tiered Storage (even if multiple keep values are configured, they do not take effect; only the maximum keep value is used as KEEP). See the sketch after this list for a usage example.
- PAGES: specifies the number of pages in the metadata storage engine cache on each vnode. Enter a value greater than or equal to 64. The default value is 256. The space occupied by metadata storage on each vnode is equal to the product of the values of the PAGESIZE and PAGES parameters. The space occupied by default is 1 MB.
- PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. The default value is 4. Enter a value between 1 and 16384.
- PRECISION: specifies the precision at which a database records timestamps. Enter ms for milliseconds, us for microseconds, or ns for nanoseconds. The default value is ms.
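As an illustration of how several of these options combine, here is a minimal sketch of a CREATE DATABASE statement; the database name `power` and the specific values are hypothetical and chosen only for demonstration:

```sql
-- Hypothetical example: millisecond timestamps, data kept for 365 days,
-- default block row limits, and WAL flushed to disk every 3000 ms.
CREATE DATABASE IF NOT EXISTS power
  PRECISION 'ms'
  KEEP 365
  MAXROWS 4096
  MINROWS 100
  WAL_FSYNC_PERIOD 3000;
```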
......
...@@ -29,7 +29,7 @@ This document introduces the tables of INFORMATION_SCHEMA and their structure.
Provides information about dnodes. Similar to SHOW DNODES.
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------- | --- |
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode. It should be noted that `vnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 2 | support_vnodes | SMALLINT | Maximum number of vnodes on the dnode |
| 3 | status | BINARY(10) | Current status |
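As a minimal sketch of the escaping rule mentioned in the table above (the query itself is only illustrative):

```sql
-- `vnodes` is a TDengine keyword, so it is wrapped in backquotes
-- when referenced as a column name.
SELECT `vnodes`, support_vnodes, status
FROM information_schema.ins_dnodes;
```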
...@@ -43,7 +43,7 @@ Provides information about dnodes. Similar to SHOW DNODES.
Provides information about mnodes. Similar to SHOW MNODES.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --- |
| 1 | id | SMALLINT | Mnode ID |
| 2 | endpoint | BINARY(134) | Mnode endpoint |
| 3 | role | BINARY(10) | Current role |
...@@ -55,7 +55,7 @@ Provides information about mnodes. Similar to SHOW MNODES.
Provides information about qnodes. Similar to SHOW QNODES.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --- |
| 1 | id | SMALLINT | Qnode ID |
| 2 | endpoint | BINARY(134) | Qnode endpoint |
| 3 | create_time | TIMESTAMP | Creation time |
...@@ -65,7 +65,7 @@ Provides information about qnodes. Similar to SHOW QNODES.
Provides information about the cluster.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --- |
| 1 | id | BIGINT | Cluster ID |
| 2 | name | BINARY(134) | Cluster name |
| 3 | create_time | TIMESTAMP | Creation time |
...@@ -75,8 +75,8 @@ Provides information about the cluster.
Provides information about user-created databases. Similar to SHOW DATABASES.
| # | **Column** | **Data Type** | **Description** |
| --- | :------------------: | ------------- | --- |
| 1 | name | BINARY(32) | Database name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | ntables | INT | Number of standard tables and subtables (not including supertables) |
| 4 | vgroups | INT | Number of vgroups. It should be noted that `vgroups` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
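A small illustrative query against this table, escaping the keyword column as described; the query is only a sketch:

```sql
-- List user databases with their table and vgroup counts.
SELECT name, ntables, `vgroups`
FROM information_schema.ins_databases;
```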
...@@ -112,7 +112,7 @@ Provides information about user-created databases. Similar to SHOW DATABASES.
Provides information about user-defined functions.
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | --- |
| 1 | name | BINARY(64) | Function name |
| 2 | comment | BINARY(255) | Function description. It should be noted that `comment` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 3 | aggregate | INT | Whether the UDF is an aggregate function. It should be noted that `aggregate` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
...@@ -122,27 +122,27 @@ Provides information about user-defined functions.
| 7 | bufsize | INT | Buffer size |
| 8 | func_language | BINARY(31) | UDF programming language |
| 9 | func_body | BINARY(16384) | UDF function body |
| 10 | func_version | INT | UDF function version. Starts from 0 and increases by 1 each time the function is updated |
## INS_INDEXES
Provides information about user-created indices. Similar to SHOW INDEX.
| # | **Column** | **Data Type** | **Description** |
| --- | :--------------: | ------------- | --- |
| 1 | db_name | BINARY(32) | Database containing the table with the specified index |
| 2 | table_name | BINARY(192) | Table containing the specified index |
| 3 | index_name | BINARY(192) | Index name |
| 4 | column_name | BINARY(64) | Indexed column |
| 5 | index_type | BINARY(10) | SMA or tag index |
| 6 | index_extensions | BINARY(256) | Other information. For SMA/tag indices, this shows a list of functions |
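For illustration, here is a minimal lookup of the indexes defined in a hypothetical database `power` (the database name is an assumption):

```sql
-- Show the name, type, and indexed column of every index in database `power`.
SELECT index_name, index_type, column_name
FROM information_schema.ins_indexes
WHERE db_name = 'power';
```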
## INS_STABLES
Provides information about supertables.
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | --- |
| 1 | stable_name | BINARY(192) | Supertable name |
| 2 | db_name | BINARY(64) | Database containing the supertable |
| 3 | create_time | TIMESTAMP | Creation time |
...@@ -159,7 +159,7 @@ Provides information about supertables.
Provides information about standard tables and subtables.
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | --- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | create_time | TIMESTAMP | Creation time |
...@@ -174,7 +174,7 @@ Provides information about standard tables and subtables.
## INS_TAGS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | stable_name | BINARY(192) | Supertable name |
...@@ -185,7 +185,7 @@ Provides information about standard tables and subtables.
## INS_COLUMNS
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | --- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | table_type | BINARY(21) | Table type |
...@@ -201,7 +201,7 @@ Provides information about standard tables and subtables.
Provides information about TDengine users.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --- |
| 1 | user_name | BINARY(23) | User name |
| 2 | privilege | BINARY(256) | User permissions |
| 3 | create_time | TIMESTAMP | Creation time |
...@@ -211,7 +211,7 @@ Provides information about TDengine users.
Provides information about TDengine Enterprise Edition permissions.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --- |
| 1 | version | BINARY(9) | Whether the deployment is a licensed or trial version |
| 2 | cpu_cores | BINARY(9) | CPU cores included in license |
| 3 | dnodes | BINARY(10) | Dnodes included in license. It should be noted that `dnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
...@@ -232,7 +232,7 @@ Provides information about TDengine Enterprise Edition permissions.
Provides information about vgroups.
| # | **Column** | **Data Type** | **Description** |
| --- | :--------: | ------------- | --- |
| 1 | vgroup_id | INT | Vgroup ID |
| 2 | db_name | BINARY(32) | Database name |
| 3 | tables | INT | Tables in vgroup. It should be noted that `tables` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
...@@ -252,7 +252,7 @@ Provides information about vgroups.
Provides system configuration information.
| # | **Column** | **Data Type** | **Description** |
| --- | :--------: | ------------- | --- |
| 1 | name | BINARY(32) | Parameter |
| 2 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
...@@ -261,7 +261,7 @@ Provides system configuration information.
Provides dnode configuration information.
| # | **Column** | **Data Type** | **Description** |
| --- | :--------: | ------------- | --- |
| 1 | dnode_id | INT | Dnode ID |
| 2 | name | BINARY(32) | Parameter |
| 3 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
...@@ -269,7 +269,7 @@ Provides dnode configuration information.
## INS_TOPICS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --- |
| 1 | topic_name | BINARY(192) | Topic name |
| 2 | db_name | BINARY(64) | Database for the topic |
| 3 | create_time | TIMESTAMP | Creation time |
...@@ -278,7 +278,7 @@ Provides dnode configuration information.
## INS_SUBSCRIPTIONS
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------- | --- |
| 1 | topic_name | BINARY(204) | Subscribed topic |
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
...@@ -289,7 +289,7 @@ Provides dnode configuration information.
## INS_STREAMS
| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------- | --- |
| 1 | stream_name | BINARY(64) | Stream name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
......
...@@ -4,12 +4,12 @@ sidebar_label: Indexing
description: This document describes the SQL statements related to indexing in TDengine.
---
TDengine supports SMA and tag indexing.
## Create an Index
```sql
CREATE INDEX index_name ON tb_name (col_name [, col_name] ...)
CREATE SMA INDEX index_name ON tb_name index_option
...@@ -46,10 +46,6 @@ SELECT _wstart,_wend,_wduration,max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDIN
ALTER LOCAL 'querySmaOptimize' '0';
```
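As an illustration of the two index types, here is a minimal sketch against a hypothetical supertable `meters` with a tag column `location` and data columns `current` and `voltage` (all of these names are assumptions, not taken from the original):

```sql
-- Tag index on a tag column of the supertable.
CREATE INDEX idx_location ON meters (location);

-- SMA index that pre-aggregates two data columns over 5-minute windows.
CREATE SMA INDEX idx_meters_sma ON meters
  FUNCTION(max(current), min(voltage)) INTERVAL(5m) SLIDING(5m);
```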
### FULLTEXT Indexing
Creates a text index for the specified column. FULLTEXT indexing improves performance for queries with text filtering. The index_option syntax is not supported for FULLTEXT indexing. FULLTEXT indexing is supported for JSON tag columns only. Multiple columns cannot be indexed together. However, separate indices can be created for each column.
## Delete an Index
```sql
......
...@@ -214,19 +214,6 @@ The data of tdinsight dashboard is stored in `log` database (default. You can ch
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|cluster\_id|NCHAR|TAG|cluster id|
### logs table
`logs` table contains login information records.
|field|type|is\_tag|comment|
|:----|:---|:-----|:------|
|ts|TIMESTAMP||timestamp|
|level|VARCHAR||log level|
|content|NCHAR||log content|
|dnode\_id|INT|TAG|dnode id|
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|cluster\_id|NCHAR|TAG|cluster id|
### log\_summary table
`log_summary` table contains log summary information records.
......
...@@ -36,8 +36,8 @@ REST connection supports all platforms that can run Java.
| taos-jdbcdriver version | major changes | TDengine version |
| :---------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------: | :--------------: |
| 3.2.4 | Subscription adds the enable.auto.commit parameter and the unsubscribe() method in the WebSocket connection | - |
| 3.2.3 | Fixed resultSet data parsing failure in some cases | - |
| 3.2.2 | Subscription adds seek function | 3.0.5.0 or later |
| 3.2.1 | JDBC REST connection supports schemaless/prepareStatement over WebSocket | 3.0.3.0 or later |
| 3.2.0 | This version has been deprecated | - |
...@@ -1019,11 +1019,13 @@ while(true) {
#### Assign subscription offset
```java
// get offset
long position(TopicPartition partition) throws SQLException;
Map<TopicPartition, Long> position(String topic) throws SQLException;
Map<TopicPartition, Long> beginningOffsets(String topic) throws SQLException;
Map<TopicPartition, Long> endOffsets(String topic) throws SQLException;
// Overrides the fetch offsets that the consumer will use on the next poll(timeout).
void seek(TopicPartition partition, long offset) throws SQLException;
```
......
...@@ -648,12 +648,12 @@ stmt.execute()?;
//stmt.execute()?;
```
For a working example, see [GitHub](https://github.com/taosdata/taos-connector-rust/blob/main/taos/examples/bind.rs).
For information about other structure APIs, see the [Rust documentation](https://docs.rs/taos).
[taos]: https://github.com/taosdata/taos-connector-rust
[r2d2]: https://crates.io/crates/r2d2
[TaosBuilder]: https://docs.rs/taos/latest/taos/struct.TaosBuilder.html
[TaosCfg]: https://docs.rs/taos/latest/taos/struct.TaosCfg.html
......
...@@ -1007,13 +1007,12 @@ consumer.close()
### Other sample programs
| Example program links | Example program content |
| --------------------- | ----------------------- |
| [bind_multi.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-multi.py) | parameter binding, bind multiple rows at once |
| [bind_row.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-row.py) | parameter binding, bind one row at once |
| [insert_lines.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/insert-lines.py) | InfluxDB line protocol writing |
| [json_tag.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/json-tag.py) | Use JSON type tags |
| [tmq_consumer.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/tmq_consumer.py) | TMQ subscription |
## Other notes
......
...@@ -59,9 +59,9 @@ The different database framework specifications for various programming language
| -------------------------------------- | ------------- | --------------- | ------------- | ------------- | ------------- | ------------- |
| **Connection Management** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Regular Query** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Parameter Binding** | Supported | Supported | Supported | Supported | Not Supported | Supported |
| **Subscription (TMQ)** | Supported | Supported | Supported | Not Supported | Not Supported | Supported |
| **Schemaless** | Supported | Supported | Supported | Not Supported | Not Supported | Not Supported |
| **Bulk Pulling (based on WebSocket)** | Supported | Supported | Supported | Supported | Supported | Supported |
:::warning
......
...@@ -364,6 +364,7 @@ The configuration parameters for specifying super table tag columns and data col
- **min**: The minimum value of the column/label of the data type. The generated value will be greater than or equal to the minimum value.
- **max**: The maximum value of the column/label of the data type. The generated value will be less than the maximum value.
- **fun**: The data of this column is filled by a function. Currently only the sin and cos functions are supported. The input parameter is the timestamp converted to an angle value using the formula: angle x = input time column ts value % 360. Coefficient adjustment and a random fluctuation factor are also supported, expressed in a fixed-format expression such as fun="10\*sin(x)+100\*random(5)", where x represents the angle, ranging from 0 to 360 degrees, and grows with the same step size as the time column; 10 is the multiplication coefficient, 100 is the addition/subtraction coefficient, and 5 is the fluctuation range (a random range within 5%). The currently supported data types are int, bigint, float, and double. Note: the expression format is fixed and cannot be reordered.
- **values**: The value field of the nchar/binary column/label, which will be chosen randomly from the values.
......
...@@ -5,7 +5,7 @@ description: This document describes the supported platforms for the TDengine se
## List of supported platforms for TDengine server
| | **Windows Server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18 or later** | **macOS** |
| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------------- | --------- |
| X64 | ● | ● | ● | ● | ● |
| ARM64 | | | ● | | ● |
......
...@@ -722,6 +722,15 @@ The charset that takes effect is UTF-8.
| Value Range | 0: not change; 1: change by modification |
| Default Value | 0 |
### tmqMaxTopicNum
| Attribute | Description |
| -------- | ------------------ |
| Applicable | Server Only |
| Meaning | The max num of topics |
| Value Range | 1-10000 |
| Default Value | 20 |
## 3.0 Parameters
| # | **Parameter** | **Applicable to 2.x** | **Applicable to 3.0** | Current behavior in 3.0 |
......
...@@ -363,7 +363,7 @@ The following configuration items apply to TDengine Sink Connector and TDengine
7. `out.format`: Result output format. `line` indicates that the output format is InfluxDB line protocol format, `json` indicates that the output format is json. The default is line.
8. `topic.per.stable`: If it's set to true, it means one super table in TDengine corresponds to a topic in Kafka, the topic naming rule is `<topic.prefix><topic.delimiter><connection.database><topic.delimiter><stable.name>`; if it's set to false, it means the whole DB corresponds to a topic in Kafka, the topic naming rule is `<topic.prefix><topic.delimiter><connection.database>`.
9. `topic.ignore.db`: Whether the topic naming rule contains the database name: true indicates that the rule is `<topic.prefix><topic.delimiter><stable.name>`, false indicates that the rule is `<topic.prefix><topic.delimiter><connection.database><topic.delimiter><stable.name>`, and the default is false. Does not take effect when `topic.per.stable` is set to false.
10. `topic.delimiter`: topic name delimiter; the default is `-`.
11. `read.method`: method for reading data from TDengine, either `query` or `subscription`. The default is `subscription`.
12. `subscription.group.id`: consumer group ID used when subscribing to data from TDengine; required when `read.method` is `subscription`.
13. `subscription.from`: start the subscription from `latest` or `earliest`. The default is `latest`.
## Other notes
......
...@@ -6,10 +6,14 @@ description: This document provides download links for all released versions of
TDengine 3.x installation packages can be downloaded at the following links:
For TDengine 2.x installation packages by version, please visit [here](https://tdengine.com/downloads/historical/).
import Release from "/components/ReleaseV3";
## 3.0.7.1
<Release type="tdengine" version="3.0.7.1" />
## 3.0.7.0
<Release type="tdengine" version="3.0.7.0" />
......
...@@ -201,7 +201,7 @@ Active: inactive (dead)
<TabItem label="Windows 系统" value="windows">
After installation, you can run `sc start taosd` in a cmd window with administrator privileges, or run `taosd.exe` in the `C:\TDengine` directory, to start the TDengine service process. To use the http/REST service, run `sc start taosadapter` or run `taosadapter.exe` to start the taosAdapter service process.
**TDengine Command Line Interface (CLI)**
......
...@@ -398,7 +398,7 @@ def finish(buf: bytes) -> output_type:
3. Define a scalar function that takes a timestamp as input and outputs the next Sunday nearest to that time. This function requires the third-party library moment; this example explains the considerations for using third-party libraries.
4. Define an aggregate function that computes the difference between the maximum and minimum values of a column, i.e. an implementation of TDengine's built-in spread function.
It also includes many practical debugging tips.
This document assumes you are using a Linux system and have already installed TDengine 3.0.4.0+ and Python 3.7+.
Note: **You cannot output logs with the print function inside a UDF; write to a file yourself or use Python's built-in logging module.**
......
...@@ -1022,11 +1022,13 @@ while(true) {
#### Specify subscription offset
```java
// get the offset
long position(TopicPartition partition) throws SQLException;
Map<TopicPartition, Long> position(String topic) throws SQLException;
Map<TopicPartition, Long> beginningOffsets(String topic) throws SQLException;
Map<TopicPartition, Long> endOffsets(String topic) throws SQLException;
// specify the offset to be used in the next poll
void seek(TopicPartition partition, long offset) throws SQLException;
```
......
...@@ -58,9 +58,9 @@ TDengine version updates often add new features; the connectors listed here
| ------------------------------ | -------- | ---------- | -------- | -------- | ----------- | -------- |
| **Connection Management** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Regular Query** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Parameter Binding** | Supported | Supported | Supported | Supported | Not Supported | Supported |
| **Data Subscription (TMQ)** | Supported | Supported | Supported | Not Supported | Not Supported | Supported |
| **Schemaless** | Supported | Supported | Supported | Not Supported | Not Supported | Supported |
| **Bulk Pulling (based on WebSocket)** | Supported | Supported | Supported | Supported | Supported | Supported |
:::warning
......
This diff is collapsed.
...@@ -85,7 +85,7 @@ create database if not exists db vgroups 10 buffer 10
```
The example above creates a database named db with 10 vgroups, where each vnode is allocated a 10 MB write cache.
### Using a Database
......
...@@ -44,11 +44,10 @@ table_option: {
1. The first column of a table must be of TIMESTAMP type, and the system automatically sets it as the primary key;
2. The maximum length of a table name is 192;
3. Each row of a table cannot exceed 48 KB (64 KB starting from version 3.0.5.0); (note: each BINARY/NCHAR column occupies an additional 2 bytes of storage)
4. Table names, supertable names, and subtable names may contain only letters, digits, and underscores, must not start with a digit, and are case-insensitive;
5. When using the binary or nchar data type, you must specify the maximum length in bytes, e.g. binary(20) means 20 bytes;
6. To support more forms of table names, TDengine introduces the escape character "\`". Without the escape character, a table name is converted to lowercase by default; with the escape character, the case of the table name is preserved (see the sketch after this list).
   For example, \`aBc\` and \`abc\` are different table names, while abc and aBc are the same table name.
   Note that the content inside the escape characters must consist of printable characters.
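A minimal sketch of the escaping behavior described above; the table names and columns are hypothetical:

```sql
-- Without backquotes the name is folded to lowercase, so this creates table "sensor_01".
CREATE TABLE Sensor_01 (ts TIMESTAMP, current FLOAT);

-- With backquotes the case is preserved, so this creates a different table named "Sensor_01".
CREATE TABLE `Sensor_01` (ts TIMESTAMP, current FLOAT);
```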
**Parameter description**
......
...@@ -10,11 +10,9 @@ description: Valid character sets and naming restrictions
2. Names may begin with a letter or an underscore, but not with a digit
3. Names are case-insensitive
4. Rules for escaped table (column) names:
   To support more forms of table (column) names, TDengine introduces the escape character "`". After escaping, the content inside the escape characters is no longer converted to a uniform case; that is, the case of the user-specified name is preserved.
   Escaped table (column) names are still subject to the length limit, and the escape characters are not counted toward the length. After escaping, the content inside the escape characters is no longer converted to a uniform case.
   For example, \`aBc\` and \`abc\` are different table (column) names, while abc and aBc are the same table (column) name.
   Note that the content inside the escape characters must consist of printable characters.
## Valid Characters for Passwords
...@@ -48,13 +46,13 @@ description: Valid character sets and naming restrictions
### Rules for escaped table (column) names:
To support more forms of table (column) names, TDengine introduces the escape character "`", which avoids conflicts between table names and keywords; the escape characters are not counted toward the length of the name.
Escaped table (column) names are still subject to the length limit, and the escape characters are not counted when computing the length. After escaping, the content inside the escape characters is no longer converted to a uniform case.
For example:
\`aBc\` and \`abc\` are different table (column) names, while abc and aBc are the same table (column) name.
:::note
The content inside the escape characters must conform to the character constraints of the naming rules.
:::
...@@ -29,7 +29,7 @@ TDengine includes a built-in database named `INFORMATION_SCHEMA`, which provides
Provides information about dnodes. You can also use SHOW DNODES to query this information.
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------ | --- |
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode. Note that `vnodes` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 2 | support_vnodes | SMALLINT | Maximum number of supported vnodes |
| 3 | status | BINARY(10) | Current status |
...@@ -75,7 +75,7 @@ TDengine includes a built-in database named `INFORMATION_SCHEMA`, which provides
Provides information about user-created databases. You can also use SHOW DATABASES to query this information.
| # | **Column** | **Data Type** | **Description** |
| --- | :------------------: | ------------ | --- |
| 1 | name | BINARY(32) | Database name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | ntables | INT | Number of tables in the database, including subtables and standard tables but excluding supertables |
...@@ -112,7 +112,7 @@ TDengine includes a built-in database named `INFORMATION_SCHEMA`, which provides
Information about user-defined functions.
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | --- |
| 1 | name | BINARY(64) | Function name |
| 2 | comment | BINARY(255) | Function description. Note that `comment` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 3 | aggregate | INT | Whether the UDF is an aggregate function. Note that `aggregate` is a TDengine keyword and must be escaped with ` when used as a column name. |
...@@ -122,7 +122,7 @@ TDengine includes a built-in database named `INFORMATION_SCHEMA`, which provides
| 7 | bufsize | INT | Buffer size |
| 8 | func_language | BINARY(31) | UDF programming language |
| 9 | func_body | BINARY(16384) | Function body definition |
| 10 | func_version | INT | Function version number. The initial version is 0; the version increases by 1 each time the function is replaced or updated. |
## INS_INDEXES ## INS_INDEXES
...@@ -130,20 +130,20 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数 ...@@ -130,20 +130,20 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
提供用户创建的索引的相关信息。也可以使用 SHOW INDEX 来查询这些信息。 提供用户创建的索引的相关信息。也可以使用 SHOW INDEX 来查询这些信息。
| # | **列名** | **数据类型** | **说明** | | # | **列名** | **数据类型** | **说明** |
| --- | :--------------: | ------------ | ---------------------------------------------------------------------------------- | | --- | :--------------: | ------------ | ------------------------------------------------------- |
| 1 | db_name | BINARY(32) | 包含此索引的表所在的数据库名 | | 1 | db_name | BINARY(32) | 包含此索引的表所在的数据库名 |
| 2 | table_name | BINARY(192) | 包含此索引的表的名称 | | 2 | table_name | BINARY(192) | 包含此索引的表的名称 |
| 3 | index_name | BINARY(192) | 索引名 | | 3 | index_name | BINARY(192) | 索引名 |
| 4 | column_name | BINARY(64) | 建索引的列的列名 | | 4 | column_name | BINARY(64) | 建索引的列的列名 |
| 5 | index_type | BINARY(10) | 目前有 SMA 和 FULLTEXT | | 5 | index_type | BINARY(10) | 目前有 SMA 和 tag |
| 6 | index_extensions | BINARY(256) | 索引的额外信息。对 SMA 类型的索引,是函数名的列表。对 FULLTEXT 类型的索引为 NULL。 | | 6 | index_extensions | BINARY(256) | 索引的额外信息。对 SMA/tag 类型的索引,是函数名的列表。 |
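下面是查询该表的一个示意例子(其中数据库名 test 仅为假设):

```sql
SELECT index_name, index_type, column_name FROM information_schema.ins_indexes WHERE db_name = 'test';
```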
## INS_STABLES ## INS_STABLES
提供用户创建的超级表的相关信息。 提供用户创建的超级表的相关信息。
| # | **列名** | **数据类型** | **说明** | | # | **列名** | **数据类型** | **说明** |
| --- | :-----------: | ------------ | ------------------------ | | --- | :-----------: | ------------ | ----------------------------------------------------------------------------------------------------- |
| 1 | stable_name | BINARY(192) | 超级表表名 | | 1 | stable_name | BINARY(192) | 超级表表名 |
| 2 | db_name | BINARY(64) | 超级表所在的数据库的名称 | | 2 | db_name | BINARY(64) | 超级表所在的数据库的名称 |
| 3 | create_time | TIMESTAMP | 创建时间 | | 3 | create_time | TIMESTAMP | 创建时间 |
...@@ -160,7 +160,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数 ...@@ -160,7 +160,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
提供用户创建的普通表和子表的相关信息 提供用户创建的普通表和子表的相关信息
| # | **列名** | **数据类型** | **说明** | | # | **列名** | **数据类型** | **说明** |
| --- | :-----------: | ------------ | ---------------- | | --- | :-----------: | ------------ | ------------------------------------------------------------------------------------- |
| 1 | table_name | BINARY(192) | 表名 | | 1 | table_name | BINARY(192) | 表名 |
| 2 | db_name | BINARY(64) | 数据库名 | | 2 | db_name | BINARY(64) | 数据库名 |
| 3 | create_time | TIMESTAMP | 创建时间 | | 3 | create_time | TIMESTAMP | 创建时间 |
...@@ -186,7 +186,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数 ...@@ -186,7 +186,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
## INS_COLUMNS ## INS_COLUMNS
| # | **列名** | **数据类型** | **说明** | | # | **列名** | **数据类型** | **说明** |
| --- | :---------: | ------------- | ---------------------- | | --- | :-----------: | ------------ | ---------------------- |
| 1 | table_name | BINARY(192) | 表名 | | 1 | table_name | BINARY(192) | 表名 |
| 2 | db_name | BINARY(64) | 该表所在的数据库的名称 | | 2 | db_name | BINARY(64) | 该表所在的数据库的名称 |
| 3 | table_type | BINARY(21) | 表类型 | | 3 | table_type | BINARY(21) | 表类型 |
...@@ -212,7 +212,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数 ...@@ -212,7 +212,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
提供企业版授权的相关信息。 提供企业版授权的相关信息。
| # | **列名** | **数据类型** | **说明** | | # | **列名** | **数据类型** | **说明** |
| --- | :---------: | ------------ | -------------------------------------------------- | | --- | :---------: | ------------ | --------------------------------------------------------------------------------------------------------- |
| 1 | version | BINARY(9) | 企业版授权说明:official(官方授权的)/trial(试用的) | | 1 | version | BINARY(9) | 企业版授权说明:official(官方授权的)/trial(试用的) |
| 2 | cpu_cores | BINARY(9) | 授权使用的 CPU 核心数量 | | 2 | cpu_cores | BINARY(9) | 授权使用的 CPU 核心数量 |
| 3 | dnodes | BINARY(10) | 授权使用的 dnode 节点数量。需要注意,`dnodes` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 | | 3 | dnodes | BINARY(10) | 授权使用的 dnode 节点数量。需要注意,`dnodes` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
...@@ -233,7 +233,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数 ...@@ -233,7 +233,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
系统中所有 vgroups 的信息。 系统中所有 vgroups 的信息。
| # | **列名** | **数据类型** | **说明** | | # | **列名** | **数据类型** | **说明** |
| --- | :-------: | ------------ | ------------------------------------------------------ | | --- | :-------: | ------------ | ------------------------------------------------------------------------------------------------ |
| 1 | vgroup_id | INT | vgroup id | | 1 | vgroup_id | INT | vgroup id |
| 2 | db_name | BINARY(32) | 数据库名 | | 2 | db_name | BINARY(32) | 数据库名 |
| 3 | tables | INT | 此 vgroup 内有多少表。需要注意,`tables` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 | | 3 | tables | INT | 此 vgroup 内有多少表。需要注意,`tables` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
...@@ -253,7 +253,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数 ...@@ -253,7 +253,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
系统配置参数。 系统配置参数。
| # | **列名** | **数据类型** | **说明** | | # | **列名** | **数据类型** | **说明** |
| --- | :------: | ------------ | ------------ | | --- | :------: | ------------ | --------------------------------------------------------------------------------------- |
| 1 | name | BINARY(32) | 配置项名称 | | 1 | name | BINARY(32) | 配置项名称 |
| 2 | value | BINARY(64) | 该配置项的值。需要注意,`value` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 | | 2 | value | BINARY(64) | 该配置项的值。需要注意,`value` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
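下面是查询该表的一个示意例子(\`value\` 为关键字,作为列名时需要用反引号转义):

```sql
SELECT name, `value` FROM information_schema.ins_configs;
```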
...@@ -262,7 +262,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数 ...@@ -262,7 +262,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
系统中每个 dnode 的配置参数。 系统中每个 dnode 的配置参数。
| # | **列名** | **数据类型** | **说明** | | # | **列名** | **数据类型** | **说明** |
| --- | :------: | ------------ | ------------ | | --- | :------: | ------------ | --------------------------------------------------------------------------------------- |
| 1 | dnode_id | INT | dnode 的 ID | | 1 | dnode_id | INT | dnode 的 ID |
| 2 | name | BINARY(32) | 配置项名称 | | 2 | name | BINARY(32) | 配置项名称 |
| 3 | value | BINARY(64) | 该配置项的值。需要注意,`value` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 | | 3 | value | BINARY(64) | 该配置项的值。需要注意,`value` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
...@@ -290,7 +290,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数 ...@@ -290,7 +290,7 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
## INS_STREAMS ## INS_STREAMS
| # | **列名** | **数据类型** | **说明** | | # | **列名** | **数据类型** | **说明** |
| --- | :----------: | ------------ | --------------------------------------- | | --- | :----------: | ------------ | -------------------------------------------------------------------------------------------------------------------- |
| 1 | stream_name | BINARY(64) | 流计算名称 | | 1 | stream_name | BINARY(64) | 流计算名称 |
| 2 | create_time | TIMESTAMP | 创建时间 | | 2 | create_time | TIMESTAMP | 创建时间 |
| 3 | sql | BINARY(1024) | 创建流计算时提供的 SQL 语句 | | 3 | sql | BINARY(1024) | 创建流计算时提供的 SQL 语句 |
......
...@@ -4,12 +4,13 @@ title: 索引 ...@@ -4,12 +4,13 @@ title: 索引
description: 索引功能的使用细节 description: 索引功能的使用细节
--- ---
TDengine 从 3.0.0.0 版本开始引入了索引功能,支持 SMA 索引和 FULLTEXT 索引。 TDengine 从 3.0.0.0 版本开始引入了索引功能,支持 SMA 索引和 tag 索引。
## 创建索引 ## 创建索引
```sql ```sql
CREATE FULLTEXT INDEX index_name ON tb_name (col_name [, col_name] ...)
CREATE INDEX index_name ON tb_name index_option
CREATE SMA INDEX index_name ON tb_name index_option CREATE SMA INDEX index_name ON tb_name index_option
...@@ -46,10 +47,6 @@ SELECT _wstart,_wend,_wduration,max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDIN ...@@ -46,10 +47,6 @@ SELECT _wstart,_wend,_wduration,max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDIN
ALTER LOCAL 'querySmaOptimize' '0'; ALTER LOCAL 'querySmaOptimize' '0';
``` ```
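下面是一个创建 tag 索引的示意例子(超级表 st1 与标签列 t1 均为假设的示例名称,具体语法以上文的语法说明为准):

```sql
CREATE INDEX idx_t1 ON st1 (t1);
```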
### FULLTEXT 索引
对指定列建立文本索引,可以提升含有文本过滤的查询的性能。FULLTEXT 索引不支持 index_option 语法。现阶段只支持对 JSON 类型的标签列创建 FULLTEXT 索引。不支持多列联合索引,但可以为每个列分布创建 FULLTEXT 索引。
## 删除索引 ## 删除索引
```sql ```sql
......
...@@ -362,6 +362,8 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\) ...@@ -362,6 +362,8 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **max** : 数据类型的 列/标签 的最大值。生成的值将小于最大值。 - **max** : 数据类型的 列/标签 的最大值。生成的值将小于最大值。
- **fun** : 此列数据以函数填充,目前只支持 sin 和 cos 两个函数,输入参数为时间戳换算成的角度值,换算公式:角度 x = 输入的时间列 ts 值 % 360。同时支持系数调节和随机波动因子调节,以固定格式的表达式展现,如 fun=“10\*sin(x)+100\*random(5)”,其中 x 表示角度,取值 0 ~ 360 度,增长步长与时间列步长一致;10 表示乘的系数,100 表示加或减的系数,5 表示波动幅度在 5% 的随机范围内。目前支持 int, bigint, float, double 四种数据类型。注意:表达式为固定模式,不可前后颠倒。
- **values** : nchar/binary 列/标签的值域,将从值中随机选择。 - **values** : nchar/binary 列/标签的值域,将从值中随机选择。
- **sma**: 将该列加入 SMA 中,值为 "yes" 或者 "no",默认为 "no"。 - **sma**: 将该列加入 SMA 中,值为 "yes" 或者 "no",默认为 "no"。
......
...@@ -5,7 +5,7 @@ description: "TDengine 服务端、客户端和连接器支持的平台列表" ...@@ -5,7 +5,7 @@ description: "TDengine 服务端、客户端和连接器支持的平台列表"
## TDengine 服务端支持的平台列表 ## TDengine 服务端支持的平台列表
| | **Windows server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **macOS** | | | **Windows server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18 以上** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **macOS** |
| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | ------------ | ----------------- | ---------------- | --------- | | ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | ------------ | ----------------- | ---------------- | --------- |
| X64 | ● | ● | ● | ● | ● | ● | ● | ● | | X64 | ● | ● | ● | ● | ● | ● | ● | ● |
| 树莓派 ARM64 | | | ● | | | | | | | 树莓派 ARM64 | | | ● | | | | | |
......
...@@ -726,6 +726,15 @@ charset 的有效值是 UTF-8。 ...@@ -726,6 +726,15 @@ charset 的有效值是 UTF-8。
| 取值范围 | 0: 不改变;1:改变 | | 取值范围 | 0: 不改变;1:改变 |
| 缺省值 | 0 | | 缺省值 | 0 |
### tmqMaxTopicNum
| 属性 | 说明 |
| -------- | ------------------ |
| 适用范围 | 仅服务端适用 |
| 含义 | 订阅最多可建立的 topic 数量 |
| 取值范围 | 1-10000 |
| 缺省值 | 20 |
## 压缩参数 ## 压缩参数
### compressMsgSize ### compressMsgSize
......
...@@ -210,19 +210,6 @@ TDinsight dashboard 数据来源于 log 库(存放监控数据的默认db, ...@@ -210,19 +210,6 @@ TDinsight dashboard 数据来源于 log 库(存放监控数据的默认db,
|dnode\_ep|NCHAR|TAG|dnode endpoint| |dnode\_ep|NCHAR|TAG|dnode endpoint|
|cluster\_id|NCHAR|TAG|cluster id| |cluster\_id|NCHAR|TAG|cluster id|
### logs 表
`logs` 表记录登录信息。
|field|type|is\_tag|comment|
|:----|:---|:-----|:------|
|ts|TIMESTAMP||timestamp|
|level|VARCHAR||log level|
|content|NCHAR||log content,长度不超过1024字节|
|dnode\_id|INT|TAG|dnode id|
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|cluster\_id|NCHAR|TAG|cluster id|
### log\_summary 表 ### log\_summary 表
`log_summary` 记录日志统计信息。 `log_summary` 记录日志统计信息。
......
...@@ -369,6 +369,9 @@ curl -X DELETE http://localhost:8083/connectors/TDengineSourceConnector ...@@ -369,6 +369,9 @@ curl -X DELETE http://localhost:8083/connectors/TDengineSourceConnector
8. `topic.per.stable`: 如果设置为 true,表示一个超级表对应一个 Kafka topic,topic的命名规则 `<topic.prefix><topic.delimiter><connection.database><topic.delimiter><stable.name>`;如果设置为 false,则指定的 DB 中的所有数据进入一个 Kafka topic,topic 的命名规则为 `<topic.prefix><topic.delimiter><connection.database>` 8. `topic.per.stable`: 如果设置为 true,表示一个超级表对应一个 Kafka topic,topic的命名规则 `<topic.prefix><topic.delimiter><connection.database><topic.delimiter><stable.name>`;如果设置为 false,则指定的 DB 中的所有数据进入一个 Kafka topic,topic 的命名规则为 `<topic.prefix><topic.delimiter><connection.database>`
9. `topic.ignore.db`: topic 命名规则是否包含 database 名称,true 表示规则为 `<topic.prefix><topic.delimiter><stable.name>`,false 表示规则为 `<topic.prefix><topic.delimiter><connection.database><topic.delimiter><stable.name>`,默认 false。此配置项在 `topic.per.stable` 设置为 false 时不生效。 9. `topic.ignore.db`: topic 命名规则是否包含 database 名称,true 表示规则为 `<topic.prefix><topic.delimiter><stable.name>`,false 表示规则为 `<topic.prefix><topic.delimiter><connection.database><topic.delimiter><stable.name>`,默认 false。此配置项在 `topic.per.stable` 设置为 false 时不生效。
10. `topic.delimiter`: topic 名称分割符,默认为 `-` 10. `topic.delimiter`: topic 名称分割符,默认为 `-`
11. `read.method`: 从 TDengine 读取数据方式,query 或是 subscription。默认为 subscription。
12. `subscription.group.id`: 指定 TDengine 数据订阅的组 id,当 `read.method` 为 subscription 时,此项为必填项。
13. `subscription.from`: 指定 TDengine 数据订阅起始位置,latest 或是 earliest。默认为 latest。
## 其他说明 ## 其他说明
......
...@@ -112,7 +112,7 @@ TDengine 3.0 采用 hash 一致性算法,确定每张数据表所在的 vnode ...@@ -112,7 +112,7 @@ TDengine 3.0 采用 hash 一致性算法,确定每张数据表所在的 vnode
### 数据分区 ### 数据分区
TDengine 除 vnode 分片之外,还对时序数据按照时间段进行分区。每个数据文件只包含一个时间段的时序数据,时间段的长度由 DB 的配置参数 days 决定。这种按时间段分区的方法还便于高效实现数据的保留策略,只要数据文件超过规定的天数(系统配置参数 keep),将被自动删除。而且不同的时间段可以存放于不同的路径和存储介质,以便于大数据的冷热管理,实现多级存储。 TDengine 除 vnode 分片之外,还对时序数据按照时间段进行分区。每个数据文件只包含一个时间段的时序数据,时间段的长度由 DB 的配置参数 duration 决定。这种按时间段分区的方法还便于高效实现数据的保留策略,只要数据文件超过规定的天数(系统配置参数 keep),将被自动删除。而且不同的时间段可以存放于不同的路径和存储介质,以便于大数据的冷热管理,实现多级存储。
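下面是一个建库的示意例子,通过 DURATION 与 KEEP 参数分别指定每个数据文件的时间跨度与数据保留天数(数据库名与取值仅为示例):

```sql
CREATE DATABASE power DURATION 10 KEEP 365;
```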
总的来说,**TDengine 是通过 vnode 以及时间两个维度,对大数据进行切分**,便于并行高效的管理,实现水平扩展。 总的来说,**TDengine 是通过 vnode 以及时间两个维度,对大数据进行切分**,便于并行高效的管理,实现水平扩展。
......
...@@ -10,6 +10,10 @@ TDengine 2.x 各版本安装包请访问[这里](https://www.taosdata.com/all-do ...@@ -10,6 +10,10 @@ TDengine 2.x 各版本安装包请访问[这里](https://www.taosdata.com/all-do
import Release from "/components/ReleaseV3"; import Release from "/components/ReleaseV3";
## 3.0.7.1
<Release type="tdengine" version="3.0.7.1" />
## 3.0.7.0 ## 3.0.7.0
<Release type="tdengine" version="3.0.7.0" /> <Release type="tdengine" version="3.0.7.0" />
......
...@@ -47,7 +47,7 @@ ...@@ -47,7 +47,7 @@
<dependency> <dependency>
<groupId>com.taosdata.jdbc</groupId> <groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId> <artifactId>taos-jdbcdriver</artifactId>
<version>3.0.0</version> <version>3.2.4</version>
</dependency> </dependency>
<dependency> <dependency>
......
...@@ -287,11 +287,20 @@ DLL_EXPORT TAOS_RES *tmq_consumer_poll(tmq_t *tmq, int64_t timeout); ...@@ -287,11 +287,20 @@ DLL_EXPORT TAOS_RES *tmq_consumer_poll(tmq_t *tmq, int64_t timeout);
DLL_EXPORT int32_t tmq_consumer_close(tmq_t *tmq); DLL_EXPORT int32_t tmq_consumer_close(tmq_t *tmq);
DLL_EXPORT int32_t tmq_commit_sync(tmq_t *tmq, const TAOS_RES *msg); DLL_EXPORT int32_t tmq_commit_sync(tmq_t *tmq, const TAOS_RES *msg);
DLL_EXPORT void tmq_commit_async(tmq_t *tmq, const TAOS_RES *msg, tmq_commit_cb *cb, void *param); DLL_EXPORT void tmq_commit_async(tmq_t *tmq, const TAOS_RES *msg, tmq_commit_cb *cb, void *param);
DLL_EXPORT int32_t tmq_commit_offset_sync(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset);
DLL_EXPORT void tmq_commit_offset_async(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset, tmq_commit_cb *cb, void *param);
DLL_EXPORT int32_t tmq_get_topic_assignment(tmq_t *tmq, const char *pTopicName, tmq_topic_assignment **assignment, DLL_EXPORT int32_t tmq_get_topic_assignment(tmq_t *tmq, const char *pTopicName, tmq_topic_assignment **assignment,
int32_t *numOfAssignment); int32_t *numOfAssignment);
DLL_EXPORT void tmq_free_assignment(tmq_topic_assignment* pAssignment); DLL_EXPORT void tmq_free_assignment(tmq_topic_assignment* pAssignment);
DLL_EXPORT int32_t tmq_offset_seek(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset); DLL_EXPORT int32_t tmq_offset_seek(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset);
DLL_EXPORT const char *tmq_get_topic_name(TAOS_RES *res);
DLL_EXPORT const char *tmq_get_db_name(TAOS_RES *res);
DLL_EXPORT int32_t tmq_get_vgroup_id(TAOS_RES *res);
DLL_EXPORT int64_t tmq_get_vgroup_offset(TAOS_RES* res);
DLL_EXPORT int64_t tmq_position(tmq_t *tmq, const char *pTopicName, int32_t vgId);
DLL_EXPORT int64_t tmq_committed(tmq_t *tmq, const char *pTopicName, int32_t vgId);
/* ----------------------TMQ CONFIGURATION INTERFACE---------------------- */ /* ----------------------TMQ CONFIGURATION INTERFACE---------------------- */
enum tmq_conf_res_t { enum tmq_conf_res_t {
...@@ -309,11 +318,6 @@ DLL_EXPORT void tmq_conf_set_auto_commit_cb(tmq_conf_t *conf, tmq_comm ...@@ -309,11 +318,6 @@ DLL_EXPORT void tmq_conf_set_auto_commit_cb(tmq_conf_t *conf, tmq_comm
/* -------------------------TMQ MSG HANDLE INTERFACE---------------------- */ /* -------------------------TMQ MSG HANDLE INTERFACE---------------------- */
DLL_EXPORT const char *tmq_get_topic_name(TAOS_RES *res);
DLL_EXPORT const char *tmq_get_db_name(TAOS_RES *res);
DLL_EXPORT int32_t tmq_get_vgroup_id(TAOS_RES *res);
DLL_EXPORT int64_t tmq_get_vgroup_offset(TAOS_RES* res);
/* ------------------------------ TAOSX -----------------------------------*/ /* ------------------------------ TAOSX -----------------------------------*/
// note: following apis are unstable // note: following apis are unstable
enum tmq_res_t { enum tmq_res_t {
......
...@@ -3378,6 +3378,12 @@ typedef struct { ...@@ -3378,6 +3378,12 @@ typedef struct {
int8_t reserved; int8_t reserved;
} SMqHbRsp; } SMqHbRsp;
typedef struct {
SMsgHead head;
int64_t consumerId;
char subKey[TSDB_SUBSCRIBE_KEY_LEN];
} SMqSeekReq;
#define TD_AUTO_CREATE_TABLE 0x1 #define TD_AUTO_CREATE_TABLE 0x1
typedef struct { typedef struct {
int64_t suid; int64_t suid;
...@@ -3507,6 +3513,8 @@ int32_t tSerializeSMqHbReq(void* buf, int32_t bufLen, SMqHbReq* pReq); ...@@ -3507,6 +3513,8 @@ int32_t tSerializeSMqHbReq(void* buf, int32_t bufLen, SMqHbReq* pReq);
int32_t tDeserializeSMqHbReq(void* buf, int32_t bufLen, SMqHbReq* pReq); int32_t tDeserializeSMqHbReq(void* buf, int32_t bufLen, SMqHbReq* pReq);
int32_t tDeatroySMqHbReq(SMqHbReq* pReq); int32_t tDeatroySMqHbReq(SMqHbReq* pReq);
int32_t tSerializeSMqSeekReq(void *buf, int32_t bufLen, SMqSeekReq *pReq);
int32_t tDeserializeSMqSeekReq(void *buf, int32_t bufLen, SMqSeekReq *pReq);
#define SUBMIT_REQ_AUTO_CREATE_TABLE 0x1 #define SUBMIT_REQ_AUTO_CREATE_TABLE 0x1
#define SUBMIT_REQ_COLUMN_DATA_FORMAT 0x2 #define SUBMIT_REQ_COLUMN_DATA_FORMAT 0x2
......
...@@ -307,12 +307,13 @@ enum { ...@@ -307,12 +307,13 @@ enum {
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SUBSCRIBE, "vnode-tmq-subscribe", SMqRebVgReq, SMqRebVgRsp) TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SUBSCRIBE, "vnode-tmq-subscribe", SMqRebVgReq, SMqRebVgRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_DELETE_SUB, "vnode-tmq-delete-sub", SMqVDeleteReq, SMqVDeleteRsp) TD_DEF_MSG_TYPE(TDMT_VND_TMQ_DELETE_SUB, "vnode-tmq-delete-sub", SMqVDeleteReq, SMqVDeleteRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_COMMIT_OFFSET, "vnode-tmq-commit-offset", STqOffset, STqOffset) TD_DEF_MSG_TYPE(TDMT_VND_TMQ_COMMIT_OFFSET, "vnode-tmq-commit-offset", STqOffset, STqOffset)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SEEK_TO_OFFSET, "vnode-tmq-seekto-offset", STqOffset, STqOffset) TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SEEK, "vnode-tmq-seek", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_ADD_CHECKINFO, "vnode-tmq-add-checkinfo", NULL, NULL) TD_DEF_MSG_TYPE(TDMT_VND_TMQ_ADD_CHECKINFO, "vnode-tmq-add-checkinfo", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_DEL_CHECKINFO, "vnode-del-checkinfo", NULL, NULL) TD_DEF_MSG_TYPE(TDMT_VND_TMQ_DEL_CHECKINFO, "vnode-del-checkinfo", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_CONSUME, "vnode-tmq-consume", SMqPollReq, SMqDataBlkRsp) TD_DEF_MSG_TYPE(TDMT_VND_TMQ_CONSUME, "vnode-tmq-consume", SMqPollReq, SMqDataBlkRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_CONSUME_PUSH, "vnode-tmq-consume-push", NULL, NULL) TD_DEF_MSG_TYPE(TDMT_VND_TMQ_CONSUME_PUSH, "vnode-tmq-consume-push", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_VG_WALINFO, "vnode-tmq-vg-walinfo", SMqPollReq, SMqDataBlkRsp) TD_DEF_MSG_TYPE(TDMT_VND_TMQ_VG_WALINFO, "vnode-tmq-vg-walinfo", SMqPollReq, SMqDataBlkRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_VG_COMMITTEDINFO, "vnode-tmq-committedinfo", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_MAX_MSG, "vnd-tmq-max", NULL, NULL) TD_DEF_MSG_TYPE(TDMT_VND_TMQ_MAX_MSG, "vnd-tmq-max", NULL, NULL)
......
...@@ -228,7 +228,7 @@ typedef struct SStoreTqReader { ...@@ -228,7 +228,7 @@ typedef struct SStoreTqReader {
} SStoreTqReader; } SStoreTqReader;
typedef struct SStoreSnapshotFn { typedef struct SStoreSnapshotFn {
int32_t (*createSnapshot)(SSnapContext* ctx, int64_t uid); int32_t (*setForSnapShot)(SSnapContext* ctx, int64_t uid);
int32_t (*destroySnapshot)(SSnapContext* ctx); int32_t (*destroySnapshot)(SSnapContext* ctx);
SMetaTableInfo (*getMetaTableInfoFromSnapshot)(SSnapContext* ctx); SMetaTableInfo (*getMetaTableInfoFromSnapshot)(SSnapContext* ctx);
int32_t (*getTableInfoFromSnapshot)(SSnapContext* ctx, void** pBuf, int32_t* contLen, int16_t* type, int64_t* uid); int32_t (*getTableInfoFromSnapshot)(SSnapContext* ctx, void** pBuf, int32_t* contLen, int16_t* type, int64_t* uid);
......
...@@ -361,7 +361,7 @@ typedef struct SRestoreComponentNodeStmt { ...@@ -361,7 +361,7 @@ typedef struct SRestoreComponentNodeStmt {
typedef struct SCreateTopicStmt { typedef struct SCreateTopicStmt {
ENodeType type; ENodeType type;
char topicName[TSDB_TABLE_NAME_LEN]; char topicName[TSDB_TOPIC_NAME_LEN];
char subDbName[TSDB_DB_NAME_LEN]; char subDbName[TSDB_DB_NAME_LEN];
char subSTbName[TSDB_TABLE_NAME_LEN]; char subSTbName[TSDB_TABLE_NAME_LEN];
bool ignoreExists; bool ignoreExists;
...@@ -372,13 +372,13 @@ typedef struct SCreateTopicStmt { ...@@ -372,13 +372,13 @@ typedef struct SCreateTopicStmt {
typedef struct SDropTopicStmt { typedef struct SDropTopicStmt {
ENodeType type; ENodeType type;
char topicName[TSDB_TABLE_NAME_LEN]; char topicName[TSDB_TOPIC_NAME_LEN];
bool ignoreNotExists; bool ignoreNotExists;
} SDropTopicStmt; } SDropTopicStmt;
typedef struct SDropCGroupStmt { typedef struct SDropCGroupStmt {
ENodeType type; ENodeType type;
char topicName[TSDB_TABLE_NAME_LEN]; char topicName[TSDB_TOPIC_NAME_LEN];
char cgroup[TSDB_CGROUP_LEN]; char cgroup[TSDB_CGROUP_LEN];
bool ignoreNotExists; bool ignoreNotExists;
} SDropCGroupStmt; } SDropCGroupStmt;
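These parser structs back the topic-related SQL statements; a minimal sketch of the statements they correspond to (topic, database, and consumer-group names are illustrative):

```sql
CREATE TOPIC IF NOT EXISTS tp AS DATABASE test;
DROP CONSUMER GROUP IF EXISTS group_id_2 ON tp;
DROP TOPIC IF EXISTS tp;
```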
......
...@@ -247,6 +247,7 @@ typedef struct SSortLogicNode { ...@@ -247,6 +247,7 @@ typedef struct SSortLogicNode {
SNodeList* pSortKeys; SNodeList* pSortKeys;
bool groupSort; bool groupSort;
int64_t maxRows; int64_t maxRows;
bool skipPKSortOpt;
} SSortLogicNode; } SSortLogicNode;
typedef struct SPartitionLogicNode { typedef struct SPartitionLogicNode {
......
...@@ -776,6 +776,11 @@ int32_t* taosGetErrno(); ...@@ -776,6 +776,11 @@ int32_t* taosGetErrno();
#define TSDB_CODE_TMQ_TOPIC_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x4004) #define TSDB_CODE_TMQ_TOPIC_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x4004)
#define TSDB_CODE_TMQ_GROUP_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x4005) #define TSDB_CODE_TMQ_GROUP_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x4005)
#define TSDB_CODE_TMQ_SNAPSHOT_ERROR TAOS_DEF_ERROR_CODE(0, 0x4006) #define TSDB_CODE_TMQ_SNAPSHOT_ERROR TAOS_DEF_ERROR_CODE(0, 0x4006)
#define TSDB_CODE_TMQ_VERSION_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x4007)
#define TSDB_CODE_TMQ_INVALID_VGID TAOS_DEF_ERROR_CODE(0, 0x4008)
#define TSDB_CODE_TMQ_INVALID_TOPIC TAOS_DEF_ERROR_CODE(0, 0x4009)
#define TSDB_CODE_TMQ_NEED_INITIALIZED TAOS_DEF_ERROR_CODE(0, 0x4010)
#define TSDB_CODE_TMQ_NO_COMMITTED TAOS_DEF_ERROR_CODE(0, 0x4011)
// stream // stream
#define TSDB_CODE_STREAM_TASK_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x4100) #define TSDB_CODE_STREAM_TASK_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x4100)
......
...@@ -613,12 +613,6 @@ function install_examples() { ...@@ -613,12 +613,6 @@ function install_examples() {
fi fi
} }
function install_web() {
if [ -d "${script_dir}/share" ]; then
${csudo}cp -rf ${script_dir}/share/* ${install_main_dir}/share > /dev/null 2>&1 ||:
fi
}
function clean_service_on_sysvinit() { function clean_service_on_sysvinit() {
if ps aux | grep -v grep | grep ${serverName2} &>/dev/null; then if ps aux | grep -v grep | grep ${serverName2} &>/dev/null; then
...@@ -894,7 +888,6 @@ function updateProduct() { ...@@ -894,7 +888,6 @@ function updateProduct() {
fi fi
install_examples install_examples
install_web
if [ -z $1 ]; then if [ -z $1 ]; then
install_bin install_bin
install_service install_service
...@@ -907,21 +900,23 @@ function updateProduct() { ...@@ -907,21 +900,23 @@ function updateProduct() {
echo echo
echo -e "${GREEN_DARK}To configure ${productName2} ${NC}: edit ${cfg_install_dir}/${configFile2}" echo -e "${GREEN_DARK}To configure ${productName2} ${NC}: edit ${cfg_install_dir}/${configFile2}"
[ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \ [ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To configure ${clientName2} Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml" echo -e "${GREEN_DARK}To configure ${clientName2}Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml"
if ((${service_mod} == 0)); then if ((${service_mod} == 0)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}systemctl start ${serverName2}${NC}" echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}systemctl start ${serverName2}${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \ [ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}" echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}"
elif ((${service_mod} == 1)); then elif ((${service_mod} == 1)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}service ${serverName2} start${NC}" echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}service ${serverName2} start${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \ [ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}" echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}"
else else
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ./${serverName2}${NC}" echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ./${serverName2}${NC}"
[ -f ${installDir}/bin/${clientName2}adapter ] && \ [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${clientName2}adapter &${NC}" echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${clientName2}adapter &${NC}"
fi fi
echo -e "${GREEN_DARK}To enable ${clientName2}keeper ${NC}: sudo systemctl enable ${clientName2}keeper &${NC}"
if [ ${openresty_work} = 'true' ]; then if [ ${openresty_work} = 'true' ]; then
echo -e "${GREEN_DARK}To access ${productName2} ${NC}: use ${GREEN_UNDERLINE}${clientName2} -h $serverFqdn${NC} in shell OR from ${GREEN_UNDERLINE}http://127.0.0.1:${web_port}${NC}" echo -e "${GREEN_DARK}To access ${productName2} ${NC}: use ${GREEN_UNDERLINE}${clientName2} -h $serverFqdn${NC} in shell OR from ${GREEN_UNDERLINE}http://127.0.0.1:${web_port}${NC}"
else else
...@@ -934,6 +929,7 @@ function updateProduct() { ...@@ -934,6 +929,7 @@ function updateProduct() {
fi fi
echo echo
echo -e "\033[44;32;1m${productName2} is updated successfully!${NC}" echo -e "\033[44;32;1m${productName2} is updated successfully!${NC}"
echo -e "\033[44;32;1mTo manage ${productName2} instance, view documentation and explorer features, you need to install ${clientName2}Explorer ${NC}"
else else
install_bin install_bin
install_config install_config
...@@ -972,7 +968,6 @@ function installProduct() { ...@@ -972,7 +968,6 @@ function installProduct() {
install_connector install_connector
fi fi
install_examples install_examples
install_web
if [ -z $1 ]; then # install service and client if [ -z $1 ]; then # install service and client
# For installing new # For installing new
...@@ -989,21 +984,23 @@ function installProduct() { ...@@ -989,21 +984,23 @@ function installProduct() {
echo echo
echo -e "${GREEN_DARK}To configure ${productName2} ${NC}: edit ${cfg_install_dir}/${configFile2}" echo -e "${GREEN_DARK}To configure ${productName2} ${NC}: edit ${cfg_install_dir}/${configFile2}"
[ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \ [ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To configure ${clientName2} Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml" echo -e "${GREEN_DARK}To configure ${clientName2}Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml"
if ((${service_mod} == 0)); then if ((${service_mod} == 0)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}systemctl start ${serverName2}${NC}" echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}systemctl start ${serverName2}${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \ [ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}" echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}"
elif ((${service_mod} == 1)); then elif ((${service_mod} == 1)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}service ${serverName2} start${NC}" echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}service ${serverName2} start${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \ [ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}" echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}"
else else
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${serverName2}${NC}" echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${serverName2}${NC}"
[ -f ${installDir}/bin/${clientName2}adapter ] && \ [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${clientName2}adapter &${NC}" echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${clientName2}adapter &${NC}"
fi fi
echo -e "${GREEN_DARK}To enable ${clientName2}keeper ${NC}: sudo systemctl enable ${clientName2}keeper &${NC}"
if [ ! -z "$firstEp" ]; then if [ ! -z "$firstEp" ]; then
tmpFqdn=${firstEp%%:*} tmpFqdn=${firstEp%%:*}
substr=":" substr=":"
...@@ -1025,6 +1022,7 @@ function installProduct() { ...@@ -1025,6 +1022,7 @@ function installProduct() {
fi fi
echo -e "\033[44;32;1m${productName2} is installed successfully!${NC}" echo -e "\033[44;32;1m${productName2} is installed successfully!${NC}"
echo -e "\033[44;32;1mTo manage ${productName2} instance, view documentation and explorer features, you need to install ${clientName2}Explorer ${NC}"
echo echo
else # Only install client else # Only install client
install_bin install_bin
......
...@@ -267,7 +267,9 @@ function install_log() { ...@@ -267,7 +267,9 @@ function install_log() {
} }
function install_connector() { function install_connector() {
if [ -d ${script_dir}/connector ]; then
${csudo}cp -rf ${script_dir}/connector/ ${install_main_dir}/ ${csudo}cp -rf ${script_dir}/connector/ ${install_main_dir}/
fi
} }
function install_examples() { function install_examples() {
......
...@@ -56,8 +56,8 @@ copy %binary_dir%\\build\\bin\\taos.exe %target_dir% > nul ...@@ -56,8 +56,8 @@ copy %binary_dir%\\build\\bin\\taos.exe %target_dir% > nul
if exist %binary_dir%\\build\\bin\\taosBenchmark.exe ( if exist %binary_dir%\\build\\bin\\taosBenchmark.exe (
copy %binary_dir%\\build\\bin\\taosBenchmark.exe %target_dir% > nul copy %binary_dir%\\build\\bin\\taosBenchmark.exe %target_dir% > nul
) )
if exist %binary_dir%\\build\\lib\\taosws.dll.lib ( if exist %binary_dir%\\build\\lib\\taosws.lib (
copy %binary_dir%\\build\\lib\\taosws.dll.lib %target_dir%\\driver > nul copy %binary_dir%\\build\\lib\\taosws.lib %target_dir%\\driver > nul
) )
if exist %binary_dir%\\build\\lib\\taosws.dll ( if exist %binary_dir%\\build\\lib\\taosws.dll (
copy %binary_dir%\\build\\lib\\taosws.dll %target_dir%\\driver > nul copy %binary_dir%\\build\\lib\\taosws.dll %target_dir%\\driver > nul
......
...@@ -432,12 +432,6 @@ function install_examples() { ...@@ -432,12 +432,6 @@ function install_examples() {
${csudo}cp -rf ${source_dir}/examples/* ${install_main_dir}/examples || : ${csudo}cp -rf ${source_dir}/examples/* ${install_main_dir}/examples || :
} }
function install_web() {
if [ -d "${binary_dir}/build/share" ]; then
${csudo}cp -rf ${binary_dir}/build/share/* ${install_main_dir}/share || :
fi
}
function clean_service_on_sysvinit() { function clean_service_on_sysvinit() {
if ps aux | grep -v grep | grep ${serverName} &>/dev/null; then if ps aux | grep -v grep | grep ${serverName} &>/dev/null; then
${csudo}service ${serverName} stop || : ${csudo}service ${serverName} stop || :
...@@ -592,7 +586,6 @@ function update_TDengine() { ...@@ -592,7 +586,6 @@ function update_TDengine() {
install_lib install_lib
# install_connector # install_connector
install_examples install_examples
install_web
install_bin install_bin
install_app install_app
......
...@@ -126,7 +126,6 @@ else ...@@ -126,7 +126,6 @@ else
fi fi
install_files="${script_dir}/install.sh" install_files="${script_dir}/install.sh"
web_dir="${top_dir}/../enterprise/src/plugins/web"
init_file_deb=${script_dir}/../deb/taosd init_file_deb=${script_dir}/../deb/taosd
init_file_rpm=${script_dir}/../rpm/taosd init_file_rpm=${script_dir}/../rpm/taosd
...@@ -320,17 +319,6 @@ if [[ $dbName == "taos" ]]; then ...@@ -320,17 +319,6 @@ if [[ $dbName == "taos" ]]; then
mkdir -p ${install_dir}/examples/taosbenchmark-json && cp ${examples_dir}/../tools/taos-tools/example/* ${install_dir}/examples/taosbenchmark-json mkdir -p ${install_dir}/examples/taosbenchmark-json && cp ${examples_dir}/../tools/taos-tools/example/* ${install_dir}/examples/taosbenchmark-json
fi fi
# Add web files
if [ "$verMode" == "cluster" ] || [ "$verMode" == "cloud" ]; then
if [ -d "${web_dir}/admin" ] ; then
mkdir -p ${install_dir}/share/
cp -Rfap ${web_dir}/admin ${install_dir}/share/
cp ${web_dir}/png/taos.png ${install_dir}/share/admin/images/taos.png
cp -rf ${build_dir}/share/{etc,srv} ${install_dir}/share ||:
else
echo "directory not found for enterprise release: ${web_dir}/admin"
fi
fi
fi fi
# Copy driver # Copy driver
......
...@@ -1297,14 +1297,20 @@ int initEpSetFromCfg(const char* firstEp, const char* secondEp, SCorEpSet* pEpSe ...@@ -1297,14 +1297,20 @@ int initEpSetFromCfg(const char* firstEp, const char* secondEp, SCorEpSet* pEpSe
return -1; return -1;
} }
int32_t code = taosGetFqdnPortFromEp(firstEp, &mgmtEpSet->eps[0]); int32_t code = taosGetFqdnPortFromEp(firstEp, &mgmtEpSet->eps[mgmtEpSet->numOfEps]);
if (code != TSDB_CODE_SUCCESS) { if (code != TSDB_CODE_SUCCESS) {
terrno = TSDB_CODE_TSC_INVALID_FQDN; terrno = TSDB_CODE_TSC_INVALID_FQDN;
return terrno; return terrno;
} }
uint32_t addr = taosGetIpv4FromFqdn(mgmtEpSet->eps[mgmtEpSet->numOfEps].fqdn);
if (addr == 0xffffffff) {
tscError("failed to resolve firstEp fqdn: %s, code:%s", mgmtEpSet->eps[mgmtEpSet->numOfEps].fqdn,
tstrerror(TSDB_CODE_TSC_INVALID_FQDN));
memset(&(mgmtEpSet->eps[mgmtEpSet->numOfEps]), 0, sizeof(mgmtEpSet->eps[mgmtEpSet->numOfEps]));
} else {
mgmtEpSet->numOfEps++; mgmtEpSet->numOfEps++;
} }
}
if (secondEp && secondEp[0] != 0) { if (secondEp && secondEp[0] != 0) {
if (strlen(secondEp) >= TSDB_EP_LEN) { if (strlen(secondEp) >= TSDB_EP_LEN) {
...@@ -1313,12 +1319,19 @@ int initEpSetFromCfg(const char* firstEp, const char* secondEp, SCorEpSet* pEpSe ...@@ -1313,12 +1319,19 @@ int initEpSetFromCfg(const char* firstEp, const char* secondEp, SCorEpSet* pEpSe
} }
taosGetFqdnPortFromEp(secondEp, &mgmtEpSet->eps[mgmtEpSet->numOfEps]); taosGetFqdnPortFromEp(secondEp, &mgmtEpSet->eps[mgmtEpSet->numOfEps]);
uint32_t addr = taosGetIpv4FromFqdn(mgmtEpSet->eps[mgmtEpSet->numOfEps].fqdn);
if (addr == 0xffffffff) {
tscError("failed to resolve secondEp fqdn: %s, code:%s", mgmtEpSet->eps[mgmtEpSet->numOfEps].fqdn,
tstrerror(TSDB_CODE_TSC_INVALID_FQDN));
memset(&(mgmtEpSet->eps[mgmtEpSet->numOfEps]), 0, sizeof(mgmtEpSet->eps[mgmtEpSet->numOfEps]));
} else {
mgmtEpSet->numOfEps++; mgmtEpSet->numOfEps++;
} }
}
if (mgmtEpSet->numOfEps == 0) { if (mgmtEpSet->numOfEps == 0) {
terrno = TSDB_CODE_TSC_INVALID_FQDN; terrno = TSDB_CODE_RPC_NETWORK_UNAVAIL;
return -1; return TSDB_CODE_RPC_NETWORK_UNAVAIL;
} }
return 0; return 0;
......
...@@ -99,13 +99,20 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) { ...@@ -99,13 +99,20 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) {
goto End; goto End;
} }
int updateEpSet = 1;
if (connectRsp.dnodeNum == 1) { if (connectRsp.dnodeNum == 1) {
SEpSet srcEpSet = getEpSet_s(&pTscObj->pAppInfo->mgmtEp); SEpSet srcEpSet = getEpSet_s(&pTscObj->pAppInfo->mgmtEp);
SEpSet dstEpSet = connectRsp.epSet; SEpSet dstEpSet = connectRsp.epSet;
if (srcEpSet.numOfEps == 1) {
rpcSetDefaultAddr(pTscObj->pAppInfo->pTransporter, srcEpSet.eps[srcEpSet.inUse].fqdn, rpcSetDefaultAddr(pTscObj->pAppInfo->pTransporter, srcEpSet.eps[srcEpSet.inUse].fqdn,
dstEpSet.eps[dstEpSet.inUse].fqdn); dstEpSet.eps[dstEpSet.inUse].fqdn);
} else if (connectRsp.dnodeNum > 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &connectRsp.epSet)) { updateEpSet = 0;
SEpSet* pOrig = &pTscObj->pAppInfo->mgmtEp.epSet; }
}
if (updateEpSet == 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &connectRsp.epSet)) {
SEpSet corEpSet = getEpSet_s(&pTscObj->pAppInfo->mgmtEp);
SEpSet* pOrig = &corEpSet;
SEp* pOrigEp = &pOrig->eps[pOrig->inUse]; SEp* pOrigEp = &pOrig->eps[pOrig->inUse];
SEp* pNewEp = &connectRsp.epSet.eps[connectRsp.epSet.inUse]; SEp* pNewEp = &connectRsp.epSet.eps[connectRsp.epSet.inUse];
tscDebug("mnode epset updated from %d/%d=>%s:%d to %d/%d=>%s:%d in connRsp", pOrig->inUse, pOrig->numOfEps, tscDebug("mnode epset updated from %d/%d=>%s:%d to %d/%d=>%s:%d in connRsp", pOrig->inUse, pOrig->numOfEps,
......
...@@ -1327,6 +1327,9 @@ end: ...@@ -1327,6 +1327,9 @@ end:
int taos_write_raw_block_with_fields(TAOS* taos, int rows, char* pData, const char* tbname, TAOS_FIELD* fields, int taos_write_raw_block_with_fields(TAOS* taos, int rows, char* pData, const char* tbname, TAOS_FIELD* fields,
int numFields) { int numFields) {
if (!taos || !pData || !tbname) {
return TSDB_CODE_INVALID_PARA;
}
int32_t code = TSDB_CODE_SUCCESS; int32_t code = TSDB_CODE_SUCCESS;
STableMeta* pTableMeta = NULL; STableMeta* pTableMeta = NULL;
SQuery* pQuery = NULL; SQuery* pQuery = NULL;
...@@ -1413,6 +1416,9 @@ end: ...@@ -1413,6 +1416,9 @@ end:
} }
int taos_write_raw_block(TAOS* taos, int rows, char* pData, const char* tbname) { int taos_write_raw_block(TAOS* taos, int rows, char* pData, const char* tbname) {
if (!taos || !pData || !tbname) {
return TSDB_CODE_INVALID_PARA;
}
int32_t code = TSDB_CODE_SUCCESS; int32_t code = TSDB_CODE_SUCCESS;
STableMeta* pTableMeta = NULL; STableMeta* pTableMeta = NULL;
SQuery* pQuery = NULL; SQuery* pQuery = NULL;
...@@ -1812,6 +1818,7 @@ end: ...@@ -1812,6 +1818,7 @@ end:
} }
char* tmq_get_json_meta(TAOS_RES* res) { char* tmq_get_json_meta(TAOS_RES* res) {
if (res == NULL) return NULL;
uDebug("tmq_get_json_meta called"); uDebug("tmq_get_json_meta called");
if (!TD_RES_TMQ_META(res) && !TD_RES_TMQ_METADATA(res)) { if (!TD_RES_TMQ_META(res) && !TD_RES_TMQ_METADATA(res)) {
return NULL; return NULL;
......
...@@ -456,7 +456,7 @@ int smlJsonParseObj(char **start, SSmlLineInfo *element, int8_t *offset) { ...@@ -456,7 +456,7 @@ int smlJsonParseObj(char **start, SSmlLineInfo *element, int8_t *offset) {
static inline int32_t smlParseMetricFromJSON(SSmlHandle *info, cJSON *metric, SSmlLineInfo *elements) { static inline int32_t smlParseMetricFromJSON(SSmlHandle *info, cJSON *metric, SSmlLineInfo *elements) {
elements->measureLen = strlen(metric->valuestring); elements->measureLen = strlen(metric->valuestring);
if (IS_INVALID_TABLE_LEN(elements->measureLen)) { if (IS_INVALID_TABLE_LEN(elements->measureLen)) {
uError("OTD:0x%" PRIx64 " Metric lenght is 0 or large than 192", info->id); uError("OTD:0x%" PRIx64 " Metric length is 0 or large than 192", info->id);
return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH; return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
} }
......
...@@ -34,6 +34,8 @@ namespace { ...@@ -34,6 +34,8 @@ namespace {
void printSubResults(void* pRes, int32_t* totalRows) { void printSubResults(void* pRes, int32_t* totalRows) {
char buf[1024]; char buf[1024];
int32_t vgId = tmq_get_vgroup_id(pRes);
int64_t offset = tmq_get_vgroup_offset(pRes);
while (1) { while (1) {
TAOS_ROW row = taos_fetch_row(pRes); TAOS_ROW row = taos_fetch_row(pRes);
if (row == NULL) { if (row == NULL) {
...@@ -45,7 +47,7 @@ void printSubResults(void* pRes, int32_t* totalRows) { ...@@ -45,7 +47,7 @@ void printSubResults(void* pRes, int32_t* totalRows) {
int32_t precision = taos_result_precision(pRes); int32_t precision = taos_result_precision(pRes);
taos_print_row(buf, row, fields, numOfFields); taos_print_row(buf, row, fields, numOfFields);
*totalRows += 1; *totalRows += 1;
printf("precision: %d, row content: %s\n", precision, buf); printf("vgId: %d, offset: %lld, precision: %d, row content: %s\n", vgId, offset, precision, buf);
} }
// taos_free_result(pRes); // taos_free_result(pRes);
...@@ -1073,6 +1075,98 @@ TEST(clientCase, sub_db_test) { ...@@ -1073,6 +1075,98 @@ TEST(clientCase, sub_db_test) {
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows); fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
} }
TEST(clientCase, tmq_commit) {
// taos_options(TSDB_OPTION_CONFIGDIR, "~/first/cfg");
TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(pConn, nullptr);
tmq_conf_t* conf = tmq_conf_new();
tmq_conf_set(conf, "enable.auto.commit", "false");
tmq_conf_set(conf, "auto.commit.interval.ms", "2000");
tmq_conf_set(conf, "group.id", "group_id_2");
tmq_conf_set(conf, "td.connect.user", "root");
tmq_conf_set(conf, "td.connect.pass", "taosdata");
tmq_conf_set(conf, "auto.offset.reset", "earliest");
tmq_conf_set(conf, "msg.with.table.name", "true");
tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
tmq_conf_destroy(conf);
char topicName[128] = "tp";
// 创建订阅 topics 列表
tmq_list_t* topicList = tmq_list_new();
tmq_list_append(topicList, topicName);
// 启动订阅
tmq_subscribe(tmq, topicList);
tmq_list_destroy(topicList);
int32_t totalRows = 0;
int32_t msgCnt = 0;
int32_t timeout = 2000;
tmq_topic_assignment* pAssign = NULL;
int32_t numOfAssign = 0;
int32_t code = tmq_get_topic_assignment(tmq, topicName, &pAssign, &numOfAssign);
if (code != 0) {
printf("error occurs:%s\n", tmq_err2str(code));
tmq_free_assignment(pAssign);
tmq_consumer_close(tmq);
taos_close(pConn);
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
return;
}
for(int i = 0; i < numOfAssign; i++){
printf("assign i:%d, vgId:%d, offset:%lld, start:%lld, end:%lld\n", i, pAssign[i].vgId, pAssign[i].currentOffset, pAssign[i].begin, pAssign[i].end);
int64_t committed = tmq_committed(tmq, topicName, pAssign[i].vgId);
printf("committed vgId:%d, committed:%lld\n", pAssign[i].vgId, committed);
int64_t position = tmq_position(tmq, topicName, pAssign[i].vgId);
printf("position vgId:%d, position:%lld\n", pAssign[i].vgId, position);
tmq_offset_seek(tmq, topicName, pAssign[i].vgId, 1);
position = tmq_position(tmq, topicName, pAssign[i].vgId);
printf("after seek 1, position vgId:%d, position:%lld\n", pAssign[i].vgId, position);
}
while (1) {
printf("start to poll\n");
TAOS_RES* pRes = tmq_consumer_poll(tmq, timeout);
if (pRes) {
printSubResults(pRes, &totalRows);
} else {
break;
}
tmq_commit_sync(tmq, pRes);
for(int i = 0; i < numOfAssign; i++) {
int64_t committed = tmq_committed(tmq, topicName, pAssign[i].vgId);
printf("committed vgId:%d, committed:%lld\n", pAssign[i].vgId, committed);
if(committed > 0){
int32_t code = tmq_commit_offset_sync(tmq, topicName, pAssign[i].vgId, 4);
printf("tmq_commit_offset_sync vgId:%d, offset:4, code:%d\n", pAssign[i].vgId, code);
int64_t committed = tmq_committed(tmq, topicName, pAssign[i].vgId);
printf("after tmq_commit_offset_sync, committed vgId:%d, committed:%lld\n", pAssign[i].vgId, committed);
}
}
if (pRes != NULL) {
taos_free_result(pRes);
}
// tmq_offset_seek(tmq, "tp", pAssign[0].vgId, pAssign[0].begin);
}
tmq_free_assignment(pAssign);
tmq_consumer_close(tmq);
taos_close(pConn);
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
}
TEST(clientCase, td_25129) { TEST(clientCase, td_25129) {
// taos_options(TSDB_OPTION_CONFIGDIR, "~/first/cfg"); // taos_options(TSDB_OPTION_CONFIGDIR, "~/first/cfg");
...@@ -1092,9 +1186,10 @@ TEST(clientCase, td_25129) { ...@@ -1092,9 +1186,10 @@ TEST(clientCase, td_25129) {
tmq_t* tmq = tmq_consumer_new(conf, NULL, 0); tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
tmq_conf_destroy(conf); tmq_conf_destroy(conf);
char topicName[128] = "tp";
// 创建订阅 topics 列表 // 创建订阅 topics 列表
tmq_list_t* topicList = tmq_list_new(); tmq_list_t* topicList = tmq_list_new();
tmq_list_append(topicList, "tp"); tmq_list_append(topicList, topicName);
// 启动订阅 // 启动订阅
tmq_subscribe(tmq, topicList); tmq_subscribe(tmq, topicList);
...@@ -1112,7 +1207,7 @@ TEST(clientCase, td_25129) { ...@@ -1112,7 +1207,7 @@ TEST(clientCase, td_25129) {
tmq_topic_assignment* pAssign = NULL; tmq_topic_assignment* pAssign = NULL;
int32_t numOfAssign = 0; int32_t numOfAssign = 0;
int32_t code = tmq_get_topic_assignment(tmq, "tp", &pAssign, &numOfAssign); int32_t code = tmq_get_topic_assignment(tmq, topicName, &pAssign, &numOfAssign);
if (code != 0) { if (code != 0) {
printf("error occurs:%s\n", tmq_err2str(code)); printf("error occurs:%s\n", tmq_err2str(code));
tmq_free_assignment(pAssign); tmq_free_assignment(pAssign);
...@@ -1129,7 +1224,7 @@ TEST(clientCase, td_25129) { ...@@ -1129,7 +1224,7 @@ TEST(clientCase, td_25129) {
// tmq_offset_seek(tmq, "tp", pAssign[0].vgId, 4); // tmq_offset_seek(tmq, "tp", pAssign[0].vgId, 4);
tmq_free_assignment(pAssign); tmq_free_assignment(pAssign);
code = tmq_get_topic_assignment(tmq, "tp", &pAssign, &numOfAssign); code = tmq_get_topic_assignment(tmq, topicName, &pAssign, &numOfAssign);
if (code != 0) { if (code != 0) {
printf("error occurs:%s\n", tmq_err2str(code)); printf("error occurs:%s\n", tmq_err2str(code));
tmq_free_assignment(pAssign); tmq_free_assignment(pAssign);
...@@ -1145,7 +1240,7 @@ TEST(clientCase, td_25129) { ...@@ -1145,7 +1240,7 @@ TEST(clientCase, td_25129) {
tmq_free_assignment(pAssign); tmq_free_assignment(pAssign);
code = tmq_get_topic_assignment(tmq, "tp", &pAssign, &numOfAssign); code = tmq_get_topic_assignment(tmq, topicName, &pAssign, &numOfAssign);
if (code != 0) { if (code != 0) {
printf("error occurs:%s\n", tmq_err2str(code)); printf("error occurs:%s\n", tmq_err2str(code));
tmq_free_assignment(pAssign); tmq_free_assignment(pAssign);
...@@ -1160,6 +1255,7 @@ TEST(clientCase, td_25129) { ...@@ -1160,6 +1255,7 @@ TEST(clientCase, td_25129) {
} }
while (1) { while (1) {
printf("start to poll\n");
TAOS_RES* pRes = tmq_consumer_poll(tmq, timeout); TAOS_RES* pRes = tmq_consumer_poll(tmq, timeout);
if (pRes) { if (pRes) {
char buf[128]; char buf[128];
...@@ -1173,10 +1269,26 @@ TEST(clientCase, td_25129) { ...@@ -1173,10 +1269,26 @@ TEST(clientCase, td_25129) {
// printf("vgroup id: %d\n", vgroupId); // printf("vgroup id: %d\n", vgroupId);
printSubResults(pRes, &totalRows); printSubResults(pRes, &totalRows);
code = tmq_get_topic_assignment(tmq, topicName, &pAssign, &numOfAssign);
if (code != 0) {
printf("error occurs:%s\n", tmq_err2str(code));
tmq_free_assignment(pAssign);
tmq_consumer_close(tmq);
taos_close(pConn);
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
return;
}
for(int i = 0; i < numOfAssign; i++){
printf("assign i:%d, vgId:%d, offset:%lld, start:%lld, end:%lld\n", i, pAssign[i].vgId, pAssign[i].currentOffset, pAssign[i].begin, pAssign[i].end);
}
} else { } else {
tmq_offset_seek(tmq, "tp", pAssign[0].vgId, pAssign[0].currentOffset); for(int i = 0; i < numOfAssign; i++) {
tmq_offset_seek(tmq, "tp", pAssign[1].vgId, pAssign[1].currentOffset); tmq_offset_seek(tmq, topicName, pAssign[i].vgId, pAssign[i].currentOffset);
continue; }
tmq_commit_sync(tmq, pRes);
break;
} }
// tmq_commit_sync(tmq, pRes); // tmq_commit_sync(tmq, pRes);
...@@ -1208,6 +1320,7 @@ TEST(clientCase, td_25129) { ...@@ -1208,6 +1320,7 @@ TEST(clientCase, td_25129) {
printf("assign i:%d, vgId:%d, offset:%lld, start:%lld, end:%lld\n", i, pAssign[i].vgId, pAssign[i].currentOffset, pAssign[i].begin, pAssign[i].end); printf("assign i:%d, vgId:%d, offset:%lld, start:%lld, end:%lld\n", i, pAssign[i].vgId, pAssign[i].currentOffset, pAssign[i].begin, pAssign[i].end);
} }
tmq_free_assignment(pAssign);
tmq_consumer_close(tmq); tmq_consumer_close(tmq);
taos_close(pConn); taos_close(pConn);
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows); fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
......
...@@ -1145,7 +1145,8 @@ int32_t tDeserializeSStatusReq(void *buf, int32_t bufLen, SStatusReq *pReq) { ...@@ -1145,7 +1145,8 @@ int32_t tDeserializeSStatusReq(void *buf, int32_t bufLen, SStatusReq *pReq) {
for (int32_t i = 0; i < vlen; ++i) { for (int32_t i = 0; i < vlen; ++i) {
SVnodeLoad vload = {0}; SVnodeLoad vload = {0};
int64_t reserved = 0; int64_t reserved64 = 0;
int32_t reserved32 = 0;
if (tDecodeI32(&decoder, &vload.vgId) < 0) return -1; if (tDecodeI32(&decoder, &vload.vgId) < 0) return -1;
if (tDecodeI8(&decoder, &vload.syncState) < 0) return -1; if (tDecodeI8(&decoder, &vload.syncState) < 0) return -1;
if (tDecodeI8(&decoder, &vload.syncRestore) < 0) return -1; if (tDecodeI8(&decoder, &vload.syncRestore) < 0) return -1;
...@@ -1157,9 +1158,9 @@ int32_t tDeserializeSStatusReq(void *buf, int32_t bufLen, SStatusReq *pReq) { ...@@ -1157,9 +1158,9 @@ int32_t tDeserializeSStatusReq(void *buf, int32_t bufLen, SStatusReq *pReq) {
if (tDecodeI64(&decoder, &vload.compStorage) < 0) return -1; if (tDecodeI64(&decoder, &vload.compStorage) < 0) return -1;
if (tDecodeI64(&decoder, &vload.pointsWritten) < 0) return -1; if (tDecodeI64(&decoder, &vload.pointsWritten) < 0) return -1;
if (tDecodeI32(&decoder, &vload.numOfCachedTables) < 0) return -1; if (tDecodeI32(&decoder, &vload.numOfCachedTables) < 0) return -1;
if (tDecodeI32(&decoder, (int32_t *)&reserved) < 0) return -1; if (tDecodeI32(&decoder, (int32_t *)&reserved32) < 0) return -1;
if (tDecodeI64(&decoder, &reserved) < 0) return -1; if (tDecodeI64(&decoder, &reserved64) < 0) return -1;
if (tDecodeI64(&decoder, &reserved) < 0) return -1; if (tDecodeI64(&decoder, &reserved64) < 0) return -1;
if (taosArrayPush(pReq->pVloads, &vload) == NULL) { if (taosArrayPush(pReq->pVloads, &vload) == NULL) {
terrno = TSDB_CODE_OUT_OF_MEMORY; terrno = TSDB_CODE_OUT_OF_MEMORY;
return -1; return -1;
...@@ -1547,6 +1548,7 @@ int32_t tSerializeSGetUserAuthRsp(void *buf, int32_t bufLen, SGetUserAuthRsp *pR ...@@ -1547,6 +1548,7 @@ int32_t tSerializeSGetUserAuthRsp(void *buf, int32_t bufLen, SGetUserAuthRsp *pR
} }
int32_t tDeserializeSGetUserAuthRspImpl(SDecoder *pDecoder, SGetUserAuthRsp *pRsp) { int32_t tDeserializeSGetUserAuthRspImpl(SDecoder *pDecoder, SGetUserAuthRsp *pRsp) {
char *key = NULL, *value = NULL;
pRsp->createdDbs = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK); pRsp->createdDbs = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK);
pRsp->readDbs = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK); pRsp->readDbs = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK);
pRsp->writeDbs = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK); pRsp->writeDbs = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK);
...@@ -1555,40 +1557,40 @@ int32_t tDeserializeSGetUserAuthRspImpl(SDecoder *pDecoder, SGetUserAuthRsp *pRs ...@@ -1555,40 +1557,40 @@ int32_t tDeserializeSGetUserAuthRspImpl(SDecoder *pDecoder, SGetUserAuthRsp *pRs
pRsp->useDbs = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK); pRsp->useDbs = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK);
if (pRsp->createdDbs == NULL || pRsp->readDbs == NULL || pRsp->writeDbs == NULL || pRsp->readTbs == NULL || if (pRsp->createdDbs == NULL || pRsp->readDbs == NULL || pRsp->writeDbs == NULL || pRsp->readTbs == NULL ||
pRsp->writeTbs == NULL || pRsp->useDbs == NULL) { pRsp->writeTbs == NULL || pRsp->useDbs == NULL) {
return -1; goto _err;
} }
if (tDecodeCStrTo(pDecoder, pRsp->user) < 0) return -1; if (tDecodeCStrTo(pDecoder, pRsp->user) < 0) goto _err;
if (tDecodeI8(pDecoder, &pRsp->superAuth) < 0) return -1; if (tDecodeI8(pDecoder, &pRsp->superAuth) < 0) goto _err;
if (tDecodeI8(pDecoder, &pRsp->sysInfo) < 0) return -1; if (tDecodeI8(pDecoder, &pRsp->sysInfo) < 0) goto _err;
if (tDecodeI8(pDecoder, &pRsp->enable) < 0) return -1; if (tDecodeI8(pDecoder, &pRsp->enable) < 0) goto _err;
if (tDecodeI8(pDecoder, &pRsp->reserve) < 0) return -1; if (tDecodeI8(pDecoder, &pRsp->reserve) < 0) goto _err;
if (tDecodeI32(pDecoder, &pRsp->version) < 0) return -1; if (tDecodeI32(pDecoder, &pRsp->version) < 0) goto _err;
int32_t numOfCreatedDbs = 0; int32_t numOfCreatedDbs = 0;
int32_t numOfReadDbs = 0; int32_t numOfReadDbs = 0;
int32_t numOfWriteDbs = 0; int32_t numOfWriteDbs = 0;
if (tDecodeI32(pDecoder, &numOfCreatedDbs) < 0) return -1; if (tDecodeI32(pDecoder, &numOfCreatedDbs) < 0) goto _err;
if (tDecodeI32(pDecoder, &numOfReadDbs) < 0) return -1; if (tDecodeI32(pDecoder, &numOfReadDbs) < 0) goto _err;
if (tDecodeI32(pDecoder, &numOfWriteDbs) < 0) return -1; if (tDecodeI32(pDecoder, &numOfWriteDbs) < 0) goto _err;
for (int32_t i = 0; i < numOfCreatedDbs; ++i) { for (int32_t i = 0; i < numOfCreatedDbs; ++i) {
char db[TSDB_DB_FNAME_LEN] = {0}; char db[TSDB_DB_FNAME_LEN] = {0};
if (tDecodeCStrTo(pDecoder, db) < 0) return -1; if (tDecodeCStrTo(pDecoder, db) < 0) goto _err;
int32_t len = strlen(db); int32_t len = strlen(db);
taosHashPut(pRsp->createdDbs, db, len, db, len + 1); taosHashPut(pRsp->createdDbs, db, len, db, len + 1);
} }
for (int32_t i = 0; i < numOfReadDbs; ++i) { for (int32_t i = 0; i < numOfReadDbs; ++i) {
char db[TSDB_DB_FNAME_LEN] = {0}; char db[TSDB_DB_FNAME_LEN] = {0};
if (tDecodeCStrTo(pDecoder, db) < 0) return -1; if (tDecodeCStrTo(pDecoder, db) < 0) goto _err;
int32_t len = strlen(db); int32_t len = strlen(db);
taosHashPut(pRsp->readDbs, db, len, db, len + 1); taosHashPut(pRsp->readDbs, db, len, db, len + 1);
} }
for (int32_t i = 0; i < numOfWriteDbs; ++i) { for (int32_t i = 0; i < numOfWriteDbs; ++i) {
char db[TSDB_DB_FNAME_LEN] = {0}; char db[TSDB_DB_FNAME_LEN] = {0};
if (tDecodeCStrTo(pDecoder, db) < 0) return -1; if (tDecodeCStrTo(pDecoder, db) < 0) goto _err;
int32_t len = strlen(db); int32_t len = strlen(db);
taosHashPut(pRsp->writeDbs, db, len, db, len + 1); taosHashPut(pRsp->writeDbs, db, len, db, len + 1);
} }
...@@ -1597,67 +1599,80 @@ int32_t tDeserializeSGetUserAuthRspImpl(SDecoder *pDecoder, SGetUserAuthRsp *pRs ...@@ -1597,67 +1599,80 @@ int32_t tDeserializeSGetUserAuthRspImpl(SDecoder *pDecoder, SGetUserAuthRsp *pRs
int32_t numOfReadTbs = 0; int32_t numOfReadTbs = 0;
int32_t numOfWriteTbs = 0; int32_t numOfWriteTbs = 0;
int32_t numOfUseDbs = 0; int32_t numOfUseDbs = 0;
if (tDecodeI32(pDecoder, &numOfReadTbs) < 0) return -1; if (tDecodeI32(pDecoder, &numOfReadTbs) < 0) goto _err;
if (tDecodeI32(pDecoder, &numOfWriteTbs) < 0) return -1; if (tDecodeI32(pDecoder, &numOfWriteTbs) < 0) goto _err;
if (tDecodeI32(pDecoder, &numOfUseDbs) < 0) return -1; if (tDecodeI32(pDecoder, &numOfUseDbs) < 0) goto _err;
for (int32_t i = 0; i < numOfReadTbs; ++i) { for (int32_t i = 0; i < numOfReadTbs; ++i) {
int32_t keyLen = 0; int32_t keyLen = 0;
if (tDecodeI32(pDecoder, &keyLen) < 0) return -1; if (tDecodeI32(pDecoder, &keyLen) < 0) goto _err;
char *key = taosMemoryCalloc(keyLen + 1, sizeof(char)); key = taosMemoryCalloc(keyLen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, key) < 0) return -1; if (tDecodeCStrTo(pDecoder, key) < 0) goto _err;
int32_t valuelen = 0; int32_t valuelen = 0;
if (tDecodeI32(pDecoder, &valuelen) < 0) return -1; if (tDecodeI32(pDecoder, &valuelen) < 0) goto _err;
char *value = taosMemoryCalloc(valuelen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, value) < 0) return -1; value = taosMemoryCalloc(valuelen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, value) < 0) goto _err;
taosHashPut(pRsp->readTbs, key, strlen(key), value, valuelen + 1); taosHashPut(pRsp->readTbs, key, strlen(key), value, valuelen + 1);
taosMemoryFree(key); taosMemoryFreeClear(key);
taosMemoryFree(value); taosMemoryFreeClear(value);
} }
for (int32_t i = 0; i < numOfWriteTbs; ++i) { for (int32_t i = 0; i < numOfWriteTbs; ++i) {
int32_t keyLen = 0; int32_t keyLen = 0;
if (tDecodeI32(pDecoder, &keyLen) < 0) return -1; if (tDecodeI32(pDecoder, &keyLen) < 0) goto _err;
char *key = taosMemoryCalloc(keyLen + 1, sizeof(char)); key = taosMemoryCalloc(keyLen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, key) < 0) return -1; if (tDecodeCStrTo(pDecoder, key) < 0) goto _err;
int32_t valuelen = 0; int32_t valuelen = 0;
if (tDecodeI32(pDecoder, &valuelen) < 0) return -1; if (tDecodeI32(pDecoder, &valuelen) < 0) goto _err;
char *value = taosMemoryCalloc(valuelen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, value) < 0) return -1; value = taosMemoryCalloc(valuelen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, value) < 0) goto _err;
taosHashPut(pRsp->writeTbs, key, strlen(key), value, valuelen + 1); taosHashPut(pRsp->writeTbs, key, strlen(key), value, valuelen + 1);
taosMemoryFree(key); taosMemoryFreeClear(key);
taosMemoryFree(value); taosMemoryFreeClear(value);
} }
for (int32_t i = 0; i < numOfUseDbs; ++i) { for (int32_t i = 0; i < numOfUseDbs; ++i) {
int32_t keyLen = 0; int32_t keyLen = 0;
if (tDecodeI32(pDecoder, &keyLen) < 0) return -1; if (tDecodeI32(pDecoder, &keyLen) < 0) goto _err;
char *key = taosMemoryCalloc(keyLen + 1, sizeof(char)); key = taosMemoryCalloc(keyLen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, key) < 0) return -1; if (tDecodeCStrTo(pDecoder, key) < 0) goto _err;
int32_t ref = 0; int32_t ref = 0;
if (tDecodeI32(pDecoder, &ref) < 0) return -1; if (tDecodeI32(pDecoder, &ref) < 0) goto _err;
taosHashPut(pRsp->useDbs, key, strlen(key), &ref, sizeof(ref)); taosHashPut(pRsp->useDbs, key, strlen(key), &ref, sizeof(ref));
taosMemoryFree(key); taosMemoryFreeClear(key);
} }
// since 3.0.7.0 // since 3.0.7.0
if (!tDecodeIsEnd(pDecoder)) { if (!tDecodeIsEnd(pDecoder)) {
if (tDecodeI32(pDecoder, &pRsp->passVer) < 0) return -1; if (tDecodeI32(pDecoder, &pRsp->passVer) < 0) goto _err;
} else { } else {
pRsp->passVer = 0; pRsp->passVer = 0;
} }
} }
return 0; return 0;
_err:
taosHashCleanup(pRsp->createdDbs);
taosHashCleanup(pRsp->readDbs);
taosHashCleanup(pRsp->writeDbs);
taosHashCleanup(pRsp->writeTbs);
taosHashCleanup(pRsp->readTbs);
taosHashCleanup(pRsp->useDbs);
taosMemoryFreeClear(key);
taosMemoryFreeClear(value);
return -1;
} }
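
The rewrite above funnels every decode failure in tDeserializeSGetUserAuthRspImpl into a single `_err` label, so the partially filled hash tables and any in-flight `key`/`value` allocations are released instead of leaking on a malformed message. A minimal standalone sketch of that single-exit idiom; the `Ctx` type and `decode_entry` helper are hypothetical stand-ins, not TDengine APIs:

    #include <stdlib.h>

    typedef struct { int unused; } Ctx;                      /* stand-in decoder context  */
    static int decode_entry(Ctx *c) { (void)c; return 0; }   /* hypothetical decode step  */

    /* Single-exit cleanup: declare everything the error path must free up front,
     * then funnel every failure into one label, mirroring the _err block above. */
    static int decode_entries(Ctx *c, int n) {
      char *key = NULL;
      char *value = NULL;
      for (int i = 0; i < n; ++i) {
        key = calloc(16, 1);
        if (key == NULL) goto _err;
        value = calloc(16, 1);
        if (value == NULL) goto _err;
        if (decode_entry(c) < 0) goto _err;                  /* any failure -> one exit   */
        free(key);   key = NULL;                             /* consumed this iteration   */
        free(value); value = NULL;
      }
      return 0;
    _err:
      free(key);                                             /* free(NULL) is a no-op     */
      free(value);
      return -1;
    }
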
int32_t tDeserializeSGetUserAuthRsp(void *buf, int32_t bufLen, SGetUserAuthRsp *pRsp) { int32_t tDeserializeSGetUserAuthRsp(void *buf, int32_t bufLen, SGetUserAuthRsp *pRsp) {
...@@ -2846,7 +2861,6 @@ int32_t tSerializeSDbHbRspImp(SEncoder *pEncoder, const SDbHbRsp *pRsp) { ...@@ -2846,7 +2861,6 @@ int32_t tSerializeSDbHbRspImp(SEncoder *pEncoder, const SDbHbRsp *pRsp) {
return 0; return 0;
} }
int32_t tSerializeSDbHbBatchRsp(void *buf, int32_t bufLen, SDbHbBatchRsp *pRsp) { int32_t tSerializeSDbHbBatchRsp(void *buf, int32_t bufLen, SDbHbBatchRsp *pRsp) {
SEncoder encoder = {0}; SEncoder encoder = {0};
tEncoderInit(&encoder, buf, bufLen); tEncoderInit(&encoder, buf, bufLen);
...@@ -2910,7 +2924,7 @@ int32_t tDeserializeSUseDbRsp(void *buf, int32_t bufLen, SUseDbRsp *pRsp) { ...@@ -2910,7 +2924,7 @@ int32_t tDeserializeSUseDbRsp(void *buf, int32_t bufLen, SUseDbRsp *pRsp) {
return 0; return 0;
} }
int32_t tDeserializeSDbHbRspImp(SDecoder* decoder, SDbHbRsp* pRsp) { int32_t tDeserializeSDbHbRspImp(SDecoder *decoder, SDbHbRsp *pRsp) {
int8_t flag = 0; int8_t flag = 0;
if (tDecodeI8(decoder, &flag) < 0) return -1; if (tDecodeI8(decoder, &flag) < 0) return -1;
if (flag) { if (flag) {
...@@ -3198,7 +3212,7 @@ int32_t tSerializeSDbCfgRsp(void *buf, int32_t bufLen, const SDbCfgRsp *pRsp) { ...@@ -3198,7 +3212,7 @@ int32_t tSerializeSDbCfgRsp(void *buf, int32_t bufLen, const SDbCfgRsp *pRsp) {
return tlen; return tlen;
} }
int32_t tDeserializeSDbCfgRspImpl(SDecoder* decoder, SDbCfgRsp *pRsp) { int32_t tDeserializeSDbCfgRspImpl(SDecoder *decoder, SDbCfgRsp *pRsp) {
if (tDecodeCStrTo(decoder, pRsp->db) < 0) return -1; if (tDecodeCStrTo(decoder, pRsp->db) < 0) return -1;
if (tDecodeI64(decoder, &pRsp->dbId) < 0) return -1; if (tDecodeI64(decoder, &pRsp->dbId) < 0) return -1;
if (tDecodeI32(decoder, &pRsp->cfgVersion) < 0) return -1; if (tDecodeI32(decoder, &pRsp->cfgVersion) < 0) return -1;
...@@ -5308,10 +5322,10 @@ int32_t tDeserializeSMqAskEpReq(void *buf, int32_t bufLen, SMqAskEpReq *pReq) { ...@@ -5308,10 +5322,10 @@ int32_t tDeserializeSMqAskEpReq(void *buf, int32_t bufLen, SMqAskEpReq *pReq) {
return 0; return 0;
} }
int32_t tDeatroySMqHbReq(SMqHbReq* pReq){ int32_t tDeatroySMqHbReq(SMqHbReq *pReq) {
for(int i = 0; i < taosArrayGetSize(pReq->topics); i++){ for (int i = 0; i < taosArrayGetSize(pReq->topics); i++) {
TopicOffsetRows* vgs = taosArrayGet(pReq->topics, i); TopicOffsetRows *vgs = taosArrayGet(pReq->topics, i);
if(vgs) taosArrayDestroy(vgs->offsetRows); if (vgs) taosArrayDestroy(vgs->offsetRows);
} }
taosArrayDestroy(pReq->topics); taosArrayDestroy(pReq->topics);
return 0; return 0;
...@@ -5328,7 +5342,7 @@ int32_t tSerializeSMqHbReq(void *buf, int32_t bufLen, SMqHbReq *pReq) { ...@@ -5328,7 +5342,7 @@ int32_t tSerializeSMqHbReq(void *buf, int32_t bufLen, SMqHbReq *pReq) {
int32_t sz = taosArrayGetSize(pReq->topics); int32_t sz = taosArrayGetSize(pReq->topics);
if (tEncodeI32(&encoder, sz) < 0) return -1; if (tEncodeI32(&encoder, sz) < 0) return -1;
for (int32_t i = 0; i < sz; ++i) { for (int32_t i = 0; i < sz; ++i) {
TopicOffsetRows* vgs = (TopicOffsetRows*)taosArrayGet(pReq->topics, i); TopicOffsetRows *vgs = (TopicOffsetRows *)taosArrayGet(pReq->topics, i);
if (tEncodeCStr(&encoder, vgs->topicName) < 0) return -1; if (tEncodeCStr(&encoder, vgs->topicName) < 0) return -1;
int32_t szVgs = taosArrayGetSize(vgs->offsetRows); int32_t szVgs = taosArrayGetSize(vgs->offsetRows);
if (tEncodeI32(&encoder, szVgs) < 0) return -1; if (tEncodeI32(&encoder, szVgs) < 0) return -1;
...@@ -5358,19 +5372,19 @@ int32_t tDeserializeSMqHbReq(void *buf, int32_t bufLen, SMqHbReq *pReq) { ...@@ -5358,19 +5372,19 @@ int32_t tDeserializeSMqHbReq(void *buf, int32_t bufLen, SMqHbReq *pReq) {
if (tDecodeI32(&decoder, &pReq->epoch) < 0) return -1; if (tDecodeI32(&decoder, &pReq->epoch) < 0) return -1;
int32_t sz = 0; int32_t sz = 0;
if (tDecodeI32(&decoder, &sz) < 0) return -1; if (tDecodeI32(&decoder, &sz) < 0) return -1;
if(sz > 0){ if (sz > 0) {
pReq->topics = taosArrayInit(sz, sizeof(TopicOffsetRows)); pReq->topics = taosArrayInit(sz, sizeof(TopicOffsetRows));
if (NULL == pReq->topics) return -1; if (NULL == pReq->topics) return -1;
for (int32_t i = 0; i < sz; ++i) { for (int32_t i = 0; i < sz; ++i) {
TopicOffsetRows* data = taosArrayReserve(pReq->topics, 1); TopicOffsetRows *data = taosArrayReserve(pReq->topics, 1);
tDecodeCStrTo(&decoder, data->topicName); tDecodeCStrTo(&decoder, data->topicName);
int32_t szVgs = 0; int32_t szVgs = 0;
if (tDecodeI32(&decoder, &szVgs) < 0) return -1; if (tDecodeI32(&decoder, &szVgs) < 0) return -1;
if(szVgs > 0){ if (szVgs > 0) {
data->offsetRows = taosArrayInit(szVgs, sizeof(OffsetRows)); data->offsetRows = taosArrayInit(szVgs, sizeof(OffsetRows));
if (NULL == data->offsetRows) return -1; if (NULL == data->offsetRows) return -1;
for (int32_t j= 0; j < szVgs; ++j) { for (int32_t j = 0; j < szVgs; ++j) {
OffsetRows* offRows = taosArrayReserve(data->offsetRows, 1); OffsetRows *offRows = taosArrayReserve(data->offsetRows, 1);
if (tDecodeI32(&decoder, &offRows->vgId) < 0) return -1; if (tDecodeI32(&decoder, &offRows->vgId) < 0) return -1;
if (tDecodeI64(&decoder, &offRows->rows) < 0) return -1; if (tDecodeI64(&decoder, &offRows->rows) < 0) return -1;
if (tDecodeSTqOffsetVal(&decoder, &offRows->offset) < 0) return -1; if (tDecodeSTqOffsetVal(&decoder, &offRows->offset) < 0) return -1;
...@@ -5384,6 +5398,47 @@ int32_t tDeserializeSMqHbReq(void *buf, int32_t bufLen, SMqHbReq *pReq) { ...@@ -5384,6 +5398,47 @@ int32_t tDeserializeSMqHbReq(void *buf, int32_t bufLen, SMqHbReq *pReq) {
return 0; return 0;
} }
int32_t tSerializeSMqSeekReq(void *buf, int32_t bufLen, SMqSeekReq *pReq) {
int32_t headLen = sizeof(SMsgHead);
if (buf != NULL) {
buf = (char *)buf + headLen;
bufLen -= headLen;
}
SEncoder encoder = {0};
tEncoderInit(&encoder, buf, bufLen);
if (tStartEncode(&encoder) < 0) return -1;
if (tEncodeI64(&encoder, pReq->consumerId) < 0) return -1;
if (tEncodeCStr(&encoder, pReq->subKey) < 0) return -1;
tEndEncode(&encoder);
int32_t tlen = encoder.pos;
tEncoderClear(&encoder);
if (buf != NULL) {
SMsgHead *pHead = (SMsgHead *)((char *)buf - headLen);
pHead->vgId = htonl(pReq->head.vgId);
pHead->contLen = htonl(tlen + headLen);
}
return tlen + headLen;
}
int32_t tDeserializeSMqSeekReq(void *buf, int32_t bufLen, SMqSeekReq *pReq) {
int32_t headLen = sizeof(SMsgHead);
SDecoder decoder = {0};
tDecoderInit(&decoder, (char *)buf + headLen, bufLen - headLen);
if (tStartDecode(&decoder) < 0) return -1;
if (tDecodeI64(&decoder, &pReq->consumerId) < 0) return -1;
if (tDecodeCStrTo(&decoder, pReq->subKey) < 0) return -1;
tEndDecode(&decoder);
tDecoderClear(&decoder);
return 0;
}
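
Both new SMqSeekReq functions reserve sizeof(SMsgHead) in front of the encoded body: the serializer back-fills vgId and contLen in network byte order once the payload length is known (returning tlen + headLen, or just the required size when called with buf == NULL), and the deserializer skips the same prefix before decoding. A rough sketch of that framing idea in plain C; the MsgHead struct and its field order here are illustrative only, not the real SMsgHead layout:

    #include <arpa/inet.h>   /* htonl */
    #include <stdint.h>
    #include <string.h>

    typedef struct { int32_t contLen; int32_t vgId; } MsgHead;   /* illustrative layout */

    /* Write `payload` after a reserved header, then back-fill the header fields.
     * A NULL buffer makes this a size-only pass, like the serializers above.     */
    static int32_t frame_msg(char *buf, int32_t bufLen, const char *payload,
                             int32_t payloadLen, int32_t vgId) {
      int32_t headLen = (int32_t)sizeof(MsgHead);
      if (buf == NULL) return headLen + payloadLen;
      if (bufLen < headLen + payloadLen) return -1;
      memcpy(buf + headLen, payload, payloadLen);         /* body sits after the header  */
      MsgHead head;
      head.vgId = (int32_t)htonl((uint32_t)vgId);         /* header travels big-endian   */
      head.contLen = (int32_t)htonl((uint32_t)(headLen + payloadLen));
      memcpy(buf, &head, sizeof(head));
      return headLen + payloadLen;
    }
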
int32_t tSerializeSSubQueryMsg(void *buf, int32_t bufLen, SSubQueryMsg *pReq) { int32_t tSerializeSSubQueryMsg(void *buf, int32_t bufLen, SSubQueryMsg *pReq) {
int32_t headLen = sizeof(SMsgHead); int32_t headLen = sizeof(SMsgHead);
if (buf != NULL) { if (buf != NULL) {
...@@ -5570,9 +5625,9 @@ int32_t tSerializeSMqPollReq(void *buf, int32_t bufLen, SMqPollReq *pReq) { ...@@ -5570,9 +5625,9 @@ int32_t tSerializeSMqPollReq(void *buf, int32_t bufLen, SMqPollReq *pReq) {
int32_t tDeserializeSMqPollReq(void *buf, int32_t bufLen, SMqPollReq *pReq) { int32_t tDeserializeSMqPollReq(void *buf, int32_t bufLen, SMqPollReq *pReq) {
int32_t headLen = sizeof(SMsgHead); int32_t headLen = sizeof(SMsgHead);
// SMsgHead *pHead = buf; // SMsgHead *pHead = buf;
// pHead->vgId = pReq->head.vgId; // pHead->vgId = pReq->head.vgId;
// pHead->contLen = pReq->head.contLen; // pHead->contLen = pReq->head.contLen;
SDecoder decoder = {0}; SDecoder decoder = {0};
tDecoderInit(&decoder, (char *)buf + headLen, bufLen - headLen); tDecoderInit(&decoder, (char *)buf + headLen, bufLen - headLen);
...@@ -6943,7 +6998,7 @@ int32_t tDecodeSVAlterTbReq(SDecoder *pDecoder, SVAlterTbReq *pReq) { ...@@ -6943,7 +6998,7 @@ int32_t tDecodeSVAlterTbReq(SDecoder *pDecoder, SVAlterTbReq *pReq) {
return 0; return 0;
} }
int32_t tDecodeSVAlterTbReqSetCtime(SDecoder* pDecoder, SVAlterTbReq* pReq, int64_t ctimeMs) { int32_t tDecodeSVAlterTbReqSetCtime(SDecoder *pDecoder, SVAlterTbReq *pReq, int64_t ctimeMs) {
if (tStartDecode(pDecoder) < 0) return -1; if (tStartDecode(pDecoder) < 0) return -1;
if (tDecodeSVAlterTbReqCommon(pDecoder, pReq) < 0) return -1; if (tDecodeSVAlterTbReqCommon(pDecoder, pReq) < 0) return -1;
...@@ -7154,11 +7209,6 @@ bool tOffsetEqual(const STqOffsetVal *pLeft, const STqOffsetVal *pRight) { ...@@ -7154,11 +7209,6 @@ bool tOffsetEqual(const STqOffsetVal *pLeft, const STqOffsetVal *pRight) {
return pLeft->uid == pRight->uid && pLeft->ts == pRight->ts; return pLeft->uid == pRight->uid && pLeft->ts == pRight->ts;
} else if (pLeft->type == TMQ_OFFSET__SNAPSHOT_META) { } else if (pLeft->type == TMQ_OFFSET__SNAPSHOT_META) {
return pLeft->uid == pRight->uid; return pLeft->uid == pRight->uid;
} else {
ASSERT(0);
/*ASSERT(pLeft->type == TMQ_OFFSET__RESET_NONE || pLeft->type == TMQ_OFFSET__RESET_EARLIEST ||*/
/*pLeft->type == TMQ_OFFSET__RESET_LATEST);*/
/*return true;*/
} }
} }
return false; return false;
...@@ -7176,13 +7226,13 @@ int32_t tDecodeSTqOffset(SDecoder *pDecoder, STqOffset *pOffset) { ...@@ -7176,13 +7226,13 @@ int32_t tDecodeSTqOffset(SDecoder *pDecoder, STqOffset *pOffset) {
return 0; return 0;
} }
int32_t tEncodeMqVgOffset(SEncoder* pEncoder, const SMqVgOffset* pOffset) { int32_t tEncodeMqVgOffset(SEncoder *pEncoder, const SMqVgOffset *pOffset) {
if (tEncodeSTqOffset(pEncoder, &pOffset->offset) < 0) return -1; if (tEncodeSTqOffset(pEncoder, &pOffset->offset) < 0) return -1;
if (tEncodeI64(pEncoder, pOffset->consumerId) < 0) return -1; if (tEncodeI64(pEncoder, pOffset->consumerId) < 0) return -1;
return 0; return 0;
} }
int32_t tDecodeMqVgOffset(SDecoder* pDecoder, SMqVgOffset* pOffset) { int32_t tDecodeMqVgOffset(SDecoder *pDecoder, SMqVgOffset *pOffset) {
if (tDecodeSTqOffset(pDecoder, &pOffset->offset) < 0) return -1; if (tDecodeSTqOffset(pDecoder, &pOffset->offset) < 0) return -1;
if (tDecodeI64(pDecoder, &pOffset->consumerId) < 0) return -1; if (tDecodeI64(pDecoder, &pOffset->consumerId) < 0) return -1;
return 0; return 0;
...@@ -7353,27 +7403,8 @@ void tDeleteMqDataRsp(SMqDataRsp *pRsp) { ...@@ -7353,27 +7403,8 @@ void tDeleteMqDataRsp(SMqDataRsp *pRsp) {
} }
int32_t tEncodeSTaosxRsp(SEncoder *pEncoder, const STaosxRsp *pRsp) { int32_t tEncodeSTaosxRsp(SEncoder *pEncoder, const STaosxRsp *pRsp) {
if (tEncodeSTqOffsetVal(pEncoder, &pRsp->reqOffset) < 0) return -1; if (tEncodeMqDataRsp(pEncoder, (const SMqDataRsp *)pRsp) < 0) return -1;
if (tEncodeSTqOffsetVal(pEncoder, &pRsp->rspOffset) < 0) return -1;
if (tEncodeI32(pEncoder, pRsp->blockNum) < 0) return -1;
if (pRsp->blockNum != 0) {
if (tEncodeI8(pEncoder, pRsp->withTbName) < 0) return -1;
if (tEncodeI8(pEncoder, pRsp->withSchema) < 0) return -1;
for (int32_t i = 0; i < pRsp->blockNum; i++) {
int32_t bLen = *(int32_t *)taosArrayGet(pRsp->blockDataLen, i);
void *data = taosArrayGetP(pRsp->blockData, i);
if (tEncodeBinary(pEncoder, (const uint8_t *)data, bLen) < 0) return -1;
if (pRsp->withSchema) {
SSchemaWrapper *pSW = (SSchemaWrapper *)taosArrayGetP(pRsp->blockSchema, i);
if (tEncodeSSchemaWrapper(pEncoder, pSW) < 0) return -1;
}
if (pRsp->withTbName) {
char *tbName = (char *)taosArrayGetP(pRsp->blockTbName, i);
if (tEncodeCStr(pEncoder, tbName) < 0) return -1;
}
}
}
if (tEncodeI32(pEncoder, pRsp->createTableNum) < 0) return -1; if (tEncodeI32(pEncoder, pRsp->createTableNum) < 0) return -1;
if (pRsp->createTableNum) { if (pRsp->createTableNum) {
for (int32_t i = 0; i < pRsp->createTableNum; i++) { for (int32_t i = 0; i < pRsp->createTableNum; i++) {
...@@ -7386,46 +7417,8 @@ int32_t tEncodeSTaosxRsp(SEncoder *pEncoder, const STaosxRsp *pRsp) { ...@@ -7386,46 +7417,8 @@ int32_t tEncodeSTaosxRsp(SEncoder *pEncoder, const STaosxRsp *pRsp) {
} }
int32_t tDecodeSTaosxRsp(SDecoder *pDecoder, STaosxRsp *pRsp) { int32_t tDecodeSTaosxRsp(SDecoder *pDecoder, STaosxRsp *pRsp) {
if (tDecodeSTqOffsetVal(pDecoder, &pRsp->reqOffset) < 0) return -1; if (tDecodeMqDataRsp(pDecoder, (SMqDataRsp *)pRsp) < 0) return -1;
if (tDecodeSTqOffsetVal(pDecoder, &pRsp->rspOffset) < 0) return -1;
if (tDecodeI32(pDecoder, &pRsp->blockNum) < 0) return -1;
if (pRsp->blockNum != 0) {
pRsp->blockData = taosArrayInit(pRsp->blockNum, sizeof(void *));
pRsp->blockDataLen = taosArrayInit(pRsp->blockNum, sizeof(int32_t));
if (tDecodeI8(pDecoder, &pRsp->withTbName) < 0) return -1;
if (tDecodeI8(pDecoder, &pRsp->withSchema) < 0) return -1;
if (pRsp->withTbName) {
pRsp->blockTbName = taosArrayInit(pRsp->blockNum, sizeof(void *));
}
if (pRsp->withSchema) {
pRsp->blockSchema = taosArrayInit(pRsp->blockNum, sizeof(void *));
}
for (int32_t i = 0; i < pRsp->blockNum; i++) {
void *data;
uint64_t bLen;
if (tDecodeBinaryAlloc(pDecoder, &data, &bLen) < 0) return -1;
taosArrayPush(pRsp->blockData, &data);
int32_t len = bLen;
taosArrayPush(pRsp->blockDataLen, &len);
if (pRsp->withSchema) {
SSchemaWrapper *pSW = (SSchemaWrapper *)taosMemoryCalloc(1, sizeof(SSchemaWrapper));
if (pSW == NULL) return -1;
if (tDecodeSSchemaWrapper(pDecoder, pSW) < 0) {
taosMemoryFree(pSW);
return -1;
}
taosArrayPush(pRsp->blockSchema, &pSW);
}
if (pRsp->withTbName) {
char *tbName;
if (tDecodeCStrAlloc(pDecoder, &tbName) < 0) return -1;
taosArrayPush(pRsp->blockTbName, &tbName);
}
}
}
if (tDecodeI32(pDecoder, &pRsp->createTableNum) < 0) return -1; if (tDecodeI32(pDecoder, &pRsp->createTableNum) < 0) return -1;
if (pRsp->createTableNum) { if (pRsp->createTableNum) {
pRsp->createTableLen = taosArrayInit(pRsp->createTableNum, sizeof(int32_t)); pRsp->createTableLen = taosArrayInit(pRsp->createTableNum, sizeof(int32_t));
......
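
The rewritten tEncodeSTaosxRsp/tDecodeSTaosxRsp stop duplicating the block-list logic and instead cast STaosxRsp * to SMqDataRsp *, reusing tEncodeMqDataRsp/tDecodeMqDataRsp for the shared prefix and keeping only the create-table extras local. That cast is only sound if STaosxRsp begins with the same members, in the same order, as SMqDataRsp; a compressed sketch of the layout assumption with stand-in struct names (not the full definitions):

    /* Stand-ins for SMqDataRsp / STaosxRsp: the derived response must start with
     * exactly the base response's members so a pointer to it can be encoded and
     * decoded through the base type, as the casts above rely on.                */
    typedef struct {
      int32_t blockNum;               /* ...plus the offset/block/schema fields  */
    } BaseRsp;

    typedef struct {
      int32_t blockNum;               /* identical leading members, same order   */
      int32_t createTableNum;         /* extras handled after the shared prefix  */
    } ExtendedRsp;

    static void encode_extended(const ExtendedRsp *p) {
      const BaseRsp *base = (const BaseRsp *)p;   /* same idea as (SMqDataRsp *)pRsp */
      (void)base;                                 /* encode base first, then extras  */
    }
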
...@@ -726,12 +726,13 @@ SArray *vmGetMsgHandles() { ...@@ -726,12 +726,13 @@ SArray *vmGetMsgHandles() {
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_SUBSCRIBE, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_SUBSCRIBE, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_DELETE_SUB, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_DELETE_SUB, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_COMMIT_OFFSET, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_COMMIT_OFFSET, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_SEEK_TO_OFFSET, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_SEEK, vmPutMsgToFetchQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_ADD_CHECKINFO, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_ADD_CHECKINFO, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_DEL_CHECKINFO, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_DEL_CHECKINFO, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_CONSUME, vmPutMsgToQueryQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_CONSUME, vmPutMsgToQueryQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_CONSUME_PUSH, vmPutMsgToQueryQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_CONSUME_PUSH, vmPutMsgToQueryQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_VG_WALINFO, vmPutMsgToFetchQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_VG_WALINFO, vmPutMsgToFetchQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_VG_COMMITTEDINFO, vmPutMsgToFetchQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_DELETE, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_VND_DELETE, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_BATCH_DEL, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_VND_BATCH_DEL, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_COMMIT, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_VND_COMMIT, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
......
...@@ -41,7 +41,7 @@ int32_t dmOpenNode(SMgmtWrapper *pWrapper) { ...@@ -41,7 +41,7 @@ int32_t dmOpenNode(SMgmtWrapper *pWrapper) {
pWrapper->pMgmt = output.pMgmt; pWrapper->pMgmt = output.pMgmt;
} }
dmReportStartup(pWrapper->name, "openned"); dmReportStartup(pWrapper->name, "opened");
return 0; return 0;
} }
......
...@@ -137,12 +137,11 @@ typedef enum { ...@@ -137,12 +137,11 @@ typedef enum {
} EDndReason; } EDndReason;
typedef enum { typedef enum {
CONSUMER_UPDATE_REB_MODIFY_NOTOPIC = 1, // topic do not need modified after rebalance CONSUMER_UPDATE_REB = 1, // update after rebalance
CONSUMER_UPDATE_REB_MODIFY_TOPIC, // topic need modified after rebalance CONSUMER_ADD_REB, // add after rebalance
CONSUMER_UPDATE_REB_MODIFY_REMOVE, // topic need removed after rebalance CONSUMER_REMOVE_REB, // remove after rebalance
// CONSUMER_UPDATE_TIMER_LOST, CONSUMER_UPDATE_REC, // update after recover
CONSUMER_UPDATE_RECOVER, CONSUMER_UPDATE_SUB, // update after subscribe req
CONSUMER_UPDATE_SUB_MODIFY, // modify after subscribe req
} ECsmUpdateType; } ECsmUpdateType;
typedef struct { typedef struct {
......
...@@ -77,7 +77,6 @@ static SClusterObj *mndAcquireCluster(SMnode *pMnode, void **ppIter) { ...@@ -77,7 +77,6 @@ static SClusterObj *mndAcquireCluster(SMnode *pMnode, void **ppIter) {
if (pIter == NULL) break; if (pIter == NULL) break;
*ppIter = pIter; *ppIter = pIter;
return pCluster; return pCluster;
} }
......
...@@ -94,7 +94,7 @@ void mndDropConsumerFromSdb(SMnode *pMnode, int64_t consumerId){ ...@@ -94,7 +94,7 @@ void mndDropConsumerFromSdb(SMnode *pMnode, int64_t consumerId){
bool mndRebTryStart() { bool mndRebTryStart() {
int32_t old = atomic_val_compare_exchange_32(&mqRebInExecCnt, 0, 1); int32_t old = atomic_val_compare_exchange_32(&mqRebInExecCnt, 0, 1);
mDebug("tq timer, rebalance counter old val:%d", old); mInfo("tq timer, rebalance counter old val:%d", old);
return old == 0; return old == 0;
} }
...@@ -116,7 +116,7 @@ void mndRebCntDec() { ...@@ -116,7 +116,7 @@ void mndRebCntDec() {
int32_t newVal = val - 1; int32_t newVal = val - 1;
int32_t oldVal = atomic_val_compare_exchange_32(&mqRebInExecCnt, val, newVal); int32_t oldVal = atomic_val_compare_exchange_32(&mqRebInExecCnt, val, newVal);
if (oldVal == val) { if (oldVal == val) {
mDebug("rebalance trans end, rebalance counter:%d", newVal); mInfo("rebalance trans end, rebalance counter:%d", newVal);
break; break;
} }
} }
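
mndRebTryStart and mndRebCntDec (whose traces are promoted from debug to info here) implement a lock-free single-entry guard for rebalancing: a compare-and-swap from 0 to 1 decides which caller may start, and the decrement retries its CAS until it wins. The same pattern in portable C11 atomics, as a standalone sketch with illustrative names:

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_int rebInExec;                  /* zero-initialised: no rebalance running */

    /* Only the caller that flips 0 -> 1 gets to start the rebalance. */
    static bool reb_try_start(void) {
      int expected = 0;
      return atomic_compare_exchange_strong(&rebInExec, &expected, 1);
    }

    /* Decrement via CAS so concurrent decrements cannot lose an update. */
    static void reb_cnt_dec(void) {
      for (;;) {
        int val = atomic_load(&rebInExec);
        if (val <= 0) return;
        if (atomic_compare_exchange_strong(&rebInExec, &val, val - 1)) return;
      }
    }
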
...@@ -184,7 +184,7 @@ static int32_t mndProcessConsumerRecoverMsg(SRpcMsg *pMsg) { ...@@ -184,7 +184,7 @@ static int32_t mndProcessConsumerRecoverMsg(SRpcMsg *pMsg) {
} }
SMqConsumerObj *pConsumerNew = tNewSMqConsumerObj(pConsumer->consumerId, pConsumer->cgroup); SMqConsumerObj *pConsumerNew = tNewSMqConsumerObj(pConsumer->consumerId, pConsumer->cgroup);
pConsumerNew->updateType = CONSUMER_UPDATE_RECOVER; pConsumerNew->updateType = CONSUMER_UPDATE_REC;
mndReleaseConsumer(pMnode, pConsumer); mndReleaseConsumer(pMnode, pConsumer);
...@@ -281,7 +281,7 @@ static int32_t mndProcessMqTimerMsg(SRpcMsg *pMsg) { ...@@ -281,7 +281,7 @@ static int32_t mndProcessMqTimerMsg(SRpcMsg *pMsg) {
// rebalance cannot be parallel // rebalance cannot be parallel
if (!mndRebTryStart()) { if (!mndRebTryStart()) {
mDebug("mq rebalance already in progress, do nothing"); mInfo("mq rebalance already in progress, do nothing");
return 0; return 0;
} }
...@@ -312,7 +312,7 @@ static int32_t mndProcessMqTimerMsg(SRpcMsg *pMsg) { ...@@ -312,7 +312,7 @@ static int32_t mndProcessMqTimerMsg(SRpcMsg *pMsg) {
int32_t hbStatus = atomic_add_fetch_32(&pConsumer->hbStatus, 1); int32_t hbStatus = atomic_add_fetch_32(&pConsumer->hbStatus, 1);
int32_t status = atomic_load_32(&pConsumer->status); int32_t status = atomic_load_32(&pConsumer->status);
mDebug("check for consumer:0x%" PRIx64 " status:%d(%s), sub-time:%" PRId64 ", createTime:%" PRId64 ", hbstatus:%d", mInfo("check for consumer:0x%" PRIx64 " status:%d(%s), sub-time:%" PRId64 ", createTime:%" PRId64 ", hbstatus:%d",
pConsumer->consumerId, status, mndConsumerStatusName(status), pConsumer->subscribeTime, pConsumer->createTime, pConsumer->consumerId, status, mndConsumerStatusName(status), pConsumer->subscribeTime, pConsumer->createTime,
hbStatus); hbStatus);
...@@ -416,7 +416,7 @@ static int32_t mndProcessMqHbReq(SRpcMsg *pMsg) { ...@@ -416,7 +416,7 @@ static int32_t mndProcessMqHbReq(SRpcMsg *pMsg) {
for(int i = 0; i < taosArrayGetSize(req.topics); i++){ for(int i = 0; i < taosArrayGetSize(req.topics); i++){
TopicOffsetRows* data = taosArrayGet(req.topics, i); TopicOffsetRows* data = taosArrayGet(req.topics, i);
mDebug("heartbeat report offset rows.%s:%s", pConsumer->cgroup, data->topicName); mInfo("heartbeat report offset rows.%s:%s", pConsumer->cgroup, data->topicName);
SMqSubscribeObj *pSub = mndAcquireSubscribe(pMnode, pConsumer->cgroup, data->topicName); SMqSubscribeObj *pSub = mndAcquireSubscribe(pMnode, pConsumer->cgroup, data->topicName);
if(pSub == NULL){ if(pSub == NULL){
...@@ -515,7 +515,7 @@ static int32_t mndProcessAskEpReq(SRpcMsg *pMsg) { ...@@ -515,7 +515,7 @@ static int32_t mndProcessAskEpReq(SRpcMsg *pMsg) {
char *topic = taosArrayGetP(pConsumer->currentTopics, i); char *topic = taosArrayGetP(pConsumer->currentTopics, i);
SMqSubscribeObj *pSub = mndAcquireSubscribe(pMnode, pConsumer->cgroup, topic); SMqSubscribeObj *pSub = mndAcquireSubscribe(pMnode, pConsumer->cgroup, topic);
// txn guarantees pSub is created // txn guarantees pSub is created
if(pSub == NULL) continue;
taosRLockLatch(&pSub->lock); taosRLockLatch(&pSub->lock);
SMqSubTopicEp topicEp = {0}; SMqSubTopicEp topicEp = {0};
...@@ -523,6 +523,11 @@ static int32_t mndProcessAskEpReq(SRpcMsg *pMsg) { ...@@ -523,6 +523,11 @@ static int32_t mndProcessAskEpReq(SRpcMsg *pMsg) {
// 2.1 fetch topic schema // 2.1 fetch topic schema
SMqTopicObj *pTopic = mndAcquireTopic(pMnode, topic); SMqTopicObj *pTopic = mndAcquireTopic(pMnode, topic);
if(pTopic == NULL) {
taosRUnLockLatch(&pSub->lock);
mndReleaseSubscribe(pMnode, pSub);
continue;
}
taosRLockLatch(&pTopic->lock); taosRLockLatch(&pTopic->lock);
tstrncpy(topicEp.db, pTopic->db, TSDB_DB_FNAME_LEN); tstrncpy(topicEp.db, pTopic->db, TSDB_DB_FNAME_LEN);
topicEp.schema.nCols = pTopic->schema.nCols; topicEp.schema.nCols = pTopic->schema.nCols;
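
The two additions in this hunk make the ask-ep loop tolerant of a subscription or topic vanishing between lookups: a NULL from mndAcquireSubscribe skips the topic outright, while a NULL from mndAcquireTopic releases the read lock and the subscription reference taken just before it, then continues. A compressed, compilable sketch of that acquire/undo-in-reverse-order shape, with hypothetical helpers standing in for the mnd*/taos* calls:

    #include <stddef.h>

    typedef struct { int lock; } Sub;
    typedef struct { int unused; } Topic;

    /* Hypothetical stand-ins for mndAcquireSubscribe / mndAcquireTopic and friends. */
    static Sub   *acquire_sub(int i)       { (void)i; return NULL; }
    static Topic *acquire_topic(int i)     { (void)i; return NULL; }
    static void   release_sub(Sub *s)      { (void)s; }
    static void   release_topic(Topic *t)  { (void)t; }
    static void   rlock(int *l)            { (void)l; }
    static void   runlock(int *l)          { (void)l; }

    static void build_topic_eps(int nTopics) {
      for (int i = 0; i < nTopics; ++i) {
        Sub *pSub = acquire_sub(i);
        if (pSub == NULL) continue;          /* nothing held yet, safe to skip        */
        rlock(&pSub->lock);
        Topic *pTopic = acquire_topic(i);
        if (pTopic == NULL) {                /* undo what was taken, in reverse order */
          runlock(&pSub->lock);
          release_sub(pSub);
          continue;
        }
        /* ... fill in the endpoint for this topic ... */
        release_topic(pTopic);
        runlock(&pSub->lock);
        release_sub(pSub);
      }
    }
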
...@@ -701,7 +706,7 @@ int32_t mndProcessSubscribeReq(SRpcMsg *pMsg) { ...@@ -701,7 +706,7 @@ int32_t mndProcessSubscribeReq(SRpcMsg *pMsg) {
pConsumerNew->autoCommitInterval = subscribe.autoCommitInterval; pConsumerNew->autoCommitInterval = subscribe.autoCommitInterval;
pConsumerNew->resetOffsetCfg = subscribe.resetOffsetCfg; pConsumerNew->resetOffsetCfg = subscribe.resetOffsetCfg;
// pConsumerNew->updateType = CONSUMER_UPDATE_SUB_MODIFY; // use insert logic // pConsumerNew->updateType = CONSUMER_UPDATE_SUB; // use insert logic
taosArrayDestroy(pConsumerNew->assignedTopics); taosArrayDestroy(pConsumerNew->assignedTopics);
pConsumerNew->assignedTopics = taosArrayDup(pTopicList, topicNameDup); pConsumerNew->assignedTopics = taosArrayDup(pTopicList, topicNameDup);
...@@ -731,7 +736,7 @@ int32_t mndProcessSubscribeReq(SRpcMsg *pMsg) { ...@@ -731,7 +736,7 @@ int32_t mndProcessSubscribeReq(SRpcMsg *pMsg) {
} }
// set the update type // set the update type
pConsumerNew->updateType = CONSUMER_UPDATE_SUB_MODIFY; pConsumerNew->updateType = CONSUMER_UPDATE_SUB;
taosArrayDestroy(pConsumerNew->assignedTopics); taosArrayDestroy(pConsumerNew->assignedTopics);
pConsumerNew->assignedTopics = taosArrayDup(pTopicList, topicNameDup); pConsumerNew->assignedTopics = taosArrayDup(pTopicList, topicNameDup);
...@@ -984,7 +989,7 @@ static int32_t mndConsumerActionUpdate(SSdb *pSdb, SMqConsumerObj *pOldConsumer, ...@@ -984,7 +989,7 @@ static int32_t mndConsumerActionUpdate(SSdb *pSdb, SMqConsumerObj *pOldConsumer,
taosWLockLatch(&pOldConsumer->lock); taosWLockLatch(&pOldConsumer->lock);
if (pNewConsumer->updateType == CONSUMER_UPDATE_SUB_MODIFY) { if (pNewConsumer->updateType == CONSUMER_UPDATE_SUB) {
TSWAP(pOldConsumer->rebNewTopics, pNewConsumer->rebNewTopics); TSWAP(pOldConsumer->rebNewTopics, pNewConsumer->rebNewTopics);
TSWAP(pOldConsumer->rebRemovedTopics, pNewConsumer->rebRemovedTopics); TSWAP(pOldConsumer->rebRemovedTopics, pNewConsumer->rebRemovedTopics);
TSWAP(pOldConsumer->assignedTopics, pNewConsumer->assignedTopics); TSWAP(pOldConsumer->assignedTopics, pNewConsumer->assignedTopics);
...@@ -1004,7 +1009,7 @@ static int32_t mndConsumerActionUpdate(SSdb *pSdb, SMqConsumerObj *pOldConsumer, ...@@ -1004,7 +1009,7 @@ static int32_t mndConsumerActionUpdate(SSdb *pSdb, SMqConsumerObj *pOldConsumer,
// mInfo("consumer:0x%" PRIx64 " timer update, timer lost. state %s -> %s, reb-time:%" PRId64 ", reb-removed-topics:%d", // mInfo("consumer:0x%" PRIx64 " timer update, timer lost. state %s -> %s, reb-time:%" PRId64 ", reb-removed-topics:%d",
// pOldConsumer->consumerId, mndConsumerStatusName(prevStatus), mndConsumerStatusName(pOldConsumer->status), // pOldConsumer->consumerId, mndConsumerStatusName(prevStatus), mndConsumerStatusName(pOldConsumer->status),
// pOldConsumer->rebalanceTime, (int)taosArrayGetSize(pOldConsumer->rebRemovedTopics)); // pOldConsumer->rebalanceTime, (int)taosArrayGetSize(pOldConsumer->rebRemovedTopics));
} else if (pNewConsumer->updateType == CONSUMER_UPDATE_RECOVER) { } else if (pNewConsumer->updateType == CONSUMER_UPDATE_REC) {
int32_t sz = taosArrayGetSize(pOldConsumer->assignedTopics); int32_t sz = taosArrayGetSize(pOldConsumer->assignedTopics);
for (int32_t i = 0; i < sz; i++) { for (int32_t i = 0; i < sz; i++) {
char *topic = taosStrdup(taosArrayGetP(pOldConsumer->assignedTopics, i)); char *topic = taosStrdup(taosArrayGetP(pOldConsumer->assignedTopics, i));
...@@ -1013,12 +1018,12 @@ static int32_t mndConsumerActionUpdate(SSdb *pSdb, SMqConsumerObj *pOldConsumer, ...@@ -1013,12 +1018,12 @@ static int32_t mndConsumerActionUpdate(SSdb *pSdb, SMqConsumerObj *pOldConsumer,
pOldConsumer->status = MQ_CONSUMER_STATUS_REBALANCE; pOldConsumer->status = MQ_CONSUMER_STATUS_REBALANCE;
mInfo("consumer:0x%" PRIx64 " timer update, timer recover",pOldConsumer->consumerId); mInfo("consumer:0x%" PRIx64 " timer update, timer recover",pOldConsumer->consumerId);
} else if (pNewConsumer->updateType == CONSUMER_UPDATE_REB_MODIFY_NOTOPIC) { } else if (pNewConsumer->updateType == CONSUMER_UPDATE_REB) {
atomic_add_fetch_32(&pOldConsumer->epoch, 1); atomic_add_fetch_32(&pOldConsumer->epoch, 1);
pOldConsumer->rebalanceTime = taosGetTimestampMs(); pOldConsumer->rebalanceTime = taosGetTimestampMs();
mInfo("consumer:0x%" PRIx64 " reb update, only rebalance time", pOldConsumer->consumerId); mInfo("consumer:0x%" PRIx64 " reb update, only rebalance time", pOldConsumer->consumerId);
} else if (pNewConsumer->updateType == CONSUMER_UPDATE_REB_MODIFY_TOPIC) { } else if (pNewConsumer->updateType == CONSUMER_ADD_REB) {
char *pNewTopic = taosStrdup(taosArrayGetP(pNewConsumer->rebNewTopics, 0)); char *pNewTopic = taosStrdup(taosArrayGetP(pNewConsumer->rebNewTopics, 0));
// check if exist in current topic // check if exist in current topic
...@@ -1049,7 +1054,7 @@ static int32_t mndConsumerActionUpdate(SSdb *pSdb, SMqConsumerObj *pOldConsumer, ...@@ -1049,7 +1054,7 @@ static int32_t mndConsumerActionUpdate(SSdb *pSdb, SMqConsumerObj *pOldConsumer,
(int)taosArrayGetSize(pOldConsumer->currentTopics), (int)taosArrayGetSize(pOldConsumer->rebNewTopics), (int)taosArrayGetSize(pOldConsumer->currentTopics), (int)taosArrayGetSize(pOldConsumer->rebNewTopics),
(int)taosArrayGetSize(pOldConsumer->rebRemovedTopics)); (int)taosArrayGetSize(pOldConsumer->rebRemovedTopics));
} else if (pNewConsumer->updateType == CONSUMER_UPDATE_REB_MODIFY_REMOVE) { } else if (pNewConsumer->updateType == CONSUMER_REMOVE_REB) {
char *removedTopic = taosArrayGetP(pNewConsumer->rebRemovedTopics, 0); char *removedTopic = taosArrayGetP(pNewConsumer->rebRemovedTopics, 0);
// remove from removed topic // remove from removed topic
...@@ -1104,13 +1109,13 @@ static int32_t mndRetrieveConsumer(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock * ...@@ -1104,13 +1109,13 @@ static int32_t mndRetrieveConsumer(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *
} }
if (taosArrayGetSize(pConsumer->assignedTopics) == 0) { if (taosArrayGetSize(pConsumer->assignedTopics) == 0) {
mDebug("showing consumer:0x%" PRIx64 " no assigned topic, skip", pConsumer->consumerId); mInfo("showing consumer:0x%" PRIx64 " no assigned topic, skip", pConsumer->consumerId);
sdbRelease(pSdb, pConsumer); sdbRelease(pSdb, pConsumer);
continue; continue;
} }
taosRLockLatch(&pConsumer->lock); taosRLockLatch(&pConsumer->lock);
mDebug("showing consumer:0x%" PRIx64, pConsumer->consumerId); mInfo("showing consumer:0x%" PRIx64, pConsumer->consumerId);
int32_t topicSz = taosArrayGetSize(pConsumer->assignedTopics); int32_t topicSz = taosArrayGetSize(pConsumer->assignedTopics);
bool hasTopic = true; bool hasTopic = true;
......
...@@ -1303,11 +1303,10 @@ static void mndBuildDBVgroupInfo(SDbObj *pDb, SMnode *pMnode, SArray *pVgList) { ...@@ -1303,11 +1303,10 @@ static void mndBuildDBVgroupInfo(SDbObj *pDb, SMnode *pMnode, SArray *pVgList) {
sdbRelease(pSdb, pVgroup); sdbRelease(pSdb, pVgroup);
if (pDb && (vindex >= pDb->cfg.numOfVgroups)) { if (pDb && (vindex >= pDb->cfg.numOfVgroups)) {
sdbCancelFetch(pSdb, pIter);
break; break;
} }
} }
sdbCancelFetch(pSdb, pIter);
} }
int32_t mndExtractDbInfo(SMnode *pMnode, SDbObj *pDb, SUseDbRsp *pRsp, const SUseDbReq *pReq) { int32_t mndExtractDbInfo(SMnode *pMnode, SDbObj *pDb, SUseDbRsp *pRsp, const SUseDbReq *pReq) {
......
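
This hunk, and the many single-line additions in the mnode files that follow, all close the same leak: sdbFetch returns an iterator that is only retired automatically when the scan runs to completion, so any early break, return, or goto out of the loop has to call sdbCancelFetch first, and still sdbRelease the object it holds. A sketch of the corrected loop shape, written as a fragment against the calls shown in these hunks; pSdb, SVgObj, SDB_VGROUP and someStopCondition are placeholders for whatever the surrounding mnode function uses:

    void   *pIter = NULL;
    SVgObj *pVgroup = NULL;
    while (1) {
      pIter = sdbFetch(pSdb, SDB_VGROUP, pIter, (void **)&pVgroup);
      if (pIter == NULL) break;            /* scan ran to completion: nothing to cancel */

      if (someStopCondition) {
        sdbCancelFetch(pSdb, pIter);       /* leaving early: retire the iterator...     */
        sdbRelease(pSdb, pVgroup);         /* ...and drop the reference still held      */
        break;                             /* return/goto paths need the same pair      */
      }
      sdbRelease(pSdb, pVgroup);           /* normal path only releases the row         */
    }
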
...@@ -706,6 +706,7 @@ _OVER: ...@@ -706,6 +706,7 @@ _OVER:
} else { } else {
mndReleaseDnode(pMnode, pDnode); mndReleaseDnode(pMnode, pDnode);
} }
sdbCancelFetch(pSdb, pIter);
mndTransDrop(pTrans); mndTransDrop(pTrans);
sdbFreeRaw(pRaw); sdbFreeRaw(pRaw);
return terrno; return terrno;
......
...@@ -831,6 +831,7 @@ int32_t mndGetIdxsByTagName(SMnode *pMnode, SStbObj *pStb, char *tagName, SIdxOb ...@@ -831,6 +831,7 @@ int32_t mndGetIdxsByTagName(SMnode *pMnode, SStbObj *pStb, char *tagName, SIdxOb
if (pIdx->stbUid == pStb->uid && strcasecmp(pIdx->colName, tagName) == 0) { if (pIdx->stbUid == pStb->uid && strcasecmp(pIdx->colName, tagName) == 0) {
memcpy((char *)idx, (char *)pIdx, sizeof(SIdxObj)); memcpy((char *)idx, (char *)pIdx, sizeof(SIdxObj));
sdbRelease(pSdb, pIdx); sdbRelease(pSdb, pIdx);
sdbCancelFetch(pSdb, pIter);
return 0; return 0;
} }
...@@ -851,7 +852,7 @@ int32_t mndDropIdxsByDb(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) { ...@@ -851,7 +852,7 @@ int32_t mndDropIdxsByDb(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
if (pIdx->dbUid == pDb->uid) { if (pIdx->dbUid == pDb->uid) {
if (mndSetDropIdxCommitLogs(pMnode, pTrans, pIdx) != 0) { if (mndSetDropIdxCommitLogs(pMnode, pTrans, pIdx) != 0) {
sdbRelease(pSdb, pIdx); sdbRelease(pSdb, pIdx);
sdbCancelFetch(pSdb, pIdx); sdbCancelFetch(pSdb, pIter);
return -1; return -1;
} }
} }
......
...@@ -454,6 +454,7 @@ int32_t mndCreateQnodeList(SMnode *pMnode, SArray **pList, int32_t limit) { ...@@ -454,6 +454,7 @@ int32_t mndCreateQnodeList(SMnode *pMnode, SArray **pList, int32_t limit) {
sdbRelease(pSdb, pObj); sdbRelease(pSdb, pObj);
if (limit > 0 && numOfRows >= limit) { if (limit > 0 && numOfRows >= limit) {
sdbCancelFetch(pSdb, pIter);
break; break;
} }
} }
......
...@@ -167,6 +167,7 @@ SSnodeObj* mndSchedFetchOneSnode(SMnode* pMnode) { ...@@ -167,6 +167,7 @@ SSnodeObj* mndSchedFetchOneSnode(SMnode* pMnode) {
void* pIter = NULL; void* pIter = NULL;
// TODO random fetch // TODO random fetch
pIter = sdbFetch(pMnode->pSdb, SDB_SNODE, pIter, (void**)&pObj); pIter = sdbFetch(pMnode->pSdb, SDB_SNODE, pIter, (void**)&pObj);
sdbCancelFetch(pMnode->pSdb, pIter);
return pObj; return pObj;
} }
...@@ -198,6 +199,7 @@ SVgObj* mndSchedFetchOneVg(SMnode* pMnode, int64_t dbUid) { ...@@ -198,6 +199,7 @@ SVgObj* mndSchedFetchOneVg(SMnode* pMnode, int64_t dbUid) {
sdbRelease(pMnode->pSdb, pVgroup); sdbRelease(pMnode->pSdb, pVgroup);
continue; continue;
} }
sdbCancelFetch(pMnode->pSdb, pIter);
return pVgroup; return pVgroup;
} }
return pVgroup; return pVgroup;
......
...@@ -504,6 +504,11 @@ static void mndDestroySmaObj(SSmaObj *pSmaObj) { ...@@ -504,6 +504,11 @@ static void mndDestroySmaObj(SSmaObj *pSmaObj) {
static int32_t mndCreateSma(SMnode *pMnode, SRpcMsg *pReq, SMCreateSmaReq *pCreate, SDbObj *pDb, SStbObj *pStb, static int32_t mndCreateSma(SMnode *pMnode, SRpcMsg *pReq, SMCreateSmaReq *pCreate, SDbObj *pDb, SStbObj *pStb,
const char *streamName) { const char *streamName) {
if (pDb->cfg.replications > 1) {
terrno = TSDB_CODE_MND_INVALID_SMA_OPTION;
mError("sma:%s, failed to create since not support multiple replicas", pCreate->name);
return -1;
}
SSmaObj smaObj = {0}; SSmaObj smaObj = {0};
memcpy(smaObj.name, pCreate->name, TSDB_TABLE_FNAME_LEN); memcpy(smaObj.name, pCreate->name, TSDB_TABLE_FNAME_LEN);
memcpy(smaObj.stb, pStb->name, TSDB_TABLE_FNAME_LEN); memcpy(smaObj.stb, pStb->name, TSDB_TABLE_FNAME_LEN);
......
...@@ -900,7 +900,6 @@ static int32_t mndProcessTtlTimer(SRpcMsg *pReq) { ...@@ -900,7 +900,6 @@ static int32_t mndProcessTtlTimer(SRpcMsg *pReq) {
SMsgHead *pHead = rpcMallocCont(contLen); SMsgHead *pHead = rpcMallocCont(contLen);
if (pHead == NULL) { if (pHead == NULL) {
sdbCancelFetch(pSdb, pVgroup);
sdbRelease(pSdb, pVgroup); sdbRelease(pSdb, pVgroup);
continue; continue;
} }
...@@ -1240,6 +1239,7 @@ static int32_t mndCheckAlterColForTopic(SMnode *pMnode, const char *stbFullName, ...@@ -1240,6 +1239,7 @@ static int32_t mndCheckAlterColForTopic(SMnode *pMnode, const char *stbFullName,
terrno = TSDB_CODE_MND_FIELD_CONFLICT_WITH_TOPIC; terrno = TSDB_CODE_MND_FIELD_CONFLICT_WITH_TOPIC;
mError("topic:%s, create ast error", pTopic->name); mError("topic:%s, create ast error", pTopic->name);
sdbRelease(pSdb, pTopic); sdbRelease(pSdb, pTopic);
sdbCancelFetch(pSdb, pIter);
return -1; return -1;
} }
...@@ -1260,6 +1260,7 @@ static int32_t mndCheckAlterColForTopic(SMnode *pMnode, const char *stbFullName, ...@@ -1260,6 +1260,7 @@ static int32_t mndCheckAlterColForTopic(SMnode *pMnode, const char *stbFullName,
mError("topic:%s, check colId:%d conflicted", pTopic->name, pCol->colId); mError("topic:%s, check colId:%d conflicted", pTopic->name, pCol->colId);
nodesDestroyNode(pAst); nodesDestroyNode(pAst);
nodesDestroyList(pNodeList); nodesDestroyList(pNodeList);
sdbCancelFetch(pSdb, pIter);
sdbRelease(pSdb, pTopic); sdbRelease(pSdb, pTopic);
return -1; return -1;
} }
...@@ -1287,6 +1288,7 @@ static int32_t mndCheckAlterColForStream(SMnode *pMnode, const char *stbFullName ...@@ -1287,6 +1288,7 @@ static int32_t mndCheckAlterColForStream(SMnode *pMnode, const char *stbFullName
terrno = TSDB_CODE_MND_INVALID_STREAM_OPTION; terrno = TSDB_CODE_MND_INVALID_STREAM_OPTION;
mError("stream:%s, create ast error", pStream->name); mError("stream:%s, create ast error", pStream->name);
sdbRelease(pSdb, pStream); sdbRelease(pSdb, pStream);
sdbCancelFetch(pSdb, pIter);
return -1; return -1;
} }
...@@ -1306,6 +1308,7 @@ static int32_t mndCheckAlterColForStream(SMnode *pMnode, const char *stbFullName ...@@ -1306,6 +1308,7 @@ static int32_t mndCheckAlterColForStream(SMnode *pMnode, const char *stbFullName
nodesDestroyNode(pAst); nodesDestroyNode(pAst);
nodesDestroyList(pNodeList); nodesDestroyList(pNodeList);
sdbRelease(pSdb, pStream); sdbRelease(pSdb, pStream);
sdbCancelFetch(pSdb, pIter);
return -1; return -1;
} }
mInfo("stream:%s, check colId:%d passed", pStream->name, pCol->colId); mInfo("stream:%s, check colId:%d passed", pStream->name, pCol->colId);
...@@ -1335,6 +1338,7 @@ static int32_t mndCheckAlterColForTSma(SMnode *pMnode, const char *stbFullName, ...@@ -1335,6 +1338,7 @@ static int32_t mndCheckAlterColForTSma(SMnode *pMnode, const char *stbFullName,
terrno = TSDB_CODE_SDB_INVALID_DATA_CONTENT; terrno = TSDB_CODE_SDB_INVALID_DATA_CONTENT;
mError("tsma:%s, check tag and column modifiable, stb:%s suid:%" PRId64 " colId:%d failed since parse AST err", mError("tsma:%s, check tag and column modifiable, stb:%s suid:%" PRId64 " colId:%d failed since parse AST err",
pSma->name, stbFullName, suid, colId); pSma->name, stbFullName, suid, colId);
sdbCancelFetch(pSdb, pIter);
return -1; return -1;
} }
...@@ -1355,6 +1359,7 @@ static int32_t mndCheckAlterColForTSma(SMnode *pMnode, const char *stbFullName, ...@@ -1355,6 +1359,7 @@ static int32_t mndCheckAlterColForTSma(SMnode *pMnode, const char *stbFullName,
nodesDestroyNode(pAst); nodesDestroyNode(pAst);
nodesDestroyList(pNodeList); nodesDestroyList(pNodeList);
sdbRelease(pSdb, pSma); sdbRelease(pSdb, pSma);
sdbCancelFetch(pSdb, pIter);
return -1; return -1;
} }
mInfo("tsma:%s, check colId:%d passed", pSma->name, pCol->colId); mInfo("tsma:%s, check colId:%d passed", pSma->name, pCol->colId);
...@@ -2268,6 +2273,7 @@ static int32_t mndCheckDropStbForTopic(SMnode *pMnode, const char *stbFullName, ...@@ -2268,6 +2273,7 @@ static int32_t mndCheckDropStbForTopic(SMnode *pMnode, const char *stbFullName,
if (pTopic->subType == TOPIC_SUB_TYPE__TABLE) { if (pTopic->subType == TOPIC_SUB_TYPE__TABLE) {
if (pTopic->stbUid == suid) { if (pTopic->stbUid == suid) {
sdbRelease(pSdb, pTopic); sdbRelease(pSdb, pTopic);
sdbCancelFetch(pSdb, pIter);
return -1; return -1;
} }
} }
...@@ -2282,6 +2288,7 @@ static int32_t mndCheckDropStbForTopic(SMnode *pMnode, const char *stbFullName, ...@@ -2282,6 +2288,7 @@ static int32_t mndCheckDropStbForTopic(SMnode *pMnode, const char *stbFullName,
terrno = TSDB_CODE_MND_INVALID_TOPIC_OPTION; terrno = TSDB_CODE_MND_INVALID_TOPIC_OPTION;
mError("topic:%s, create ast error", pTopic->name); mError("topic:%s, create ast error", pTopic->name);
sdbRelease(pSdb, pTopic); sdbRelease(pSdb, pTopic);
sdbCancelFetch(pSdb, pIter);
return -1; return -1;
} }
...@@ -2295,6 +2302,7 @@ static int32_t mndCheckDropStbForTopic(SMnode *pMnode, const char *stbFullName, ...@@ -2295,6 +2302,7 @@ static int32_t mndCheckDropStbForTopic(SMnode *pMnode, const char *stbFullName,
sdbRelease(pSdb, pTopic); sdbRelease(pSdb, pTopic);
nodesDestroyNode(pAst); nodesDestroyNode(pAst);
nodesDestroyList(pNodeList); nodesDestroyList(pNodeList);
sdbCancelFetch(pSdb, pIter);
return -1; return -1;
} else { } else {
goto NEXT; goto NEXT;
...@@ -2322,6 +2330,7 @@ static int32_t mndCheckDropStbForStream(SMnode *pMnode, const char *stbFullName, ...@@ -2322,6 +2330,7 @@ static int32_t mndCheckDropStbForStream(SMnode *pMnode, const char *stbFullName,
} }
if (pStream->targetStbUid == suid) { if (pStream->targetStbUid == suid) {
sdbCancelFetch(pSdb, pIter);
sdbRelease(pSdb, pStream); sdbRelease(pSdb, pStream);
return -1; return -1;
} }
...@@ -2330,6 +2339,7 @@ static int32_t mndCheckDropStbForStream(SMnode *pMnode, const char *stbFullName, ...@@ -2330,6 +2339,7 @@ static int32_t mndCheckDropStbForStream(SMnode *pMnode, const char *stbFullName,
if (nodesStringToNode(pStream->ast, &pAst) != 0) { if (nodesStringToNode(pStream->ast, &pAst) != 0) {
terrno = TSDB_CODE_MND_INVALID_STREAM_OPTION; terrno = TSDB_CODE_MND_INVALID_STREAM_OPTION;
mError("stream:%s, create ast error", pStream->name); mError("stream:%s, create ast error", pStream->name);
sdbCancelFetch(pSdb, pIter);
sdbRelease(pSdb, pStream); sdbRelease(pSdb, pStream);
return -1; return -1;
} }
...@@ -2341,6 +2351,7 @@ static int32_t mndCheckDropStbForStream(SMnode *pMnode, const char *stbFullName, ...@@ -2341,6 +2351,7 @@ static int32_t mndCheckDropStbForStream(SMnode *pMnode, const char *stbFullName,
SColumnNode *pCol = (SColumnNode *)pNode; SColumnNode *pCol = (SColumnNode *)pNode;
if (pCol->tableId == suid) { if (pCol->tableId == suid) {
sdbCancelFetch(pSdb, pIter);
sdbRelease(pSdb, pStream); sdbRelease(pSdb, pStream);
nodesDestroyNode(pAst); nodesDestroyNode(pAst);
nodesDestroyList(pNodeList); nodesDestroyList(pNodeList);
......
...@@ -740,12 +740,14 @@ static int32_t mndProcessCreateStreamReq(SRpcMsg *pReq) { ...@@ -740,12 +740,14 @@ static int32_t mndProcessCreateStreamReq(SRpcMsg *pReq) {
if (numOfStream > MND_STREAM_MAX_NUM) { if (numOfStream > MND_STREAM_MAX_NUM) {
mError("too many streams, no more than %d for each database", MND_STREAM_MAX_NUM); mError("too many streams, no more than %d for each database", MND_STREAM_MAX_NUM);
terrno = TSDB_CODE_MND_TOO_MANY_STREAMS; terrno = TSDB_CODE_MND_TOO_MANY_STREAMS;
sdbCancelFetch(pMnode->pSdb, pIter);
goto _OVER; goto _OVER;
} }
if (pStream->targetStbUid == streamObj.targetStbUid) { if (pStream->targetStbUid == streamObj.targetStbUid) {
mError("Cannot write the same stable as other stream:%s", pStream->name); mError("Cannot write the same stable as other stream:%s", pStream->name);
terrno = TSDB_CODE_MND_INVALID_TARGET_TABLE; terrno = TSDB_CODE_MND_INVALID_TARGET_TABLE;
sdbCancelFetch(pMnode->pSdb, pIter);
goto _OVER; goto _OVER;
} }
} }
......
...@@ -597,7 +597,7 @@ static int32_t mndPersistRebResult(SMnode *pMnode, SRpcMsg *pMsg, const SMqRebOu ...@@ -597,7 +597,7 @@ static int32_t mndPersistRebResult(SMnode *pMnode, SRpcMsg *pMsg, const SMqRebOu
for (int32_t i = 0; i < consumerNum; i++) { for (int32_t i = 0; i < consumerNum; i++) {
int64_t consumerId = *(int64_t *)taosArrayGet(pOutput->modifyConsumers, i); int64_t consumerId = *(int64_t *)taosArrayGet(pOutput->modifyConsumers, i);
SMqConsumerObj *pConsumerNew = tNewSMqConsumerObj(consumerId, cgroup); SMqConsumerObj *pConsumerNew = tNewSMqConsumerObj(consumerId, cgroup);
pConsumerNew->updateType = CONSUMER_UPDATE_REB_MODIFY_NOTOPIC; pConsumerNew->updateType = CONSUMER_UPDATE_REB;
if (mndSetConsumerCommitLogs(pMnode, pTrans, pConsumerNew) != 0) { if (mndSetConsumerCommitLogs(pMnode, pTrans, pConsumerNew) != 0) {
tDeleteSMqConsumerObj(pConsumerNew, true); tDeleteSMqConsumerObj(pConsumerNew, true);
...@@ -613,7 +613,7 @@ static int32_t mndPersistRebResult(SMnode *pMnode, SRpcMsg *pMsg, const SMqRebOu ...@@ -613,7 +613,7 @@ static int32_t mndPersistRebResult(SMnode *pMnode, SRpcMsg *pMsg, const SMqRebOu
for (int32_t i = 0; i < consumerNum; i++) { for (int32_t i = 0; i < consumerNum; i++) {
int64_t consumerId = *(int64_t *)taosArrayGet(pOutput->newConsumers, i); int64_t consumerId = *(int64_t *)taosArrayGet(pOutput->newConsumers, i);
SMqConsumerObj *pConsumerNew = tNewSMqConsumerObj(consumerId, cgroup); SMqConsumerObj *pConsumerNew = tNewSMqConsumerObj(consumerId, cgroup);
pConsumerNew->updateType = CONSUMER_UPDATE_REB_MODIFY_TOPIC; pConsumerNew->updateType = CONSUMER_ADD_REB;
char* topicTmp = taosStrdup(topic); char* topicTmp = taosStrdup(topic);
taosArrayPush(pConsumerNew->rebNewTopics, &topicTmp); taosArrayPush(pConsumerNew->rebNewTopics, &topicTmp);
...@@ -633,7 +633,7 @@ static int32_t mndPersistRebResult(SMnode *pMnode, SRpcMsg *pMsg, const SMqRebOu ...@@ -633,7 +633,7 @@ static int32_t mndPersistRebResult(SMnode *pMnode, SRpcMsg *pMsg, const SMqRebOu
int64_t consumerId = *(int64_t *)taosArrayGet(pOutput->removedConsumers, i); int64_t consumerId = *(int64_t *)taosArrayGet(pOutput->removedConsumers, i);
SMqConsumerObj *pConsumerNew = tNewSMqConsumerObj(consumerId, cgroup); SMqConsumerObj *pConsumerNew = tNewSMqConsumerObj(consumerId, cgroup);
pConsumerNew->updateType = CONSUMER_UPDATE_REB_MODIFY_REMOVE; pConsumerNew->updateType = CONSUMER_REMOVE_REB;
char* topicTmp = taosStrdup(topic); char* topicTmp = taosStrdup(topic);
taosArrayPush(pConsumerNew->rebRemovedTopics, &topicTmp); taosArrayPush(pConsumerNew->rebRemovedTopics, &topicTmp);
...@@ -1104,6 +1104,7 @@ int32_t mndDropSubByTopic(SMnode *pMnode, STrans *pTrans, const char *topicName) ...@@ -1104,6 +1104,7 @@ int32_t mndDropSubByTopic(SMnode *pMnode, STrans *pTrans, const char *topicName)
if (taosHashGetSize(pSub->consumerHash) != 0) { if (taosHashGetSize(pSub->consumerHash) != 0) {
sdbRelease(pSdb, pSub); sdbRelease(pSdb, pSub);
terrno = TSDB_CODE_MND_IN_REBALANCE; terrno = TSDB_CODE_MND_IN_REBALANCE;
sdbCancelFetch(pSdb, pIter);
return -1; return -1;
} }
int32_t sz = taosArrayGetSize(pSub->unassignedVgs); int32_t sz = taosArrayGetSize(pSub->unassignedVgs);
...@@ -1122,12 +1123,14 @@ int32_t mndDropSubByTopic(SMnode *pMnode, STrans *pTrans, const char *topicName) ...@@ -1122,12 +1123,14 @@ int32_t mndDropSubByTopic(SMnode *pMnode, STrans *pTrans, const char *topicName)
if (mndTransAppendRedoAction(pTrans, &action) != 0) { if (mndTransAppendRedoAction(pTrans, &action) != 0) {
taosMemoryFree(pReq); taosMemoryFree(pReq);
sdbRelease(pSdb, pSub); sdbRelease(pSdb, pSub);
sdbCancelFetch(pSdb, pIter);
return -1; return -1;
} }
} }
if (mndSetDropSubRedoLogs(pMnode, pTrans, pSub) < 0) { if (mndSetDropSubRedoLogs(pMnode, pTrans, pSub) < 0) {
sdbRelease(pSdb, pSub); sdbRelease(pSdb, pSub);
sdbCancelFetch(pSdb, pIter);
goto END; goto END;
} }
...@@ -1204,7 +1207,7 @@ int32_t mndRetrieveSubscribe(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBlock ...@@ -1204,7 +1207,7 @@ int32_t mndRetrieveSubscribe(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBlock
int32_t numOfRows = 0; int32_t numOfRows = 0;
SMqSubscribeObj *pSub = NULL; SMqSubscribeObj *pSub = NULL;
mDebug("mnd show subscriptions begin"); mInfo("mnd show subscriptions begin");
while (numOfRows < rowsCapacity) { while (numOfRows < rowsCapacity) {
pShow->pIter = sdbFetch(pSdb, SDB_SUBSCRIBE, pShow->pIter, (void **)&pSub); pShow->pIter = sdbFetch(pSdb, SDB_SUBSCRIBE, pShow->pIter, (void **)&pSub);
...@@ -1244,7 +1247,7 @@ int32_t mndRetrieveSubscribe(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBlock ...@@ -1244,7 +1247,7 @@ int32_t mndRetrieveSubscribe(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBlock
sdbRelease(pSdb, pSub); sdbRelease(pSdb, pSub);
} }
mDebug("mnd end show subscriptions"); mInfo("mnd end show subscriptions");
pShow->numOfRows += numOfRows; pShow->numOfRows += numOfRows;
return numOfRows; return numOfRows;
......
...@@ -513,6 +513,7 @@ static int32_t mndCreateTopic(SMnode *pMnode, SRpcMsg *pReq, SCMCreateTopicReq * ...@@ -513,6 +513,7 @@ static int32_t mndCreateTopic(SMnode *pMnode, SRpcMsg *pReq, SCMCreateTopicReq *
tEncodeSize(tEncodeSTqCheckInfo, &info, len, code); tEncodeSize(tEncodeSTqCheckInfo, &info, len, code);
if (code < 0) { if (code < 0) {
sdbRelease(pSdb, pVgroup); sdbRelease(pSdb, pVgroup);
sdbCancelFetch(pSdb, pIter);
goto _OUT; goto _OUT;
} }
void *buf = taosMemoryCalloc(1, sizeof(SMsgHead) + len); void *buf = taosMemoryCalloc(1, sizeof(SMsgHead) + len);
...@@ -522,6 +523,7 @@ static int32_t mndCreateTopic(SMnode *pMnode, SRpcMsg *pReq, SCMCreateTopicReq * ...@@ -522,6 +523,7 @@ static int32_t mndCreateTopic(SMnode *pMnode, SRpcMsg *pReq, SCMCreateTopicReq *
if (tEncodeSTqCheckInfo(&encoder, &info) < 0) { if (tEncodeSTqCheckInfo(&encoder, &info) < 0) {
taosMemoryFree(buf); taosMemoryFree(buf);
sdbRelease(pSdb, pVgroup); sdbRelease(pSdb, pVgroup);
sdbCancelFetch(pSdb, pIter);
goto _OUT; goto _OUT;
} }
tEncoderClear(&encoder); tEncoderClear(&encoder);
...@@ -535,6 +537,7 @@ static int32_t mndCreateTopic(SMnode *pMnode, SRpcMsg *pReq, SCMCreateTopicReq * ...@@ -535,6 +537,7 @@ static int32_t mndCreateTopic(SMnode *pMnode, SRpcMsg *pReq, SCMCreateTopicReq *
if (mndTransAppendRedoAction(pTrans, &action) != 0) { if (mndTransAppendRedoAction(pTrans, &action) != 0) {
taosMemoryFree(buf); taosMemoryFree(buf);
sdbRelease(pSdb, pVgroup); sdbRelease(pSdb, pVgroup);
sdbCancelFetch(pSdb, pIter);
goto _OUT; goto _OUT;
} }
buf = NULL; buf = NULL;
...@@ -647,7 +650,6 @@ static int32_t mndDropTopic(SMnode *pMnode, STrans *pTrans, SRpcMsg *pReq, SMqTo ...@@ -647,7 +650,6 @@ static int32_t mndDropTopic(SMnode *pMnode, STrans *pTrans, SRpcMsg *pReq, SMqTo
code = 0; code = 0;
_OVER: _OVER:
mndTransDrop(pTrans);
return code; return code;
} }
...@@ -698,6 +700,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) { ...@@ -698,6 +700,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
if (strcmp(name, pTopic->name) == 0) { if (strcmp(name, pTopic->name) == 0) {
mndReleaseConsumer(pMnode, pConsumer); mndReleaseConsumer(pMnode, pConsumer);
mndReleaseTopic(pMnode, pTopic); mndReleaseTopic(pMnode, pTopic);
sdbCancelFetch(pSdb, pIter);
terrno = TSDB_CODE_MND_TOPIC_SUBSCRIBED; terrno = TSDB_CODE_MND_TOPIC_SUBSCRIBED;
mError("topic:%s, failed to drop since subscribed by consumer:0x%" PRIx64 ", in consumer group %s", mError("topic:%s, failed to drop since subscribed by consumer:0x%" PRIx64 ", in consumer group %s",
dropReq.name, pConsumer->consumerId, pConsumer->cgroup); dropReq.name, pConsumer->consumerId, pConsumer->cgroup);
...@@ -711,6 +714,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) { ...@@ -711,6 +714,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
if (strcmp(name, pTopic->name) == 0) { if (strcmp(name, pTopic->name) == 0) {
mndReleaseConsumer(pMnode, pConsumer); mndReleaseConsumer(pMnode, pConsumer);
mndReleaseTopic(pMnode, pTopic); mndReleaseTopic(pMnode, pTopic);
sdbCancelFetch(pSdb, pIter);
terrno = TSDB_CODE_MND_TOPIC_SUBSCRIBED; terrno = TSDB_CODE_MND_TOPIC_SUBSCRIBED;
mError("topic:%s, failed to drop since subscribed by consumer:%" PRId64 ", in consumer group %s (reb new)", mError("topic:%s, failed to drop since subscribed by consumer:%" PRId64 ", in consumer group %s (reb new)",
dropReq.name, pConsumer->consumerId, pConsumer->cgroup); dropReq.name, pConsumer->consumerId, pConsumer->cgroup);
...@@ -724,6 +728,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) { ...@@ -724,6 +728,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
if (strcmp(name, pTopic->name) == 0) { if (strcmp(name, pTopic->name) == 0) {
mndReleaseConsumer(pMnode, pConsumer); mndReleaseConsumer(pMnode, pConsumer);
mndReleaseTopic(pMnode, pTopic); mndReleaseTopic(pMnode, pTopic);
sdbCancelFetch(pSdb, pIter);
terrno = TSDB_CODE_MND_TOPIC_SUBSCRIBED; terrno = TSDB_CODE_MND_TOPIC_SUBSCRIBED;
mError("topic:%s, failed to drop since subscribed by consumer:%" PRId64 ", in consumer group %s (reb remove)", mError("topic:%s, failed to drop since subscribed by consumer:%" PRId64 ", in consumer group %s (reb remove)",
dropReq.name, pConsumer->consumerId, pConsumer->cgroup); dropReq.name, pConsumer->consumerId, pConsumer->cgroup);
...@@ -735,6 +740,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) { ...@@ -735,6 +740,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
} }
if (mndCheckDbPrivilegeByName(pMnode, pReq->info.conn.user, MND_OPER_READ_DB, pTopic->db) != 0) { if (mndCheckDbPrivilegeByName(pMnode, pReq->info.conn.user, MND_OPER_READ_DB, pTopic->db) != 0) {
mndReleaseTopic(pMnode, pTopic);
return -1; return -1;
} }
...@@ -788,6 +794,8 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) { ...@@ -788,6 +794,8 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
if (mndTransAppendRedoAction(pTrans, &action) != 0) { if (mndTransAppendRedoAction(pTrans, &action) != 0) {
taosMemoryFree(buf); taosMemoryFree(buf);
sdbRelease(pSdb, pVgroup); sdbRelease(pSdb, pVgroup);
mndReleaseTopic(pMnode, pTopic);
sdbCancelFetch(pSdb, pIter);
mndTransDrop(pTrans); mndTransDrop(pTrans);
return -1; return -1;
} }
...@@ -796,6 +804,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) { ...@@ -796,6 +804,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
int32_t code = mndDropTopic(pMnode, pTrans, pReq, pTopic); int32_t code = mndDropTopic(pMnode, pTrans, pReq, pTopic);
mndReleaseTopic(pMnode, pTopic); mndReleaseTopic(pMnode, pTopic);
mndTransDrop(pTrans);
if (code != 0) { if (code != 0) {
mError("topic:%s, failed to drop since %s", dropReq.name, terrstr()); mError("topic:%s, failed to drop since %s", dropReq.name, terrstr());
...@@ -999,6 +1008,7 @@ bool mndTopicExistsForDb(SMnode *pMnode, SDbObj *pDb) { ...@@ -999,6 +1008,7 @@ bool mndTopicExistsForDb(SMnode *pMnode, SDbObj *pDb) {
if (pTopic->dbUid == pDb->uid) { if (pTopic->dbUid == pDb->uid) {
sdbRelease(pSdb, pTopic); sdbRelease(pSdb, pTopic);
sdbCancelFetch(pSdb, pIter);
return true; return true;
} }
......
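
Across the drop-topic hunks above, ownership of the transaction is tightened: mndDropTopic no longer calls mndTransDrop in its _OVER path, and mndProcessDropTopicReq, which created the transaction, now drops it exactly once after the call (and also releases the topic and cancels the sdb fetch on its added error paths). A small sketch of that create-use-drop ownership rule, with hypothetical stand-ins for STrans/mndTransCreate/mndTransDrop:

    typedef struct { int unused; } Trans;
    static Trans *trans_create(void)   { static Trans t; return &t; }   /* stand-in */
    static void   trans_drop(Trans *t) { (void)t; }                     /* stand-in */

    /* Borrows the transaction: appends to it but never frees it. */
    static int drop_topic(Trans *pTrans) {
      (void)pTrans;        /* ... append redo logs / actions ... */
      return 0;
    }

    static int process_drop_topic(void) {
      Trans *pTrans = trans_create();      /* this function owns the transaction       */
      if (pTrans == NULL) return -1;
      int code = drop_topic(pTrans);
      trans_drop(pTrans);                  /* dropped exactly once, success or failure */
      return code;
    }
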
...@@ -630,6 +630,11 @@ static int32_t mndProcessCreateUserReq(SRpcMsg *pReq) { ...@@ -630,6 +630,11 @@ static int32_t mndProcessCreateUserReq(SRpcMsg *pReq) {
goto _OVER; goto _OVER;
} }
if (strlen(createReq.pass) >= TSDB_PASSWORD_LEN){
terrno = TSDB_CODE_PAR_NAME_OR_PASSWD_TOO_LONG;
goto _OVER;
}
pUser = mndAcquireUser(pMnode, createReq.user); pUser = mndAcquireUser(pMnode, createReq.user);
if (pUser != NULL) { if (pUser != NULL) {
terrno = TSDB_CODE_MND_USER_ALREADY_EXIST; terrno = TSDB_CODE_MND_USER_ALREADY_EXIST;
...@@ -922,19 +927,19 @@ static int32_t mndProcessAlterUserReq(SRpcMsg *pReq) { ...@@ -922,19 +927,19 @@ static int32_t mndProcessAlterUserReq(SRpcMsg *pReq) {
} }
} }
if (alterReq.alterType == TSDB_ALTER_USER_ADD_READ_TABLE) { if (alterReq.alterType == TSDB_ALTER_USER_ADD_READ_TABLE || alterReq.alterType == TSDB_ALTER_USER_ADD_ALL_TABLE) {
if (mndTablePriviledge(pMnode, newUser.readTbs, newUser.useDbs, &alterReq, pSdb) != 0) goto _OVER; if (mndTablePriviledge(pMnode, newUser.readTbs, newUser.useDbs, &alterReq, pSdb) != 0) goto _OVER;
} }
if (alterReq.alterType == TSDB_ALTER_USER_ADD_WRITE_TABLE) { if (alterReq.alterType == TSDB_ALTER_USER_ADD_WRITE_TABLE || alterReq.alterType == TSDB_ALTER_USER_ADD_ALL_TABLE) {
if (mndTablePriviledge(pMnode, newUser.writeTbs, newUser.useDbs, &alterReq, pSdb) != 0) goto _OVER; if (mndTablePriviledge(pMnode, newUser.writeTbs, newUser.useDbs, &alterReq, pSdb) != 0) goto _OVER;
} }
if (alterReq.alterType == TSDB_ALTER_USER_REMOVE_READ_TABLE) { if (alterReq.alterType == TSDB_ALTER_USER_REMOVE_READ_TABLE || alterReq.alterType == TSDB_ALTER_USER_REMOVE_ALL_TABLE) {
if (mndRemoveTablePriviledge(pMnode, newUser.readTbs, newUser.useDbs, &alterReq, pSdb) != 0) goto _OVER; if (mndRemoveTablePriviledge(pMnode, newUser.readTbs, newUser.useDbs, &alterReq, pSdb) != 0) goto _OVER;
} }
if (alterReq.alterType == TSDB_ALTER_USER_REMOVE_WRITE_TABLE) { if (alterReq.alterType == TSDB_ALTER_USER_REMOVE_WRITE_TABLE || alterReq.alterType == TSDB_ALTER_USER_REMOVE_ALL_TABLE) {
if (mndRemoveTablePriviledge(pMnode, newUser.writeTbs, newUser.useDbs, &alterReq, pSdb) != 0) goto _OVER; if (mndRemoveTablePriviledge(pMnode, newUser.writeTbs, newUser.useDbs, &alterReq, pSdb) != 0) goto _OVER;
} }
@@ -1444,7 +1449,9 @@ int32_t mndUserRemoveDb(SMnode *pMnode, STrans *pTrans, char *db) {
     if (pIter == NULL) break;

     code = -1;
-    if (mndUserDupObj(pUser, &newUser) != 0) break;
+    if (mndUserDupObj(pUser, &newUser) != 0) {
+      break;
+    }

     bool inRead = (taosHashGet(newUser.readDbs, db, len) != NULL);
     bool inWrite = (taosHashGet(newUser.writeDbs, db, len) != NULL);
@@ -1453,7 +1460,9 @@ int32_t mndUserRemoveDb(SMnode *pMnode, STrans *pTrans, char *db) {
       (void)taosHashRemove(newUser.writeDbs, db, len);

       SSdbRaw *pCommitRaw = mndUserActionEncode(&newUser);
-      if (pCommitRaw == NULL || mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) break;
+      if (pCommitRaw == NULL || mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) {
+        break;
+      }
       (void)sdbSetRawStatus(pCommitRaw, SDB_STATUS_READY);
     }
@@ -1491,7 +1500,9 @@ int32_t mndUserRemoveTopic(SMnode *pMnode, STrans *pTrans, char *topic) {
     if (inTopic) {
       (void)taosHashRemove(newUser.topics, topic, len);
       SSdbRaw *pCommitRaw = mndUserActionEncode(&newUser);
-      if (pCommitRaw == NULL || mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) break;
+      if (pCommitRaw == NULL || mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) {
+        break;
+      }
       (void)sdbSetRawStatus(pCommitRaw, SDB_STATUS_READY);
     }
......
@@ -2591,6 +2591,7 @@ static int32_t mndProcessBalanceVgroupMsg(SRpcMsg *pReq) {
     pIter = sdbFetch(pMnode->pSdb, SDB_DNODE, pIter, (void **)&pDnode);
     if (pIter == NULL) break;
     if (!mndIsDnodeOnline(pDnode, curMs)) {
+      sdbCancelFetch(pMnode->pSdb, pIter);
       terrno = TSDB_CODE_MND_HAS_OFFLINE_DNODE;
       mError("failed to balance vgroup since %s, dnode:%d", terrstr(), pDnode->id);
       sdbRelease(pMnode->pSdb, pDnode);
......
@@ -38,6 +38,8 @@ typedef struct STtlManger {
   SHashObj* pTtlCache;   // key: tuid, value: {ttl, ctime}
   SHashObj* pDirtyUids;  // dirty tuid
   TTB*      pTtlIdx;     // btree<{deleteTime, tuid}, ttl>
+
+  char* logPrefix;
 } STtlManger;

 typedef struct {
@@ -77,9 +79,10 @@ typedef struct {
 typedef struct {
   tb_uid_t uid;
   TXN*     pTxn;
+  int64_t  ttlDays;
 } STtlDelTtlCtx;

-int  ttlMgrOpen(STtlManger** ppTtlMgr, TDB* pEnv, int8_t rollback);
+int  ttlMgrOpen(STtlManger** ppTtlMgr, TDB* pEnv, int8_t rollback, const char* logPrefix);
 void ttlMgrClose(STtlManger* pTtlMgr);

 int ttlMgrPostOpen(STtlManger* pTtlMgr, void* pMeta);
......
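The header changes above thread a caller-supplied log prefix into the TTL manager at open time and keep it in the struct, so later log lines can identify which vgroup they belong to. A minimal sketch of that pattern with simplified stand-in types (not the real STtlManger API):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
  char *logPrefix; /* owned copy of the prefix handed in at open time */
} Mgr;

static int mgrOpen(Mgr **ppMgr, const char *logPrefix) {
  Mgr *p = calloc(1, sizeof(Mgr));
  if (p == NULL) return -1;
  size_t n = strlen(logPrefix) + 1;
  p->logPrefix = malloc(n);
  if (p->logPrefix == NULL) {
    free(p);
    return -1;
  }
  memcpy(p->logPrefix, logPrefix, n); /* keep our own copy; the caller's buffer may be stack-local */
  *ppMgr = p;
  return 0;
}

/* Every log line carries the prefix, so messages from different vgroups stay distinguishable. */
static void mgrLog(const Mgr *pMgr, const char *msg) {
  printf("%s, %s\n", pMgr->logPrefix, msg);
}

static void mgrClose(Mgr *pMgr) {
  free(pMgr->logPrefix);
  free(pMgr);
}

int main(void) {
  Mgr *pMgr = NULL;
  if (mgrOpen(&pMgr, "vgId:2") == 0) {
    mgrLog(pMgr, "ttl mgr open");
    mgrClose(pMgr);
  }
  return 0;
}
```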
@@ -230,10 +230,11 @@ int32_t tqProcessDelCheckInfoReq(STQ* pTq, int64_t version, char* msg, int32_t msgLen);
 int32_t tqProcessSubscribeReq(STQ* pTq, int64_t version, char* msg, int32_t msgLen);
 int32_t tqProcessDeleteSubReq(STQ* pTq, int64_t version, char* msg, int32_t msgLen);
 int32_t tqProcessOffsetCommitReq(STQ* pTq, int64_t version, char* msg, int32_t msgLen);
-int32_t tqProcessSeekReq(STQ* pTq, int64_t sversion, char* msg, int32_t msgLen);
+int32_t tqProcessSeekReq(STQ* pTq, SRpcMsg* pMsg);
 int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg);
 int32_t tqProcessPollPush(STQ* pTq, SRpcMsg* pMsg);
 int32_t tqProcessVgWalInfoReq(STQ* pTq, SRpcMsg* pMsg);
+int32_t tqProcessVgCommittedInfoReq(STQ* pTq, SRpcMsg* pMsg);

 // tq-stream
 int32_t tqProcessTaskDeployReq(STQ* pTq, int64_t version, char* msg, int32_t msgLen);
......
@@ -130,7 +130,9 @@ int metaOpen(SVnode *pVnode, SMeta **ppMeta, int8_t rollback) {
   }

   // open pTtlMgr ("ttlv1.idx")
-  ret = ttlMgrOpen(&pMeta->pTtlMgr, pMeta->pEnv, 0);
+  char logPrefix[128] = {0};
+  sprintf(logPrefix, "vgId:%d", TD_VID(pVnode));
+  ret = ttlMgrOpen(&pMeta->pTtlMgr, pMeta->pEnv, 0, logPrefix);
   if (ret < 0) {
     metaError("vgId:%d, failed to open meta ttl index since %s", TD_VID(pVnode), tstrerror(terrno));
     goto _err;
......
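The caller side above formats the prefix with sprintf() into a fixed 128-byte buffer, which "vgId:%d" cannot overflow; a bounded snprintf() is the defensive variant. A tiny self-contained version, with TD_VID(pVnode) replaced by a plain int for illustration:

```c
#include <stdio.h>

int main(void) {
  int  vgId = 2; /* stands in for TD_VID(pVnode) */
  char logPrefix[128] = {0};
  snprintf(logPrefix, sizeof(logPrefix), "vgId:%d", vgId); /* bounded variant of the sprintf above */
  printf("%s\n", logPrefix); /* vgId:2 */
  return 0;
}
```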
(The diff for this file is collapsed.)
@@ -196,7 +196,7 @@ int32_t tqFetchLog(STQ* pTq, STqHandle* pHandle, int64_t* fetchOffset, SWalCkHea
       tqDebug("tmq poll: consumer:0x%" PRIx64 ", (epoch %d) vgId:%d offset %" PRId64
               ", no more log to return, reqId:0x%" PRIx64,
               pHandle->consumerId, pHandle->epoch, vgId, offset, reqId);
-      *fetchOffset = offset - 1;
+      *fetchOffset = offset;
       code = -1;
       goto END;
     }
......
(The diffs for the remaining 20 changed files are collapsed.)