Commit 9065017c authored by wmmhello

Merge branch '3.0' into feature/TD-14761

@@ -15,11 +15,11 @@
[![Coverage Status](https://coveralls.io/repos/github/taosdata/TDengine/badge.svg?branch=develop)](https://coveralls.io/github/taosdata/TDengine?branch=develop)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4201/badge)](https://bestpractices.coreinfrastructure.org/projects/4201)

English | [简体中文](README-CN.md) | [Learn more about TSDB](https://tdengine.com/tsdb)
# What is TDengine?

TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:

- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.
@@ -33,6 +33,8 @@ TDengine is an open source, high-performance, cloud native [time-series database
- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine's core modules, including its cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.

For a full list of TDengine competitive advantages, please [check here](https://tdengine.com/tdengine/).
# Documentation

For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))
@@ -319,6 +321,7 @@ TDengine provides abundant developing tools for users to develop on TDengine. Fo
Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to the project.

# Join TDengine User Community

- Join [TDengine Discord Channel](https://discord.com/invite/VZdSuUg4pS?utm_id=discord)
- Join the WeChat group by adding the WeChat account “tdengine”
@@ -2,7 +2,7 @@
IF (DEFINED VERNUMBER)
  SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
  SET(TD_VER_NUMBER "3.0.1.1")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)
...
@@ -2,7 +2,7 @@
# taosadapter
ExternalProject_Add(taosadapter
        GIT_REPOSITORY https://github.com/taosdata/taosadapter.git
        GIT_TAG 05fb2ff
        SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter"
        BINARY_DIR ""
        #BUILD_IN_SOURCE TRUE
...
@@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
        GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
        GIT_TAG 509ec72
        SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
        BINARY_DIR ""
        #BUILD_IN_SOURCE TRUE
...
@@ -4,7 +4,7 @@ sidebar_label: Documentation Home
slug: /
---
TDengine is an [open-source](https://tdengine.com/tdengine/open-source-time-series-database/), [cloud-native](https://tdengine.com/tdengine/cloud-native-time-series-database/) [time-series database](https://tdengine.com/tsdb/) optimized for the Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic, as well as novel, concepts in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design, and other topics. It's written mainly for architects, developers, and system administrators.

To get an overview of TDengine, such as a feature list, benchmarks, and competitive advantages, please browse through the [Introduction](./intro) section.
@@ -22,6 +22,8 @@ If you want to know more about TDengine tools, the REST API, and connectors for
If you are very interested in the internal design of TDengine, please read the chapter [Inside TDengine](./tdinternal), which introduces the cluster design, data partitioning, sharding, writing, and reading processes in detail. If you want to study TDengine code or even contribute code, please read this chapter carefully.

For a more general introduction to time-series databases, please read through [a series of articles](https://tdengine.com/tsdb/). To learn more about the competitive advantages of TDengine, please read through [a series of blogs](https://tdengine.com/tdengine/).

TDengine is an open-source database, and we would love for you to be a part of TDengine. If you find any errors in the documentation or see parts where more clarity or elaboration is needed, please click "Edit this page" at the bottom of each page to edit it directly.

Together, we make a difference!
@@ -3,7 +3,7 @@ title: Introduction
toc_max_heading_level: 2
---
TDengine is an [open source](https://tdengine.com/tdengine/open-source-time-series-database/), [high-performance](https://tdengine.com/tdengine/high-performance-time-series-database/), [cloud native](https://tdengine.com/tdengine/cloud-native-time-series-database/) [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](../develop/cache), [stream processing](../develop/stream), [data subscription](../develop/tmq) and other functionalities to reduce the system complexity and cost of development and operation.

This section introduces the major features, competitive advantages, typical use cases and benchmarks to help you get a high-level overview of TDengine.
@@ -43,7 +43,7 @@ For more details on features, please read through the entire documentation.
## Competitive Advantages

By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine differentiates itself from other [time series databases](https://tdengine.com/tsdb) with the following advantages.

- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.
@@ -127,3 +127,8 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
- [TDengine vs OpenTSDB](https://tdengine.com/2019/09/12/710.html)
- [TDengine vs Cassandra](https://tdengine.com/2019/09/12/708.html)
- [TDengine vs InfluxDB](https://tdengine.com/2019/09/12/706.html)

## More Readings

- [Introduction to Time-Series Database](https://tdengine.com/tsdb/)
- [Introduction to TDengine's competitive advantages](https://tdengine.com/tdengine/)
@@ -3,7 +3,11 @@ sidebar_label: Docker
title: Quick Install on Docker
---
This document describes how to install TDengine in a Docker container and perform queries and inserts.
- To get started with TDengine in a non-containerized environment, see [Quick Install from Package](../../get-started/package).
- For a fully managed solution, see the [TDengine Cloud documentation](/cloud/).
- If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
## Run TDengine
@@ -52,7 +56,7 @@ Start TDengine service and execute `taosBenchmark` (formerly named `taosdemo`) i
taosBenchmark
```
This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00.000 to 2017-07-14 10:40:09.999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `California.Campbell`, `California.Cupertino`, `California.LosAngeles`, `California.MountainView`, `California.PaloAlto`, `California.SanDiego`, `California.SanFrancisco`, `California.SanJose`, `California.SantaClara` or `California.Sunnyvale`.

The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required to create the deployment depends on your hardware. On most modern servers, the deployment is created in ten to twenty seconds.
@@ -74,10 +78,10 @@ Query the average, maximum, and minimum values of all 100 million rows of data:
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
```

Query the number of rows whose `location` tag is `California.SanFrancisco`:

```sql
SELECT COUNT(*) FROM test.meters WHERE location = "California.SanFrancisco";
```

Query the average, maximum, and minimum values of all rows whose `groupId` tag is `10`:

...
@@ -7,7 +7,11 @@ import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import PkgListV3 from "/components/PkgListV3";
This document describes how to install TDengine on Linux and Windows and perform queries and inserts.
- To get started with TDengine on Docker, see [Quick Install on Docker](../../get-started/docker).
- For a fully managed solution, see the [TDengine Cloud documentation](/cloud/).
- If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface (CLI, taos), and some tools. Note that taosAdapter supports Linux only. In addition to connectors for multiple languages, TDengine also provides a [REST API](../../reference/rest-api) through [taosAdapter](../../reference/taosadapter).
@@ -111,7 +115,7 @@ Note: TDengine only supports Windows Server 2016/2019 and Windows 10/11 on the W
</Tabs>
:::info

For information about TDengine releases, see [Release History](../../releases/tdengine).

:::

:::note
@@ -221,7 +225,7 @@ Start TDengine service and execute `taosBenchmark` (formerly named `taosdemo`) i
taosBenchmark
```
This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00.000 to 2017-07-14 10:40:09.999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `California.Campbell`, `California.Cupertino`, `California.LosAngeles`, `California.MountainView`, `California.PaloAlto`, `California.SanDiego`, `California.SanFrancisco`, `California.SanJose`, `California.SantaClara` or `California.Sunnyvale`.

The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required to create the deployment depends on your hardware. On most modern servers, the deployment is created in ten to twenty seconds.
@@ -243,10 +247,10 @@ Query the average, maximum, and minimum values of all 100 million rows of data:
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
```

Query the number of rows whose `location` tag is `California.SanFrancisco`:

```sql
SELECT COUNT(*) FROM test.meters WHERE location = "California.SanFrancisco";
```

Query the average, maximum, and minimum values of all rows whose `groupId` tag is `10`:

...
@@ -3,9 +3,9 @@ title: Get Started
description: This article describes how to install TDengine and test its performance.
---
You can install and run TDengine on Linux and Windows machines as well as Docker containers. You can also deploy TDengine as a managed service with TDengine Cloud.

The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface, and some tools. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter).
```mdx-code-block
import DocCardList from '@theme/DocCardList';
...
@@ -16,6 +16,8 @@ INSERT INTO
        [(field1_name, ...)]
        VALUES (field1_value, ...) [(field1_value2, ...) ...] | FILE csv_file_path
    ...];

INSERT INTO tb_name [(field1_name, ...)] subquery
```
**Timestamps**
@@ -37,7 +39,7 @@ INSERT INTO
4. The FILE clause inserts tags or data from a comma-separated values (CSV) file. Do not include headers in your CSV files.

5. A single `INSERT ... VALUES` or `INSERT ... FILE` statement can write data to multiple tables.

6. The INSERT statement is fully parsed before being executed, so that if any element of the statement fails, the entire statement will fail. For example, the following statement will not create a table because the latter part of the statement is invalid:
@@ -47,6 +49,8 @@ INSERT INTO
7. However, an INSERT statement that writes data to multiple subtables can succeed for some tables and fail for others. This situation is caused because vnodes perform write operations independently of each other. One vnode failing to write data does not affect the ability of other vnodes to write successfully.

8. Data from TDengine can be inserted into a specified table using the `INSERT ... subquery` statement. Arbitrary query statements are supported. This syntax can only be used for subtables and normal tables, and does not support automatic table creation. See the sketch after this list.
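A minimal sketch of `INSERT ... subquery`, assuming two hypothetical subtables `d1001` and `d1002` that already exist and share one schema:

```sql
-- Copy all rows of subtable d1002 into subtable d1001.
-- Both table names are assumptions; they must share a compatible schema.
INSERT INTO d1001 SELECT * FROM d1002;
```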
## Insert a Record

Single row or multiple rows specified with VALUES can be inserted into a specific table. A single row is inserted using the below statement.

...
@@ -66,7 +66,7 @@ order_expr:
A query can be performed on some or all columns. Data and tag columns can all be included in the SELECT list.

### Wildcards

You can use an asterisk (\*) as a wildcard character to indicate all columns. For standard tables, the asterisk indicates only data columns. For supertables and subtables, tag columns are also included.
@@ -136,6 +136,8 @@ taos> SELECT ts, ts AS primary_key_ts FROM d1001;
### Pseudocolumns

**Pseudocolumn:** A pseudocolumn behaves like a table column but is not actually stored in the table. You can select from pseudocolumns, but you cannot insert, update, or delete their values. A pseudocolumn is also similar to a function without arguments. This section describes these pseudocolumns:

**TBNAME**

The TBNAME pseudocolumn in a supertable contains the names of subtables within the supertable.
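For example, the following sketch, which assumes the sample `meters` supertable used throughout these docs, returns the subtable name alongside each row's `location` tag:

```sql
-- Show which subtable each row of the meters supertable comes from.
SELECT TBNAME, location FROM meters;
```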
@@ -348,19 +350,15 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
:::info

- The result of a nested query is returned as a virtual table used by the outer query. It's recommended to give an alias to this table for the convenience of using it in the outer query.
- JOIN operation is allowed between tables/STables inside both inner and outer queries. Join operation can be performed on the result set of the inner query.
- The features that can be used in the inner query are the same as those that can be used in a non-nested query.
- `ORDER BY` inside the inner query is unnecessary and will slow down the query performance significantly. It is best to avoid the use of `ORDER BY` inside the inner query.
- Compared to the non-nested query, the functionality that can be used in the outer query has the following restrictions:
  - Functions
    - If the result set returned by the inner query doesn't contain a timestamp column, then functions relying on timestamps can't be used in the outer query, like INTERP, DERIVATIVE, IRATE, LAST_ROW, FIRST, LAST, TWA, STATEDURATION, TAIL, UNIQUE.
    - If the result set returned by the inner query is not sorted in order by timestamp, then functions relying on data ordered by timestamp can't be used in the outer query, like LEASTSQUARES, ELAPSED, INTERP, DERIVATIVE, IRATE, TWA, DIFF, STATECOUNT, STATEDURATION, CSUM, MAVG, TAIL, UNIQUE.
    - Functions that need to scan the data twice can't be used in the outer query, like PERCENTILE.

:::
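A sketch of these rules, again assuming the sample `meters` supertable: the inner result set is aliased as `t`, and the outer query aggregates it.

```sql
-- Inner query: per-subtable 1-minute average voltage.
-- Outer query: the maximum of those averages, read from the aliased virtual table t.
SELECT MAX(avg_voltage)
FROM (SELECT AVG(voltage) AS avg_voltage FROM meters PARTITION BY tbname INTERVAL(1m)) t;
```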
...
@@ -126,7 +126,7 @@ SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause]
SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause];
```

**Description**: The rounded-down value of a specific field

**More explanations**: The restrictions are the same as those of the `CEIL` function.
#### LOG
@@ -173,7 +173,7 @@ SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause]
SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause];
```

**Description**: The rounded value of a specific field.

**More explanations**: The restrictions are the same as those of the `CEIL` function.
@@ -434,7 +434,7 @@ SELECT TO_ISO8601(ts[, timezone]) FROM { tb_name | stb_name } [WHERE clause];
**More explanations**:

- You can specify a time zone in the following format: [z/Z, +/-hhmm, +/-hh, +/-hh:mm]. For example, TO_ISO8601(1, "+00:00").
- If the input is a UNIX timestamp, the precision of the returned value is determined by the digits of the input timestamp.
- If the input is a column of TIMESTAMP type, the precision of the returned value is the same as the precision set for the current database in use.
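A quick sketch of both cases (the `d1001` subtable and its `ts` column are assumed from the sample dataset):

```sql
-- UNIX timestamp input: precision follows the digits of the input value.
SELECT TO_ISO8601(1, "+00:00");
-- TIMESTAMP column input: precision follows the database the table belongs to.
SELECT TO_ISO8601(ts) FROM d1001;
```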
@@ -769,14 +769,14 @@ SELECT HISTOGRAM(field_name, bin_type, bin_description, normalized) FROM tb_nam
**Explanations**:

- bin_type: parameter to indicate the bucket type; valid inputs are: "user_input", "linear_bin", "log_bin".
- bin_description: parameter to describe how to generate buckets; can be in the following JSON formats for each bin_type respectively:

  - "user_input": "[1, 3, 5, 7]":
    User-specified bin values.

  - "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}"
    "start" - bin starting point. "width" - bin offset. "count" - number of bins generated. "infinity" - whether to add (-inf, inf) as start/end points in the generated set of bins.
    The above "linear_bin" descriptor generates a set of bins: [-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf].

  - "log_bin": "{"start": 1.0, "factor": 2.0, "count": 5, "infinity": true}"
    "start" - bin starting point. "factor" - exponential factor of bin offset. "count" - number of bins generated. "infinity" - whether to add (-inf, inf) as start/end points in the generated range of bins.
    The above "log_bin" descriptor generates a set of bins: [-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf].
@@ -862,9 +862,9 @@ SELECT INTERP(field_name) FROM { tb_name | stb_name } [WHERE where_condition] RA
- `INTERP` is used to get the value that matches the specified time slice from a column. If no such value exists, an interpolated value will be returned based on the `FILL` parameter.
- The input data of `INTERP` is the value of the specified column, and a `where` clause can be used to filter the original data. If no `where` condition is specified, then all original data is the input.
- The output time range of `INTERP` is specified by the `RANGE(timestamp1, timestamp2)` parameter, with timestamp1 <= timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified.
- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY`. Starting from timestamp1, one interpolation is performed for every time interval specified by the `EVERY` parameter.
- Interpolation is performed based on the `FILL` parameter.
- `INTERP` can only be used to interpolate in a single timeline. So it must be used with `partition by tbname` when it's used on a STable.
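A minimal sketch against the sample dataset (subtable `d1001` is assumed): produce one linearly interpolated value per second over a fixed range.

```sql
-- One interpolated current value per second across a 10-second output range.
SELECT INTERP(current) FROM d1001
  RANGE('2017-07-14 10:40:00', '2017-07-14 10:40:09') EVERY(1s) FILL(LINEAR);
```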
### LAST
@@ -917,7 +917,7 @@ SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause];
**Return value type**: Same as the data type of the column being operated upon

**Applicable data types**: Numeric

**Applicable table types**: standard tables and supertables
@@ -932,7 +932,7 @@ SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause];
**Return value type**: Same as the data type of the column being operated upon

**Applicable data types**: Numeric

**Applicable table types**: standard tables and supertables
@@ -968,7 +968,7 @@ SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
**Applicable table types**: standard tables and supertables

**More explanations**:

This function cannot be used in expression calculation.

- Must be used with `PARTITION BY tbname` when it's used on a STable to force the result on each single timeline
@@ -1046,10 +1046,10 @@ SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause]
**Applicable table types**: standard tables and supertables

**More explanations**:

- Arithmetic operation can't be performed on the result of the `CSUM` function.
- This function can be used with supertables and standard tables.
- Must be used with `PARTITION BY tbname` when it's used on a STable to force the result on each single timeline.
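A sketch assuming the sample `meters` supertable:

```sql
-- Cumulative sum of current, computed separately on each subtable's timeline.
SELECT CSUM(current) FROM meters PARTITION BY tbname;
```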
@@ -1067,8 +1067,8 @@ SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHER
**Applicable table types**: standard tables and supertables

**More explanation**:

- It can be used together with `PARTITION BY tbname` against a STable.
- It can be used together with a selected column. For example: `select _rowts, DERIVATIVE(field_name, time_interval, ignore_negative) from tb_name`.
@@ -1086,7 +1086,7 @@ SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHER
**Applicable table types**: standard tables and supertables

**More explanation**:

- The number of result rows is the number of input rows minus one; no output is produced for the first row.
- It can be used together with a selected column. For example: `select _rowts, DIFF(field_name) from tb_name`.
@@ -1123,9 +1123,9 @@ SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
**Applicable table types**: standard tables and supertables

**More explanations**:

- Arithmetic operation can't be performed on the result of `MAVG`.
- Can only be used with data columns, can't be used with tags.
- Can't be used with aggregate functions.
- Must be used with `PARTITION BY tbname` when it's used on a STable to force the result on each single timeline
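A sketch assuming the sample `meters` supertable:

```sql
-- 5-row moving average of current, computed separately on each subtable's timeline.
SELECT MAVG(current, 5) FROM meters PARTITION BY tbname;
```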
...
@@ -5,11 +5,11 @@ title: Time-Series Extensions
As a purpose-built database for storing and processing time-series data, TDengine provides time-series-specific extensions to standard SQL.

These extensions include partitioned queries and windowed queries.

## Partitioned Queries

When you query a supertable, you may need to partition the supertable by some dimensions and perform additional operations on a specific partition. In this case, you can use the following SQL clause:
```sql
PARTITION BY part_list
@@ -17,22 +17,24 @@ PARTITION BY part_list
part_list can be any scalar expression, such as a column, constant, scalar function, or a combination of the preceding items.

A PARTITION BY clause is processed as follows:

- The PARTITION BY clause must occur after the WHERE clause.
- The PARTITION BY clause partitions the data according to the specified dimensions, then performs computation on each partition. The computation performed is determined by the rest of the statement: a window clause, GROUP BY clause, or SELECT clause.
- The PARTITION BY clause can be used together with a window clause or GROUP BY clause. In this case, the window or GROUP BY clause takes effect on every partition. For example, the following statement partitions the table by the location tag, performs downsampling over a 10 minute window, and returns the maximum value:

```sql
select max(current) from meters partition by location interval(10m)
```

The most common usage of PARTITION BY is partitioning the data in subtables by tags and then performing computation when querying data in a supertable. More specifically, `PARTITION BY TBNAME` partitions the data of each subtable into a single timeline, which facilitates statistical analysis in many time-series use cases, as in the sketch below.
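A sketch assuming the sample `meters` supertable and the `_wstart` window pseudocolumn:

```sql
-- Per-subtable (i.e., per-device) 10-minute average current.
SELECT _wstart, TBNAME, AVG(current) FROM meters PARTITION BY TBNAME INTERVAL(10m);
```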
## Windowed Queries

Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every second, the average temperature for every 10 minutes can be retrieved by performing a query with a time window. Window-related clauses are used to divide the data set to be queried into subsets, and then aggregation is performed across the subsets. There are three kinds of windows: time window, state window, and session window. There are two kinds of time windows: sliding window and flip time/tumbling window. The query syntax is as follows:
```sql
SELECT select_list FROM tb_name
  [WHERE where_condition]
  [SESSION(ts_col, tol_val)]
  [STATE_WINDOW(col)]
@@ -42,15 +44,9 @@ SELECT function_list FROM tb_name
The following restrictions apply:

### Other Rules

- The window clause must occur after the PARTITION BY clause. It cannot be used with a GROUP BY clause.
- SELECT clauses on windows can contain only the following expressions:
  - Constants
  - Aggregate functions
@@ -82,7 +78,7 @@ These pseudocolumns occur after the aggregation clause.
1. A huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000.
2. The result set is in ascending order of timestamp when you aggregate by time window.
3. If aggregate by window is used on a STable, the aggregate function is performed on all the rows matching the filter conditions. If `PARTITION BY` is not used in the query, the result set will be returned in strict ascending order of timestamp; otherwise the result set will be returned in the order of ascending timestamp in each group.

:::
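A sketch combining these notes (the sample `test.meters` data is assumed): bounding the time range keeps the `FILL` output small, and results come back in ascending timestamp order.

```sql
-- 10-second average current over a bounded one-minute range;
-- windows with no data reuse the previous window's value.
SELECT _wstart, AVG(current) FROM test.meters
  WHERE ts >= '2017-07-14 10:40:00' AND ts < '2017-07-14 10:41:00'
  INTERVAL(10s) FILL(PREV);
```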
@@ -112,9 +108,9 @@ When using time windows, note the following:
Please note that the `timezone` parameter should be configured to be the same value in the `taos.cfg` configuration file on client side and server side.

- The result set is in ascending order of timestamp when you aggregate by time window.
### State Window

When an integer, bool, or string is used to represent the state of a device at any given moment, continuous rows with the same state belong to a state window. Once the state changes, the state window closes. As shown in the following figure, there are two state windows according to status, [2019-04-28 14:22:07, 2019-04-28 14:22:10] and [2019-04-28 14:22:11, 2019-04-28 14:22:12].

![TDengine Database Status Window](./timewindow-3.webp)
@@ -124,13 +120,19 @@ In case of using integer, bool, or string to represent the status of a device at
SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
```
To return only the state windows whose status is 2, wrap the query in a subquery. For example:

```sql
SELECT * FROM (SELECT COUNT(*) AS cnt, FIRST(ts) AS fst, status FROM temp_tb_1 STATE_WINDOW(status)) t WHERE status = 2;
```
### Session Window

The primary key, i.e. timestamp, is used to determine which session window a row belongs to. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.

![TDengine Database Session Window](./timewindow-2.webp)

If the time interval between two continuous rows is within the time interval specified by `tol_value`, they belong to the same session window; otherwise a new session window is started automatically.
...
@@ -5,7 +5,9 @@ title: Reserved Keywords
## Keyword List

There are more than 200 keywords reserved by TDengine; they cannot be used as the name of a database, table, STable, subtable, column, or tag in upper case, lower case, or mixed case. If you need to use one of these keywords as a name, enclose it in backticks (`` ` ``), e.g. \`ADD\`, as in the sketch below.
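A minimal sketch of this escaping (the table and column names are hypothetical):

```sql
-- "ADD" is a reserved keyword; backticks make it usable as a column name.
CREATE TABLE tb1 (ts TIMESTAMP, `add` INT);
```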
The following list shows all reserved keywords:
### A

@@ -14,15 +16,20 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- ACCOUNTS
- ADD
- AFTER
- AGGREGATE
- ALL
- ALTER
- ANALYZE
- AND
- APPS
- AS
- ASC
- AT_ONCE
- ATTACH
### B

- BALANCE
- BEFORE
- BEGIN
- BETWEEN
@@ -32,19 +39,27 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- BITNOT
- BITOR
- BLOCKS
- BNODE
- BNODES
- BOOL
- BUFFER
- BUFSIZE
- BY
### C

- CACHE
- CACHEMODEL
- CACHESIZE
- CASCADE
- CAST
- CHANGE
- CLIENT_VERSION
- CLUSTER
- COLON
- COLUMN
- COMMA
- COMMENT
- COMP
- COMPACT
- CONCAT
@@ -52,15 +67,18 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- CONNECTION
- CONNECTIONS
- CONNS
- CONSUMER
- CONSUMERS
- CONTAINS
- COPY
- COUNT
- CREATE
- CURRENT_USER

### D

- DATABASE
- DATABASES
- DBS
- DEFERRED
- DELETE
@@ -69,18 +87,23 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- DESCRIBE
- DETACH
- DISTINCT
- DISTRIBUTED
- DIVIDE
- DNODE
- DNODES
- DOT
- DOUBLE
- DROP
- DURATION

### E

- EACH
- ENABLE
- END
- EVERY
- EXISTS
- EXPIRED
- EXPLAIN
### F
@@ -88,18 +111,20 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- FAIL
- FILE
- FILL
- FIRST
- FLOAT
- FLUSH
- FOR
- FROM
- FUNCTION
- FUNCTIONS

### G

- GLOB
- GRANT
- GRANTS
- GROUP

### H
...@@ -110,15 +135,18 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam ...@@ -110,15 +135,18 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- ID - ID
- IF - IF
- IGNORE - IGNORE
- IMMEDIA - IMMEDIATE
- IMPORT - IMPORT
- IN - IN
- INITIAL - INDEX
- INDEXES
- INITIALLY
- INNER
- INSERT - INSERT
- INSTEAD - INSTEAD
- INT - INT
- INTEGER - INTEGER
- INTERVA - INTERVAL
- INTO - INTO
- IS - IS
- ISNULL - ISNULL
...@@ -126,6 +154,7 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam ...@@ -126,6 +154,7 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
### J ### J
- JOIN - JOIN
- JSON
### K ### K
...@@ -135,46 +164,57 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam ...@@ -135,46 +164,57 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
### L ### L
- LE - LAST
- LAST_ROW
- LICENCES
- LIKE - LIKE
- LIMIT - LIMIT
- LINEAR - LINEAR
- LOCAL - LOCAL
- LP
- LSHIFT
- LT
### M ### M
- MATCH - MATCH
- MAX_DELAY
- MAXROWS - MAXROWS
- MERGE
- META
- MINROWS - MINROWS
- MINUS - MINUS
- MNODE
- MNODES - MNODES
- MODIFY - MODIFY
- MODULES - MODULES
### N ### N
- NE - NCHAR
- NEXT
- NMATCH
- NONE - NONE
- NOT - NOT
- NOTNULL - NOTNULL
- NOW - NOW
- NULL - NULL
- NULLS
### O ### O
- OF - OF
- OFFSET - OFFSET
- ON
- OR - OR
- ORDER - ORDER
- OUTPUTTYPE
### P ### P
- PARTITION - PAGES
- PAGESIZE
- PARTITIONS
- PASS - PASS
- PLUS - PLUS
- PORT
- PPS - PPS
- PRECISION - PRECISION
- PREV - PREV
...@@ -182,47 +222,63 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam ...@@ -182,47 +222,63 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
### Q ### Q
- QNODE
- QNODES
- QTIME - QTIME
- QUERIE - QUERIES
- QUERY - QUERY
- QUORUM
### R ### R
- RAISE - RAISE
- REM - RANGE
- RATIO
- READ
- REDISTRIBUTE
- RENAME
- REPLACE - REPLACE
- REPLICA - REPLICA
- RESET - RESET
- RESTRIC - RESTRICT
- RETENTIONS
- REVOKE
- ROLLUP
- ROW - ROW
- RP
- RSHIFT
### S ### S
- SCHEMALESS
- SCORES - SCORES
- SELECT - SELECT
- SEMI - SEMI
- SERVER_STATUS
- SERVER_VERSION
- SESSION - SESSION
- SET - SET
- SHOW - SHOW
- SLASH - SINGLE_STABLE
- SLIDING - SLIDING
- SLIMIT - SLIMIT
- SMALLIN - SMA
- SMALLINT
- SNODE
- SNODES
- SOFFSET - SOFFSET
- STable - SPLIT
- STableS - STABLE
- STABLES
- STAR - STAR
- STATE - STATE
- STATEMEN - STATE_WINDOW
- STATE_WI - STATEMENT
- STORAGE - STORAGE
- STREAM - STREAM
- STREAMS - STREAMS
- STRICT
- STRING - STRING
- SUBSCRIPTIONS
- SYNCDB - SYNCDB
- SYSINFO
### T ### T
...@@ -233,19 +289,24 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam ...@@ -233,19 +289,24 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- TBNAME - TBNAME
- TIMES - TIMES
- TIMESTAMP - TIMESTAMP
- TIMEZONE
- TINYINT - TINYINT
- TO
- TODAY
- TOPIC - TOPIC
- TOPICS - TOPICS
- TRANSACTION
- TRANSACTIONS
- TRIGGER - TRIGGER
- TRIM
- TSERIES - TSERIES
- TTL
### U ### U
- UMINUS
- UNION - UNION
- UNSIGNED - UNSIGNED
- UPDATE - UPDATE
- UPLUS
- USE - USE
- USER - USER
- USERS - USERS
...@@ -253,9 +314,13 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam ...@@ -253,9 +314,13 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
### V ### V
- VALUE
- VALUES - VALUES
- VARCHAR
- VARIABLE - VARIABLE
- VARIABLES - VARIABLES
- VERBOSE
- VGROUP
- VGROUPS - VGROUPS
- VIEW - VIEW
- VNODES - VNODES
...@@ -263,14 +328,25 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam ...@@ -263,14 +328,25 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
### W ### W
- WAL - WAL
- WAL_FSYNC_PERIOD
- WAL_LEVEL
- WAL_RETENTION_PERIOD
- WAL_RETENTION_SIZE
- WAL_ROLL_PERIOD
- WAL_SEGMENT_SIZE
- WATERMARK
- WHERE - WHERE
- WINDOW_CLOSE
- WITH
- WRITE
### \_ ### \_
- \_C0 - \_C0
- \_QSTART
- \_QSTOP
- \_QDURATION - \_QDURATION
- \_WSTART - \_QEND
- \_WSTOP - \_QSTART
- \_ROWTS
- \_WDURATION - \_WDURATION
- \_WEND
- \_WSTART
...
@@ -9,15 +9,54 @@ This document describes how to manage permissions in TDengine.
## Create a User

```sql
CREATE USER user_name PASS 'password' [SYSINFO {1|0}];
```

This statement creates a user account.

The maximum length of user_name is 23 bytes.

The maximum length of password is 128 bytes. The password can include letters, digits, and special characters excluding single quotation marks, double quotation marks, backticks, backslashes, and spaces. The password cannot be empty.

`SYSINFO` indicates whether the user is allowed to view system information. `1` means allowed, `0` means not allowed. System information includes server configuration, dnodes, vnodes, and storage details. The default value is `1`.
For example, we can create a user whose password is `123456` and who is able to view system information.
```sql
taos> create user test pass '123456' sysinfo 1;
Query OK, 0 of 0 rows affected (0.001254s)
```
## View Users
To show the users in the system, please use
```sql
SHOW USERS;
```
This is an example:
```sql
taos> show users;
name | super | enable | sysinfo | create_time |
================================================================================
test | 0 | 1 | 1 | 2022-08-29 15:10:27.315 |
root | 1 | 1 | 1 | 2022-08-29 15:03:34.710 |
Query OK, 2 rows in database (0.001657s)
```
Alternatively, you can get the user information by querying a built-in table, INFORMATION_SCHEMA.INS_USERS. For example:
```sql
taos> select * from information_schema.ins_users;
name | super | enable | sysinfo | create_time |
================================================================================
test | 0 | 1 | 1 | 2022-08-29 15:10:27.315 |
root | 1 | 1 | 1 | 2022-08-29 15:03:34.710 |
Query OK, 2 rows in database (0.001953s)
```
## Delete a User

```sql
@@ -40,6 +79,13 @@ alter_user_clause: {
- ENABLE: Specify whether the user is enabled or disabled. 1 indicates enabled and 0 indicates disabled.
- SYSINFO: Specify whether the user can query system information. 1 indicates that the user can query system information and 0 indicates that the user cannot query system information.
For example, you can use the command below to disable the user `test`:
```sql
taos> alter user test enable 0;
Query OK, 0 of 0 rows affected (0.001160s)
```
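Similarly, to stop the same (hypothetical) `test` user from viewing system information, the `SYSINFO` clause of `ALTER USER` can be used — a minimal sketch:

```sql
ALTER USER test SYSINFO 0;
```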
## Grant Permissions

@@ -62,7 +108,7 @@ priv_level : {
}
```

Grant permissions to a user. This feature is only available in the enterprise edition.

Permissions are granted on the database level. You can grant read or write permissions.
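For example (a sketch with a hypothetical database `mydb` and user `test_user`; the exact privilege levels are defined by the syntax above), read permission on one database can be granted with:

```sql
GRANT READ ON mydb.* TO test_user;
```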
@@ -92,4 +138,4 @@ priv_level : {
```

Revoke permissions from a user. This feature is only available in the enterprise edition.
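For example (again with the hypothetical `mydb` and `test_user`), the permission granted above can be revoked with:

```sql
REVOKE READ ON mydb.* FROM test_user;
```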
...
@@ -2,7 +2,7 @@
title: TDengine Monitoring
---
After TDengine is started, it automatically writes monitoring data, including CPU, memory, and disk usage, bandwidth, number of requests, disk I/O speed, and slow queries, into a designated database at a predefined interval through taosKeeper. Additionally, some important system operations (like logon, create user, and drop database) and the alerts and warnings generated in TDengine are written into the `log` database too. A system operator can view the data in the `log` database from the TDengine CLI or from a web console.

The collection of the monitoring information is enabled by default, but can be disabled by the parameter `monitor` in the configuration file.
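As an illustrative sketch (the parameter names below follow the TDengine 3.0 server configuration and should be verified against your version), the monitoring-related entries in `taos.cfg` might look like:

```text
# enable monitoring data collection (1: enabled, 0: disabled)
monitor          1
# FQDN of the host where taosKeeper receives the monitoring data (hypothetical host name)
monitorFqdn      localhost
# interval, in seconds, at which monitoring data is reported
monitorInterval  30
```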
@@ -10,7 +10,7 @@
TDinsight is a complete solution which uses the monitoring database `log` mentioned previously, and Grafana, to monitor a TDengine cluster.

Please refer to [TDinsight Grafana Dashboard](../../reference/tdinsight) to learn more details about using TDinsight to monitor TDengine.

A script `TDinsight.sh` is provided to deploy TDinsight automatically.

@@ -30,31 +30,14 @@ Prepare:
2. Grafana Alert Notification

You can use the command below to set up Grafana alert notification.

An existing Grafana Notification Channel can be specified with the parameter `-E`; the notifier uid of the channel can be obtained by `curl -u admin:admin localhost:3000/api/alert-notifications | jq`

```bash
sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -E <notifier uid>
```
Launch `TDinsight.sh` with the command above and restart Grafana, then open Dashboard `http://localhost:3000/d/tdinsight`.

For more use cases and restrictions please refer to [TDinsight](/reference/tdinsight/).
...
@@ -155,15 +155,15 @@ async fn demo(taos: &Taos, db: &str) -> Result<(), Error> {
    let inserted = taos.exec_many([
        // create super table
        "CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) \
         TAGS (`groupid` INT, `location` BINARY(24))",
        // create child table
        "CREATE TABLE `d0` USING `meters` TAGS(0, 'California.LosAngeles')",
        // insert into child table
        "INSERT INTO `d0` values(now - 10s, 10, 116, 0.32)",
        // insert with NULL values
        "INSERT INTO `d0` values(now - 8s, NULL, NULL, NULL)",
        // insert and automatically create table with tags if not exists
        "INSERT INTO `d1` USING `meters` TAGS(1, 'California.SanFrancisco') values(now - 9s, 10.1, 119, 0.33)",
        // insert many records in a single sql
        "INSERT INTO `d1` values (now-8s, 10, 120, 0.33) (now - 6s, 10, 119, 0.34) (now - 4s, 11.2, 118, 0.322)",
    ]).await?;

...
@@ -4,7 +4,7 @@ import PkgListV3 from "/components/PkgListV3";
<PkgListV3 type={1} sys="Linux" />

[All Downloads](../../releases/tdengine)

2. Unzip

...
@@ -4,7 +4,7 @@ import PkgListV3 from "/components/PkgListV3";
<PkgListV3 type={4} sys="Windows" />

[All Downloads](../../releases/tdengine)

2. Execute the installer, select the default value as prompted, and complete the installation
3. Installation path

...
@@ -39,14 +39,14 @@ Comparing the connector support for TDengine functional features as follows.
### Using the native interface (taosc)
| **Functional Features**   | **Java**      | **Python** | **Go**        | **C#**        | **Node.js**   | **Rust**      |
| ------------------------- | ------------- | ---------- | ------------- | ------------- | ------------- | ------------- |
| **Connection Management** | Support       | Support    | Support       | Support       | Support       | Support       |
| **Regular Query**         | Support       | Support    | Support       | Support       | Support       | Support       |
| **Parameter Binding**     | Support       | Support    | Support       | Support       | Support       | Support       |
| **Subscription (TMQ)**    | Support       | Support    | Support       | Support       | Support       | Support       |
| **Schemaless**            | Support       | Support    | Support       | Support       | Support       | Support       |
| **DataFrame**             | Not Supported | Support    | Not Supported | Not Supported | Not Supported | Not Supported |
:::info
The different database framework specifications for various programming languages do not mean that all C/C++ interfaces need a wrapper.

@@ -54,16 +54,15 @@
### Use HTTP Interfaces (REST or WebSocket)
| **Functional Features**               | **Java**      | **Python**    | **Go**        | **C#**        | **Node.js**   | **Rust**      |
| ------------------------------------- | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| **Connection Management**             | Support       | Support       | Support       | Support       | Support       | Support       |
| **Regular Query**                     | Support       | Support       | Support       | Support       | Support       | Support       |
| **Parameter Binding**                 | Not Supported | Not Supported | Not Supported | Support       | Not Supported | Support       |
| **Subscription (TMQ)**                | Not Supported | Not Supported | Not Supported | Not Supported | Not Supported | Support       |
| **Schemaless**                        | Not Supported | Not Supported | Not Supported | Not Supported | Not Supported | Not Supported |
| **Bulk Pulling (based on WebSocket)** | Support       | Support       | Not Supported | Support       | Not Supported | Support       |
| **DataFrame**                         | Not Supported | Support       | Not Supported | Not Supported | Not Supported | Not Supported |
:::warning

...
@@ -30,7 +30,7 @@ taosAdapter provides the following features.
### Install taosAdapter

If you use the TDengine server, you don't need additional steps to install taosAdapter: it is bundled with the TDengine server installation package, which you can download from [TDengine 3.0 released versions](../../releases/tdengine). If you need to deploy taosAdapter separately on a server other than the TDengine server, you should install the full TDengine server package on that server to get taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/3.0/BUILD.md) documentation.

### Start/Stop taosAdapter

...
@@ -211,7 +211,7 @@
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Master MNode",
      "transformations": [
        {
          "id": "filterByValue",
@@ -221,7 +221,7 @@
            "config": {
              "id": "regex",
              "options": {
                "value": "master"
              }
            },
            "fieldName": "role"
@@ -300,7 +300,7 @@
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Master MNode Create Time",
      "transformations": [
        {
          "id": "filterByValue",
@@ -310,7 +310,7 @@
            "config": {
              "id": "regex",
              "options": {
                "value": "master"
              }
            },
            "fieldName": "role"

...

@@ -153,7 +153,7 @@
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Master MNode",
      "transformations": [
        {
          "id": "filterByValue",
@@ -163,7 +163,7 @@
            "config": {
              "id": "regex",
              "options": {
                "value": "master"
              }
            },
            "fieldName": "role"
@@ -246,7 +246,7 @@
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Master MNode Create Time",
      "transformations": [
        {
          "id": "filterByValue",
@@ -256,7 +256,7 @@
            "config": {
              "id": "regex",
              "options": {
                "value": "master"
              }
            },
            "fieldName": "role"

...
@@ -5,15 +5,23 @@ sidebar_label: TDinsight
TDinsight is a solution for monitoring TDengine using the built-in native monitoring database and [Grafana].

After TDengine starts, it automatically writes many metrics in specific intervals into a designated database. The metrics may include the server's CPU, memory, hard disk space, network bandwidth, number of requests, disk read/write speed, slow queries, other information like important system operations (user login, database creation, database deletion, etc.), and error alarms. With [Grafana] and the [TDengine Data Source Plugin](https://github.com/taosdata/grafanaplugin/releases), TDinsight can visualize cluster status, node information, insertion and query requests, resource usage, vnode, dnode, and mnode status, exception alerts, and many other metrics. This is very convenient for developers who want to monitor TDengine cluster status in real time. This article will guide users to install the Grafana server, automatically install the TDengine data source plugin, and deploy the TDinsight visualization panel using the `TDinsight.sh` installation script.
## System Requirements

To deploy TDinsight, we need:

- a single-node TDengine server or a multi-node TDengine cluster, and a [Grafana] server. This dashboard requires TDengine 3.0.1.0 or above, with the monitoring feature enabled. For detailed configuration, please refer to [TDengine monitoring configuration](../config/#monitoring-parameters).
- taosAdapter, installed and running. Please refer to [taosAdapter](../taosadapter).
- taosKeeper, installed and running. Please refer to [taosKeeper](../taoskeeper).

Please record the following information:

- The endpoint of the taosAdapter REST service, for example `http://tdengine.local:6041`
- Authentication credentials for taosAdapter, e.g. user name and password
- The database name used by taosKeeper to store monitoring data
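As a quick sanity check (an illustrative command assuming the example endpoint above and the default `root`/`taosdata` credentials), you can verify that the taosAdapter REST service is reachable:

```bash
curl -u root:taosdata -d "show databases" http://tdengine.local:6041/rest/sql
```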
## Installing Grafana

We recommend using the latest [Grafana] version 8 or 9. You can install Grafana on any [supported operating system](https://grafana.com/docs/grafana/latest/installation/requirements/#supported-operating-systems) by following the [official Grafana installation instructions](https://grafana.com/docs/grafana/latest/installation/).

### Installing Grafana on Debian or Ubuntu
@@ -71,7 +79,7 @@ chmod +x TDinsight.sh
./TDinsight.sh
```

This script automatically downloads the latest [Grafana TDengine data source plugin](https://github.com/taosdata/grafanaplugin/releases/latest) and the [TDinsight dashboard](https://github.com/taosdata/grafanaplugin/blob/master/dashboards/TDinsightV3.json), and writes the configurable command-line options into the [Grafana Provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) configuration file to automate deployment and updates.

Assume you use TDengine and Grafana's default services on the same host. Run `./TDinsight.sh` and open the Grafana browser window to see the TDinsight dashboard.
@@ -106,18 +114,6 @@ Install and configure TDinsight dashboard in Grafana on Ubuntu 18.04/20.04 system.
-E, --external-notifier <string>   Apply external notifier uid to TDinsight dashboard.
```

Most command-line options can also be supplied as environment variables, with the same effect.

@@ -136,17 +132,6 @@
| -t | --tdinsight-title | TDINSIGHT_DASHBOARD_TITLE | TDinsight dashboard title. [Default: TDinsight] |
| -e | --tdinsight-editable | TDINSIGHT_DASHBOARD_EDITABLE | If the dashboard is configured to be editable. [Default: false] |
| -E | --external-notifier | EXTERNAL_NOTIFIER | Apply the external notifier uid to the TDinsight dashboard. |
Suppose you start a TDengine database on host `tdengine` with HTTP API port `6041`, user `root1`, and password `pass5ord`. Execute the script.

@@ -166,24 +151,10 @@ Use the `uid` value obtained above as `-E` input.

```bash
sudo ./TDinsight.sh -a http://tdengine:6041 -u root1 -p pass5ord -E existing-notifier
```
If you want to monitor multiple TDengine clusters, you need to set up a separate TDinsight dashboard for each of them. Setting up a non-default TDinsight requires some changes: the `-n`, `-i`, and `-t` options need to be changed to non-default names.
```bash
sudo ./TDinsight.sh -n TDengine-Env1 -a http://another:6041 -u root -p taosdata -i tdinsight-env1 -t 'TDinsight Env1'
```
Please note that the configuration data source, notification channel, and dashboard are not changeable on the front end. You should update the configuration again via this script or manually change the configuration file in the `/etc/grafana/provisioning` directory (this is the default directory for Grafana, use the `-P` option to change it as needed).
@@ -249,21 +220,23 @@ Save and test. It will report 'TDengine Data source is working' under normal circumstances.
### Importing dashboards

On the page for configuring the data source, click the **Dashboards** tab.
![TDengine Database TDinsight Import Dashboard and Configuration](./assets/import_dashboard.webp)

Choose `TDengine for 3.x` and click `import`.

After the import is done, the `TDinsight for 3.x` dashboard is available on the `search dashboards by name` page.

![TDengine Database TDinsight Import via grafana.com](./assets/import_dashboard_view.webp)

In the `TDinsight for 3.x` dashboard, choose the database used by taosKeeper to store monitoring data, and you can see the monitoring results.

![TDengine Database TDinsight select database](./assets/select_dashboard_db.webp)
## TDinsight dashboard details

The TDinsight dashboard is designed to provide the usage and status of TDengine-related resources, e.g. dnodes, mnodes, vnodes, and databases.

Details of the metrics are as follows.
@@ -285,7 +258,6 @@
- **Measuring Points Used**: The number of measuring points used to enable the alert rule (no data available in the community version, healthy by default).
- **Grants Expire Time**: the expiration time of the enterprise version of the enabled alert rule (no data available for the community version, healthy by default).
- **Error Rate**: Aggregate error rate (average number of errors per second) for alert-enabled clusters.
### DNodes Status

@@ -294,7 +266,6 @@
- **DNodes Status**: simple table view of `show dnodes`.
- **DNodes Lifetime**: the time elapsed since the dnode was created.
- **DNodes Number**: the number of DNodes changes.
### MNode Overview

@@ -309,7 +280,6 @@
1. **Requests Rate (Inserts per Second)**: average number of inserts per second.
2. **Requests (Selects)**: number of query requests and change rate (count of second).
### Database

@@ -319,9 +289,8 @@
1. **STables**: number of super tables.
2. **Total Tables**: number of all tables.
3. **Tables**: number of normal tables.
4. **Table number for each vgroup**: number of tables per vgroup.
### DNode Resource Usage

@@ -356,12 +325,11 @@ Currently, only the number of logins per minute is reported.
Supports monitoring taosAdapter request statistics and status details, including:

1. **http_request_inflight**: number of in-flight requests.
2. **http_request_total**: number of total requests.
3. **http_request_fail**: number of failed requests.
4. **CPU Used**: CPU usage of taosAdapter.
5. **Memory Used**: memory usage of taosAdapter.
## Upgrade

@@ -403,13 +371,6 @@ services:
      TDENGINE_API: ${TDENGINE_API}
      TDENGINE_USER: ${TDENGINE_USER}
      TDENGINE_PASS: ${TDENGINE_PASS}
    ports:
      - 3000:3000
    volumes:

...
---
sidebar_label: JupyterLab
title: Connect JupyterLab to TDengine
---
JupyterLab is the next generation of the ubiquitous Jupyter Notebook. In this note we show you how to install the TDengine Python connector to connect to TDengine in JupyterLab. You can then insert data and perform queries against the TDengine instance within JupyterLab.
## Install JupyterLab
Installing JupyterLab is very easy. Installation instructions can be found at:
https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html.
If you don't feel like clicking on the link, here are the instructions.
Jupyter's preferred Python package manager is pip, so we show the instructions for pip.
You can also use **conda** or **pipenv** if you are managing Python environments.
````
pip install jupyterlab
````
For **conda** you can run:
````
conda install -c conda-forge jupyterlab
````
For **pipenv** you can run:
````
pipenv install jupyterlab
pipenv shell
````
## Run JupyterLab
You can start JupyterLab from the command line by running:
````
jupyter lab
````
This will automatically launch your default browser and connect to your JupyterLab instance, usually on port 8888.
## Install the TDengine Python connector
You can now install the TDengine Python connector as follows.
Start a new Python kernel in JupyterLab.
If using **conda** run the following:
````
# Install a conda package in the current Jupyter kernel
import sys
!conda install --yes --prefix {sys.prefix} taospy
````
If using **pip** run the following:
````
# Install a pip package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install taospy
````
## Connect to TDengine
You can find detailed examples of using the Python connector in the TDengine documentation.
Once you have installed the TDengine Python connector in your JupyterLab kernel, the process of connecting to TDengine is the same as that you would use if you weren't using JupyterLab.
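For instance, a minimal connection sketch (the host and credentials below are the defaults and may differ in your deployment) looks like this:

````
import taos

# connect to a local TDengine instance; adjust host/user/password as needed
conn = taos.connect(host="localhost", user="root", password="taosdata")
print(conn.server_info)  # print the server version to confirm the connection works
conn.close()
````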
Each TDengine instance has a database called "log" which has monitoring information about the TDengine instance.
In the "log" database there is a [supertable](https://docs.tdengine.com/taos-sql/stable/) called "disks_info".
The structure of this table is as follows:
````
taos> desc disks_info;
Field | Type | Length | Note |
=================================================================================
ts | TIMESTAMP | 8 | |
datadir_l0_used | FLOAT | 4 | |
datadir_l0_total | FLOAT | 4 | |
datadir_l1_used | FLOAT | 4 | |
datadir_l1_total | FLOAT | 4 | |
datadir_l2_used | FLOAT | 4 | |
datadir_l2_total | FLOAT | 4 | |
dnode_id | INT | 4 | TAG |
dnode_ep | BINARY | 134 | TAG |
Query OK, 9 row(s) in set (0.000238s)
````
The code below is used to fetch data from this table into a pandas DataFrame.
````
import taos
import pandas

def sql_query(conn):
    # run the query and load the result set into a pandas DataFrame
    df: pandas.DataFrame = pandas.read_sql("select * from log.disks_info limit 500", conn)
    return df

conn = taos.connect()  # connect with the default host, user, and password
result = sql_query(conn)
print(result)
````
TDengine has connectors for various languages including Node.js, Go, and PHP, and there are Jupyter kernels for these languages, which can be found [here](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels).
---
sidebar_label: Releases
title: Released Versions
---
import Release from "/components/ReleaseV3";
<Release versionPrefix="3.0" />
---
sidebar_label: TDengine
title: TDengine
description: TDengine release history, Release Notes and download links.
---
import Release from "/components/ReleaseV3";
## 3.0.1.1
<Release type="tdengine" version="3.0.1.1" />
## 3.0.1.0
<Release type="tdengine" version="3.0.1.0" />
---
sidebar_label: taosTools
title: taosTools
description: taosTools release history, Release Notes, download links.
---
import Release from "/components/ReleaseV3";
## 2.2.0
<Release type="tools" version="2.2.0" />
## 2.1.3
<Release type="tools" version="2.1.3" />
label: Releases
...
@@ -38,12 +38,12 @@ public class SubscribeDemo {
            statement.executeUpdate("create database " + DB_NAME);
            statement.executeUpdate("use " + DB_NAME);
            statement.executeUpdate(
                    "CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT) TAGS (`groupid` INT, `location` BINARY(24))");
            statement.executeUpdate("CREATE TABLE `d0` USING `meters` TAGS(0, 'California.LosAngeles')");
            statement.executeUpdate("INSERT INTO `d0` values(now - 10s, 0.32, 116)");
            statement.executeUpdate("INSERT INTO `d0` values(now - 8s, NULL, NULL)");
            statement.executeUpdate(
                    "INSERT INTO `d1` USING `meters` TAGS(1, 'California.SanFrancisco') values(now - 9s, 10.1, 119)");
            statement.executeUpdate(
                    "INSERT INTO `d1` values (now-8s, 10, 120) (now - 6s, 10, 119) (now - 4s, 11.2, 118)");
            // create topic
@@ -75,4 +75,4 @@ public class SubscribeDemo {
            }
            timer.cancel();
        }
}
...
@@ -16,7 +16,7 @@ class MockDataSource implements Iterator {
    private int currentTbId = -1;

    // mock values
    String[] location = {"California.LosAngeles", "California.SanDiego", "California.SanJose", "California.Campbell", "California.SanFrancisco"};
    float[] current = {8.8f, 10.7f, 9.9f, 8.9f, 9.4f};
    int[] voltage = {119, 116, 111, 113, 118};
    float[] phase = {0.32f, 0.34f, 0.33f, 0.329f, 0.141f};
@@ -50,4 +50,4 @@ class MockDataSource implements Iterator {
        return sb.toString();
    }
}
...
@@ -3,11 +3,11 @@ import time
class MockDataSource:
    samples = [
        "8.8,119,0.32,California.LosAngeles,0",
        "10.7,116,0.34,California.SanDiego,1",
        "9.9,111,0.33,California.SanJose,2",
        "8.9,113,0.329,California.Campbell,3",
        "9.4,118,0.141,California.SanFrancisco,4"
    ]

    def __init__(self, tb_name_prefix, table_count):

...
...
@@ -12,7 +12,7 @@ async fn main() -> anyhow::Result<()> {
    // bind table name and tags
    stmt.set_tbname_tags(
        "d1001",
        &[Value::VarChar("California.SanFrancisco".into()), Value::Int(2)],
    )?;
    // bind values.
    let values = vec![

...
...
@@ -19,13 +19,13 @@ struct Record {
async fn prepare(taos: Taos) -> anyhow::Result<()> {
    let inserted = taos.exec_many([
        // create child table
        "CREATE TABLE `d0` USING `meters` TAGS(0, 'California.LosAngeles')",
        // insert into child table
        "INSERT INTO `d0` values(now - 10s, 10, 116, 0.32)",
        // insert with NULL values
        "INSERT INTO `d0` values(now - 8s, NULL, NULL, NULL)",
        // insert and automatically create table with tags if not exists
        "INSERT INTO `d1` USING `meters` TAGS(1, 'California.SanFrancisco') values(now - 9s, 10.1, 119, 0.33)",
        // insert many records in a single sql
        "INSERT INTO `d1` values (now-8s, 10, 120, 0.33) (now - 6s, 10, 119, 0.34) (now - 4s, 11.2, 118, 0.322)",
    ]).await?;
@@ -48,7 +48,7 @@ async fn main() -> anyhow::Result<()> {
        format!("CREATE DATABASE `{db}`"),
        format!("USE `{db}`"),
        // create super table
        format!("CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) TAGS (`groupid` INT, `location` BINARY(24))"),
        // create topic for subscription
        format!("CREATE TOPIC tmq_meters with META AS DATABASE {db}")
    ])

...
...
@@ -14,14 +14,14 @@ async fn main() -> anyhow::Result<()> {
    ]).await?;

    let inserted = taos.exec("INSERT INTO
    power.d1001 USING power.meters TAGS('California.SanFrancisco', 2)
        VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000)
        ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
    power.d1002 USING power.meters TAGS('California.SanFrancisco', 3)
        VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
    power.d1003 USING power.meters TAGS('California.LosAngeles', 2)
        VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
    power.d1004 USING power.meters TAGS('California.LosAngeles', 3)
        VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)").await?;

    assert_eq!(inserted, 8);

...
...
@@ -52,7 +52,7 @@ taos>
$ taosBenchmark
```

This command automatically creates a supertable `meters` under the database `test`. The supertable has 10,000 subtables named `d0` to `d9999`, and each table has 10,000 records. Each record has the four fields `ts`, `current`, `voltage`, and `phase`, with timestamps ranging from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table carries the tags `location` and `groupId`: groupId is set to 1 through 10, and location is set to `California.Campbell`, `California.Cupertino`, `California.LosAngeles`, `California.MountainView`, `California.PaloAlto`, `California.SanDiego`, `California.SanFrancisco`, `California.SanJose`, `California.SantaClara`, or `California.Sunnyvale`.

This command finishes inserting 100 million records quickly. The exact time depends on the hardware; even on an ordinary PC server it often takes only a dozen or so seconds.

@@ -74,10 +74,10 @@ SELECT COUNT(*) FROM test.meters;
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
```

Query the total number of records where location = "California.SanFrancisco":

```sql
SELECT COUNT(*) FROM test.meters WHERE location = "California.SanFrancisco";
```

Query the average, maximum, and minimum values of all records where groupId = 10:

...
...
@@ -73,6 +73,7 @@
</TabItem>
<TabItem value="apt-get" label="apt-get">

You can use the `apt-get` tool to install TDengine from the official repository.

**Configure the package repository**

@@ -223,7 +224,7 @@ Query OK, 2 row(s) in set (0.003128s)
$ taosBenchmark
```

This command automatically creates a supertable `meters` under the database `test`. The supertable has 10,000 subtables named `d0` to `d9999`, and each table has 10,000 records. Each record has the four fields `ts`, `current`, `voltage`, and `phase`, with timestamps ranging from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table carries the tags `location` and `groupId`: groupId is set to 1 through 10, and location is set to `California.Campbell`, `California.Cupertino`, `California.LosAngeles`, `California.MountainView`, `California.PaloAlto`, `California.SanDiego`, `California.SanFrancisco`, `California.SanJose`, `California.SantaClara`, or `California.Sunnyvale`.

This command finishes inserting 100 million records quickly. The exact time depends on the hardware; even on an ordinary PC server it often takes only a dozen or so seconds.

@@ -245,10 +246,10 @@ SELECT COUNT(*) FROM test.meters;
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
```

Query the total number of records where location = "California.SanFrancisco":

```sql
SELECT COUNT(*) FROM test.meters WHERE location = "California.SanFrancisco";
```

Query the average, maximum, and minimum values of all records where groupId = 10:

...
...@@ -218,7 +218,7 @@ void Close() ...@@ -218,7 +218,7 @@ void Close()
```sql ```sql
DROP DATABASE IF EXISTS tmqdb; DROP DATABASE IF EXISTS tmqdb;
CREATE DATABASE tmqdb; CREATE DATABASE tmqdb;
CREATE TABLE tmqdb.stb (ts TIMESTAMP, c1 INT, c2 FLOAT, c3 VARCHAR(16) TAGS(t1 INT, t3 VARCHAR(16)); CREATE TABLE tmqdb.stb (ts TIMESTAMP, c1 INT, c2 FLOAT, c3 VARCHAR(16)) TAGS(t1 INT, t3 VARCHAR(16));
CREATE TABLE tmqdb.ctb0 USING tmqdb.stb TAGS(0, "subtable0"); CREATE TABLE tmqdb.ctb0 USING tmqdb.stb TAGS(0, "subtable0");
CREATE TABLE tmqdb.ctb1 USING tmqdb.stb TAGS(1, "subtable1"); CREATE TABLE tmqdb.ctb1 USING tmqdb.stb TAGS(1, "subtable1");
INSERT INTO tmqdb.ctb0 VALUES(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00'); INSERT INTO tmqdb.ctb0 VALUES(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00');
......
...@@ -4,7 +4,7 @@ sidebar_label: REST API ...@@ -4,7 +4,7 @@ sidebar_label: REST API
description: 详细介绍 TDengine 提供的 RESTful API. description: 详细介绍 TDengine 提供的 RESTful API.
--- ---
为支持各种不同类型平台的开发,TDengine 提供符合 REST 设计标准的 API,即 REST API。为最大程度降低学习成本,不同于其他数据库 REST API 的设计方法,TDengine 直接通过 HTTP POST 请求 BODY 中包含的 SQL 语句来操作数据库,仅需要一个 URL。REST 连接器的使用参见 [视频教程](https://www.taosdata.com/blog/2020/11/11/1965.html)。 为支持各种不同类型平台的开发,TDengine 提供符合 RESTful 设计标准的 API,即 REST API。为最大程度降低学习成本,不同于其他数据库 REST API 的设计方法,TDengine 直接通过 HTTP POST 请求 BODY 中包含的 SQL 语句来操作数据库,仅需要一个 URL。REST API 的使用参见 [视频教程](https://www.taosdata.com/blog/2020/11/11/1965.html)。
:::note :::note
与原生连接器的一个区别是,RESTful 接口是无状态的,因此 `USE db_name` 指令没有效果,所有对表名、超级表名的引用都需要指定数据库名前缀。支持在 RESTful URL 中指定 db_name,这时如果 SQL 语句中没有指定数据库名前缀的话,会使用 URL 中指定的这个 db_name。 与原生连接器的一个区别是,RESTful 接口是无状态的,因此 `USE db_name` 指令没有效果,所有对表名、超级表名的引用都需要指定数据库名前缀。支持在 RESTful URL 中指定 db_name,这时如果 SQL 语句中没有指定数据库名前缀的话,会使用 URL 中指定的这个 db_name。
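例如,通过 RESTful 接口执行查询时需要带上数据库名前缀(示意,假设存在数据库 test 及其中的超级表 meters):
```sql
SELECT COUNT(*) FROM test.meters;
```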
...@@ -18,7 +18,7 @@ RESTful 接口不依赖于任何 TDengine 的库,因此客户端不需要安 ...@@ -18,7 +18,7 @@ RESTful 接口不依赖于任何 TDengine 的库,因此客户端不需要安
在已经安装 TDengine 服务器端的情况下,可以按照如下方式进行验证。 在已经安装 TDengine 服务器端的情况下,可以按照如下方式进行验证。
下面以 Ubuntu 环境中使用 curl 工具(确认已经安装)来验证 RESTful 接口的正常,验证前请确认 taosAdapter 服务已开启,在 Linux 系统上此服务默认由 systemd 管理,使用命令 `systemctl start taosadapter` 启动。 下面以 Ubuntu 环境中使用 `curl` 工具(请确认已经安装)来验证 RESTful 接口是否工作正常,验证前请确认 taosAdapter 服务已开启,在 Linux 系统上此服务默认由 systemd 管理,使用命令 `systemctl start taosadapter` 启动。
下面示例是列出所有的数据库,请把 h1.taosdata.com 和 6041(缺省值)替换为实际运行的 TDengine 服务 FQDN 和端口号: 下面示例是列出所有的数据库,请把 h1.taosdata.com 和 6041(缺省值)替换为实际运行的 TDengine 服务 FQDN 和端口号:
......
---
title: Schemaless API
sidebar_label: Schemaless API
description: 详细介绍 TDengine 提供的 Schemaless API.
---
TDengine 提供了兼容 InfluxDB (v1) 和 OpenTSDB 行协议的 Schemaless API。支持 InfluxDB(v1) 或 OpenTSDB 行协议写入数据的第三方软件无需修改代码,只要修改配置的 EndPoint URL 就可以直接把数据写入 TDengine 数据库。
### 兼容 InfluxDB 行协议写入的方法
您可以配置任何支持使用 InfluxDB(v1) 行协议的应用访问地址 `http://<fqdn>:6041/<APIEndPoint>` 来写入 InfluxDB 兼容格式的数据到 TDengine。EndPoint 如下:
```text
/influxdb/v1/write?<param1=value1>&<param2=value2>...
```
支持的 InfluxDB 查询参数如下:
- `db` 指定 TDengine 使用的数据库名
- `precision` TDengine 使用的时间精度
- `u` TDengine 用户名
- `p` TDengine 密码
注意: 目前不支持 InfluxDB 的 token 验证方式,仅支持 Basic 验证和查询参数验证。
参考链接:[InfluxDB v1 写接口](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
### 兼容 OpenTSDB 行协议写入的方法
您可以配置任何支持 OpenTSDB 行协议的应用访问地址 `http://<fqdn>:6041/<APIEndPoint>` 来写入 OpenTSDB 兼容格式的数据到 TDengine。EndPoint 如下:
```text
/opentsdb/v1/put/json/<db>
/opentsdb/v1/put/telnet/<db>
```
参考链接:
- [OpenTSDB JSON](http://opentsdb.net/docs/build/html/api_http/put.html)
- [OpenTSDB Telnet](http://opentsdb.net/docs/build/html/api_telnet/put.html)
...@@ -155,15 +155,15 @@ async fn demo(taos: &Taos, db: &str) -> Result<(), Error> { ...@@ -155,15 +155,15 @@ async fn demo(taos: &Taos, db: &str) -> Result<(), Error> {
let inserted = taos.exec_many([ let inserted = taos.exec_many([
// create super table // create super table
"CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) \ "CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) \
TAGS (`groupid` INT, `location` BINARY(16))", TAGS (`groupid` INT, `location` BINARY(24))",
// create child table // create child table
"CREATE TABLE `d0` USING `meters` TAGS(0, 'Los Angles')", "CREATE TABLE `d0` USING `meters` TAGS(0, 'California.LosAngles')",
// insert into child table // insert into child table
"INSERT INTO `d0` values(now - 10s, 10, 116, 0.32)", "INSERT INTO `d0` values(now - 10s, 10, 116, 0.32)",
// insert with NULL values // insert with NULL values
"INSERT INTO `d0` values(now - 8s, NULL, NULL, NULL)", "INSERT INTO `d0` values(now - 8s, NULL, NULL, NULL)",
// insert and automatically create table with tags if not exists // insert and automatically create table with tags if not exists
"INSERT INTO `d1` USING `meters` TAGS(1, 'San Francisco') values(now - 9s, 10.1, 119, 0.33)", "INSERT INTO `d1` USING `meters` TAGS(1, 'California.SanFrancisco') values(now - 9s, 10.1, 119, 0.33)",
// insert many records in a single sql // insert many records in a single sql
"INSERT INTO `d1` values (now-8s, 10, 120, 0.33) (now - 6s, 10, 119, 0.34) (now - 4s, 11.2, 118, 0.322)", "INSERT INTO `d1` values (now-8s, 10, 120, 0.33) (now - 6s, 10, 119, 0.34) (now - 4s, 11.2, 118, 0.322)",
]).await?; ]).await?;
......
...@@ -41,14 +41,14 @@ TDengine 版本更新往往会增加新的功能特性,列表中的连接器 ...@@ -41,14 +41,14 @@ TDengine 版本更新往往会增加新的功能特性,列表中的连接器
### 使用原生接口(taosc) ### 使用原生接口(taosc)
| **功能特性** | **Java** | **Python** | **Go** | **C#** | **Node.js** | **Rust** | | **功能特性** | **Java** | **Python** | **Go** | **C#** | **Node.js** | **Rust** |
| -------------- | -------- | ---------- | ------ | ------ | ----------- | -------- | | ------------------- | -------- | ---------- | ------ | ------ | ----------- | -------- |
| **连接管理** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 | | **连接管理** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
| **普通查询** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 | | **普通查询** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
| **参数绑定** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 | | **参数绑定** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
| ** TMQ ** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 | | **数据订阅(TMQ)** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
| **Schemaless** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 | | **Schemaless** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
| **DataFrame** | 不支持 | 支持 | 不支持 | 不支持 | 不支持 | 不支持 | | **DataFrame** | 不支持 | 支持 | 不支持 | 不支持 | 不支持 | 不支持 |
:::info :::info
由于不同编程语言数据库框架规范不同,并不意味着所有 C/C++ 接口都需要对应封装支持。 由于不同编程语言数据库框架规范不同,并不意味着所有 C/C++ 接口都需要对应封装支持。
...@@ -56,16 +56,15 @@ TDengine 版本更新往往会增加新的功能特性,列表中的连接器 ...@@ -56,16 +56,15 @@ TDengine 版本更新往往会增加新的功能特性,列表中的连接器
### 使用 http (REST 或 WebSocket) 接口 ### 使用 http (REST 或 WebSocket) 接口
| **功能特性** | **Java** | **Python** | **Go** | **C#(暂不支持)** | **Node.js** | **Rust** | | **功能特性** | **Java** | **Python** | **Go** | **C# ** | **Node.js** | **Rust** |
| ------------------------------ | -------- | ---------- | -------- | ------------------ | ----------- | -------- | | ------------------------------ | -------- | ---------- | -------- | -------- | ----------- | -------- |
| **连接管理** | 支持 | 支持 | 支持 | N/A | 支持 | 支持 | | **连接管理** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
| **普通查询** | 支持 | 支持 | 支持 | N/A | 支持 | 支持 | | **普通查询** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
| **连续查询** | 支持 | 支持 | 支持 | N/A | 支持 | 支持 | | **参数绑定** | 暂不支持 | 暂不支持 | 暂不支持 | 支持 | 暂不支持 | 支持 |
| **参数绑定** | 不支持 | 暂不支持 | 暂不支持 | N/A | 不支持 | 支持 | | **数据订阅(TMQ)** | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | 支持 |
| ** TMQ ** | 不支持 | 暂不支持 | 暂不支持 | N/A | 不支持 | 支持 | | **Schemaless** | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 |
| **Schemaless** | 暂不支持 | 暂不支持 | 暂不支持 | N/A | 不支持 | 暂不支持 | | **批量拉取(基于 WebSocket)** | 支持 | 支持 | 暂不支持 | 支持 | 暂不支持 | 支持 |
| **批量拉取(基于 WebSocket)** | 支持 | 支持 | 暂不支持 | N/A | 不支持 | 支持 | | **DataFrame** | 不支持 | 支持 | 不支持 | 不支持 | 不支持 | 不支持 |
| **DataFrame** | 不支持 | 支持 | 不支持 | N/A | 不支持 | 不支持 |
:::warning :::warning
......
...@@ -168,7 +168,7 @@ Query OK, 8 row(s) in set (0.001154s) ...@@ -168,7 +168,7 @@ Query OK, 8 row(s) in set (0.001154s)
## 删除数据节点 ## 删除数据节点
先停止要删除的数据节点的 taosd 进程,然后启动 CLI 程序 taos,执行: 启动 CLI 程序 taos,执行:
```sql ```sql
DROP DNODE "fqdn:port"; DROP DNODE "fqdn:port";
......
...@@ -17,6 +17,8 @@ INSERT INTO ...@@ -17,6 +17,8 @@ INSERT INTO
[(field1_name, ...)] [(field1_name, ...)]
VALUES (field1_value, ...) [(field1_value2, ...) ...] | FILE csv_file_path VALUES (field1_value, ...) [(field1_value2, ...) ...] | FILE csv_file_path
...]; ...];
INSERT INTO tb_name [(field1_name, ...)] subquery
``` ```
**关于时间戳** **关于时间戳**
...@@ -38,7 +40,7 @@ INSERT INTO ...@@ -38,7 +40,7 @@ INSERT INTO
4. FILE 语法表示数据来自于 CSV 文件(英文逗号分隔、英文单引号括住每个值),CSV 文件无需表头。 4. FILE 语法表示数据来自于 CSV 文件(英文逗号分隔、英文单引号括住每个值),CSV 文件无需表头。
5. 无论使用哪种语法,均可以在一条 INSERT 语句中同时向多个表插入数据。 5. `INSERT ... VALUES` 语句和 `INSERT ... FILE` 语句均可以在一条 INSERT 语句中同时向多个表插入数据。
6. INSERT 语句是完整解析后再执行的,对如下语句,不会再出现数据错误但建表成功的情况: 6. INSERT 语句是完整解析后再执行的,对如下语句,不会再出现数据错误但建表成功的情况:
...@@ -48,6 +50,8 @@ INSERT INTO ...@@ -48,6 +50,8 @@ INSERT INTO
7. 对于向多个子表插入数据的情况,依然会有部分数据写入失败,部分数据写入成功的情况。这是因为多个子表可能分布在不同的 VNODE 上,客户端将 INSERT 语句完整解析后,将数据发往各个涉及的 VNODE 上,每个 VNODE 独立进行写入操作。如果某个 VNODE 因为某些原因(比如网络问题或磁盘故障)导致写入失败,并不会影响其他 VNODE 节点的写入。 7. 对于向多个子表插入数据的情况,依然会有部分数据写入失败,部分数据写入成功的情况。这是因为多个子表可能分布在不同的 VNODE 上,客户端将 INSERT 语句完整解析后,将数据发往各个涉及的 VNODE 上,每个 VNODE 独立进行写入操作。如果某个 VNODE 因为某些原因(比如网络问题或磁盘故障)导致写入失败,并不会影响其他 VNODE 节点的写入。
8. 可以使用 `INSERT ... subquery` 语句将 TDengine 中的数据插入到指定表中。subquery 可以是任意的查询语句。此语法只能用于子表和普通表,且不支持自动建表。
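下面给出一个示意性的例子(假设子表 d1001 与 d1002 均已创建且结构相同,列名仅作演示):
```sql
INSERT INTO d1001 (ts, current, voltage, phase) SELECT ts, current, voltage, phase FROM d1002;
```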
## 插入一条记录 ## 插入一条记录
指定已经创建好的数据子表的表名,并通过 VALUES 关键字提供一行或多行数据,即可向数据库写入这些数据。例如,执行如下语句可以写入一行记录: 指定已经创建好的数据子表的表名,并通过 VALUES 关键字提供一行或多行数据,即可向数据库写入这些数据。例如,执行如下语句可以写入一行记录:
......
...@@ -104,7 +104,7 @@ SELECT location, groupid, current FROM d1001 LIMIT 2; ...@@ -104,7 +104,7 @@ SELECT location, groupid, current FROM d1001 LIMIT 2;
### 结果去重 ### 结果去重
`DISINTCT` 关键字可以对结果集中的一列或多列进行去重,去除的列既可以是标签列也可以是数据列。 `DISTINCT` 关键字可以对结果集中的一列或多列进行去重,去除的列既可以是标签列也可以是数据列。
对标签列去重: 对标签列去重:
...@@ -137,6 +137,8 @@ taos> SELECT ts, ts AS primary_key_ts FROM d1001; ...@@ -137,6 +137,8 @@ taos> SELECT ts, ts AS primary_key_ts FROM d1001;
### 伪列 ### 伪列
**伪列**:伪列的行为表现与普通数据列相似,但其并不实际存储在表中。可以查询伪列,但不能对其进行插入、更新和删除操作。伪列有点像没有参数的函数。下面介绍可用的伪列:
**TBNAME** **TBNAME**
`TBNAME` 可以视为超级表中一个特殊的标签,代表子表的表名。 `TBNAME` 可以视为超级表中一个特殊的标签,代表子表的表名。
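例如,下面的查询返回超级表下所有子表的表名及其 location 标签值(示意,沿用本文的 meters 超级表):
```sql
SELECT TBNAME, location FROM meters;
```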
...@@ -356,7 +358,7 @@ SELECT ... FROM (SELECT ... FROM ...) ...; ...@@ -356,7 +358,7 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
- 与非嵌套的查询语句相比,外层查询所能支持的功能特性存在如下限制: - 与非嵌套的查询语句相比,外层查询所能支持的功能特性存在如下限制:
- 计算函数部分: - 计算函数部分:
- 如果内层查询的结果数据未提供时间戳,那么计算过程隐式依赖时间戳的函数在外层会无法正常工作。例如:INTERP, DERIVATIVE, IRATE, LAST_ROW, FIRST, LAST, TWA, STATEDURATION, TAIL, UNIQUE。 - 如果内层查询的结果数据未提供时间戳,那么计算过程隐式依赖时间戳的函数在外层会无法正常工作。例如:INTERP, DERIVATIVE, IRATE, LAST_ROW, FIRST, LAST, TWA, STATEDURATION, TAIL, UNIQUE。
- 如果内层查询的结果数据不是有效的时间序列,那么计算过程依赖数据为时间序列的函数在外层会无法正常工作。例如:LEASTSQUARES, ELAPSED, INTERP, DERIVATIVE, IRATE, TWA, DIFF, STATECOUNT, STATEDURATION, CSUM, MAVG, TAIL, UNIQUE。 - 如果内层查询的结果数据不是按时间戳有序,那么计算过程依赖数据按时间有序的函数在外层会无法正常工作。例如:LEASTSQUARES, ELAPSED, INTERP, DERIVATIVE, IRATE, TWA, DIFF, STATECOUNT, STATEDURATION, CSUM, MAVG, TAIL, UNIQUE。
- 计算过程需要两遍扫描的函数,在外层查询中无法正常工作,此类函数包括:PERCENTILE。 - 计算过程需要两遍扫描的函数,在外层查询中无法正常工作,此类函数包括:PERCENTILE。
::: :::
......
...@@ -127,7 +127,7 @@ SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause] ...@@ -127,7 +127,7 @@ SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause]
SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause]; SELECT FLOOR(field_name) FROM { tb_name | stb_name } [WHERE clause];
``` ```
**功能说明**:获得指定字段的向下取整数的结果。 **功能说明**:获得指定字段的向下取整数的结果。
其他使用说明参见 CEIL 函数描述。 其他使用说明参见 CEIL 函数描述。
#### LOG #### LOG
...@@ -174,7 +174,7 @@ SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause] ...@@ -174,7 +174,7 @@ SELECT POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause]
SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause]; SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause];
``` ```
**功能说明**:获得指定字段的四舍五入的结果。 **功能说明**:获得指定字段的四舍五入的结果。
其他使用说明参见 CEIL 函数描述。 其他使用说明参见 CEIL 函数描述。
...@@ -435,7 +435,7 @@ SELECT TO_ISO8601(ts[, timezone]) FROM { tb_name | stb_name } [WHERE clause]; ...@@ -435,7 +435,7 @@ SELECT TO_ISO8601(ts[, timezone]) FROM { tb_name | stb_name } [WHERE clause];
**使用说明** **使用说明**
- timezone 参数允许输入的时区格式为: [z/Z, +/-hhmm, +/-hh, +/-hh:mm]。例如,TO_ISO8601(1, "+00:00")。 - timezone 参数允许输入的时区格式为: [z/Z, +/-hhmm, +/-hh, +/-hh:mm]。例如,TO_ISO8601(1, "+00:00")。
- 如果输入是表示 UNIX 时间戳的整型,返回格式精度由时间戳的位数决定; - 如果输入是表示 UNIX 时间戳的整型,返回格式精度由时间戳的位数决定;
- 如果输入是 TIMESTAMP 类型的列,返回格式的时间戳精度与当前 DATABASE 设置的时间精度一致。 - 如果输入是 TIMESTAMP 类型的列,返回格式的时间戳精度与当前 DATABASE 设置的时间精度一致。
...@@ -770,14 +770,14 @@ SELECT HISTOGRAM(field_name,bin_type, bin_description, normalized) FROM tb_nam ...@@ -770,14 +770,14 @@ SELECT HISTOGRAM(field_name,bin_type, bin_description, normalized) FROM tb_nam
**详细说明** **详细说明**
- bin_type 用户指定的分桶类型,有效输入类型为 "user_input"、"linear_bin"、"log_bin"。 - bin_type 用户指定的分桶类型,有效输入类型为 "user_input"、"linear_bin"、"log_bin"。
- bin_description 描述如何生成分桶区间,针对三种桶类型,分别为以下描述格式(均为 JSON 格式字符串): - bin_description 描述如何生成分桶区间,针对三种桶类型,分别为以下描述格式(均为 JSON 格式字符串):
- "user_input": "[1, 3, 5, 7]" - "user_input": "[1, 3, 5, 7]"
用户指定 bin 的具体数值。 用户指定 bin 的具体数值。
- "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}" - "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}"
"start" 表示数据起始点,"width" 表示每次 bin 偏移量, "count" 为 bin 的总数,"infinity" 表示是否添加(-inf, inf)作为区间起点和终点, "start" 表示数据起始点,"width" 表示每次 bin 偏移量, "count" 为 bin 的总数,"infinity" 表示是否添加(-inf, inf)作为区间起点和终点,
生成区间为[-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf]。 生成区间为[-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf]。
- "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}" - "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}"
"start" 表示数据起始点,"factor" 表示按指数递增的因子,"count" 为 bin 的总数,"infinity" 表示是否添加(-inf, inf)作为区间起点和终点, "start" 表示数据起始点,"factor" 表示按指数递增的因子,"count" 为 bin 的总数,"infinity" 表示是否添加(-inf, inf)作为区间起点和终点,
生成区间为[-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf]。 生成区间为[-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf]。
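下面是一个按 linear_bin 方式分桶统计的示意查询(假设 meters 表含数值列 voltage,bin 参数仅作演示):
```sql
SELECT HISTOGRAM(voltage, 'linear_bin', '{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}', 0) FROM meters;
```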
...@@ -918,7 +918,7 @@ SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause]; ...@@ -918,7 +918,7 @@ SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause];
**返回数据类型**:同应用的字段。 **返回数据类型**:同应用的字段。
**适用数据类型**:数值类型,时间戳类型 **适用数据类型**:数值类型。
**适用于**:表和超级表。 **适用于**:表和超级表。
...@@ -933,7 +933,7 @@ SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause]; ...@@ -933,7 +933,7 @@ SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause];
**返回数据类型**:同应用的字段。 **返回数据类型**:同应用的字段。
**适用数据类型**:数值类型,时间戳类型 **适用数据类型**:数值类型。
**适用于**:表和超级表。 **适用于**:表和超级表。
...@@ -969,7 +969,7 @@ SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause] ...@@ -969,7 +969,7 @@ SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
**适用于**:表和超级表。 **适用于**:表和超级表。
**使用说明** **使用说明**
- 不能参与表达式计算;该函数可以应用在普通表和超级表上; - 不能参与表达式计算;该函数可以应用在普通表和超级表上;
- 使用在超级表上的时候,需要搭配 PARTITION BY tbname 使用,将结果强制规约到单个时间线。 - 使用在超级表上的时候,需要搭配 PARTITION BY tbname 使用,将结果强制规约到单个时间线。
...@@ -1047,10 +1047,10 @@ SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause] ...@@ -1047,10 +1047,10 @@ SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause]
**适用于**:表和超级表。 **适用于**:表和超级表。
**使用说明** **使用说明**
- 不支持 +、-、*、/ 运算,如 csum(col1) + csum(col2)。 - 不支持 +、-、*、/ 运算,如 csum(col1) + csum(col2)。
- 只能与聚合(Aggregation)函数一起使用。 该函数可以应用在普通表和超级表上。 - 只能与聚合(Aggregation)函数一起使用。 该函数可以应用在普通表和超级表上。
- 使用在超级表上的时候,需要搭配 PARTITION BY tbname 使用,将结果强制规约到单个时间线。 - 使用在超级表上的时候,需要搭配 PARTITION BY tbname 使用,将结果强制规约到单个时间线。
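符合上述约束的示意写法如下(假设 meters 含 current 列):
```sql
SELECT CSUM(current) FROM meters PARTITION BY tbname;
```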
...@@ -1068,8 +1068,8 @@ SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHER ...@@ -1068,8 +1068,8 @@ SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHER
**适用于**:表和超级表。 **适用于**:表和超级表。
**使用说明**: **使用说明**:
- DERIVATIVE 函数可以在由 PARTITION BY 划分出单独时间线的情况下用于超级表(也即 PARTITION BY tbname)。 - DERIVATIVE 函数可以在由 PARTITION BY 划分出单独时间线的情况下用于超级表(也即 PARTITION BY tbname)。
- 可以与选择相关联的列一起使用。 例如: select \_rowts, DERIVATIVE() from。 - 可以与选择相关联的列一起使用。 例如: select \_rowts, DERIVATIVE() from。
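示意写法如下(假设 meters 含 current 列,取 1s 为导数时间单位并忽略负值):
```sql
SELECT _rowts, DERIVATIVE(current, 1s, 1) FROM meters PARTITION BY tbname;
```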
...@@ -1087,7 +1087,7 @@ SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHER ...@@ -1087,7 +1087,7 @@ SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHER
**适用于**:表和超级表。 **适用于**:表和超级表。
**使用说明**: **使用说明**:
- 输出结果行数是范围内总行数减一,第一行没有结果输出。 - 输出结果行数是范围内总行数减一,第一行没有结果输出。
- 可以与选择相关联的列一起使用。 例如: select \_rowts, DIFF() from。 - 可以与选择相关联的列一起使用。 例如: select \_rowts, DIFF() from。
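示意写法如下(沿用本文的子表 d1001,假设其含 current 列):
```sql
SELECT _rowts, DIFF(current) FROM d1001;
```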
...@@ -1124,9 +1124,9 @@ SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause] ...@@ -1124,9 +1124,9 @@ SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
**适用于**:表和超级表。 **适用于**:表和超级表。
**使用说明** **使用说明**
- 不支持 +、-、*、/ 运算,如 mavg(col1, k1) + mavg(col2, k1); - 不支持 +、-、*、/ 运算,如 mavg(col1, k1) + mavg(col2, k1);
- 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用; - 只能与普通列,选择(Selection)、投影(Projection)函数一起使用,不能与聚合(Aggregation)函数一起使用;
- 使用在超级表上的时候,需要搭配 PARTITION BY tbname 使用,将结果强制规约到单个时间线。 - 使用在超级表上的时候,需要搭配 PARTITION BY tbname 使用,将结果强制规约到单个时间线。
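示意写法如下(假设 meters 含 current 列,滑动窗口大小 K 取 10):
```sql
SELECT MAVG(current, 10) FROM meters PARTITION BY tbname;
```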
......
...@@ -46,7 +46,7 @@ SELECT select_list FROM tb_name ...@@ -46,7 +46,7 @@ SELECT select_list FROM tb_name
### 窗口子句的规则 ### 窗口子句的规则
- 窗口子句位于数据切分子句之后,GROUP BY 子句之前,且不可以和 GROUP BY 子句一起使用。 - 窗口子句位于数据切分子句之后,不可以和 GROUP BY 子句一起使用。
- 窗口子句将数据按窗口进行切分,对每个窗口进行 SELECT 列表中的表达式的计算,SELECT 列表中的表达式只能包含: - 窗口子句将数据按窗口进行切分,对每个窗口进行 SELECT 列表中的表达式的计算,SELECT 列表中的表达式只能包含:
- 常量。 - 常量。
- _wstart伪列、_wend伪列和_wduration伪列。 - _wstart伪列、_wend伪列和_wduration伪列。
...@@ -71,7 +71,7 @@ FILL 语句指定某一窗口区间数据缺失的情况下的填充模式。填 ...@@ -71,7 +71,7 @@ FILL 语句指定某一窗口区间数据缺失的情况下的填充模式。填
1. 使用 FILL 语句的时候可能生成大量的填充输出,务必指定查询的时间区间。针对每次查询,系统可返回不超过 1 千万条具有插值的结果。 1. 使用 FILL 语句的时候可能生成大量的填充输出,务必指定查询的时间区间。针对每次查询,系统可返回不超过 1 千万条具有插值的结果。
2. 在时间维度聚合中,返回的结果中时间序列严格单调递增。 2. 在时间维度聚合中,返回的结果中时间序列严格单调递增。
3. 如果查询对象是超级表,则聚合函数会作用于该超级表下满足值过滤条件的所有表的数据。如果查询中没有使用 PARTITION BY 语句,则返回的结果按照时间序列严格单调递增;如果查询中使用了 PARTITION BY 语句分组,则返回结果中每个 PARTITION 内按照时间序列严格单调递增。 3. 如果查询对象是超级表,则聚合函数会作用于该超级表下满足值过滤条件的所有表的数据。如果查询中没有使用 PARTITION BY 语句,则返回的结果按照时间序列严格单调递增;如果查询中使用了 PARTITION BY 语句分组,则返回结果中每个 PARTITION 内按照时间序列严格单调递增。
::: :::
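下面是一个带时间区间限定的 FILL 示意查询(沿用本节的 temp_tb_1 表,时间范围仅作演示):
```sql
SELECT COUNT(*) FROM temp_tb_1 WHERE ts >= '2019-04-28 00:00:00' AND ts < '2019-04-29 00:00:00' INTERVAL(1h) FILL(PREV);
```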
...@@ -113,6 +113,12 @@ SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m); ...@@ -113,6 +113,12 @@ SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status); SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
``` ```
如果仅关心 status 为 2 时的状态窗口信息,可以借助嵌套查询来筛选。例如:
```sql
SELECT * FROM (SELECT COUNT(*) AS cnt, FIRST(ts) AS fst, status FROM temp_tb_1 STATE_WINDOW(status)) t WHERE status = 2;
```
### 会话窗口 ### 会话窗口
会话窗口根据记录的时间戳主键的值来确定是否属于同一个会话。如下图所示,如果设置时间戳的连续的间隔小于等于 12 秒,则以下 6 条记录构成 2 个会话窗口,分别是:[2019-04-28 14:22:10,2019-04-28 14:22:30]和[2019-04-28 14:23:10,2019-04-28 14:23:30]。因为 2019-04-28 14:22:30 与 2019-04-28 14:23:10 之间的时间间隔是 40 秒,超过了连续时间间隔(12 秒)。 会话窗口根据记录的时间戳主键的值来确定是否属于同一个会话。如下图所示,如果设置时间戳的连续的间隔小于等于 12 秒,则以下 6 条记录构成 2 个会话窗口,分别是:[2019-04-28 14:22:10,2019-04-28 14:22:30]和[2019-04-28 14:23:10,2019-04-28 14:23:30]。因为 2019-04-28 14:22:30 与 2019-04-28 14:23:10 之间的时间间隔是 40 秒,超过了连续时间间隔(12 秒)。
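对应的示意查询如下(沿用上文的 temp_tb_1 表,会话间隔取 12 秒):
```sql
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, 12s);
```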
......
...@@ -6,7 +6,8 @@ description: TDengine 保留关键字的详细列表 ...@@ -6,7 +6,8 @@ description: TDengine 保留关键字的详细列表
## 保留关键字 ## 保留关键字
目前 TDengine 有将近 200 个内部保留关键字,这些关键字无论大小写如果需要用作库名、表名、STable 名、数据列名及标签列名等,需要使用符合``将关键字括起来使用,例如`ADD` 目前 TDengine 有 200 多个内部保留关键字,这些关键字如果需要用作库名、表名、超级表名、子表名、数据列名及标签列名等,无论大小写,需要使用符号 `` ` `` 将关键字括起来使用,例如 \`ADD\`
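例如,下面的建表语句用 `` ` `` 将保留关键字 TABLE 和 VALUE 分别作为表名、列名使用(仅作示意):
```sql
CREATE TABLE `table` (`ts` TIMESTAMP, `value` DOUBLE);
```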
关键字列表如下: 关键字列表如下:
### A ### A
...@@ -16,15 +17,20 @@ description: TDengine 保留关键字的详细列表 ...@@ -16,15 +17,20 @@ description: TDengine 保留关键字的详细列表
- ACCOUNTS - ACCOUNTS
- ADD - ADD
- AFTER - AFTER
- AGGREGATE
- ALL - ALL
- ALTER - ALTER
- ANALYZE
- AND - AND
- APPS
- AS - AS
- ASC - ASC
- AT_ONCE
- ATTACH - ATTACH
### B ### B
- BALANCE
- BEFORE - BEFORE
- BEGIN - BEGIN
- BETWEEN - BETWEEN
...@@ -34,19 +40,27 @@ description: TDengine 保留关键字的详细列表 ...@@ -34,19 +40,27 @@ description: TDengine 保留关键字的详细列表
- BITNOT - BITNOT
- BITOR - BITOR
- BLOCKS - BLOCKS
- BNODE
- BNODES
- BOOL - BOOL
- BUFFER
- BUFSIZE
- BY - BY
### C ### C
- CACHE - CACHE
- CACHELAST - CACHEMODEL
- CACHESIZE
- CASCADE - CASCADE
- CAST
- CHANGE - CHANGE
- CLIENT_VERSION
- CLUSTER - CLUSTER
- COLON - COLON
- COLUMN - COLUMN
- COMMA - COMMA
- COMMENT
- COMP - COMP
- COMPACT - COMPACT
- CONCAT - CONCAT
...@@ -54,15 +68,18 @@ description: TDengine 保留关键字的详细列表 ...@@ -54,15 +68,18 @@ description: TDengine 保留关键字的详细列表
- CONNECTION - CONNECTION
- CONNECTIONS - CONNECTIONS
- CONNS - CONNS
- CONSUMER
- CONSUMERS
- CONTAINS
- COPY - COPY
- COUNT
- CREATE - CREATE
- CTIME - CURRENT_USER
### D ### D
- DATABASE - DATABASE
- DATABASES - DATABASES
- DAYS
- DBS - DBS
- DEFERRED - DEFERRED
- DELETE - DELETE
...@@ -71,18 +88,23 @@ description: TDengine 保留关键字的详细列表 ...@@ -71,18 +88,23 @@ description: TDengine 保留关键字的详细列表
- DESCRIBE - DESCRIBE
- DETACH - DETACH
- DISTINCT - DISTINCT
- DISTRIBUTED
- DIVIDE - DIVIDE
- DNODE - DNODE
- DNODES - DNODES
- DOT - DOT
- DOUBLE - DOUBLE
- DROP - DROP
- DURATION
### E ### E
- EACH
- ENABLE
- END - END
- EQ - EVERY
- EXISTS - EXISTS
- EXPIRED
- EXPLAIN - EXPLAIN
### F ### F
...@@ -90,18 +112,20 @@ description: TDengine 保留关键字的详细列表 ...@@ -90,18 +112,20 @@ description: TDengine 保留关键字的详细列表
- FAIL - FAIL
- FILE - FILE
- FILL - FILL
- FIRST
- FLOAT - FLOAT
- FLUSH
- FOR - FOR
- FROM - FROM
- FSYNC - FUNCTION
- FUNCTIONS
### G ### G
- GE
- GLOB - GLOB
- GRANT
- GRANTS - GRANTS
- GROUP - GROUP
- GT
### H ### H
...@@ -112,15 +136,18 @@ description: TDengine 保留关键字的详细列表 ...@@ -112,15 +136,18 @@ description: TDengine 保留关键字的详细列表
- ID - ID
- IF - IF
- IGNORE - IGNORE
- IMMEDIA - IMMEDIATE
- IMPORT - IMPORT
- IN - IN
- INITIAL - INDEX
- INDEXES
- INITIALLY
- INNER
- INSERT - INSERT
- INSTEAD - INSTEAD
- INT - INT
- INTEGER - INTEGER
- INTERVA - INTERVAL
- INTO - INTO
- IS - IS
- ISNULL - ISNULL
...@@ -128,6 +155,7 @@ description: TDengine 保留关键字的详细列表 ...@@ -128,6 +155,7 @@ description: TDengine 保留关键字的详细列表
### J ### J
- JOIN - JOIN
- JSON
### K ### K
...@@ -137,46 +165,57 @@ description: TDengine 保留关键字的详细列表 ...@@ -137,46 +165,57 @@ description: TDengine 保留关键字的详细列表
### L ### L
- LE - LAST
- LAST_ROW
- LICENCES
- LIKE - LIKE
- LIMIT - LIMIT
- LINEAR - LINEAR
- LOCAL - LOCAL
- LP
- LSHIFT
- LT
### M ### M
- MATCH - MATCH
- MAX_DELAY
- MAXROWS - MAXROWS
- MERGE
- META
- MINROWS - MINROWS
- MINUS - MINUS
- MNODE
- MNODES - MNODES
- MODIFY - MODIFY
- MODULES - MODULES
### N ### N
- NE - NCHAR
- NEXT
- NMATCH
- NONE - NONE
- NOT - NOT
- NOTNULL - NOTNULL
- NOW - NOW
- NULL - NULL
- NULLS
### O ### O
- OF - OF
- OFFSET - OFFSET
- ON
- OR - OR
- ORDER - ORDER
- OUTPUTTYPE
### P ### P
- PARTITION - PAGES
- PAGESIZE
- PARTITIONS
- PASS - PASS
- PLUS - PLUS
- PORT
- PPS - PPS
- PRECISION - PRECISION
- PREV - PREV
...@@ -184,47 +223,63 @@ description: TDengine 保留关键字的详细列表 ...@@ -184,47 +223,63 @@ description: TDengine 保留关键字的详细列表
### Q ### Q
- QNODE
- QNODES
- QTIME - QTIME
- QUERIE - QUERIES
- QUERY - QUERY
- QUORUM
### R ### R
- RAISE - RAISE
- REM - RANGE
- RATIO
- READ
- REDISTRIBUTE
- RENAME
- REPLACE - REPLACE
- REPLICA - REPLICA
- RESET - RESET
- RESTRIC - RESTRICT
- RETENTIONS
- REVOKE
- ROLLUP
- ROW - ROW
- RP
- RSHIFT
### S ### S
- SCHEMALESS
- SCORES - SCORES
- SELECT - SELECT
- SEMI - SEMI
- SERVER_STATUS
- SERVER_VERSION
- SESSION - SESSION
- SET - SET
- SHOW - SHOW
- SLASH - SINGLE_STABLE
- SLIDING - SLIDING
- SLIMIT - SLIMIT
- SMALLIN - SMA
- SMALLINT
- SNODE
- SNODES
- SOFFSET - SOFFSET
- STable - SPLIT
- STableS - STABLE
- STABLES
- STAR - STAR
- STATE - STATE
- STATEMEN - STATE_WINDOW
- STATE_WI - STATEMENT
- STORAGE - STORAGE
- STREAM - STREAM
- STREAMS - STREAMS
- STRICT
- STRING - STRING
- SUBSCRIPTIONS
- SYNCDB - SYNCDB
- SYSINFO
### T ### T
...@@ -235,20 +290,24 @@ description: TDengine 保留关键字的详细列表 ...@@ -235,20 +290,24 @@ description: TDengine 保留关键字的详细列表
- TBNAME - TBNAME
- TIMES - TIMES
- TIMESTAMP - TIMESTAMP
- TIMEZONE
- TINYINT - TINYINT
- TO
- TODAY
- TOPIC - TOPIC
- TOPICS - TOPICS
- TRANSACTION
- TRANSACTIONS
- TRIGGER - TRIGGER
- TRIM
- TSERIES - TSERIES
- TTL - TTL
### U ### U
- UMINUS
- UNION - UNION
- UNSIGNED - UNSIGNED
- UPDATE - UPDATE
- UPLUS
- USE - USE
- USER - USER
- USERS - USERS
...@@ -256,9 +315,13 @@ description: TDengine 保留关键字的详细列表 ...@@ -256,9 +315,13 @@ description: TDengine 保留关键字的详细列表
### V ### V
- VALUE
- VALUES - VALUES
- VARCHAR
- VARIABLE - VARIABLE
- VARIABLES - VARIABLES
- VERBOSE
- VGROUP
- VGROUPS - VGROUPS
- VIEW - VIEW
- VNODES - VNODES
...@@ -266,14 +329,25 @@ description: TDengine 保留关键字的详细列表 ...@@ -266,14 +329,25 @@ description: TDengine 保留关键字的详细列表
### W ### W
- WAL - WAL
- WAL_FSYNC_PERIOD
- WAL_LEVEL
- WAL_RETENTION_PERIOD
- WAL_RETENTION_SIZE
- WAL_ROLL_PERIOD
- WAL_SEGMENT_SIZE
- WATERMARK
- WHERE - WHERE
- WINDOW_CLOSE
- WITH
- WRITE
### \_ ### \_
- \_C0 - \_C0
- \_QSTART
- \_QSTOP
- \_QDURATION - \_QDURATION
- \_WSTART - \_QEND
- \_WSTOP - \_QSTART
- \_ROWTS
- \_WDURATION - \_WDURATION
- \_WEND
- \_WSTART
...@@ -196,7 +196,7 @@ AllowWebSockets ...@@ -196,7 +196,7 @@ AllowWebSockets
- `u` TDengine 用户名 - `u` TDengine 用户名
- `p` TDengine 密码 - `p` TDengine 密码
注意: 目前不支持 InfluxDB 的 token 验证方式支持 Basic 验证和查询参数验证。 注意: 目前不支持 InfluxDB 的 token 验证方式,仅支持 Basic 验证和查询参数验证。
### OpenTSDB ### OpenTSDB
......
...@@ -79,7 +79,7 @@ password = "taosdata" ...@@ -79,7 +79,7 @@ password = "taosdata"
# 需要被监控的 taosAdapter # 需要被监控的 taosAdapter
[taosAdapter] [taosAdapter]
address = ["127.0.0.1:6041","192.168.1.95:6041"] address = ["127.0.0.1:6041"]
[metrics] [metrics]
# 监控指标前缀 # 监控指标前缀
...@@ -92,7 +92,7 @@ cluster = "production" ...@@ -92,7 +92,7 @@ cluster = "production"
database = "log" database = "log"
# 指定需要监控的普通表 # 指定需要监控的普通表
tables = ["normal_table"] tables = []
``` ```
### 获取监控指标 ### 获取监控指标
...@@ -141,4 +141,4 @@ taos_cluster_info_dnodes_total{cluster_id="5981392874047724755"} 1 ...@@ -141,4 +141,4 @@ taos_cluster_info_dnodes_total{cluster_id="5981392874047724755"} 1
# HELP taos_cluster_info_first_ep # HELP taos_cluster_info_first_ep
# TYPE taos_cluster_info_first_ep gauge # TYPE taos_cluster_info_first_ep gauge
taos_cluster_info_first_ep{cluster_id="5981392874047724755",value="hlb:6030"} 1 taos_cluster_info_first_ep{cluster_id="5981392874047724755",value="hlb:6030"} 1
``` ```
\ No newline at end of file
...@@ -26,7 +26,7 @@ TDengine 分布式架构的逻辑结构图如下: ...@@ -26,7 +26,7 @@ TDengine 分布式架构的逻辑结构图如下:
**管理节点(mnode):** 一个虚拟的逻辑单元,负责所有数据节点运行状态的监控和维护,以及节点之间的负载均衡(图中 M)。同时,管理节点也负责元数据(包括用户、数据库、超级表等)的存储和管理,因此也称为 Meta Node。TDengine 集群中可配置多个(最多不超过 3 个)mnode,它们自动构建成为一个虚拟管理节点组(图中 M1,M2,M3)。mnode 支持多副本,采用 RAFT 一致性协议,保证系统的高可用与高可靠,任何数据更新操作只能在 Leader 上进行。mnode 集群的第一个节点在集群部署时自动完成,其他节点的创建与删除由用户通过 SQL 命令完成。每个 dnode 上至多有一个 mnode,由所属的数据节点的 EP 来唯一标识。每个 dnode 通过内部消息交互自动获取整个集群中所有 mnode 所在的 dnode 的 EP。 **管理节点(mnode):** 一个虚拟的逻辑单元,负责所有数据节点运行状态的监控和维护,以及节点之间的负载均衡(图中 M)。同时,管理节点也负责元数据(包括用户、数据库、超级表等)的存储和管理,因此也称为 Meta Node。TDengine 集群中可配置多个(最多不超过 3 个)mnode,它们自动构建成为一个虚拟管理节点组(图中 M1,M2,M3)。mnode 支持多副本,采用 RAFT 一致性协议,保证系统的高可用与高可靠,任何数据更新操作只能在 Leader 上进行。mnode 集群的第一个节点在集群部署时自动完成,其他节点的创建与删除由用户通过 SQL 命令完成。每个 dnode 上至多有一个 mnode,由所属的数据节点的 EP 来唯一标识。每个 dnode 通过内部消息交互自动获取整个集群中所有 mnode 所在的 dnode 的 EP。
**弹性计算节点(qnode):** 一个虚拟的逻辑单元,运行查询计算任务,也包括基于系统表来实现的 show 命令(图中 Q)。集群中可配置多个 qnode,在整个集群内部共享使用(图中 Q1,Q2,Q3)。qnode 不与具体的 DB 绑定,即一个 qnode 可以同时执行多个 DB 的查询任务。每个 dnode 上至多有一个 qnode,由所属的数据节点的 EP 来唯一标识。客户端通过与 mnode 交互,获取可用的 qnode 列表,当没有可用的 qnode 时,计算任务在 vnode 中执行 **计算节点(qnode):** 一个虚拟的逻辑单元,运行查询计算任务,也包括基于系统表来实现的 show 命令(图中 Q)。集群中可配置多个 qnode,在整个集群内部共享使用(图中 Q1,Q2,Q3)。qnode 不与具体的 DB 绑定,即一个 qnode 可以同时执行多个 DB 的查询任务。每个 dnode 上至多有一个 qnode,由所属的数据节点的 EP 来唯一标识。客户端通过与 mnode 交互,获取可用的 qnode 列表,当没有可用的 qnode 时,计算任务在 vnode 中执行。当一个查询执行时,依赖执行计划,调度器会安排一个或多个 qnode 来一起执行。qnode 能从 vnode 获取数据,也可以将自己的计算结果发给其他 qnode 做进一步的处理。通过引入独立的计算节点,TDengine 实现了存储和计算分离
**流计算节点(snode):** 一个虚拟的逻辑单元,只运行流计算任务(图中 S)。集群中可配置多个 snode,在整个集群内部共享使用(图中 S1,S2,S3)。snode 不与具体的 stream 绑定,即一个 snode 可以同时执行多个 stream 的计算任务。每个 dnode 上至多有一个 snode,由所属的数据节点的 EP 来唯一标识。由 mnode 调度可用的 snode 完成流计算任务,当没有可用的 snode 时,流计算任务在 vnode 中执行。 **流计算节点(snode):** 一个虚拟的逻辑单元,只运行流计算任务(图中 S)。集群中可配置多个 snode,在整个集群内部共享使用(图中 S1,S2,S3)。snode 不与具体的 stream 绑定,即一个 snode 可以同时执行多个 stream 的计算任务。每个 dnode 上至多有一个 snode,由所属的数据节点的 EP 来唯一标识。由 mnode 调度可用的 snode 完成流计算任务,当没有可用的 snode 时,流计算任务在 vnode 中执行。
......
...@@ -6,6 +6,11 @@ description: TDengine 发布历史、Release Notes 及下载链接 ...@@ -6,6 +6,11 @@ description: TDengine 发布历史、Release Notes 及下载链接
import Release from "/components/ReleaseV3"; import Release from "/components/ReleaseV3";
## 3.0.1.1
<Release type="tdengine" version="3.0.1.1" />
## 3.0.1.0 ## 3.0.1.0
<Release type="tdengine" version="3.0.1.0" /> <Release type="tdengine" version="3.0.1.0" />
......
...@@ -6,6 +6,10 @@ description: taosTools 的发布历史、Release Notes 和下载链接 ...@@ -6,6 +6,10 @@ description: taosTools 的发布历史、Release Notes 和下载链接
import Release from "/components/ReleaseV3"; import Release from "/components/ReleaseV3";
## 2.2.0
<Release type="tools" version="2.2.0" />
## 2.1.3 ## 2.1.3
<Release type="tools" version="2.1.3" /> <Release type="tools" version="2.1.3" />
...@@ -45,8 +45,8 @@ enum { ...@@ -45,8 +45,8 @@ enum {
// clang-format on // clang-format on
typedef struct { typedef struct {
TSKEY ts;
uint64_t groupId; uint64_t groupId;
TSKEY ts;
} SWinKey; } SWinKey;
static inline int SWinKeyCmpr(const void* pKey1, int kLen1, const void* pKey2, int kLen2) { static inline int SWinKeyCmpr(const void* pKey1, int kLen1, const void* pKey2, int kLen2) {
...@@ -68,6 +68,37 @@ static inline int SWinKeyCmpr(const void* pKey1, int kLen1, const void* pKey2, i ...@@ -68,6 +68,37 @@ static inline int SWinKeyCmpr(const void* pKey1, int kLen1, const void* pKey2, i
return 0; return 0;
} }
typedef struct {
uint64_t groupId;
TSKEY ts;
int32_t exprIdx;
} STupleKey;
static inline int STupleKeyCmpr(const void* pKey1, int kLen1, const void* pKey2, int kLen2) {
STupleKey* pTuple1 = (STupleKey*)pKey1;
STupleKey* pTuple2 = (STupleKey*)pKey2;
if (pTuple1->groupId > pTuple2->groupId) {
return 1;
} else if (pTuple1->groupId < pTuple2->groupId) {
return -1;
}
if (pTuple1->ts > pTuple2->ts) {
return 1;
} else if (pTuple1->ts < pTuple2->ts) {
return -1;
}
if (pTuple1->exprIdx > pTuple2->exprIdx) {
return 1;
} else if (pTuple1->exprIdx < pTuple2->exprIdx) {
return -1;
}
return 0;
}
enum { enum {
TMQ_MSG_TYPE__DUMMY = 0, TMQ_MSG_TYPE__DUMMY = 0,
TMQ_MSG_TYPE__POLL_RSP, TMQ_MSG_TYPE__POLL_RSP,
......
...@@ -36,8 +36,13 @@ typedef struct STSRow2 STSRow2; ...@@ -36,8 +36,13 @@ typedef struct STSRow2 STSRow2;
typedef struct STSRowBuilder STSRowBuilder; typedef struct STSRowBuilder STSRowBuilder;
typedef struct STagVal STagVal; typedef struct STagVal STagVal;
typedef struct STag STag; typedef struct STag STag;
typedef struct SColData SColData;
// bitmap #define HAS_NONE ((uint8_t)0x1)
#define HAS_NULL ((uint8_t)0x2)
#define HAS_VALUE ((uint8_t)0x4)
// bitmap ================================
const static uint8_t BIT2_MAP[4][4] = {{0b00000000, 0b00000001, 0b00000010, 0}, const static uint8_t BIT2_MAP[4][4] = {{0b00000000, 0b00000001, 0b00000010, 0},
{0b00000000, 0b00000100, 0b00001000, 2}, {0b00000000, 0b00000100, 0b00001000, 2},
{0b00000000, 0b00010000, 0b00100000, 4}, {0b00000000, 0b00010000, 0b00100000, 4},
...@@ -51,21 +56,21 @@ const static uint8_t BIT2_MAP[4][4] = {{0b00000000, 0b00000001, 0b00000010, 0}, ...@@ -51,21 +56,21 @@ const static uint8_t BIT2_MAP[4][4] = {{0b00000000, 0b00000001, 0b00000010, 0},
#define SET_BIT2(p, i, v) ((p)[(i) >> 2] = (p)[(i) >> 2] & N1(BIT2_MAP[(i)&3][3]) | BIT2_MAP[(i)&3][(v)]) #define SET_BIT2(p, i, v) ((p)[(i) >> 2] = (p)[(i) >> 2] & N1(BIT2_MAP[(i)&3][3]) | BIT2_MAP[(i)&3][(v)])
#define GET_BIT2(p, i) (((p)[(i) >> 2] >> BIT2_MAP[(i)&3][3]) & ((uint8_t)3)) #define GET_BIT2(p, i) (((p)[(i) >> 2] >> BIT2_MAP[(i)&3][3]) & ((uint8_t)3))
// STSchema // STSchema ================================
int32_t tTSchemaCreate(int32_t sver, SSchema *pSchema, int32_t nCols, STSchema **ppTSchema); int32_t tTSchemaCreate(int32_t sver, SSchema *pSchema, int32_t nCols, STSchema **ppTSchema);
void tTSchemaDestroy(STSchema *pTSchema); void tTSchemaDestroy(STSchema *pTSchema);
// SValue // SValue ================================
int32_t tPutValue(uint8_t *p, SValue *pValue, int8_t type); int32_t tPutValue(uint8_t *p, SValue *pValue, int8_t type);
int32_t tGetValue(uint8_t *p, SValue *pValue, int8_t type); int32_t tGetValue(uint8_t *p, SValue *pValue, int8_t type);
int tValueCmprFn(const SValue *pValue1, const SValue *pValue2, int8_t type); int tValueCmprFn(const SValue *pValue1, const SValue *pValue2, int8_t type);
// SColVal // SColVal ================================
#define COL_VAL_NONE(CID, TYPE) ((SColVal){.cid = (CID), .type = (TYPE), .isNone = 1}) #define COL_VAL_NONE(CID, TYPE) ((SColVal){.cid = (CID), .type = (TYPE), .isNone = 1})
#define COL_VAL_NULL(CID, TYPE) ((SColVal){.cid = (CID), .type = (TYPE), .isNull = 1}) #define COL_VAL_NULL(CID, TYPE) ((SColVal){.cid = (CID), .type = (TYPE), .isNull = 1})
#define COL_VAL_VALUE(CID, TYPE, V) ((SColVal){.cid = (CID), .type = (TYPE), .value = (V)}) #define COL_VAL_VALUE(CID, TYPE, V) ((SColVal){.cid = (CID), .type = (TYPE), .value = (V)})
// STSRow2 // STSRow2 ================================
#define TSROW_LEN(PROW, V) tGetI32v((uint8_t *)(PROW)->data, (V) ? &(V) : NULL) #define TSROW_LEN(PROW, V) tGetI32v((uint8_t *)(PROW)->data, (V) ? &(V) : NULL)
#define TSROW_SVER(PROW, V) tGetI32v((PROW)->data + TSROW_LEN(PROW, NULL), (V) ? &(V) : NULL) #define TSROW_SVER(PROW, V) tGetI32v((PROW)->data + TSROW_LEN(PROW, NULL), (V) ? &(V) : NULL)
...@@ -77,7 +82,7 @@ int32_t tTSRowToArray(STSRow2 *pRow, STSchema *pTSchema, SArray **ppArray); ...@@ -77,7 +82,7 @@ int32_t tTSRowToArray(STSRow2 *pRow, STSchema *pTSchema, SArray **ppArray);
int32_t tPutTSRow(uint8_t *p, STSRow2 *pRow); int32_t tPutTSRow(uint8_t *p, STSRow2 *pRow);
int32_t tGetTSRow(uint8_t *p, STSRow2 **ppRow); int32_t tGetTSRow(uint8_t *p, STSRow2 **ppRow);
// STSRowBuilder // STSRowBuilder ================================
#define tsRowBuilderInit() ((STSRowBuilder){0}) #define tsRowBuilderInit() ((STSRowBuilder){0})
#define tsRowBuilderClear(B) \ #define tsRowBuilderClear(B) \
do { \ do { \
...@@ -86,7 +91,7 @@ int32_t tGetTSRow(uint8_t *p, STSRow2 **ppRow); ...@@ -86,7 +91,7 @@ int32_t tGetTSRow(uint8_t *p, STSRow2 **ppRow);
} \ } \
} while (0) } while (0)
// STag // STag ================================
int32_t tTagNew(SArray *pArray, int32_t version, int8_t isJson, STag **ppTag); int32_t tTagNew(SArray *pArray, int32_t version, int8_t isJson, STag **ppTag);
void tTagFree(STag *pTag); void tTagFree(STag *pTag);
bool tTagIsJson(const void *pTag); bool tTagIsJson(const void *pTag);
...@@ -100,7 +105,16 @@ void tTagSetCid(const STag *pTag, int16_t iTag, int16_t cid); ...@@ -100,7 +105,16 @@ void tTagSetCid(const STag *pTag, int16_t iTag, int16_t cid);
void debugPrintSTag(STag *pTag, const char *tag, int32_t ln); // TODO: remove void debugPrintSTag(STag *pTag, const char *tag, int32_t ln); // TODO: remove
int32_t parseJsontoTagData(const char *json, SArray *pTagVals, STag **ppTag, void *pMsgBuf); int32_t parseJsontoTagData(const char *json, SArray *pTagVals, STag **ppTag, void *pMsgBuf);
// STRUCT ================= // SColData ================================
void tColDataDestroy(void *ph);
void tColDataInit(SColData *pColData, int16_t cid, int8_t type, int8_t smaOn);
void tColDataClear(SColData *pColData);
int32_t tColDataAppendValue(SColData *pColData, SColVal *pColVal);
void tColDataGetValue(SColData *pColData, int32_t iVal, SColVal *pColVal);
uint8_t tColDataGetBitValue(SColData *pColData, int32_t iVal);
int32_t tColDataCopy(SColData *pColDataSrc, SColData *pColDataDest);
// STRUCT ================================
struct STColumn { struct STColumn {
col_id_t colId; col_id_t colId;
int8_t type; int8_t type;
...@@ -166,6 +180,18 @@ struct SColVal { ...@@ -166,6 +180,18 @@ struct SColVal {
SValue value; SValue value;
}; };
struct SColData {
int16_t cid;
int8_t type;
int8_t smaOn;
int32_t nVal;
uint8_t flag;
uint8_t *pBitMap;
int32_t *aOffset;
int32_t nData;
uint8_t *pData;
};
#pragma pack(push, 1) #pragma pack(push, 1)
struct STagVal { struct STagVal {
// char colName[TSDB_COL_NAME_LEN]; // only used for tmq_get_meta // char colName[TSDB_COL_NAME_LEN]; // only used for tmq_get_meta
......
...@@ -120,6 +120,7 @@ extern SDiskCfg tsDiskCfg[]; ...@@ -120,6 +120,7 @@ extern SDiskCfg tsDiskCfg[];
// udf // udf
extern bool tsStartUdfd; extern bool tsStartUdfd;
extern char tsUdfdResFuncs[];
// schemaless // schemaless
extern char tsSmlChildTableName[]; extern char tsSmlChildTableName[];
......
...@@ -787,6 +787,7 @@ typedef struct { ...@@ -787,6 +787,7 @@ typedef struct {
int32_t sstTrigger; int32_t sstTrigger;
int16_t hashPrefix; int16_t hashPrefix;
int16_t hashSuffix; int16_t hashSuffix;
int32_t tsdbPageSize;
} SCreateDbReq; } SCreateDbReq;
int32_t tSerializeSCreateDbReq(void* buf, int32_t bufLen, SCreateDbReq* pReq); int32_t tSerializeSCreateDbReq(void* buf, int32_t bufLen, SCreateDbReq* pReq);
...@@ -1200,6 +1201,7 @@ typedef struct { ...@@ -1200,6 +1201,7 @@ typedef struct {
int16_t sstTrigger; int16_t sstTrigger;
int16_t hashPrefix; int16_t hashPrefix;
int16_t hashSuffix; int16_t hashSuffix;
int32_t tsdbPageSize;
} SCreateVnodeReq; } SCreateVnodeReq;
int32_t tSerializeSCreateVnodeReq(void* buf, int32_t bufLen, SCreateVnodeReq* pReq); int32_t tSerializeSCreateVnodeReq(void* buf, int32_t bufLen, SCreateVnodeReq* pReq);
...@@ -2954,7 +2956,7 @@ static FORCE_INLINE void* tDecodeSMqSubTopicEp(void* buf, SMqSubTopicEp* pTopicE ...@@ -2954,7 +2956,7 @@ static FORCE_INLINE void* tDecodeSMqSubTopicEp(void* buf, SMqSubTopicEp* pTopicE
} }
static FORCE_INLINE void tDeleteSMqSubTopicEp(SMqSubTopicEp* pSubTopicEp) { static FORCE_INLINE void tDeleteSMqSubTopicEp(SMqSubTopicEp* pSubTopicEp) {
// taosMemoryFree(pSubTopicEp->schema.pSchema); if (pSubTopicEp->schema.nCols) taosMemoryFreeClear(pSubTopicEp->schema.pSchema);
taosArrayDestroy(pSubTopicEp->vgs); taosArrayDestroy(pSubTopicEp->vgs);
} }
......
...@@ -89,240 +89,241 @@ ...@@ -89,240 +89,241 @@
#define TK_KEEP 71 #define TK_KEEP 71
#define TK_PAGES 72 #define TK_PAGES 72
#define TK_PAGESIZE 73 #define TK_PAGESIZE 73
#define TK_PRECISION 74 #define TK_TSDB_PAGESIZE 74
#define TK_REPLICA 75 #define TK_PRECISION 75
#define TK_STRICT 76 #define TK_REPLICA 76
#define TK_VGROUPS 77 #define TK_STRICT 77
#define TK_SINGLE_STABLE 78 #define TK_VGROUPS 78
#define TK_RETENTIONS 79 #define TK_SINGLE_STABLE 79
#define TK_SCHEMALESS 80 #define TK_RETENTIONS 80
#define TK_WAL_LEVEL 81 #define TK_SCHEMALESS 81
#define TK_WAL_FSYNC_PERIOD 82 #define TK_WAL_LEVEL 82
#define TK_WAL_RETENTION_PERIOD 83 #define TK_WAL_FSYNC_PERIOD 83
#define TK_WAL_RETENTION_SIZE 84 #define TK_WAL_RETENTION_PERIOD 84
#define TK_WAL_ROLL_PERIOD 85 #define TK_WAL_RETENTION_SIZE 85
#define TK_WAL_SEGMENT_SIZE 86 #define TK_WAL_ROLL_PERIOD 86
#define TK_SST_TRIGGER 87 #define TK_WAL_SEGMENT_SIZE 87
#define TK_TABLE_PREFIX 88 #define TK_STT_TRIGGER 88
#define TK_TABLE_SUFFIX 89 #define TK_TABLE_PREFIX 89
#define TK_NK_COLON 90 #define TK_TABLE_SUFFIX 90
#define TK_TABLE 91 #define TK_NK_COLON 91
#define TK_NK_LP 92 #define TK_TABLE 92
#define TK_NK_RP 93 #define TK_NK_LP 93
#define TK_STABLE 94 #define TK_NK_RP 94
#define TK_ADD 95 #define TK_STABLE 95
#define TK_COLUMN 96 #define TK_ADD 96
#define TK_MODIFY 97 #define TK_COLUMN 97
#define TK_RENAME 98 #define TK_MODIFY 98
#define TK_TAG 99 #define TK_RENAME 99
#define TK_SET 100 #define TK_TAG 100
#define TK_NK_EQ 101 #define TK_SET 101
#define TK_USING 102 #define TK_NK_EQ 102
#define TK_TAGS 103 #define TK_USING 103
#define TK_COMMENT 104 #define TK_TAGS 104
#define TK_BOOL 105 #define TK_COMMENT 105
#define TK_TINYINT 106 #define TK_BOOL 106
#define TK_SMALLINT 107 #define TK_TINYINT 107
#define TK_INT 108 #define TK_SMALLINT 108
#define TK_INTEGER 109 #define TK_INT 109
#define TK_BIGINT 110 #define TK_INTEGER 110
#define TK_FLOAT 111 #define TK_BIGINT 111
#define TK_DOUBLE 112 #define TK_FLOAT 112
#define TK_BINARY 113 #define TK_DOUBLE 113
#define TK_TIMESTAMP 114 #define TK_BINARY 114
#define TK_NCHAR 115 #define TK_TIMESTAMP 115
#define TK_UNSIGNED 116 #define TK_NCHAR 116
#define TK_JSON 117 #define TK_UNSIGNED 117
#define TK_VARCHAR 118 #define TK_JSON 118
#define TK_MEDIUMBLOB 119 #define TK_VARCHAR 119
#define TK_BLOB 120 #define TK_MEDIUMBLOB 120
#define TK_VARBINARY 121 #define TK_BLOB 121
#define TK_DECIMAL 122 #define TK_VARBINARY 122
#define TK_MAX_DELAY 123 #define TK_DECIMAL 123
#define TK_WATERMARK 124 #define TK_MAX_DELAY 124
#define TK_ROLLUP 125 #define TK_WATERMARK 125
#define TK_TTL 126 #define TK_ROLLUP 126
#define TK_SMA 127 #define TK_TTL 127
#define TK_FIRST 128 #define TK_SMA 128
#define TK_LAST 129 #define TK_FIRST 129
#define TK_SHOW 130 #define TK_LAST 130
#define TK_DATABASES 131 #define TK_SHOW 131
#define TK_TABLES 132 #define TK_DATABASES 132
#define TK_STABLES 133 #define TK_TABLES 133
#define TK_MNODES 134 #define TK_STABLES 134
#define TK_MODULES 135 #define TK_MNODES 135
#define TK_QNODES 136 #define TK_MODULES 136
#define TK_FUNCTIONS 137 #define TK_QNODES 137
#define TK_INDEXES 138 #define TK_FUNCTIONS 138
#define TK_ACCOUNTS 139 #define TK_INDEXES 139
#define TK_APPS 140 #define TK_ACCOUNTS 140
#define TK_CONNECTIONS 141 #define TK_APPS 141
#define TK_LICENCES 142 #define TK_CONNECTIONS 142
#define TK_GRANTS 143 #define TK_LICENCES 143
#define TK_QUERIES 144 #define TK_GRANTS 144
#define TK_SCORES 145 #define TK_QUERIES 145
#define TK_TOPICS 146 #define TK_SCORES 146
#define TK_VARIABLES 147 #define TK_TOPICS 147
#define TK_BNODES 148 #define TK_VARIABLES 148
#define TK_SNODES 149 #define TK_BNODES 149
#define TK_CLUSTER 150 #define TK_SNODES 150
#define TK_TRANSACTIONS 151 #define TK_CLUSTER 151
#define TK_DISTRIBUTED 152 #define TK_TRANSACTIONS 152
#define TK_CONSUMERS 153 #define TK_DISTRIBUTED 153
#define TK_SUBSCRIPTIONS 154 #define TK_CONSUMERS 154
#define TK_VNODES 155 #define TK_SUBSCRIPTIONS 155
#define TK_LIKE 156 #define TK_VNODES 156
#define TK_INDEX 157 #define TK_LIKE 157
#define TK_FUNCTION 158 #define TK_INDEX 158
#define TK_INTERVAL 159 #define TK_FUNCTION 159
#define TK_TOPIC 160 #define TK_INTERVAL 160
#define TK_AS 161 #define TK_TOPIC 161
#define TK_WITH 162 #define TK_AS 162
#define TK_META 163 #define TK_WITH 163
#define TK_CONSUMER 164 #define TK_META 164
#define TK_GROUP 165 #define TK_CONSUMER 165
#define TK_DESC 166 #define TK_GROUP 166
#define TK_DESCRIBE 167 #define TK_DESC 167
#define TK_RESET 168 #define TK_DESCRIBE 168
#define TK_QUERY 169 #define TK_RESET 169
#define TK_CACHE 170 #define TK_QUERY 170
#define TK_EXPLAIN 171 #define TK_CACHE 171
#define TK_ANALYZE 172 #define TK_EXPLAIN 172
#define TK_VERBOSE 173 #define TK_ANALYZE 173
#define TK_NK_BOOL 174 #define TK_VERBOSE 174
#define TK_RATIO 175 #define TK_NK_BOOL 175
#define TK_NK_FLOAT 176 #define TK_RATIO 176
#define TK_OUTPUTTYPE 177 #define TK_NK_FLOAT 177
#define TK_AGGREGATE 178 #define TK_OUTPUTTYPE 178
#define TK_BUFSIZE 179 #define TK_AGGREGATE 179
#define TK_STREAM 180 #define TK_BUFSIZE 180
#define TK_INTO 181 #define TK_STREAM 181
#define TK_TRIGGER 182 #define TK_INTO 182
#define TK_AT_ONCE 183 #define TK_TRIGGER 183
#define TK_WINDOW_CLOSE 184 #define TK_AT_ONCE 184
#define TK_IGNORE 185 #define TK_WINDOW_CLOSE 185
#define TK_EXPIRED 186 #define TK_IGNORE 186
#define TK_KILL 187 #define TK_EXPIRED 187
#define TK_CONNECTION 188 #define TK_KILL 188
#define TK_TRANSACTION 189 #define TK_CONNECTION 189
#define TK_BALANCE 190 #define TK_TRANSACTION 190
#define TK_VGROUP 191 #define TK_BALANCE 191
#define TK_MERGE 192 #define TK_VGROUP 192
#define TK_REDISTRIBUTE 193 #define TK_MERGE 193
#define TK_SPLIT 194 #define TK_REDISTRIBUTE 194
#define TK_DELETE 195 #define TK_SPLIT 195
#define TK_INSERT 196 #define TK_DELETE 196
#define TK_NULL 197 #define TK_INSERT 197
#define TK_NK_QUESTION 198 #define TK_NULL 198
#define TK_NK_ARROW 199 #define TK_NK_QUESTION 199
#define TK_ROWTS 200 #define TK_NK_ARROW 200
#define TK_TBNAME 201 #define TK_ROWTS 201
#define TK_QSTART 202 #define TK_TBNAME 202
#define TK_QEND 203 #define TK_QSTART 203
#define TK_QDURATION 204 #define TK_QEND 204
#define TK_WSTART 205 #define TK_QDURATION 205
#define TK_WEND 206 #define TK_WSTART 206
#define TK_WDURATION 207 #define TK_WEND 207
#define TK_CAST 208 #define TK_WDURATION 208
#define TK_NOW 209 #define TK_CAST 209
#define TK_TODAY 210 #define TK_NOW 210
#define TK_TIMEZONE 211 #define TK_TODAY 211
#define TK_CLIENT_VERSION 212 #define TK_TIMEZONE 212
#define TK_SERVER_VERSION 213 #define TK_CLIENT_VERSION 213
#define TK_SERVER_STATUS 214 #define TK_SERVER_VERSION 214
#define TK_CURRENT_USER 215 #define TK_SERVER_STATUS 215
#define TK_COUNT 216 #define TK_CURRENT_USER 216
#define TK_LAST_ROW 217 #define TK_COUNT 217
#define TK_BETWEEN 218 #define TK_LAST_ROW 218
#define TK_IS 219 #define TK_BETWEEN 219
#define TK_NK_LT 220 #define TK_IS 220
#define TK_NK_GT 221 #define TK_NK_LT 221
#define TK_NK_LE 222 #define TK_NK_GT 222
#define TK_NK_GE 223 #define TK_NK_LE 223
#define TK_NK_NE 224 #define TK_NK_GE 224
#define TK_MATCH 225 #define TK_NK_NE 225
#define TK_NMATCH 226 #define TK_MATCH 226
#define TK_CONTAINS 227 #define TK_NMATCH 227
#define TK_IN 228 #define TK_CONTAINS 228
#define TK_JOIN 229 #define TK_IN 229
#define TK_INNER 230 #define TK_JOIN 230
#define TK_SELECT 231 #define TK_INNER 231
#define TK_DISTINCT 232 #define TK_SELECT 232
#define TK_WHERE 233 #define TK_DISTINCT 233
#define TK_PARTITION 234 #define TK_WHERE 234
#define TK_BY 235 #define TK_PARTITION 235
#define TK_SESSION 236 #define TK_BY 236
#define TK_STATE_WINDOW 237 #define TK_SESSION 237
#define TK_SLIDING 238 #define TK_STATE_WINDOW 238
#define TK_FILL 239 #define TK_SLIDING 239
#define TK_VALUE 240 #define TK_FILL 240
#define TK_NONE 241 #define TK_VALUE 241
#define TK_PREV 242 #define TK_NONE 242
#define TK_LINEAR 243 #define TK_PREV 243
#define TK_NEXT 244 #define TK_LINEAR 244
#define TK_HAVING 245 #define TK_NEXT 245
#define TK_RANGE 246 #define TK_HAVING 246
#define TK_EVERY 247 #define TK_RANGE 247
#define TK_ORDER 248 #define TK_EVERY 248
#define TK_SLIMIT 249 #define TK_ORDER 249
#define TK_SOFFSET 250 #define TK_SLIMIT 250
#define TK_LIMIT 251 #define TK_SOFFSET 251
#define TK_OFFSET 252 #define TK_LIMIT 252
#define TK_ASC 253 #define TK_OFFSET 253
#define TK_NULLS 254 #define TK_ASC 254
#define TK_ABORT 255 #define TK_NULLS 255
#define TK_AFTER 256 #define TK_ABORT 256
#define TK_ATTACH 257 #define TK_AFTER 257
#define TK_BEFORE 258 #define TK_ATTACH 258
#define TK_BEGIN 259 #define TK_BEFORE 259
#define TK_BITAND 260 #define TK_BEGIN 260
#define TK_BITNOT 261 #define TK_BITAND 261
#define TK_BITOR 262 #define TK_BITNOT 262
#define TK_BLOCKS 263 #define TK_BITOR 263
#define TK_CHANGE 264 #define TK_BLOCKS 264
#define TK_COMMA 265 #define TK_CHANGE 265
#define TK_COMPACT 266 #define TK_COMMA 266
#define TK_CONCAT 267 #define TK_COMPACT 267
#define TK_CONFLICT 268 #define TK_CONCAT 268
#define TK_COPY 269 #define TK_CONFLICT 269
#define TK_DEFERRED 270 #define TK_COPY 270
#define TK_DELIMITERS 271 #define TK_DEFERRED 271
#define TK_DETACH 272 #define TK_DELIMITERS 272
#define TK_DIVIDE 273 #define TK_DETACH 273
#define TK_DOT 274 #define TK_DIVIDE 274
#define TK_EACH 275 #define TK_DOT 275
#define TK_END 276 #define TK_EACH 276
#define TK_FAIL 277 #define TK_END 277
#define TK_FILE 278 #define TK_FAIL 278
#define TK_FOR 279 #define TK_FILE 279
#define TK_GLOB 280 #define TK_FOR 280
#define TK_ID 281 #define TK_GLOB 281
#define TK_IMMEDIATE 282 #define TK_ID 282
#define TK_IMPORT 283 #define TK_IMMEDIATE 283
#define TK_INITIALLY 284 #define TK_IMPORT 284
#define TK_INSTEAD 285 #define TK_INITIALLY 285
#define TK_ISNULL 286 #define TK_INSTEAD 286
#define TK_KEY 287 #define TK_ISNULL 287
#define TK_NK_BITNOT 288 #define TK_KEY 288
#define TK_NK_SEMI 289 #define TK_NK_BITNOT 289
#define TK_NOTNULL 290 #define TK_NK_SEMI 290
#define TK_OF 291 #define TK_NOTNULL 291
#define TK_PLUS 292 #define TK_OF 292
#define TK_PRIVILEGE 293 #define TK_PLUS 293
#define TK_RAISE 294 #define TK_PRIVILEGE 294
#define TK_REPLACE 295 #define TK_RAISE 295
#define TK_RESTRICT 296 #define TK_REPLACE 296
#define TK_ROW 297 #define TK_RESTRICT 297
#define TK_SEMI 298 #define TK_ROW 298
#define TK_STAR 299 #define TK_SEMI 299
#define TK_STATEMENT 300 #define TK_STAR 300
#define TK_STRING 301 #define TK_STATEMENT 301
#define TK_TIMES 302 #define TK_STRING 302
#define TK_UPDATE 303 #define TK_TIMES 303
#define TK_VALUES 304 #define TK_UPDATE 304
#define TK_VARIABLE 305 #define TK_VALUES 305
#define TK_VIEW 306 #define TK_VARIABLE 306
#define TK_WAL 307 #define TK_VIEW 307
#define TK_WAL 308
#define TK_NK_SPACE 300 #define TK_NK_SPACE 300
#define TK_NK_COMMENT 301 #define TK_NK_COMMENT 301
......
...@@ -34,66 +34,69 @@ typedef struct SFuncExecEnv { ...@@ -34,66 +34,69 @@ typedef struct SFuncExecEnv {
int32_t calcMemSize; int32_t calcMemSize;
} SFuncExecEnv; } SFuncExecEnv;
typedef bool (*FExecGetEnv)(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv); typedef bool (*FExecGetEnv)(struct SFunctionNode *pFunc, SFuncExecEnv *pEnv);
typedef bool (*FExecInit)(struct SqlFunctionCtx *pCtx, struct SResultRowEntryInfo* pResultCellInfo); typedef bool (*FExecInit)(struct SqlFunctionCtx *pCtx, struct SResultRowEntryInfo *pResultCellInfo);
typedef int32_t (*FExecProcess)(struct SqlFunctionCtx *pCtx); typedef int32_t (*FExecProcess)(struct SqlFunctionCtx *pCtx);
typedef int32_t (*FExecFinalize)(struct SqlFunctionCtx *pCtx, SSDataBlock* pBlock); typedef int32_t (*FExecFinalize)(struct SqlFunctionCtx *pCtx, SSDataBlock *pBlock);
typedef int32_t (*FScalarExecProcess)(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput); typedef int32_t (*FScalarExecProcess)(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
typedef int32_t (*FExecCombine)(struct SqlFunctionCtx *pDestCtx, struct SqlFunctionCtx *pSourceCtx); typedef int32_t (*FExecCombine)(struct SqlFunctionCtx *pDestCtx, struct SqlFunctionCtx *pSourceCtx);
typedef struct SScalarFuncExecFuncs { typedef struct SScalarFuncExecFuncs {
FExecGetEnv getEnv; FExecGetEnv getEnv;
FScalarExecProcess process; FScalarExecProcess process;
} SScalarFuncExecFuncs; } SScalarFuncExecFuncs;
typedef struct SFuncExecFuncs { typedef struct SFuncExecFuncs {
FExecGetEnv getEnv; FExecGetEnv getEnv;
FExecInit init; FExecInit init;
FExecProcess process; FExecProcess process;
FExecFinalize finalize; FExecFinalize finalize;
FExecCombine combine; FExecCombine combine;
} SFuncExecFuncs; } SFuncExecFuncs;
#define MAX_INTERVAL_TIME_WINDOW 1000000 // maximum allowed time windows in final results #define MAX_INTERVAL_TIME_WINDOW 1000000 // maximum allowed time windows in final results
#define TOP_BOTTOM_QUERY_LIMIT 100 #define TOP_BOTTOM_QUERY_LIMIT 100
#define FUNCTIONS_NAME_MAX_LENGTH 16 #define FUNCTIONS_NAME_MAX_LENGTH 16
typedef struct SResultRowEntryInfo { typedef struct SResultRowEntryInfo {
bool initialized:1; // output buffer has been initialized bool initialized : 1; // output buffer has been initialized
bool complete:1; // query has completed bool complete : 1; // query has completed
uint8_t isNullRes:6; // the result is null uint8_t isNullRes : 6; // the result is null
uint16_t numOfRes; // num of output result in current buffer. NOT NULL RESULT uint16_t numOfRes; // num of output result in current buffer. NOT NULL RESULT
} SResultRowEntryInfo; } SResultRowEntryInfo;
// determine the real data need to calculated the result // determine the real data need to calculated the result
enum { enum {
BLK_DATA_NOT_LOAD = 0x0, BLK_DATA_NOT_LOAD = 0x0,
BLK_DATA_SMA_LOAD = 0x1, BLK_DATA_SMA_LOAD = 0x1,
BLK_DATA_DATA_LOAD = 0x3, BLK_DATA_DATA_LOAD = 0x3,
BLK_DATA_FILTEROUT = 0x4, // discard current data block since it is not qualified for filter BLK_DATA_FILTEROUT = 0x4, // discard current data block since it is not qualified for filter
}; };
enum { enum {
MAIN_SCAN = 0x0u, MAIN_SCAN = 0x0u,
REVERSE_SCAN = 0x1u, // todo remove it REVERSE_SCAN = 0x1u, // todo remove it
REPEAT_SCAN = 0x2u, //repeat scan belongs to the master scan REPEAT_SCAN = 0x2u, // repeat scan belongs to the master scan
MERGE_STAGE = 0x20u, MERGE_STAGE = 0x20u,
}; };
typedef struct SPoint1 { typedef struct SPoint1 {
int64_t key; int64_t key;
union{double val; char* ptr;}; union {
double val;
char *ptr;
};
} SPoint1; } SPoint1;
struct SqlFunctionCtx; struct SqlFunctionCtx;
struct SResultRowEntryInfo; struct SResultRowEntryInfo;
//for selectivity query, the corresponding tag value is assigned if the data is qualified // for selectivity query, the corresponding tag value is assigned if the data is qualified
typedef struct SSubsidiaryResInfo { typedef struct SSubsidiaryResInfo {
int16_t num; int16_t num;
int32_t rowLen; int32_t rowLen;
char* buf; // serialize data buffer char *buf; // serialize data buffer
struct SqlFunctionCtx **pCtx; struct SqlFunctionCtx **pCtx;
} SSubsidiaryResInfo; } SSubsidiaryResInfo;
...@@ -106,69 +109,70 @@ typedef struct SResultDataInfo { ...@@ -106,69 +109,70 @@ typedef struct SResultDataInfo {
} SResultDataInfo; } SResultDataInfo;
#define GET_RES_INFO(ctx) ((ctx)->resultInfo) #define GET_RES_INFO(ctx) ((ctx)->resultInfo)
#define GET_ROWCELL_INTERBUF(_c) ((void*) ((char*)(_c) + sizeof(SResultRowEntryInfo))) #define GET_ROWCELL_INTERBUF(_c) ((void *)((char *)(_c) + sizeof(SResultRowEntryInfo)))
typedef struct SInputColumnInfoData { typedef struct SInputColumnInfoData {
int32_t totalRows; // total rows in current columnar data int32_t totalRows; // total rows in current columnar data
int32_t startRowIndex; // handle started row index int32_t startRowIndex; // handle started row index
int32_t numOfRows; // the number of rows needs to be handled int32_t numOfRows; // the number of rows needs to be handled
int32_t numOfInputCols; // PTS is not included int32_t numOfInputCols; // PTS is not included
bool colDataAggIsSet;// if agg is set or not bool colDataAggIsSet; // if agg is set or not
SColumnInfoData *pPTS; // primary timestamp column SColumnInfoData *pPTS; // primary timestamp column
SColumnInfoData **pData; SColumnInfoData **pData;
SColumnDataAgg **pColumnDataAgg; SColumnDataAgg **pColumnDataAgg;
uint64_t uid; // table uid, used to set the tag value when building the final query result for selectivity functions. uint64_t uid; // table uid, used to set the tag value when building the final query result for selectivity functions.
} SInputColumnInfoData; } SInputColumnInfoData;
typedef struct SSerializeDataHandle { typedef struct SSerializeDataHandle {
struct SDiskbasedBuf* pBuf; struct SDiskbasedBuf *pBuf;
int32_t currentPage; int32_t currentPage;
void *pState;
} SSerializeDataHandle; } SSerializeDataHandle;
// sql function runtime context // sql function runtime context
typedef struct SqlFunctionCtx { typedef struct SqlFunctionCtx {
SInputColumnInfoData input; SInputColumnInfoData input;
SResultDataInfo resDataInfo; SResultDataInfo resDataInfo;
uint32_t order; // data block scanner order: asc|desc uint32_t order; // data block scanner order: asc|desc
uint8_t scanFlag; // record current running step, default: 0 uint8_t scanFlag; // record current running step, default: 0
int16_t functionId; // function id int16_t functionId; // function id
char *pOutput; // final result output buffer, point to sdata->data char *pOutput; // final result output buffer, point to sdata->data
int32_t numOfParams; int32_t numOfParams;
SFunctParam *param; // input parameter, e.g., top(k, 20), the number of results for top query is kept in param SFunctParam *param; // input parameter, e.g., top(k, 20), the number of results for top query is kept in param
SColumnInfoData *pTsOutput; // corresponding output buffer for timestamp of each result, e.g., top/bottom*/ SColumnInfoData *pTsOutput; // corresponding output buffer for timestamp of each result, e.g., top/bottom*/
int32_t offset; int32_t offset;
struct SResultRowEntryInfo *resultInfo; struct SResultRowEntryInfo *resultInfo;
SSubsidiaryResInfo subsidiaries; SSubsidiaryResInfo subsidiaries;
SPoint1 start; SPoint1 start;
SPoint1 end; SPoint1 end;
SFuncExecFuncs fpSet; SFuncExecFuncs fpSet;
SScalarFuncExecFuncs sfp; SScalarFuncExecFuncs sfp;
struct SExprInfo *pExpr; struct SExprInfo *pExpr;
struct SSDataBlock *pSrcBlock; struct SSDataBlock *pSrcBlock;
struct SSDataBlock *pDstBlock; // used by indefinite rows function to set selectivity struct SSDataBlock *pDstBlock; // used by indefinite rows function to set selectivity
SSerializeDataHandle saveHandle; SSerializeDataHandle saveHandle;
bool isStream; bool isStream;
char udfName[TSDB_FUNC_NAME_LEN]; char udfName[TSDB_FUNC_NAME_LEN];
} SqlFunctionCtx; } SqlFunctionCtx;
enum {
  TEXPR_BINARYEXPR_NODE = 0x1,
  TEXPR_UNARYEXPR_NODE = 0x2,
};

typedef struct tExprNode {
  int32_t nodeType;
  union {
    struct {  // function node
      char functionName[FUNCTIONS_NAME_MAX_LENGTH];  // todo refactor
      int32_t functionId;
      int32_t num;
      struct SFunctionNode *pFunctNode;
    } _function;
    struct {
      struct SNode *pRootNode;
    } _optrRoot;
  };
} tExprNode;
...@@ -182,17 +186,18 @@ struct SScalarParam {
  int32_t numOfRows;
};
void cleanupResultRowEntry(struct SResultRowEntryInfo *pCell);
int32_t getNumOfResult(SqlFunctionCtx *pCtx, int32_t num, SSDataBlock *pResBlock);
bool isRowEntryCompleted(struct SResultRowEntryInfo *pEntry);
bool isRowEntryInitialized(struct SResultRowEntryInfo *pEntry);
typedef struct SPoint {
  int64_t key;
  void *val;
} SPoint;

int32_t taosGetLinearInterpolationVal(SPoint *point, int32_t outputType, SPoint *point1, SPoint *point2,
                                      int32_t inputType);
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// udf api
...
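As a sanity check on the two-point interpolation helper declared above, here is a minimal sketch. It assumes the result is written through point->val at timestamp point->key; that convention matches how fill logic would use it, but the exact contract lives in the implementation, not this header.

// Hypothetical example: interpolate a double value at t = 1500 between
// samples (t = 1000, v = 10.0) and (t = 2000, v = 30.0).
double v1 = 10.0, v2 = 30.0, out = 0.0;
SPoint p1 = {.key = 1000, .val = &v1};
SPoint p2 = {.key = 2000, .val = &v2};
SPoint res = {.key = 1500, .val = &out};
taosGetLinearInterpolationVal(&res, TSDB_DATA_TYPE_DOUBLE, &p1, &p2, TSDB_DATA_TYPE_DOUBLE);
// out should now hold 20.0, the midpoint value.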
...@@ -64,6 +64,7 @@ typedef struct SDatabaseOptions {
  int64_t keep[3];
  int32_t pages;
  int32_t pagesize;
  int32_t tsdbPageSize;
  char precisionStr[3];
  int8_t precision;
  int8_t replica;
...
...@@ -151,6 +151,8 @@ typedef struct SVnodeModifyLogicNode {
  SArray* pDataBlocks;
  SVgDataBlocks* pVgDataBlocks;
  SNode* pAffectedRows;  // SColumnNode
  SNode* pStartTs;       // SColumnNode
  SNode* pEndTs;         // SColumnNode
  uint64_t tableId;
  uint64_t stableId;
  int8_t tableType;  // table type
...@@ -525,6 +527,8 @@ typedef struct SDataDeleterNode {
  char tsColName[TSDB_COL_NAME_LEN];
  STimeWindow deleteTimeRange;
  SNode* pAffectedRows;
  SNode* pStartTs;
  SNode* pEndTs;
} SDataDeleterNode;

typedef struct SSubplan {
...
...@@ -315,6 +315,8 @@ typedef struct SDeleteStmt {
  SNode* pFromTable;  // FROM clause
  SNode* pWhere;      // WHERE clause
  SNode* pCountFunc;  // count the number of rows affected
  SNode* pFirstFunc;  // the start timestamp when the data was actually deleted
  SNode* pLastFunc;   // the end timestamp when the data was actually deleted
  SNode* pTagCond;    // pWhere divided into pTagCond and timeRange
  STimeWindow timeRange;
  uint8_t precision;
...
...@@ -52,10 +52,14 @@ int32_t qSetSubplanExecutionNode(SSubplan* pSubplan, int32_t groupId, SDownstrea
void qClearSubplanExecutionNode(SSubplan* pSubplan);

// Convert a subplan to a display string for the scheduler to send to the executor
int32_t qSubPlanToString(const SSubplan* pSubplan, char** pStr, int32_t* pLen);
int32_t qStringToSubplan(const char* pStr, SSubplan** pSubplan);

// Convert a subplan to a message for the scheduler to send to the executor
int32_t qSubPlanToMsg(const SSubplan* pSubplan, char** pStr, int32_t* pLen);
int32_t qMsgToSubplan(const char* pStr, int32_t len, SSubplan** pSubplan);

char* qQueryPlanToString(const SQueryPlan* pPlan);
SQueryPlan* qStringToQueryPlan(const char* pStr);
...
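The new qSubPlanToMsg/qMsgToSubplan pair mirrors the existing string variants. A minimal round-trip sketch, assuming the callee allocates *pStr and the caller frees it (the same ownership convention the string form implies; error paths elided):

// Serialize a subplan into the message form and rebuild it, as the
// scheduler/executor pair would across the wire.
char   *buf = NULL;
int32_t len = 0;
if (qSubPlanToMsg(pSubplan, &buf, &len) == TSDB_CODE_SUCCESS) {
  SSubplan *pRebuilt = NULL;
  if (qMsgToSubplan(buf, len, &pRebuilt) == TSDB_CODE_SUCCESS) {
    // pRebuilt is an equivalent subplan reconstructed from the message
  }
  taosMemoryFree(buf);
}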
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "tdatablock.h"
#include "tdbInt.h"
#ifdef __cplusplus
extern "C" {
#endif
#ifndef _STREAM_STATE_H_
#define _STREAM_STATE_H_
typedef struct SStreamTask SStreamTask;
// incremental state storage
typedef struct {
SStreamTask* pOwner;
TDB* db;
TTB* pStateDb;
TTB* pFuncStateDb;
TXN txn;
} SStreamState;
SStreamState* streamStateOpen(char* path, SStreamTask* pTask, bool specPath);
void streamStateClose(SStreamState* pState);
int32_t streamStateBegin(SStreamState* pState);
int32_t streamStateCommit(SStreamState* pState);
int32_t streamStateAbort(SStreamState* pState);
typedef struct {
TBC* pCur;
} SStreamStateCur;
int32_t streamStateFuncPut(SStreamState* pState, const STupleKey* key, const void* value, int32_t vLen);
int32_t streamStateFuncGet(SStreamState* pState, const STupleKey* key, void** pVal, int32_t* pVLen);
int32_t streamStateFuncDel(SStreamState* pState, const STupleKey* key);
int32_t streamStatePut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen);
int32_t streamStateGet(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen);
int32_t streamStateDel(SStreamState* pState, const SWinKey* key);
int32_t streamStateAddIfNotExist(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen);
int32_t streamStateReleaseBuf(SStreamState* pState, const SWinKey* key, void* pVal);
void streamFreeVal(void* val);
SStreamStateCur* streamStateGetCur(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateSeekKeyNext(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateSeekKeyPrev(SStreamState* pState, const SWinKey* key);
void streamStateFreeCur(SStreamStateCur* pCur);
int32_t streamStateGetKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen);
int32_t streamStateSeekFirst(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateSeekLast(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateCurNext(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateCurPrev(SStreamState* pState, SStreamStateCur* pCur);
#ifdef __cplusplus
}
#endif
#endif /* ifndef _STREAM_STATE_H_ */
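A minimal usage sketch of the incremental state API above, writing one window's value and reading it back. The path is a placeholder, the SWinKey field names are an assumption, and so is the idea that puts belong inside a begin/commit pair:

SStreamState *pState = streamStateOpen("/tmp/stream-state", pTask, false);  // placeholder path
if (pState != NULL) {
  streamStateBegin(pState);
  SWinKey key = {.groupId = 1, .ts = 1000};  // field names are an assumption
  int64_t sum = 42;
  streamStatePut(pState, &key, &sum, sizeof(sum));

  void   *pVal = NULL;
  int32_t vLen = 0;
  if (streamStateGet(pState, &key, &pVal, &vLen) == 0) {
    // consume pVal/vLen, then return the buffer to the state store
    streamFreeVal(pVal);
  }
  streamStateCommit(pState);
  streamStateClose(pState);
}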
...@@ -16,6 +16,7 @@
#include "executor.h"
#include "os.h"
#include "query.h"
#include "streamState.h"
#include "tdatablock.h"
#include "tdbInt.h"
#include "tmsg.h"
...@@ -263,14 +264,6 @@ typedef struct {
  SArray* checkpointVer;
} SStreamRecoveringState;
// incremental state storage
typedef struct {
SStreamTask* pOwner;
TDB* db;
TTB* pStateDb;
TXN txn;
} SStreamState;
typedef struct SStreamTask {
  int64_t streamId;
  int32_t taskId;
...@@ -540,39 +533,6 @@ int32_t streamMetaCommit(SStreamMeta* pMeta);
int32_t streamMetaRollBack(SStreamMeta* pMeta);
int32_t streamLoadTasks(SStreamMeta* pMeta);
SStreamState* streamStateOpen(char* path, SStreamTask* pTask);
void streamStateClose(SStreamState* pState);
int32_t streamStateBegin(SStreamState* pState);
int32_t streamStateCommit(SStreamState* pState);
int32_t streamStateAbort(SStreamState* pState);
typedef struct {
TBC* pCur;
} SStreamStateCur;
#if 1
int32_t streamStatePut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen);
int32_t streamStateGet(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen);
int32_t streamStateDel(SStreamState* pState, const SWinKey* key);
int32_t streamStateAddIfNotExist(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen);
int32_t streamStateReleaseBuf(SStreamState* pState, const SWinKey* key, void* pVal);
void streamFreeVal(void* val);
SStreamStateCur* streamStateGetCur(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateSeekKeyNext(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateSeekKeyPrev(SStreamState* pState, const SWinKey* key);
void streamStateFreeCur(SStreamStateCur* pCur);
int32_t streamStateGetKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen);
int32_t streamStateSeekFirst(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateSeekLast(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateCurNext(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateCurPrev(SStreamState* pState, SStreamStateCur* pCur);
#endif
#ifdef __cplusplus
}
#endif
...
...@@ -69,6 +69,14 @@ void tfsUpdateSize(STfs *pTfs);
 */
SDiskSize tfsGetSize(STfs *pTfs);
/**
 * @brief Get the level of multi-tier storage.
 *
 * @param pTfs The fs object.
 * @return int32_t The level of multi-tier storage.
 */
int32_t tfsGetLevel(STfs *pTfs);
/**
 * @brief Allocate an existing available tier level from fs.
 *
...
...@@ -285,6 +285,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_MND_TOPIC_SUBSCRIBED      TAOS_DEF_ERROR_CODE(0, 0x03EB)
#define TSDB_CODE_MND_CGROUP_USED           TAOS_DEF_ERROR_CODE(0, 0x03EC)
#define TSDB_CODE_MND_TOPIC_MUST_BE_DELETED TAOS_DEF_ERROR_CODE(0, 0x03ED)
#define TSDB_CODE_MND_IN_REBALANCE TAOS_DEF_ERROR_CODE(0, 0x03EF)
// mnode-stream
#define TSDB_CODE_MND_STREAM_ALREADY_EXIST  TAOS_DEF_ERROR_CODE(0, 0x03F0)
...@@ -577,6 +578,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_FUNC_FUNTION_PARA_TYPE    TAOS_DEF_ERROR_CODE(0, 0x2802)
#define TSDB_CODE_FUNC_FUNTION_PARA_VALUE   TAOS_DEF_ERROR_CODE(0, 0x2803)
#define TSDB_CODE_FUNC_NOT_BUILTIN_FUNTION  TAOS_DEF_ERROR_CODE(0, 0x2804)
#define TSDB_CODE_FUNC_DUP_TIMESTAMP TAOS_DEF_ERROR_CODE(0, 0x2805)
// udf
#define TSDB_CODE_UDF_STOPPING              TAOS_DEF_ERROR_CODE(0, 0x2901)
...@@ -617,6 +619,8 @@ int32_t* taosGetErrno();
#define TSDB_CODE_RSMA_EMPTY_INFO           TAOS_DEF_ERROR_CODE(0, 0x3156)
#define TSDB_CODE_RSMA_INVALID_SCHEMA       TAOS_DEF_ERROR_CODE(0, 0x3157)
#define TSDB_CODE_RSMA_REGEX_MATCH          TAOS_DEF_ERROR_CODE(0, 0x3158)
#define TSDB_CODE_RSMA_STREAM_STATE_OPEN TAOS_DEF_ERROR_CODE(0, 0x3159)
#define TSDB_CODE_RSMA_STREAM_STATE_COMMIT TAOS_DEF_ERROR_CODE(0, 0x3160)
// index
#define TSDB_CODE_INDEX_REBUILDING          TAOS_DEF_ERROR_CODE(0, 0x3200)
...
...@@ -300,6 +300,9 @@ typedef enum ELogicConditionType {
#define TSDB_DEFAULT_PAGES_PER_VNODE 256
#define TSDB_MIN_PAGESIZE_PER_VNODE  1  // unit KB
#define TSDB_MAX_PAGESIZE_PER_VNODE  16384
#define TSDB_DEFAULT_TSDB_PAGESIZE 4
#define TSDB_MIN_TSDB_PAGESIZE 1 // unit KB
#define TSDB_MAX_TSDB_PAGESIZE 16384
#define TSDB_DEFAULT_PAGESIZE_PER_VNODE 4
#define TSDB_MIN_DAYS_PER_FILE          60  // unit minute
#define TSDB_MAX_DAYS_PER_FILE          (3650 * 1440)
...
...@@ -31,7 +31,6 @@ typedef struct SSchedMsg {
  void *thandle;
} SSchedMsg;

typedef struct {
  char label[TSDB_LABEL_LEN];
  tsem_t emptySem;
...@@ -48,7 +47,6 @@ typedef struct {
  void *pTimer;
} SSchedQueue;

/**
 * Create a thread-safe ring-buffer based task queue and return the instance. A thread
 * pool will be created to consume the messages in the queue.
...@@ -57,7 +55,7 @@ typedef struct {
 * @param label the label of the queue
 * @return the created queue scheduler
 */
void *taosInitScheduler(int32_t capacity, int32_t numOfThreads, const char *label, SSchedQueue *pSched);

/**
 * Create a thread-safe ring-buffer based task queue and return the instance.
...@@ -83,7 +81,7 @@ void taosCleanUpScheduler(void *queueScheduler);
 * @param queueScheduler the queue scheduler instance
 * @param pMsg the message for the task
 */
int taosScheduleTask(void *queueScheduler, SSchedMsg *pMsg);

#ifdef __cplusplus
}
...
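Since taosScheduleTask now returns int instead of void, callers can notice when a task is rejected (for example during shutdown, as the tsc error handling later in this commit does). A hedged sketch of the call pattern; passing NULL for pSched so the scheduler allocates its own queue, and treating non-zero as "not enqueued", are both assumptions:

static void demoTask(SSchedMsg *pMsg) {
  // pMsg->ahandle / pMsg->thandle carry whatever context the caller stored
}

void demo(void) {
  void *pQueue = taosInitScheduler(1024, 2, "demo", NULL);  // NULL: let it allocate (assumed)
  SSchedMsg msg = {0};
  msg.fp = demoTask;  // fp is assumed to be the task callback slot
  if (taosScheduleTask(pQueue, &msg) != 0) {
    // queue unavailable; the task was not enqueued
  }
  taosCleanUpScheduler(pQueue);
}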
...@@ -840,14 +840,20 @@ function updateProduct() {
  echo
  echo -e "${GREEN_DARK}To configure ${productName} ${NC}: edit ${cfg_install_dir}/${configFile}"
  [ -f ${configDir}/taosadapter.toml ] && [ -f ${installDir}/bin/taosadapter ] && \
    echo -e "${GREEN_DARK}To configure Taos Adapter ${NC}: edit ${configDir}/taosadapter.toml"
  if ((${service_mod} == 0)); then
    echo -e "${GREEN_DARK}To start ${productName} ${NC}: ${csudo}systemctl start ${serverName}${NC}"
    [ -f ${service_config_dir}/taosadapter.service ] && [ -f ${installDir}/bin/taosadapter ] && \
      echo -e "${GREEN_DARK}To start Taos Adapter ${NC}: ${csudo}systemctl start taosadapter ${NC}"
  elif ((${service_mod} == 1)); then
    echo -e "${GREEN_DARK}To start ${productName} ${NC}: ${csudo}service ${serverName} start${NC}"
    [ -f ${service_config_dir}/taosadapter.service ] && [ -f ${installDir}/bin/taosadapter ] && \
      echo -e "${GREEN_DARK}To start Taos Adapter ${NC}: ${csudo}service taosadapter start${NC}"
  else
    echo -e "${GREEN_DARK}To start ${productName} ${NC}: ./${serverName}${NC}"
    [ -f ${installDir}/bin/taosadapter ] && \
      echo -e "${GREEN_DARK}To start Taos Adapter ${NC}: taosadapter &${NC}"
  fi

  if [ ${openresty_work} = 'true' ]; then
...@@ -926,14 +932,20 @@ function installProduct() {
  # Ask if to start the service
  echo
  echo -e "${GREEN_DARK}To configure ${productName} ${NC}: edit ${cfg_install_dir}/${configFile}"
  [ -f ${configDir}/taosadapter.toml ] && [ -f ${installDir}/bin/taosadapter ] && \
    echo -e "${GREEN_DARK}To configure Taos Adapter ${NC}: edit ${configDir}/taosadapter.toml"
  if ((${service_mod} == 0)); then
    echo -e "${GREEN_DARK}To start ${productName} ${NC}: ${csudo}systemctl start ${serverName}${NC}"
    [ -f ${service_config_dir}/taosadapter.service ] && [ -f ${installDir}/bin/taosadapter ] && \
      echo -e "${GREEN_DARK}To start Taos Adapter ${NC}: ${csudo}systemctl start taosadapter ${NC}"
  elif ((${service_mod} == 1)); then
    echo -e "${GREEN_DARK}To start ${productName} ${NC}: ${csudo}service ${serverName} start${NC}"
    [ -f ${service_config_dir}/taosadapter.service ] && [ -f ${installDir}/bin/taosadapter ] && \
      echo -e "${GREEN_DARK}To start Taos Adapter ${NC}: ${csudo}service taosadapter start${NC}"
  else
    echo -e "${GREEN_DARK}To start ${productName} ${NC}: ${serverName}${NC}"
    [ -f ${installDir}/bin/taosadapter ] && \
      echo -e "${GREEN_DARK}To start Taos Adapter ${NC}: taosadapter &${NC}"
  fi

  if [ ! -z "$firstEp" ]; then
...
...@@ -609,14 +609,20 @@ function update_TDengine() {
  echo
  echo -e "${GREEN_DARK}To configure ${productName} ${NC}: edit ${configDir}/${configFile}"
  [ -f ${configDir}/taosadapter.toml ] && [ -f ${installDir}/bin/taosadapter ] && \
    echo -e "${GREEN_DARK}To configure Taos Adapter ${NC}: edit ${configDir}/taosadapter.toml"
  if ((${service_mod} == 0)); then
    echo -e "${GREEN_DARK}To start ${productName} ${NC}: ${csudo}systemctl start ${serverName}${NC}"
    [ -f ${service_config_dir}/taosadapter.service ] && [ -f ${installDir}/bin/taosadapter ] && \
      echo -e "${GREEN_DARK}To start Taos Adapter ${NC}: ${csudo}systemctl start taosadapter ${NC}"
  elif ((${service_mod} == 1)); then
    echo -e "${GREEN_DARK}To start ${productName} ${NC}: ${csudo}service ${serverName} start${NC}"
    [ -f ${service_config_dir}/taosadapter.service ] && [ -f ${installDir}/bin/taosadapter ] && \
      echo -e "${GREEN_DARK}To start Taos Adapter ${NC}: ${csudo}service taosadapter start${NC}"
  else
    echo -e "${GREEN_DARK}To start ${productName} ${NC}: ${serverName}${NC}"
    [ -f ${installDir}/bin/taosadapter ] && \
      echo -e "${GREEN_DARK}To start Taos Adapter ${NC}: taosadapter &${NC}"
  fi

  echo -e "${GREEN_DARK}To access ${productName} ${NC}: use ${GREEN_UNDERLINE}${clientName}${NC} in shell${NC}"
...@@ -649,14 +655,20 @@ function install_TDengine() {
  echo -e "\033[44;32;1m${productName} is installed successfully!${NC}"
  echo
  echo -e "${GREEN_DARK}To configure ${productName} ${NC}: edit ${configDir}/${configFile}"
  [ -f ${configDir}/taosadapter.toml ] && [ -f ${installDir}/bin/taosadapter ] && \
    echo -e "${GREEN_DARK}To configure Taos Adapter ${NC}: edit ${configDir}/taosadapter.toml"
  if ((${service_mod} == 0)); then
    echo -e "${GREEN_DARK}To start ${productName} ${NC}: ${csudo}systemctl start ${serverName}${NC}"
    [ -f ${service_config_dir}/taosadapter.service ] && [ -f ${installDir}/bin/taosadapter ] && \
      echo -e "${GREEN_DARK}To start Taos Adapter ${NC}: ${csudo}systemctl start taosadapter ${NC}"
  elif ((${service_mod} == 1)); then
    echo -e "${GREEN_DARK}To start ${productName} ${NC}: ${csudo}service ${serverName} start${NC}"
    [ -f ${service_config_dir}/taosadapter.service ] && [ -f ${installDir}/bin/taosadapter ] && \
      echo -e "${GREEN_DARK}To start Taos Adapter ${NC}: ${csudo}service taosadapter start${NC}"
  else
    echo -e "${GREEN_DARK}To start ${productName} ${NC}: ./${serverName}${NC}"
    [ -f ${installDir}/bin/taosadapter ] && \
      echo -e "${GREEN_DARK}To start Taos Adapter ${NC}: taosadapter &${NC}"
  fi

  echo -e "${GREEN_DARK}To access ${productName} ${NC}: use ${GREEN_UNDERLINE}${clientName}${NC} in shell${NC}"
...
...@@ -212,7 +212,7 @@ JNIEXPORT void JNICALL Java_com_taosdata_jdbc_tmq_TMQConnector_tmqCommitAsync(JN
  tmq_commit_async(tmq, res, commit_cb, consumer);
}

JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_tmq_TMQConnector_tmqUnsubscribeImp(JNIEnv *env, jobject jobj, jlong jtmq) {
  tmq_t *tmq = (tmq_t *)jtmq;
  if (tmq == NULL) {
    jniError("jobj:%p, tmq is closed", jobj);
...@@ -222,7 +222,7 @@ JNIEXPORT int JNICALL Java_com_taosdata_jdbc_tmq_TMQConnector_tmqUnsubscribeImp(
  return tmq_unsubscribe((tmq_t *)tmq);
}

JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_tmq_TMQConnector_tmqConsumerCloseImp(JNIEnv *env, jobject jobj,
                                                                                   jlong jtmq) {
  tmq_t *tmq = (tmq_t *)jtmq;
  if (tmq == NULL) {
...
...@@ -173,7 +173,8 @@ static int32_t hbQueryHbRspHandle(SAppHbMgr *pAppHbMgr, SClientHbRsp *pRsp) {
      pTscObj->pAppInfo->totalDnodes = pRsp->query->totalDnodes;
      pTscObj->pAppInfo->onlineDnodes = pRsp->query->onlineDnodes;
      pTscObj->connId = pRsp->query->connId;
      tscTrace("conn %p hb rsp, dnodes %d/%d", pTscObj->connId, pTscObj->pAppInfo->onlineDnodes,
               pTscObj->pAppInfo->totalDnodes);

      if (pRsp->query->killRid) {
        tscDebug("request rid %" PRIx64 " need to be killed now", pRsp->query->killRid);
...@@ -297,7 +298,8 @@ static int32_t hbAsyncCallBack(void *param, SDataBuf *pMsg, int32_t code) {
  if (code != 0) {
    (*pInst)->onlineDnodes = ((*pInst)->totalDnodes ? 0 : -1);
    tscDebug("hb rsp error %s, update server status %d/%d", tstrerror(code), (*pInst)->onlineDnodes,
             (*pInst)->totalDnodes);
  }

  if (rspNum) {
...@@ -414,6 +416,9 @@ int32_t hbGetQueryBasicInfo(SClientHbKey *connKey, SClientHbReq *req) {
  int32_t code = hbBuildQueryDesc(hbBasic, pTscObj);
  if (code) {
    releaseTscObj(connKey->tscRid);
    if (hbBasic->queryDesc) {
      taosArrayDestroyEx(hbBasic->queryDesc, tFreeClientHbQueryDesc);
    }
    taosMemoryFree(hbBasic);
    return code;
  }
...@@ -654,6 +659,8 @@ int32_t hbGatherAppInfo(void) {
  for (int32_t i = 0; i < sz; ++i) {
    SAppHbMgr *pAppHbMgr = taosArrayGetP(clientHbMgr.appHbMgrs, i);
    if (pAppHbMgr == NULL) continue;

    uint64_t clusterId = pAppHbMgr->pAppInstInfo->clusterId;
    SAppHbReq *pApp = taosHashGet(clientHbMgr.appSummary, &clusterId, sizeof(clusterId));
    if (NULL == pApp) {
...@@ -691,15 +698,21 @@ static void *hbThreadFunc(void *param) {
      hbGatherAppInfo();
    }

    SArray *mgr = taosArrayInit(sz, sizeof(void *));
    for (int i = 0; i < sz; i++) {
      SAppHbMgr *pAppHbMgr = taosArrayGetP(clientHbMgr.appHbMgrs, i);
      if (pAppHbMgr == NULL) {
        continue;
      }

      int32_t connCnt = atomic_load_32(&pAppHbMgr->connKeyCnt);
      if (connCnt == 0) {
        taosArrayPush(mgr, &pAppHbMgr);
        continue;
      }
      SClientHbBatchReq *pReq = hbGatherAllInfo(pAppHbMgr);
      if (pReq == NULL || taosArrayGetP(clientHbMgr.appHbMgrs, i) == NULL) {
        tFreeClientHbBatchReq(pReq);
        continue;
      }
      int tlen = tSerializeSClientHbBatchReq(NULL, 0, pReq);
...@@ -708,6 +721,7 @@ static void *hbThreadFunc(void *param) {
        terrno = TSDB_CODE_TSC_OUT_OF_MEMORY;
        tFreeClientHbBatchReq(pReq);
        // hbClearReqInfo(pAppHbMgr);
        taosArrayPush(mgr, &pAppHbMgr);
        break;
      }
...@@ -719,6 +733,7 @@ static void *hbThreadFunc(void *param) {
        tFreeClientHbBatchReq(pReq);
        // hbClearReqInfo(pAppHbMgr);
        taosMemoryFree(buf);
        taosArrayPush(mgr, &pAppHbMgr);
        break;
      }
      pInfo->fp = hbAsyncCallBack;
...@@ -726,7 +741,7 @@ static void *hbThreadFunc(void *param) {
      pInfo->msgInfo.len = tlen;
      pInfo->msgType = TDMT_MND_HEARTBEAT;
      pInfo->param = strdup(pAppHbMgr->key);
      pInfo->paramFreeFp = taosMemoryFree;
      pInfo->requestId = generateRequestId();
      pInfo->requestObjRefId = 0;
...@@ -738,8 +753,12 @@ static void *hbThreadFunc(void *param) {
      // hbClearReqInfo(pAppHbMgr);
      atomic_add_fetch_32(&pAppHbMgr->reportCnt, 1);
      taosArrayPush(mgr, &pAppHbMgr);
    }

    taosArrayDestroy(clientHbMgr.appHbMgrs);
    clientHbMgr.appHbMgrs = mgr;
    taosThreadMutexUnlock(&clientHbMgr.lock);

    taosMsleep(HEARTBEAT_INTERVAL);
...@@ -831,7 +850,7 @@ void hbRemoveAppHbMrg(SAppHbMgr **pAppHbMgr) {
    if (pItem == *pAppHbMgr) {
      hbFreeAppHbMgr(*pAppHbMgr);
      *pAppHbMgr = NULL;
      taosArraySet(clientHbMgr.appHbMgrs, i, pAppHbMgr);
      break;
    }
  }
...@@ -842,6 +861,7 @@ void appHbMgrCleanup(void) {
  int sz = taosArrayGetSize(clientHbMgr.appHbMgrs);
  for (int i = 0; i < sz; i++) {
    SAppHbMgr *pTarget = taosArrayGetP(clientHbMgr.appHbMgrs, i);
    if (pTarget == NULL) continue;
    hbFreeAppHbMgr(pTarget);
  }
}
...@@ -856,7 +876,20 @@ int hbMgrInit() {
  clientHbMgr.appSummary = taosHashInit(10, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
  clientHbMgr.appHbMgrs = taosArrayInit(0, sizeof(void *));
TdThreadMutexAttr attr = {0};
int ret = taosThreadMutexAttrInit(&attr);
assert(ret == 0);
ret = taosThreadMutexAttrSetType(&attr, PTHREAD_MUTEX_RECURSIVE);
assert(ret == 0);
ret = taosThreadMutexInit(&clientHbMgr.lock, &attr);
assert(ret == 0);
ret = taosThreadMutexAttrDestroy(&attr);
assert(ret == 0);
  // init handle funcs
  hbMgrInitHandle();
...
...@@ -438,6 +438,7 @@ void setResSchemaInfo(SReqResultInfo* pResInfo, const SSchema* pSchema, int32_t
  }

  pResInfo->fields = taosMemoryCalloc(numOfCols, sizeof(TAOS_FIELD));
  pResInfo->userFields = taosMemoryCalloc(numOfCols, sizeof(TAOS_FIELD));
  ASSERT(numOfCols == pResInfo->numOfCols);

  for (int32_t i = 0; i < pResInfo->numOfCols; ++i) {
    pResInfo->fields[i].bytes = pSchema[i].bytes;
...@@ -854,6 +855,7 @@ void schedulerExecCb(SExecResult* pResult, void* param, int32_t code) {
  pRequest->metric.resultReady = taosGetTimestampUs();

  if (pResult) {
    destroyQueryExecRes(&pRequest->body.resInfo.execRes);
    memcpy(&pRequest->body.resInfo.execRes, pResult, sizeof(*pResult));
  }
...@@ -1384,6 +1386,7 @@ int32_t doProcessMsgFromServer(void* param) {
  pSendInfo->fp(pSendInfo->param, &buf, pMsg->code);
  rpcFreeCont(pMsg->pCont);
  destroySendMsgInfo(pSendInfo);
  taosMemoryFree(arg);
  return TSDB_CODE_SUCCESS;
}
...@@ -1399,7 +1402,12 @@ void processMsgFromServer(void* parent, SRpcMsg* pMsg, SEpSet* pEpSet) {
  arg->msg = *pMsg;
  arg->pEpset = tEpSet;

  if (0 != taosAsyncExec(doProcessMsgFromServer, arg, NULL)) {
    tscError("failed to sched msg to tsc, tsc ready to quit");
    rpcFreeCont(pMsg->pCont);
    taosMemoryFree(arg->pEpset);
    taosMemoryFree(arg);
  }
}

TAOS* taos_connect_auth(const char* ip, const char* user, const char* auth, const char* db, uint16_t port) {
...
...@@ -870,11 +870,13 @@ static void fetchCallback(void *pResult, void *param, int32_t code) {
  if (code != TSDB_CODE_SUCCESS) {
    pRequest->code = code;
    taosMemoryFreeClear(pResultInfo->pData);
    pRequest->body.fetchFp(pRequest->body.param, pRequest, 0);
    return;
  }

  if (pRequest->code != TSDB_CODE_SUCCESS) {
    taosMemoryFreeClear(pResultInfo->pData);
    pRequest->body.fetchFp(pRequest->body.param, pRequest, 0);
    return;
  }
...
...@@ -34,6 +34,7 @@ int32_t genericRspCallback(void* param, SDataBuf* pMsg, int32_t code) {
    removeMeta(pRequest->pTscObj, pRequest->targetTableList);
  }

  taosMemoryFree(pMsg->pEpSet);
  taosMemoryFree(pMsg->pData);
  if (pRequest->body.queryFp != NULL) {
    pRequest->body.queryFp(pRequest->body.param, pRequest, code);
...@@ -46,6 +47,7 @@ int32_t genericRspCallback(void* param, SDataBuf* pMsg, int32_t code) {
int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) {
  SRequestObj* pRequest = param;
  if (code != TSDB_CODE_SUCCESS) {
    taosMemoryFree(pMsg->pEpSet);
    taosMemoryFree(pMsg->pData);
    setErrno(pRequest, code);
    tsem_post(&pRequest->body.rspSem);
...@@ -62,6 +64,7 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) {
  if (delta > timestampDeltaLimit) {
    code = TSDB_CODE_TIME_UNSYNCED;
    tscError("time diff:%ds is too big", delta);
    taosMemoryFree(pMsg->pEpSet);
    taosMemoryFree(pMsg->pData);
    setErrno(pRequest, code);
    tsem_post(&pRequest->body.rspSem);
...@@ -70,6 +73,7 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) {
  /*assert(connectRsp.epSet.numOfEps > 0);*/
  if (connectRsp.epSet.numOfEps == 0) {
    taosMemoryFree(pMsg->pEpSet);
    taosMemoryFree(pMsg->pData);
    setErrno(pRequest, TSDB_CODE_MND_APP_ERROR);
    tsem_post(&pRequest->body.rspSem);
...@@ -114,6 +118,7 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) {
           pTscObj->pAppInfo->numOfConns);

  taosMemoryFree(pMsg->pData);
  taosMemoryFree(pMsg->pEpSet);
  tsem_post(&pRequest->body.rspSem);
  return 0;
}
...@@ -137,6 +142,7 @@ int32_t processCreateDbRsp(void* param, SDataBuf* pMsg, int32_t code) {
  // todo rsp with the vnode id list
  SRequestObj* pRequest = param;
  taosMemoryFree(pMsg->pData);
  taosMemoryFree(pMsg->pEpSet);
  if (code != TSDB_CODE_SUCCESS) {
    setErrno(pRequest, code);
  }
...@@ -173,6 +179,7 @@ int32_t processUseDbRsp(void* param, SDataBuf* pMsg, int32_t code) {
  if (code != TSDB_CODE_SUCCESS) {
    taosMemoryFree(pMsg->pData);
    taosMemoryFree(pMsg->pEpSet);
    setErrno(pRequest, code);

    if (pRequest->body.queryFp != NULL) {
...@@ -220,6 +227,7 @@ int32_t processUseDbRsp(void* param, SDataBuf* pMsg, int32_t code) {
  setConnectionDB(pRequest->pTscObj, db);

  taosMemoryFree(pMsg->pData);
  taosMemoryFree(pMsg->pEpSet);

  if (pRequest->body.queryFp != NULL) {
    pRequest->body.queryFp(pRequest->body.param, pRequest, pRequest->code);
...@@ -237,7 +245,7 @@ int32_t processCreateSTableRsp(void* param, SDataBuf* pMsg, int32_t code) {
    setErrno(pRequest, code);
  } else {
    SMCreateStbRsp createRsp = {0};
    SDecoder coder = {0};
    tDecoderInit(&coder, pMsg->pData, pMsg->len);
    tDecodeSMCreateStbRsp(&coder, &createRsp);
    tDecoderClear(&coder);
...@@ -246,6 +254,7 @@ int32_t processCreateSTableRsp(void* param, SDataBuf* pMsg, int32_t code) {
    pRequest->body.resInfo.execRes.res = createRsp.pMeta;
  }

  taosMemoryFree(pMsg->pEpSet);
  taosMemoryFree(pMsg->pData);

  if (pRequest->body.queryFp != NULL) {
...@@ -262,7 +271,7 @@ int32_t processCreateSTableRsp(void* param, SDataBuf* pMsg, int32_t code) {
        code = ret;
      }
    }
    pRequest->body.queryFp(pRequest->body.param, pRequest, code);
  } else {
    tsem_post(&pRequest->body.rspSem);
...@@ -284,6 +293,7 @@ int32_t processDropDbRsp(void* param, SDataBuf* pMsg, int32_t code) {
  }

  taosMemoryFree(pMsg->pData);
  taosMemoryFree(pMsg->pEpSet);

  if (pRequest->body.queryFp != NULL) {
    pRequest->body.queryFp(pRequest->body.param, pRequest, code);
...@@ -309,6 +319,7 @@ int32_t processAlterStbRsp(void* param, SDataBuf* pMsg, int32_t code) {
  }

  taosMemoryFree(pMsg->pData);
  taosMemoryFree(pMsg->pEpSet);

  if (pRequest->body.queryFp != NULL) {
    SExecResult* pRes = &pRequest->body.resInfo.execRes;
...@@ -420,6 +431,7 @@ int32_t processShowVariablesRsp(void* param, SDataBuf* pMsg, int32_t code) {
  }

  taosMemoryFree(pMsg->pData);
  taosMemoryFree(pMsg->pEpSet);

  if (pRequest->body.queryFp != NULL) {
    pRequest->body.queryFp(pRequest->body.param, pRequest, code);
...
...@@ -841,7 +841,7 @@ void tmqFreeImpl(void* handle) {
  int32_t sz = taosArrayGetSize(tmq->clientTopics);
  for (int32_t i = 0; i < sz; i++) {
    SMqClientTopic* pTopic = taosArrayGet(tmq->clientTopics, i);
    if (pTopic->schema.nCols) taosMemoryFreeClear(pTopic->schema.pSchema);
    int32_t vgSz = taosArrayGetSize(pTopic->vgs);
    taosArrayDestroy(pTopic->vgs);
  }
...@@ -1077,6 +1077,7 @@ int32_t tmqPollCb(void* param, SDataBuf* pMsg, int32_t code) {
      tsem_destroy(&pParam->rspSem);
      taosMemoryFree(pParam);
      taosMemoryFree(pMsg->pData);
      taosMemoryFree(pMsg->pEpSet);
      terrno = TSDB_CODE_TMQ_CONSUMER_CLOSED;
      return -1;
    }
...@@ -1115,6 +1116,7 @@ int32_t tmqPollCb(void* param, SDataBuf* pMsg, int32_t code) {
              tmqEpoch);
      tsem_post(&tmq->rspSem);
      taosMemoryFree(pMsg->pData);
      taosMemoryFree(pMsg->pEpSet);
      return 0;
    }
...@@ -1128,6 +1130,7 @@ int32_t tmqPollCb(void* param, SDataBuf* pMsg, int32_t code) {
    SMqPollRspWrapper* pRspWrapper = taosAllocateQitem(sizeof(SMqPollRspWrapper), DEF_QITEM);
    if (pRspWrapper == NULL) {
      taosMemoryFree(pMsg->pData);
      taosMemoryFree(pMsg->pEpSet);
      tscWarn("msg discard from vgId:%d, epoch %d since out of memory", vgId, epoch);
      goto CREATE_MSG_FAIL;
    }
...@@ -1164,6 +1167,7 @@ int32_t tmqPollCb(void* param, SDataBuf* pMsg, int32_t code) {
  }

  taosMemoryFree(pMsg->pData);
  taosMemoryFree(pMsg->pEpSet);
  taosWriteQitem(tmq->mqueue, pRspWrapper);
  tsem_post(&tmq->rspSem);
...@@ -1218,6 +1222,8 @@ bool tmqUpdateEp(tmq_t* tmq, int32_t epoch, SMqAskEpRsp* pRsp) {
      SMqClientTopic topic = {0};
      SMqSubTopicEp* pTopicEp = taosArrayGet(pRsp->topics, i);
      topic.schema = pTopicEp->schema;
      pTopicEp->schema.nCols = 0;
      pTopicEp->schema.pSchema = NULL;
      tstrncpy(topic.topicName, pTopicEp->topic, TSDB_TOPIC_FNAME_LEN);
      tstrncpy(topic.db, pTopicEp->db, TSDB_DB_FNAME_LEN);
...@@ -1251,7 +1257,7 @@ bool tmqUpdateEp(tmq_t* tmq, int32_t epoch, SMqAskEpRsp* pRsp) {
    int32_t sz = taosArrayGetSize(tmq->clientTopics);
    for (int32_t i = 0; i < sz; i++) {
      SMqClientTopic* pTopic = taosArrayGet(tmq->clientTopics, i);
      if (pTopic->schema.nCols) taosMemoryFreeClear(pTopic->schema.pSchema);
      int32_t vgSz = taosArrayGetSize(pTopic->vgs);
      taosArrayDestroy(pTopic->vgs);
    }
...
...@@ -102,9 +102,10 @@ static const SSysDbTableSchema userDBSchema[] = {
    {.name = "wal_retention_size", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = true},
    {.name = "wal_roll_period", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true},
    {.name = "wal_segment_size", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = true},
    {.name = "stt_trigger", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT, .sysInfo = true},
    {.name = "table_prefix", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT, .sysInfo = true},
    {.name = "table_suffix", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT, .sysInfo = true},
    {.name = "tsdb_pagesize", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true},
};

static const SSysDbTableSchema userFuncSchema[] = {
...@@ -226,8 +227,8 @@ static const SSysDbTableSchema transSchema[] = {
    {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false},
    {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false},
    {.name = "stage", .bytes = TSDB_TRANS_STAGE_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false},
    {.name = "db", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false},
    {.name = "stable", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false},
    {.name = "failed_times", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false},
    {.name = "last_exec_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false},
    {.name = "last_action_info", .bytes = (TSDB_TRANS_ERROR_LEN - 1) + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
...
...@@ -1446,6 +1446,7 @@ size_t blockDataGetCapacityInRow(const SSDataBlock* pBlock, size_t pageSize) {
  int32_t payloadSize = pageSize - blockDataGetSerialMetaSize(numOfCols);
  int32_t rowSize = pBlock->info.rowSize;
  int32_t nRows = payloadSize / rowSize;
  ASSERT(nRows >= 1);

  // the true value must be less than the value of nRows
  int32_t additional = 0;
...
...@@ -15,6 +15,7 @@
#define _DEFAULT_SOURCE
#include "tdataformat.h"
#include "tRealloc.h"
#include "tcoding.h"
#include "tdatablock.h"
#include "tlog.h"
...@@ -680,7 +681,7 @@ int32_t tGetTSRow(uint8_t *p, STSRow2 **ppRow) {
  return n;
}

// STSchema ========================================
int32_t tTSchemaCreate(int32_t sver, SSchema *pSchema, int32_t ncols, STSchema **ppTSchema) {
  *ppTSchema = (STSchema *)taosMemoryMalloc(sizeof(STSchema) + sizeof(STColumn) * ncols);
  if (*ppTSchema == NULL) {
...@@ -720,9 +721,7 @@ void tTSchemaDestroy(STSchema *pTSchema) {
  if (pTSchema) taosMemoryFree(pTSchema);
}
// STag ========================================
static int tTagValCmprFn(const void *p1, const void *p2) {
  if (((STagVal *)p1)->cid < ((STagVal *)p2)->cid) {
    return -1;
...@@ -1172,4 +1171,495 @@ STSchema *tdGetSchemaFromBuilder(STSchemaBuilder *pBuilder) {
  return pSchema;
}
#endif
\ No newline at end of file
// SColData ========================================
void tColDataDestroy(void *ph) {
SColData *pColData = (SColData *)ph;
tFree(pColData->pBitMap);
tFree((uint8_t *)pColData->aOffset);
tFree(pColData->pData);
}
void tColDataInit(SColData *pColData, int16_t cid, int8_t type, int8_t smaOn) {
pColData->cid = cid;
pColData->type = type;
pColData->smaOn = smaOn;
tColDataClear(pColData);
}
void tColDataClear(SColData *pColData) {
pColData->nVal = 0;
pColData->flag = 0;
pColData->nData = 0;
}
static FORCE_INLINE int32_t tColDataPutValue(SColData *pColData, SColVal *pColVal) {
int32_t code = 0;
if (IS_VAR_DATA_TYPE(pColData->type)) {
code = tRealloc((uint8_t **)(&pColData->aOffset), sizeof(int32_t) * (pColData->nVal + 1));
if (code) goto _exit;
pColData->aOffset[pColData->nVal] = pColData->nData;
if (pColVal->value.nData) {
code = tRealloc(&pColData->pData, pColData->nData + pColVal->value.nData);
if (code) goto _exit;
memcpy(pColData->pData + pColData->nData, pColVal->value.pData, pColVal->value.nData);
pColData->nData += pColVal->value.nData;
}
} else {
ASSERT(pColData->nData == tDataTypes[pColData->type].bytes * pColData->nVal);
code = tRealloc(&pColData->pData, pColData->nData + tDataTypes[pColData->type].bytes);
if (code) goto _exit;
pColData->nData += tPutValue(pColData->pData + pColData->nData, &pColVal->value, pColVal->type);
}
_exit:
return code;
}
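The eight tColDataAppendValueN helpers below are dispatched on pColData->flag, one per combination of HAS_NONE, HAS_NULL and HAS_VALUE already seen in the column. The per-row bitmap they maintain, summarized from the code that follows:

/*
 * Bitmap encoding (inferred from the code below):
 *   one state only        - no bitmap; every row is NONE, NULL, or a value
 *   exactly two states    - 1 bit per row (BIT1_SIZE); e.g. for
 *                           HAS_NULL|HAS_NONE: 0 = NONE, 1 = NULL
 *   all three states      - 2 bits per row (BIT2_SIZE):
 *                           0 = NONE, 1 = NULL, 2 = VALUE
 * Upgrading from two states to three rewrites the 1-bit map into the
 * 2-bit encoding, as in tColDataAppendValue3/5/6.
 */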
static FORCE_INLINE int32_t tColDataAppendValue0(SColData *pColData, SColVal *pColVal) { // 0
int32_t code = 0;
if (pColVal->isNone) {
pColData->flag = HAS_NONE;
} else if (pColVal->isNull) {
pColData->flag = HAS_NULL;
} else {
pColData->flag = HAS_VALUE;
code = tColDataPutValue(pColData, pColVal);
if (code) goto _exit;
}
pColData->nVal++;
_exit:
return code;
}
static FORCE_INLINE int32_t tColDataAppendValue1(SColData *pColData, SColVal *pColVal) { // HAS_NONE
int32_t code = 0;
if (!pColVal->isNone) {
int32_t nBit = BIT1_SIZE(pColData->nVal + 1);
code = tRealloc(&pColData->pBitMap, nBit);
if (code) goto _exit;
memset(pColData->pBitMap, 0, nBit);
SET_BIT1(pColData->pBitMap, pColData->nVal, 1);
if (pColVal->isNull) {
pColData->flag |= HAS_NULL;
} else {
pColData->flag |= HAS_VALUE;
if (pColData->nVal) {
if (IS_VAR_DATA_TYPE(pColData->type)) {
int32_t nOffset = sizeof(int32_t) * pColData->nVal;
code = tRealloc((uint8_t **)(&pColData->aOffset), nOffset);
if (code) goto _exit;
memset(pColData->aOffset, 0, nOffset);
} else {
pColData->nData = tDataTypes[pColData->type].bytes * pColData->nVal;
code = tRealloc(&pColData->pData, pColData->nData);
if (code) goto _exit;
memset(pColData->pData, 0, pColData->nData);
}
}
code = tColDataPutValue(pColData, pColVal);
if (code) goto _exit;
}
}
pColData->nVal++;
_exit:
return code;
}
static FORCE_INLINE int32_t tColDataAppendValue2(SColData *pColData, SColVal *pColVal) { // HAS_NULL
int32_t code = 0;
if (!pColVal->isNull) {
int32_t nBit = BIT1_SIZE(pColData->nVal + 1);
code = tRealloc(&pColData->pBitMap, nBit);
if (code) goto _exit;
if (pColVal->isNone) {
pColData->flag |= HAS_NONE;
memset(pColData->pBitMap, 255, nBit);
SET_BIT1(pColData->pBitMap, pColData->nVal, 0);
} else {
pColData->flag |= HAS_VALUE;
memset(pColData->pBitMap, 0, nBit);
SET_BIT1(pColData->pBitMap, pColData->nVal, 1);
if (pColData->nVal) {
if (IS_VAR_DATA_TYPE(pColData->type)) {
int32_t nOffset = sizeof(int32_t) * pColData->nVal;
code = tRealloc((uint8_t **)(&pColData->aOffset), nOffset);
if (code) goto _exit;
memset(pColData->aOffset, 0, nOffset);
} else {
pColData->nData = tDataTypes[pColData->type].bytes * pColData->nVal;
code = tRealloc(&pColData->pData, pColData->nData);
if (code) goto _exit;
memset(pColData->pData, 0, pColData->nData);
}
}
code = tColDataPutValue(pColData, pColVal);
if (code) goto _exit;
}
}
pColData->nVal++;
_exit:
return code;
}
static FORCE_INLINE int32_t tColDataAppendValue3(SColData *pColData, SColVal *pColVal) { // HAS_NULL|HAS_NONE
int32_t code = 0;
if (pColVal->isNone) {
code = tRealloc(&pColData->pBitMap, BIT1_SIZE(pColData->nVal + 1));
if (code) goto _exit;
SET_BIT1(pColData->pBitMap, pColData->nVal, 0);
} else if (pColVal->isNull) {
code = tRealloc(&pColData->pBitMap, BIT1_SIZE(pColData->nVal + 1));
if (code) goto _exit;
SET_BIT1(pColData->pBitMap, pColData->nVal, 1);
} else {
pColData->flag |= HAS_VALUE;
uint8_t *pBitMap = NULL;
code = tRealloc(&pBitMap, BIT2_SIZE(pColData->nVal + 1));
if (code) goto _exit;
for (int32_t iVal = 0; iVal < pColData->nVal; iVal++) {
SET_BIT2(pBitMap, iVal, GET_BIT1(pColData->pBitMap, iVal));
}
SET_BIT2(pBitMap, pColData->nVal, 2);
tFree(pColData->pBitMap);
pColData->pBitMap = pBitMap;
if (pColData->nVal) {
if (IS_VAR_DATA_TYPE(pColData->type)) {
int32_t nOffset = sizeof(int32_t) * pColData->nVal;
code = tRealloc((uint8_t **)(&pColData->aOffset), nOffset);
if (code) goto _exit;
memset(pColData->aOffset, 0, nOffset);
} else {
pColData->nData = tDataTypes[pColData->type].bytes * pColData->nVal;
code = tRealloc(&pColData->pData, pColData->nData);
if (code) goto _exit;
memset(pColData->pData, 0, pColData->nData);
}
}
code = tColDataPutValue(pColData, pColVal);
if (code) goto _exit;
}
pColData->nVal++;
_exit:
return code;
}
static FORCE_INLINE int32_t tColDataAppendValue4(SColData *pColData, SColVal *pColVal) { // HAS_VALUE
int32_t code = 0;
if (pColVal->isNone || pColVal->isNull) {
if (pColVal->isNone) {
pColData->flag |= HAS_NONE;
} else {
pColData->flag |= HAS_NULL;
}
int32_t nBit = BIT1_SIZE(pColData->nVal + 1);
code = tRealloc(&pColData->pBitMap, nBit);
if (code) goto _exit;
memset(pColData->pBitMap, 255, nBit);
SET_BIT1(pColData->pBitMap, pColData->nVal, 0);
code = tColDataPutValue(pColData, pColVal);
if (code) goto _exit;
} else {
code = tColDataPutValue(pColData, pColVal);
if (code) goto _exit;
}
pColData->nVal++;
_exit:
return code;
}
static FORCE_INLINE int32_t tColDataAppendValue5(SColData *pColData, SColVal *pColVal) { // HAS_VALUE|HAS_NONE
int32_t code = 0;
if (pColVal->isNull) {
pColData->flag |= HAS_NULL;
uint8_t *pBitMap = NULL;
code = tRealloc(&pBitMap, BIT2_SIZE(pColData->nVal + 1));
if (code) goto _exit;
for (int32_t iVal = 0; iVal < pColData->nVal; iVal++) {
SET_BIT2(pBitMap, iVal, GET_BIT1(pColData->pBitMap, iVal) ? 2 : 0);
}
SET_BIT2(pBitMap, pColData->nVal, 1);
tFree(pColData->pBitMap);
pColData->pBitMap = pBitMap;
} else {
code = tRealloc(&pColData->pBitMap, BIT1_SIZE(pColData->nVal + 1));
if (code) goto _exit;
if (pColVal->isNone) {
SET_BIT1(pColData->pBitMap, pColData->nVal, 0);
} else {
SET_BIT1(pColData->pBitMap, pColData->nVal, 1);
}
}
code = tColDataPutValue(pColData, pColVal);
if (code) goto _exit;
pColData->nVal++;
_exit:
return code;
}
static FORCE_INLINE int32_t tColDataAppendValue6(SColData *pColData, SColVal *pColVal) { // HAS_VALUE|HAS_NULL
int32_t code = 0;
if (pColVal->isNone) {
pColData->flag |= HAS_NONE;
uint8_t *pBitMap = NULL;
code = tRealloc(&pBitMap, BIT2_SIZE(pColData->nVal + 1));
if (code) goto _exit;
for (int32_t iVal = 0; iVal < pColData->nVal; iVal++) {
SET_BIT2(pBitMap, iVal, GET_BIT1(pColData->pBitMap, iVal) ? 2 : 1);
}
SET_BIT2(pBitMap, pColData->nVal, 0);
tFree(pColData->pBitMap);
pColData->pBitMap = pBitMap;
} else {
code = tRealloc(&pColData->pBitMap, BIT1_SIZE(pColData->nVal + 1));
if (code) goto _exit;
if (pColVal->isNull) {
SET_BIT1(pColData->pBitMap, pColData->nVal, 0);
} else {
SET_BIT1(pColData->pBitMap, pColData->nVal, 1);
}
}
code = tColDataPutValue(pColData, pColVal);
if (code) goto _exit;
pColData->nVal++;
_exit:
return code;
}
static FORCE_INLINE int32_t tColDataAppendValue7(SColData *pColData,
SColVal *pColVal) { // HAS_VALUE|HAS_NULL|HAS_NONE
int32_t code = 0;
code = tRealloc(&pColData->pBitMap, BIT2_SIZE(pColData->nVal + 1));
if (code) goto _exit;
if (pColVal->isNone) {
SET_BIT2(pColData->pBitMap, pColData->nVal, 0);
} else if (pColVal->isNull) {
SET_BIT2(pColData->pBitMap, pColData->nVal, 1);
} else {
SET_BIT2(pColData->pBitMap, pColData->nVal, 2);
}
code = tColDataPutValue(pColData, pColVal);
if (code) goto _exit;
pColData->nVal++;
_exit:
return code;
}
static int32_t (*tColDataAppendValueImpl[])(SColData *pColData, SColVal *pColVal) = {
tColDataAppendValue0, // 0
tColDataAppendValue1, // HAS_NONE
tColDataAppendValue2, // HAS_NULL
tColDataAppendValue3, // HAS_NULL|HAS_NONE
tColDataAppendValue4, // HAS_VALUE
tColDataAppendValue5, // HAS_VALUE|HAS_NONE
tColDataAppendValue6, // HAS_VALUE|HAS_NULL
tColDataAppendValue7 // HAS_VALUE|HAS_NULL|HAS_NONE
};
int32_t tColDataAppendValue(SColData *pColData, SColVal *pColVal) {
ASSERT(pColData->cid == pColVal->cid && pColData->type == pColVal->type);
return tColDataAppendValueImpl[pColData->flag](pColData, pColVal);
}
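// Usage sketch (illustrative only, not part of the original file): appending a
// NULL and then a real value to a fresh column walks the state machine
// 0 -> HAS_NULL -> HAS_VALUE|HAS_NULL, allocating the validity bitmap on the
// second append. How pCol and value are initialized depends on helpers outside
// this hunk:
//
//   SColVal nv = COL_VAL_NULL(pCol->cid, pCol->type);
//   if (tColDataAppendValue(pCol, &nv)) goto _err;  // flag: 0 -> HAS_NULL
//   SColVal vv = COL_VAL_VALUE(pCol->cid, pCol->type, value);
//   if (tColDataAppendValue(pCol, &vv)) goto _err;  // flag: HAS_NULL -> HAS_VALUE|HAS_NULL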
static FORCE_INLINE void tColDataGetValue1(SColData *pColData, int32_t iVal, SColVal *pColVal) { // HAS_NONE
*pColVal = COL_VAL_NONE(pColData->cid, pColData->type);
}
static FORCE_INLINE void tColDataGetValue2(SColData *pColData, int32_t iVal, SColVal *pColVal) { // HAS_NULL
*pColVal = COL_VAL_NULL(pColData->cid, pColData->type);
}
static FORCE_INLINE void tColDataGetValue3(SColData *pColData, int32_t iVal, SColVal *pColVal) { // HAS_NULL|HAS_NONE
switch (GET_BIT1(pColData->pBitMap, iVal)) {
case 0:
*pColVal = COL_VAL_NONE(pColData->cid, pColData->type);
break;
case 1:
*pColVal = COL_VAL_NULL(pColData->cid, pColData->type);
break;
default:
ASSERT(0);
}
}
static FORCE_INLINE void tColDataGetValue4(SColData *pColData, int32_t iVal, SColVal *pColVal) { // HAS_VALUE
SValue value;
if (IS_VAR_DATA_TYPE(pColData->type)) {
if (iVal + 1 < pColData->nVal) {
value.nData = pColData->aOffset[iVal + 1] - pColData->aOffset[iVal];
} else {
value.nData = pColData->nData - pColData->aOffset[iVal];
}
value.pData = pColData->pData + pColData->aOffset[iVal];
} else {
tGetValue(pColData->pData + tDataTypes[pColData->type].bytes * iVal, &value, pColData->type);
}
*pColVal = COL_VAL_VALUE(pColData->cid, pColData->type, value);
}
static FORCE_INLINE void tColDataGetValue5(SColData *pColData, int32_t iVal,
SColVal *pColVal) { // HAS_VALUE|HAS_NONE
switch (GET_BIT1(pColData->pBitMap, iVal)) {
case 0:
*pColVal = COL_VAL_NONE(pColData->cid, pColData->type);
break;
case 1:
tColDataGetValue4(pColData, iVal, pColVal);
break;
default:
ASSERT(0);
}
}
static FORCE_INLINE void tColDataGetValue6(SColData *pColData, int32_t iVal,
SColVal *pColVal) { // HAS_VALUE|HAS_NULL
switch (GET_BIT1(pColData->pBitMap, iVal)) {
case 0:
*pColVal = COL_VAL_NULL(pColData->cid, pColData->type);
break;
case 1:
tColDataGetValue4(pColData, iVal, pColVal);
break;
default:
ASSERT(0);
}
}
static FORCE_INLINE void tColDataGetValue7(SColData *pColData, int32_t iVal,
SColVal *pColVal) { // HAS_VALUE|HAS_NULL|HAS_NONE
switch (GET_BIT2(pColData->pBitMap, iVal)) {
case 0:
*pColVal = COL_VAL_NONE(pColData->cid, pColData->type);
break;
case 1:
*pColVal = COL_VAL_NULL(pColData->cid, pColData->type);
break;
case 2:
tColDataGetValue4(pColData, iVal, pColVal);
break;
default:
ASSERT(0);
}
}
static void (*tColDataGetValueImpl[])(SColData *pColData, int32_t iVal, SColVal *pColVal) = {
NULL, // 0
tColDataGetValue1, // HAS_NONE
tColDataGetValue2, // HAS_NULL
tColDataGetValue3, // HAS_NULL | HAS_NONE
tColDataGetValue4, // HAS_VALUE
tColDataGetValue5, // HAS_VALUE | HAS_NONE
tColDataGetValue6, // HAS_VALUE | HAS_NULL
tColDataGetValue7 // HAS_VALUE | HAS_NULL | HAS_NONE
};
void tColDataGetValue(SColData *pColData, int32_t iVal, SColVal *pColVal) {
ASSERT(iVal >= 0 && iVal < pColData->nVal && pColData->flag);
tColDataGetValueImpl[pColData->flag](pColData, iVal, pColVal);
}
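// The get path mirrors the append path: flag selects the decoder, and for
// var-length types a value's size is recovered from consecutive aOffset
// entries (or from nData for the last row), so no per-value length is stored.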
uint8_t tColDataGetBitValue(SColData *pColData, int32_t iVal) {
uint8_t v;
switch (pColData->flag) {
case HAS_NONE:
v = 0;
break;
case HAS_NULL:
v = 1;
break;
case (HAS_NULL | HAS_NONE):
v = GET_BIT1(pColData->pBitMap, iVal);
break;
case HAS_VALUE:
v = 2;
break;
case (HAS_VALUE | HAS_NONE):
v = GET_BIT1(pColData->pBitMap, iVal);
if (v) v = 2;
break;
case (HAS_VALUE | HAS_NULL):
v = GET_BIT1(pColData->pBitMap, iVal) + 1;
break;
case (HAS_VALUE | HAS_NULL | HAS_NONE):
v = GET_BIT2(pColData->pBitMap, iVal);
break;
default:
ASSERT(0);
break;
}
return v;
}
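// tColDataGetBitValue normalizes every flag layout to the 2-bit encoding used
// by the three-state bitmap: 0 = NONE, 1 = NULL, 2 = VALUE. For example, with
// flag == (HAS_VALUE|HAS_NONE) the 1-bit map only distinguishes NONE from
// VALUE, so a set bit is promoted from 1 to 2 before being returned.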
int32_t tColDataCopy(SColData *pColDataSrc, SColData *pColDataDest) {
int32_t code = 0;
int32_t size;
ASSERT(pColDataSrc->nVal > 0);
  ASSERT(pColDataDest->cid == pColDataSrc->cid);
  ASSERT(pColDataDest->type == pColDataSrc->type);
pColDataDest->smaOn = pColDataSrc->smaOn;
pColDataDest->nVal = pColDataSrc->nVal;
pColDataDest->flag = pColDataSrc->flag;
// bitmap
if (pColDataSrc->flag != HAS_NONE && pColDataSrc->flag != HAS_NULL && pColDataSrc->flag != HAS_VALUE) {
size = BIT2_SIZE(pColDataSrc->nVal);
code = tRealloc(&pColDataDest->pBitMap, size);
if (code) goto _exit;
memcpy(pColDataDest->pBitMap, pColDataSrc->pBitMap, size);
}
// offset
if (IS_VAR_DATA_TYPE(pColDataDest->type)) {
size = sizeof(int32_t) * pColDataSrc->nVal;
code = tRealloc((uint8_t **)&pColDataDest->aOffset, size);
if (code) goto _exit;
memcpy(pColDataDest->aOffset, pColDataSrc->aOffset, size);
}
// value
pColDataDest->nData = pColDataSrc->nData;
code = tRealloc(&pColDataDest->pData, pColDataSrc->nData);
if (code) goto _exit;
memcpy(pColDataDest->pData, pColDataSrc->pData, pColDataDest->nData);
_exit:
return code;
}
\ No newline at end of file
@@ -63,7 +63,7 @@ int32_t tsNumOfVnodeWriteThreads = 2;
 int32_t tsNumOfVnodeSyncThreads = 2;
 int32_t tsNumOfVnodeRsmaThreads = 2;
 int32_t tsNumOfQnodeQueryThreads = 4;
-int32_t tsNumOfQnodeFetchThreads = 4;
+int32_t tsNumOfQnodeFetchThreads = 1;
 int32_t tsNumOfSnodeSharedThreads = 2;
 int32_t tsNumOfSnodeUniqueThreads = 2;
@@ -163,6 +163,7 @@ int32_t tsTtlUnit = 86400;
 int32_t tsTtlPushInterval = 86400;
 int32_t tsGrantHBInterval = 60;
 int32_t tsUptimeInterval = 300;  // seconds
+char    tsUdfdResFuncs[1024] = "";  // udfd resident funcs that teardown when udfd exits
 
 #ifndef _STORAGE
 int32_t taosSetTfsCfg(SConfig *pCfg) {
@@ -385,9 +386,9 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
   tsNumOfQnodeQueryThreads = TMAX(tsNumOfQnodeQueryThreads, 4);
   if (cfgAddInt32(pCfg, "numOfQnodeQueryThreads", tsNumOfQnodeQueryThreads, 1, 1024, 0) != 0) return -1;
 
-  tsNumOfQnodeFetchThreads = tsNumOfCores / 2;
-  tsNumOfQnodeFetchThreads = TMAX(tsNumOfQnodeFetchThreads, 4);
-  if (cfgAddInt32(pCfg, "numOfQnodeFetchThreads", tsNumOfQnodeFetchThreads, 1, 1024, 0) != 0) return -1;
+  // tsNumOfQnodeFetchThreads = tsNumOfCores / 2;
+  // tsNumOfQnodeFetchThreads = TMAX(tsNumOfQnodeFetchThreads, 4);
+  // if (cfgAddInt32(pCfg, "numOfQnodeFetchThreads", tsNumOfQnodeFetchThreads, 1, 1024, 0) != 0) return -1;
 
   tsNumOfSnodeSharedThreads = tsNumOfCores / 4;
   tsNumOfSnodeSharedThreads = TRANGE(tsNumOfSnodeSharedThreads, 2, 4);
@@ -421,6 +422,7 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
   if (cfgAddInt32(pCfg, "uptimeInterval", tsUptimeInterval, 1, 100000, 1) != 0) return -1;
 
   if (cfgAddBool(pCfg, "udf", tsStartUdfd, 0) != 0) return -1;
+  if (cfgAddString(pCfg, "udfdResFuncs", tsUdfdResFuncs, 0) != 0) return -1;
   GRANT_CFG_ADD;
   return 0;
 }
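The new udfdResFuncs option lists UDFs that stay resident in the udfd process and are only torn down when udfd exits. A minimal sketch of how it might be set in taos.cfg, assuming the value is a comma-separated list of function names (the exact format is not shown in this diff):

  # taos.cfg (sketch; option name from this commit, value format assumed)
  udfdResFuncs    bit_and,l2norm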
@@ -527,13 +529,15 @@ static int32_t taosUpdateServerCfg(SConfig *pCfg) {
     pItem->stype = stype;
   }
 
-  pItem = cfgGetItem(tsCfg, "numOfQnodeFetchThreads");
-  if (pItem != NULL && pItem->stype == CFG_STYPE_DEFAULT) {
-    tsNumOfQnodeFetchThreads = numOfCores / 2;
-    tsNumOfQnodeFetchThreads = TMAX(tsNumOfQnodeFetchThreads, 4);
-    pItem->i32 = tsNumOfQnodeFetchThreads;
-    pItem->stype = stype;
-  }
+  /*
+  pItem = cfgGetItem(tsCfg, "numOfQnodeFetchThreads");
+  if (pItem != NULL && pItem->stype == CFG_STYPE_DEFAULT) {
+    tsNumOfQnodeFetchThreads = numOfCores / 2;
+    tsNumOfQnodeFetchThreads = TMAX(tsNumOfQnodeFetchThreads, 4);
+    pItem->i32 = tsNumOfQnodeFetchThreads;
+    pItem->stype = stype;
+  }
+  */
 
   pItem = cfgGetItem(tsCfg, "numOfSnodeSharedThreads");
   if (pItem != NULL && pItem->stype == CFG_STYPE_DEFAULT) {
@@ -691,7 +695,7 @@ static int32_t taosSetServerCfg(SConfig *pCfg) {
   tsNumOfVnodeSyncThreads = cfgGetItem(pCfg, "numOfVnodeSyncThreads")->i32;
   tsNumOfVnodeRsmaThreads = cfgGetItem(pCfg, "numOfVnodeRsmaThreads")->i32;
   tsNumOfQnodeQueryThreads = cfgGetItem(pCfg, "numOfQnodeQueryThreads")->i32;
-  tsNumOfQnodeFetchThreads = cfgGetItem(pCfg, "numOfQnodeFetchThreads")->i32;
+  // tsNumOfQnodeFetchThreads = cfgGetItem(pCfg, "numOfQnodeFetchThreads")->i32;
   tsNumOfSnodeSharedThreads = cfgGetItem(pCfg, "numOfSnodeSharedThreads")->i32;
   tsNumOfSnodeUniqueThreads = cfgGetItem(pCfg, "numOfSnodeUniqueThreads")->i32;
   tsRpcQueueMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64;
@@ -715,6 +719,7 @@ static int32_t taosSetServerCfg(SConfig *pCfg) {
   tsUptimeInterval = cfgGetItem(pCfg, "uptimeInterval")->i32;
 
   tsStartUdfd = cfgGetItem(pCfg, "udf")->bval;
+  tstrncpy(tsUdfdResFuncs, cfgGetItem(pCfg, "udfdResFuncs")->str, sizeof(tsUdfdResFuncs));
 
   if (tsQueryBufferSize >= 0) {
     tsQueryBufferSizeBytes = tsQueryBufferSize * 1048576UL;
@@ -939,8 +944,10 @@ int32_t taosSetCfg(SConfig *pCfg, char *name) {
     tsNumOfVnodeRsmaThreads = cfgGetItem(pCfg, "numOfVnodeRsmaThreads")->i32;
   } else if (strcasecmp("numOfQnodeQueryThreads", name) == 0) {
     tsNumOfQnodeQueryThreads = cfgGetItem(pCfg, "numOfQnodeQueryThreads")->i32;
+  /*
   } else if (strcasecmp("numOfQnodeFetchThreads", name) == 0) {
     tsNumOfQnodeFetchThreads = cfgGetItem(pCfg, "numOfQnodeFetchThreads")->i32;
+  */
   } else if (strcasecmp("numOfSnodeSharedThreads", name) == 0) {
     tsNumOfSnodeSharedThreads = cfgGetItem(pCfg, "numOfSnodeSharedThreads")->i32;
   } else if (strcasecmp("numOfSnodeUniqueThreads", name) == 0) {
...
@@ -2038,6 +2038,7 @@ int32_t tSerializeSCreateDbReq(void *buf, int32_t bufLen, SCreateDbReq *pReq) {
     if (tEncodeI8(&encoder, pRetension->freqUnit) < 0) return -1;
     if (tEncodeI8(&encoder, pRetension->keepUnit) < 0) return -1;
   }
+  if (tEncodeI32(&encoder, pReq->tsdbPageSize) < 0) return -1;
   tEndEncode(&encoder);
 
   int32_t tlen = encoder.pos;
@@ -2098,6 +2099,8 @@ int32_t tDeserializeSCreateDbReq(void *buf, int32_t bufLen, SCreateDbReq *pReq)
     }
   }
 
+  if (tDecodeI32(&decoder, &pReq->tsdbPageSize) < 0) return -1;
+
   tEndDecode(&decoder);
 
   tDecoderClear(&decoder);
@@ -3344,7 +3347,13 @@ int32_t tDeserializeSSTbHbRsp(void *buf, int32_t bufLen, SSTbHbRsp *pRsp) {
   return 0;
 }
 
-void tFreeSTableMetaRsp(void *pRsp) { taosMemoryFreeClear(((STableMetaRsp *)pRsp)->pSchemas); }
+void tFreeSTableMetaRsp(void *pRsp) {
+  if (NULL == pRsp) {
+    return;
+  }
+
+  taosMemoryFreeClear(((STableMetaRsp *)pRsp)->pSchemas);
+}
 
 void tFreeSTableIndexRsp(void *info) {
   if (NULL == info) {
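The added NULL guard makes tFreeSTableMetaRsp safe to call on a response block whose pMeta was never populated, which the tFreeSSubmitRsp change later in this commit relies on:

  tFreeSTableMetaRsp(NULL);  // now a harmless no-op instead of a NULL dereference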
@@ -3779,6 +3788,7 @@ int32_t tSerializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *pR
   if (tEncodeI16(&encoder, pReq->sstTrigger) < 0) return -1;
   if (tEncodeI16(&encoder, pReq->hashPrefix) < 0) return -1;
   if (tEncodeI16(&encoder, pReq->hashSuffix) < 0) return -1;
+  if (tEncodeI32(&encoder, pReq->tsdbPageSize) < 0) return -1;
 
   tEndEncode(&encoder);
@@ -3854,6 +3864,7 @@ int32_t tDeserializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *
   if (tDecodeI16(&decoder, &pReq->sstTrigger) < 0) return -1;
   if (tDecodeI16(&decoder, &pReq->hashPrefix) < 0) return -1;
   if (tDecodeI16(&decoder, &pReq->hashSuffix) < 0) return -1;
+  if (tDecodeI32(&decoder, &pReq->tsdbPageSize) < 0) return -1;
 
   tEndDecode(&decoder);
   tDecoderClear(&decoder);
@@ -4718,9 +4729,8 @@ int32_t tSerializeSVDeleteReq(void *buf, int32_t bufLen, SVDeleteReq *pReq) {
   if (tEncodeU64(&encoder, pReq->queryId) < 0) return -1;
   if (tEncodeU64(&encoder, pReq->taskId) < 0) return -1;
   if (tEncodeU32(&encoder, pReq->sqlLen) < 0) return -1;
-  if (tEncodeU32(&encoder, pReq->phyLen) < 0) return -1;
   if (tEncodeCStr(&encoder, pReq->sql) < 0) return -1;
-  if (tEncodeCStr(&encoder, pReq->msg) < 0) return -1;
+  if (tEncodeBinary(&encoder, pReq->msg, pReq->phyLen) < 0) return -1;
   tEndEncode(&encoder);
 
   int32_t tlen = encoder.pos;
@@ -4750,13 +4760,12 @@ int32_t tDeserializeSVDeleteReq(void *buf, int32_t bufLen, SVDeleteReq *pReq) {
   if (tDecodeU64(&decoder, &pReq->queryId) < 0) return -1;
   if (tDecodeU64(&decoder, &pReq->taskId) < 0) return -1;
   if (tDecodeU32(&decoder, &pReq->sqlLen) < 0) return -1;
-  if (tDecodeU32(&decoder, &pReq->phyLen) < 0) return -1;
   pReq->sql = taosMemoryCalloc(1, pReq->sqlLen + 1);
   if (NULL == pReq->sql) return -1;
-  pReq->msg = taosMemoryCalloc(1, pReq->phyLen + 1);
-  if (NULL == pReq->msg) return -1;
   if (tDecodeCStrTo(&decoder, pReq->sql) < 0) return -1;
-  if (tDecodeCStrTo(&decoder, pReq->msg) < 0) return -1;
+  uint64_t msgLen = 0;
+  if (tDecodeBinaryAlloc(&decoder, (void **)&pReq->msg, &msgLen) < 0) return -1;
+  pReq->phyLen = msgLen;
 
   tEndDecode(&decoder);
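Switching the physical-plan payload from the C-string codec to the binary codec matters because the serialized plan is opaque binary and may contain embedded '\0' bytes, at which tEncodeCStr/tDecodeCStrTo would truncate it. The round trip now uses only the length-prefixed calls visible in this hunk:

  if (tEncodeBinary(&encoder, pReq->msg, pReq->phyLen) < 0) return -1;             // writes length + bytes
  uint64_t msgLen = 0;
  if (tDecodeBinaryAlloc(&decoder, (void **)&pReq->msg, &msgLen) < 0) return -1;   // allocates pReq->msg
  pReq->phyLen = msgLen;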
@@ -5436,6 +5445,8 @@ void tFreeSSubmitRsp(SSubmitRsp *pRsp) {
   for (int32_t i = 0; i < pRsp->nBlocks; ++i) {
     SSubmitBlkRsp *sRsp = pRsp->pBlocks + i;
     taosMemoryFree(sRsp->tblFName);
+    tFreeSTableMetaRsp(sRsp->pMeta);
+    taosMemoryFree(sRsp->pMeta);
   }
 
   taosMemoryFree(pRsp->pBlocks);
...
@@ -173,6 +173,7 @@ static void vmGenerateVnodeCfg(SCreateVnodeReq *pCreate, SVnodeCfg *pCfg) {
   pCfg->hashMethod = pCreate->hashMethod;
   pCfg->hashPrefix = pCreate->hashPrefix;
   pCfg->hashSuffix = pCreate->hashSuffix;
+  pCfg->tsdbPageSize = pCreate->tsdbPageSize * 1024;
   pCfg->standby = pCfg->standby;
   pCfg->syncCfg.myIndex = pCreate->selfIndex;
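The multiplication by 1024 suggests the create-vnode request carries tsdbPageSize in KB while the vnode config stores it in bytes:

  pCfg->tsdbPageSize = pCreate->tsdbPageSize * 1024;  // e.g. 4 (KB) -> 4096 (bytes)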
@@ -222,9 +223,11 @@ int32_t vmProcessCreateVnodeReq(SVnodeMgmt *pMgmt, SRpcMsg *pMsg) {
     return -1;
   }
 
-  dInfo("vgId:%d, start to create vnode, tsma:%d standby:%d cacheLast:%d cacheLastSize:%d sstTrigger:%d",
-        createReq.vgId, createReq.isTsma, createReq.standby, createReq.cacheLast, createReq.cacheLastSize,
-        createReq.sstTrigger);
+  dInfo(
+      "vgId:%d, start to create vnode, tsma:%d standby:%d cacheLast:%d cacheLastSize:%d sstTrigger:%d "
+      "tsdbPageSize:%d",
+      createReq.vgId, createReq.isTsma, createReq.standby, createReq.cacheLast, createReq.cacheLastSize,
+      createReq.sstTrigger, createReq.tsdbPageSize);
   dInfo("vgId:%d, hashMethod:%d begin:%u end:%u prefix:%d surfix:%d", createReq.vgId, createReq.hashMethod,
         createReq.hashBegin, createReq.hashEnd, createReq.hashPrefix, createReq.hashSuffix);
 
   vmGenerateVnodeCfg(&createReq, &vnodeCfg);
...
(The remaining file diffs in this commit are collapsed.)