Commit 7d7a0769 authored by haoranc

Merge remote-tracking branch 'origin/main' into enh/tsbsPerf.1

......@@ -365,6 +365,6 @@ Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to th
For more information about TDengine, you can follow us on social media and join our Discord server:
- [Discord](https://discord.com/invite/VZdSuUg4pS)
- [Twitter](https://twitter.com/TaosData)
- [Twitter](https://twitter.com/TDengineDB)
- [LinkedIn](https://www.linkedin.com/company/tdengine/)
- [YouTube](https://www.youtube.com/channel/UCmp-1U6GS_3V3hjir6Uq5DQ)
- [YouTube](https://www.youtube.com/@tdengine)
......@@ -122,8 +122,8 @@ ELSE ()
ELSE ()
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -O3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
#SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-literal-suffix -Werror=return-type -fPIC -O3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
#SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-reserved-user-defined-literal -Wno-literal-suffix -Werror=return-type -fPIC -O3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
#SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-reserved-user-defined-literal -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
ENDIF ()
# disable all assert
......
......@@ -2,7 +2,7 @@
IF (DEFINED VERNUMBER)
SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
SET(TD_VER_NUMBER "3.0.3.2")
SET(TD_VER_NUMBER "3.0.4.0")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)
......
......@@ -2,7 +2,7 @@
# taosadapter
ExternalProject_Add(taosadapter
GIT_REPOSITORY https://github.com/taosdata/taosadapter.git
GIT_TAG cb1e89c
GIT_TAG e02ddb2
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE
......
......@@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
GIT_TAG 149ac34
GIT_TAG 0681d8b
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE
......
-DLINUX
-DWEBSOCKET
-I/usr/include
-Iinclude
-Iinclude/os
-Iinclude/common
-Iinclude/util
-Iinclude/libs/transport
-Itools/shell/inc
......@@ -204,7 +204,7 @@ group vnodeProcessReqs()
s -> s:
note right
save the requests in log store
and wait for comfirmation or
and wait for confirmation or
other cases
end note
......@@ -236,7 +236,7 @@ s -> s: syncAppendReqToLogStore()
s -> v: walWrite()
alt has meta req
<- s: comfirmation
<- s: confirmation
else
s -> v: vnodeApplyReqs()
end
......
......@@ -123,11 +123,11 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
## Comparison with other databases
- [Writing Performance Comparison of TDengine and InfluxDB ](https://tdengine.com/2022/02/23/4975.html)
- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/24/5120.html)
- [TDengine vs OpenTSDB](https://tdengine.com/2019/09/12/710.html)
- [TDengine vs Cassandra](https://tdengine.com/2019/09/12/708.html)
- [TDengine vs InfluxDB](https://tdengine.com/2019/09/12/706.html)
- [Writing Performance Comparison of TDengine and InfluxDB](https://tdengine.com/performance-comparison-of-tdengine-and-influxdb/)
- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/query-performance-comparison-test-report-tdengine-vs-influxdb/)
- [TDengine vs OpenTSDB](https://tdengine.com/performance-tdengine-vs-opentsdb/)
- [TDengine vs Cassandra](https://tdengine.com/performance-tdengine-vs-cassandra/)
- [TDengine vs InfluxDB](https://tdengine.com/performance-tdengine-vs-influxdb/)
## More readings
- [Introduction to Time-Series Database](https://tdengine.com/tsdb/)
......
......@@ -188,7 +188,7 @@ You can use the TDengine CLI to monitor your TDengine deployment and execute ad
<TabItem label="Windows" value="windows">
After the installation is complete, run `C:\TDengine\taosd.exe` to start TDengine Server.
After the installation is complete, please run `sc start taosd` or run `C:\TDengine\taosd.exe` with administrator privilege to start TDengine Server.
## Command Line Interface (CLI)
......@@ -202,16 +202,18 @@ After the installation is complete, double-click the /applications/TDengine to s
The following `launchctl` commands can help you manage TDengine service:
- Start TDengine Server: `launchctl start com.tdengine.taosd`
- Start TDengine Server: `sudo launchctl start com.tdengine.taosd`
- Stop TDengine Server: `launchctl stop com.tdengine.taosd`
- Stop TDengine Server: `sudo launchctl stop com.tdengine.taosd`
- Check TDengine Server status: `launchctl list | grep taosd`
- Check TDengine Server status: `sudo launchctl list | grep taosd`
:::info
- The `launchctl` command does not require _root_ privileges. You don't need to use the `sudo` command.
- The first content returned by the `launchctl list | grep taosd` command is the PID of the program, if '-' indicates that the TDengine service is not running.
- Please use `sudo` to run `launchctl` to manage _com.tdengine.taosd_ with administrator privileges.
- The administrator privilege is required for service management to enhance security.
- Troubleshooting:
- The first column returned by the command `launchctl list | grep taosd` is the PID of the program. If it's `-`, that means the TDengine service is not running.
- If the service is abnormal, please check the `launchd.log` file from the system log or the `taosdlog` file in the `/var/log/taos` directory for more information.
:::
......
......@@ -28,7 +28,7 @@ From the perspective of application program, you need to consider:
- Writing to known existing tables is more efficient than writing to uncertain tables in automatic table creation mode, because the latter needs to check whether the table exists before actually writing data into it.
- Writing in SQL is more efficient than writing in schemaless mode, because schemaless writing creates tables automatically and may alter table schemas.
Application programs need to take care of the above factors and try to take advantage of them. The application progam should write to single table in each write batch. The batch size needs to be tuned to a proper value on a specific system. The number of concurrent connections needs to be tuned to a proper value too to achieve the best writing throughput.
Application programs need to take care of the above factors and try to take advantage of them. The application program should write to a single table in each write batch. The batch size needs to be tuned to a proper value on a specific system. The number of concurrent connections also needs to be tuned to a proper value to achieve the best writing throughput.
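For instance, here is a minimal sketch of batching writes to a single table, using the hypothetical subtable `d1001` from the smart meters example (the column values are illustrative):

```sql
-- Batch several rows into one subtable in a single INSERT statement
INSERT INTO d1001 VALUES
  ('2023-04-01 00:00:00.000', 10.3, 219, 0.31)
  ('2023-04-01 00:00:01.000', 10.5, 218, 0.33)
  ('2023-04-01 00:00:02.000', 10.1, 220, 0.30);
```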
### Data Source
......
......@@ -7,6 +7,7 @@ title: Data Subscription
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import Java from "./_sub_java.mdx";
import JavaWS from "./_sub_java_ws.mdx"
import Python from "./_sub_python.mdx";
import Go from "./_sub_go.mdx";
import Rust from "./_sub_rust.mdx";
......@@ -22,7 +23,7 @@ By subscribing to a topic, a consumer can obtain the latest data in that topic i
To implement these features, TDengine indexes its write-ahead log (WAL) file for fast random access and provides configurable methods for replacing and retaining this file. You can define a retention period and size for this file. For information, see the CREATE DATABASE statement. In this way, the WAL file is transformed into a persistent storage engine that remembers the order in which events occur. However, note that configuring an overly long retention period for your WAL files makes database compression inefficient. TDengine then uses the WAL file instead of the time-series database as its storage engine for queries in the form of topics. TDengine reads the data from the WAL file; uses a unified query engine instance to perform filtering, transformations, and other operations; and finally pushes the data to consumers.
Tip: By default, data subscription consumes data from the WAL. If the WAL has been deleted, the consumed data will be incomplete. In this case, you can set the parameter `experimental.snapshot.enable` to true to obtain all data from TSDB, but then the consumption order of the data cannot be guaranteed. Therefore, it is recommended to set a reasonable retention policy for the WAL based on your consumption needs to ensure that you can subscribe to all data from the WAL.
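For example, a minimal sketch of such a retention policy, assuming a hypothetical database `db_name` and a 24-hour consumption window:

```sql
-- Keep WAL data for 24 hours (86400 seconds) so consumers can catch up
ALTER DATABASE db_name WAL_RETENTION_PERIOD 86400;
```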
## Data Schema and API
......@@ -284,17 +285,17 @@ You configure the following parameters when creating a consumer:
| Parameter | Type | Description | Remarks |
| :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- |
| `td.connect.ip` | string | Used in establishing a connection; same as `taos_connect` | |
| `td.connect.user` | string | Used in establishing a connection; same as `taos_connect` | |
| `td.connect.pass` | string | Used in establishing a connection; same as `taos_connect` | |
| `td.connect.port` | string | Used in establishing a connection; same as `taos_connect` | |
| `td.connect.ip` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing native connection |
| `td.connect.user` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing native connection |
| `td.connect.pass` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing native connection |
| `td.connect.port` | string | Used in establishing a connection; same as `taos_connect` | Only valid for establishing native connection |
| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. |
| `client.id` | string | Client ID | Maximum length: 192. |
| `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
| `enable.auto.commit` | boolean | Commit automatically | Specify `true` or `false`. |
| `enable.auto.commit` | boolean | Commit automatically; true: user application doesn't need to explicitly commit; false: user application needs to handle commit by itself | Default value is true |
| `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds | |
| `experimental.snapshot.enable` | boolean | Specify whether to consume messages from the WAL or from TSBS | |
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages |
| `experimental.snapshot.enable` | boolean | Specify whether to consume data in TSDB; true: both data in WAL and in TSDB can be consumed; false: only data in WAL can be consumed | Default value: false |
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages | Default value: false |
The method of specifying these parameters depends on the language used:
......@@ -415,7 +416,8 @@ Python programs use the following parameters:
| `enable.auto.commit` | string | Commit automatically | Specify `true` or `false` |
| `auto.commit.interval.ms` | string | Interval for automatic commits, in milliseconds | |
| `auto.offset.reset` | string | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
| `experimental.snapshot.enable` | string | Specify whether to consume messages from the WAL or from TSDB | Specify `true` or `false` |
| `experimental.snapshot.enable` | string | Specify whether it's allowed to consume messages from the WAL or from TSDB | Specify `true` or `false` |
| `enable.heartbeat.background` | string | Background heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | Specify `true` or `false` |
</TabItem>
......@@ -804,7 +806,14 @@ The following section shows sample code in various languages.
</TabItem>
<TabItem label="Java" value="java">
<Java />
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
<Java />
</TabItem>
<TabItem value="ws" label="WebSocket connection">
<JavaWS />
</TabItem>
</Tabs>
</TabItem>
<TabItem label="Go" value="Go">
......
......@@ -65,11 +65,11 @@ int32_t aggfn_init() {
}
// aggregate start function. The intermediate value or the state(@interBuf) is initialized in this function. The function name shall be concatenation of udf name and _start suffix
// @param interbuf intermediate value to intialize
// @param interbuf intermediate value to initialize
// @return error number defined in taoserror.h
int32_t aggfn_start(SUdfInterBuf* interBuf) {
// initialize intermediate value in interBuf
return TSDB_CODE_SUCESS;
return TSDB_CODE_SUCCESS;
}
// aggregate reduce function. This function aggregate old state(@interbuf) and one data bock(inputBlock) and output a new state(@newInterBuf).
......
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
```java
{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```
</TabItem>
<TabItem value="ws" label="WebSocket connection">
```java
{{#include docs/examples/java/src/main/java/com/taos/example/WebsocketSubscribeDemo.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```
</TabItem>
</Tabs>
```java
{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/WebsocketSubscribeDemo.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```
......@@ -35,8 +35,8 @@ database_option: {
| TABLE_SUFFIX value
| TSDB_PAGESIZE value
| WAL_RETENTION_PERIOD value
| WAL_ROLL_PERIOD value
| WAL_RETENTION_SIZE value
| WAL_ROLL_PERIOD value
| WAL_SEGMENT_SIZE value
}
```
......@@ -75,11 +75,10 @@ database_option: {
- TABLE_PREFIX: The prefix length in the table name that is ignored when distributing tables to vnodes based on table name.
- TABLE_SUFFIX: The suffix length in the table name that is ignored when distributing tables to vnodes based on table name.
- TSDB_PAGESIZE: The page size of the data storage engine in a vnode. The unit is KB. The default is 4 KB. The range is 1 to 16384, that is, 1 KB to 16 MB.
- WAL_RETENTION_PERIOD: specifies the maximum time of which WAL files are to be kept after consumption. This parameter is used for data subscription. Enter a time in seconds. The default value 0. A value of 0 indicates that WAL files are not required to keep after consumption. -1: the time of WAL files to keep has no upper limit.
- WAL_RETENTION_SIZE: specifies the maximum total size of which WAL files are to be kept after consumption. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that WAL files are not required to keep after consumption. -1: the total size of WAL files to keep has no upper limit.
- WAL_RETENTION_PERIOD: specifies the maximum time for which WAL files are kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value is 0. A value of 0 indicates that WAL files are not required to be kept for consumption. Set this parameter to an appropriate value before creating topics.
- WAL_RETENTION_SIZE: specifies the maximum total size of WAL files kept for consumption. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that there is no upper limit on the total size of WAL files kept for consumption.
- WAL_ROLL_PERIOD: specifies the time after which WAL files are rotated. After this period elapses, a new WAL file is created. The default value is 0. A value of 0 indicates that a new WAL file is created only after TSDB data in memory are flushed to disk.
- WAL_SEGMENT_SIZE: specifies the maximum size of a WAL file. After the current WAL file reaches this size, a new WAL file is created. The default value is 0. A value of 0 indicates that a new WAL file is created only after TSDB data in memory are flushed to disk.
### Example Statement
```sql
......@@ -123,6 +122,8 @@ alter_database_option: {
| WAL_LEVEL value
| WAL_FSYNC_PERIOD value
| KEEP value
| WAL_RETENTION_PERIOD value
| WAL_RETENTION_SIZE value
}
```
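As a brief, illustrative sketch of altering the options listed above (a hypothetical database `db_name`; the values shown are arbitrary):

```sql
-- Extend data retention to 365 days and cap retained WAL size at 1 GB (in KB)
ALTER DATABASE db_name KEEP 365;
ALTER DATABASE db_name WAL_RETENTION_SIZE 1048576;
```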
......@@ -179,6 +180,14 @@ TRIM DATABASE db_name;
The preceding SQL statement deletes data that has expired and orders the remaining data in accordance with the storage configuration.
## Flush Data
```sql
FLUSH DATABASE db_name;
```
Flushes data from memory to disk. Executing this command before shutting down a node avoids data recovery after restart and speeds up the startup process.
## Redistribute Vgroup
```sql
......
......@@ -13,12 +13,11 @@ create_definition:
col_name column_definition
column_definition:
type_name [COMMENT 'string_value']
type_name
```
**More explanations**
- Each supertable can have a maximum of 4096 columns, including tags. The minimum number of columns is 3: a timestamp column used as the key, one tag column, and one data column.
- When you create a supertable, you can add comments to columns and tags.
- The TAGS keyword defines the tag columns for the supertable. The following restrictions apply to tag columns:
- A tag column can use the TIMESTAMP data type, but the values in the column must be fixed numbers. Timestamps including formulae, such as "now + 10s", cannot be stored in a tag column.
- The name of a tag column cannot be the same as the name of any other column.
......@@ -34,7 +33,7 @@ column_definition:
SHOW STABLES [LIKE tb_name_wildcard];
```
The preceding SQL statement shows all supertables in the current TDengine database, including the name, creation time, number of columns, number of tags, and number of subtabels for each supertable.
The preceding SQL statement shows all supertables in the current TDengine database, including the name, creation time, number of columns, number of tags, and number of subtables for each supertable.
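For example, a sketch using a wildcard (assuming supertable names that start with `meter`):

```sql
SHOW STABLES LIKE 'meter%';
```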
### View the CREATE Statement for a Supertable
......
......@@ -248,13 +248,13 @@ You can also use the NULLS keyword to specify the position of null values. Ascen
The LIMIT keyword controls the number of results that are displayed. You can also use the OFFSET keyword to specify the result to display first. `LIMIT` and `OFFSET` are executed after `ORDER BY` in the query execution. You can include an offset in a LIMIT clause. For example, LIMIT 5 OFFSET 2 can also be written LIMIT 2, 5. Both of these clauses display the third through the seventh results.
In a statement that includes a PARTITON BY clause, the LIMIT keyword is performed on each partition, not on the entire set of results.
In a statement that includes a PARTITION BY/GROUP BY clause, the LIMIT keyword is applied to each partition/group, not to the entire set of results.
## SLIMIT
The SLIMIT keyword is used with a PARTITION BY clause to control the number of partitions that are displayed. You can include an offset in a SLIMIT clause. For example, SLIMIT 5 OFFSET 2 can also be written LIMIT 2, 5. Both of these clauses display the third through the seventh partitions.
The SLIMIT keyword is used with a PARTITION BY/GROUP BY clause to control the number of partitions/groups that are displayed. You can include an offset in a SLIMIT clause. For example, SLIMIT 5 SOFFSET 2 can also be written SLIMIT 2, 5. Both of these clauses display the third through the seventh partitions/groups.
Note: If you include an ORDER BY clause, only one partition can be displayed.
Note: If you include an ORDER BY clause, only one partition/group can be displayed.
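A minimal sketch combining the two clauses, using the smart meters example from this documentation (`meters` and its `location` tag):

```sql
-- At most 2 partitions (locations), and at most 5 rows from each partition
SELECT * FROM meters PARTITION BY location SLIMIT 2 LIMIT 5;
```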
## Special Query
......
---
sidebar_label: Tag Index
title: Tag Index
description: Use Tag Index to Improve Query Performance
---
## Introduction
Prior to TDengine 3.0.3.0, only one index is created by default on the first tag of each supertable, and it is not allowed to dynamically create indexes on any other tags. From version 3.0.3.0, you can dynamically create an index on any tag of any type. The index created automatically by TDengine is still valid. Query performance can benefit from indexes if they are used properly.
## Syntax
1. The syntax of creating an index
```sql
CREATE INDEX index_name ON tbl_name (tagColName)
```
In the above statement, `index_name` is the name of the index, `tbl_name` is the name of the supertable, and `tagColName` is the name of the tag on which the index is being created. `tagColName` can be of any type supported by TDengine.
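For example (assuming a hypothetical supertable `meters` with a tag `location`):

```sql
CREATE INDEX idx_location ON meters (location);
```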
2. The syntax of dropping an index
```sql
DROP INDEX index_name
```
In the above statement, `index_name` is the name of an existing index. If the index doesn't exist, the command fails but has no impact on the system.
3. The syntax of showing indexes in the system
```sql
SELECT * FROM information_schema.INS_INDEXES
```
You can also add filter conditions to limit the results.
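For example, a sketch that lists the indexes of a hypothetical database `test`:

```sql
SELECT * FROM information_schema.INS_INDEXES WHERE db_name = 'test';
```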
## Detailed Specification
1. Indexes can improve query performance significantly if they are used properly. The operators supported by tag indexes include `=`, `>`, `>=`, `<`, `<=`. If you use these operators with tags, indexes can improve query performance significantly. For operators outside this scope, however, indexes don't help. More operators will be added in the future.
2. Only one index can be created on each tag; an error is reported if you try to create more than one index on the same tag.
3. You can create an index on only a single tag at a time; creating indexes on multiple tags together is not allowed.
4. The name of each index must be unique across the whole system, regardless of the type of the index, e.g. tag index or SMA index.
5. There is no limit on the number of indexes, but each index adds some burden on the metadata subsystem. Too many indexes may decrease the efficiency of reading or writing metadata and thus degrade system performance, so it's better not to add unnecessary indexes.
6. You can't create an index on a normal table or a child table.
7. If a tag column has very few unique values, it's better not to create an index on it, as the benefit would be very small.
\ No newline at end of file
......@@ -666,13 +666,13 @@ If you input a specific column, the number of non-null values in the column is r
ELAPSED(ts_primary_key [, time_unit])
```
**Description**`elapsed` function can be used to calculate the continuous time length in which there is valid data. If it's used with `INTERVAL` clause, the returned result is the calcualted time length within each time window. If it's used without `INTERVAL` caluse, the returned result is the calculated time length within the specified time range. Please be noted that the return value of `elapsed` is the number of `time_unit` in the calculated time length.
**Description**: The `elapsed` function can be used to calculate the continuous time length in which there is valid data. If it's used with an `INTERVAL` clause, the returned result is the calculated time length within each time window. If it's used without an `INTERVAL` clause, the returned result is the calculated time length within the specified time range. Note that the return value of `elapsed` is the number of `time_unit` in the calculated time length.
**Return value type**: Double if the input value is not NULL;
**Applicable data type**: TIMESTAMP
**Applicable tables**: table, STable, outter in nested query
**Applicable tables**: table, STable, outer in nested query
**Explanations**
- `ts_primary_key` parameter can only be the first column of a table, i.e. timestamp primary key.
......@@ -754,7 +754,7 @@ HYPERLOGLOG(expr)
**Description**:
The cardinal number of a specific column is returned by using hyperloglog algorithm. The benefit of using hyperloglog algorithm is that the memory usage is under control when the data volume is huge.
However, when the data volume is very small, the result may be not accurate, it's recommented to use `select count(data) from (select unique(col) as data from table)` in this case.
However, when the data volume is very small, the result may not be accurate; in this case it's recommended to use `select count(data) from (select unique(col) as data from table)`.
**Return value type**: Integer
......@@ -801,7 +801,7 @@ PERCENTILE(expr, p [, p1] ...)
**Description**: The value whose rank in a specific column matches the specified percentage. If such a value matching the specified percentage doesn't exist in the column, an interpolation value will be returned.
**Return value type**: This function takes 2 minumum and 11 maximum parameters, and it can simultaneously return 10 percentiles at most. If 2 parameters are given, a single percentile is returned and the value type is DOUBLE.
**Return value type**: This function takes 2 minimum and 11 maximum parameters, and it can simultaneously return 10 percentiles at most. If 2 parameters are given, a single percentile is returned and the value type is DOUBLE.
If more than 2 parameters are given, the return value type is a VARCHAR string, the format of which is a JSON ARRAY containing all return values.
**Applicable column types**: Numeric
......@@ -811,7 +811,7 @@ PERCENTILE(expr, p [, p1] ...)
**More explanations**:
- _p_ is in range [0,100]; when _p_ is 0, the result is the same as the function MIN; when _p_ is 100, the result is the same as the function MAX.
- When calculating multiple percentiles of a specific column, a single PERCENTILE function with multiple parameters is adviced, as this can largely reduce the query response time.
- When calculating multiple percentiles of a specific column, a single PERCENTILE function with multiple parameters is advised, as this can largely reduce the query response time.
For example, using SELECT percentile(col, 90, 95, 99) FROM table will perform better than SELECT percentile(col, 90), percentile(col, 95), percentile(col, 99) from table.
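That is, the single-call form below is preferred (a sketch; `col` and `tb_name` stand for any numeric column and table):

```sql
-- One call computes all three percentiles in a single pass
SELECT PERCENTILE(col, 90, 95, 99) FROM tb_name;
```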
## Selection Functions
......@@ -884,6 +884,15 @@ INTERP(expr)
- Pseudocolumn `_irowts` can be used along with `INTERP` to return the timestamps associated with interpolation points (supported after version 3.0.2.0).
- Pseudocolumn `_isfilled` can be used along with `INTERP` to indicate whether the results are original records or data points generated by the interpolation algorithm (supported after version 3.0.3.0).
**Example**
- We use the smart meters example used in this documentation to illustrate how to use the INTERP function.
- We want to downsample every 1 hour and use a linear fill for missing values. Note the order in which the "partition by" clause and the "range", "every" and "fill" parameters are used.
```sql
SELECT _irowts,INTERP(current) FROM test.meters PARTITION BY TBNAME RANGE('2017-07-22 00:00:00','2017-07-24 12:25:00') EVERY(1h) FILL(LINEAR)
```
### LAST
```sql
......
......@@ -21,7 +21,7 @@ part_list can be any scalar expression, such as a column, constant, scalar funct
A PARTITION BY clause is processed as follows:
- The PARTITION BY clause must occur after the WHERE clause
- The PARTITION BY caluse partitions the data according to the specified dimentions, then perform computation on each partition. The performed computation is determined by the rest of the statement - a window clause, GROUP BY clause, or SELECT clause.
- The PARTITION BY clause partitions the data according to the specified dimensions, then performs computation on each partition. The performed computation is determined by the rest of the statement - a window clause, GROUP BY clause, or SELECT clause.
- The PARTITION BY clause can be used together with a window clause or GROUP BY clause. In this case, the window or GROUP BY clause takes effect on every partition. For example, the following statement partitions the table by the location tag, performs downsampling over a 10 minute window, and returns the maximum value:
```sql
......@@ -32,15 +32,15 @@ The most common usage of PARTITION BY is partitioning the data in subtables by t
## Windowed Queries
Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every seconds, the average temperature for every 10 minutes can be retrieved by performing a query with a time window. Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are three kinds of windows: time window, status window, and session window. There are two kinds of time windows: sliding window and flip time/tumbling window. The query syntax is as follows:
Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every second, the average temperature for every 10 minutes can be retrieved by performing a query with a time window. Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are four kinds of windows: time window, status window, session window, and event window. There are two kinds of time windows: sliding window and flip time/tumbling window. The syntax of the window clause is as follows:
```sql
SELECT select_list FROM tb_name
[WHERE where_condition]
[SESSION(ts_col, tol_val)]
[STATE_WINDOW(col)]
[INTERVAL(interval [, offset]) [SLIDING sliding]]
[FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
window_clause: {
SESSION(ts_col, tol_val)
| STATE_WINDOW(col)
| INTERVAL(interval [, offset]) [SLIDING sliding] [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
| EVENT_WINDOW START WITH start_trigger_condition END WITH end_trigger_condition
}
```
The following restrictions apply:
......@@ -75,6 +75,16 @@ These pseudocolumns occur after the aggregation clause.
5. LINEAR: Fill with the closest non-NULL value, `FILL(LINEAR)`
6. NEXT: Fill with the next non-NULL value, `FILL(NEXT)`
In the above filling modes, except for `NONE` mode, the `fill` clause will be ignored if there is no data in the defined time range, i.e. no data would be filled and the query result would be empty. This behavior is reasonable when the filling mode is `PREV`, `NEXT`, or `LINEAR`, because filling can't be performed if there is no data at all. For filling modes `NULL` and `VALUE`, however, filling can be performed even when there is no data at all; whether to fill or not depends on the user's application. To meet the need for this forced filling behavior without breaking the behavior of the existing filling modes, TDengine has added two new filling modes since version 3.0.3.0; a brief sketch follows the list below.
1. NULL_F: Fill `NULL` by force
2. VALUE_F: Fill `VALUE` by force
The detailed behaviors of `NULL`, `NULL_F`, `VALUE`, and `VALUE_F` are described below:
- When used with `INTERVAL`: `NULL_F` and `VALUE_F` fill by force; `NULL` and `VALUE` don't fill by force. The behavior of each filling mode is exactly the same as what its name suggests.
- When used with `INTERVAL` in stream processing: `NULL_F` and `NULL` are the same, i.e. they don't fill by force; `VALUE_F` and `VALUE` are the same, i.e. they don't fill by force. In other words, there is no forced filling in stream processing.
- When used with `INTERP`: `NULL` and `NULL_F` are the same, i.e. they fill by force; `VALUE` and `VALUE_F` are the same, i.e. they fill by force. In other words, filling is always forced when used with `INTERP`.
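A minimal sketch of forced filling with `INTERVAL`, using the smart meters example (the time range and fill value are illustrative):

```sql
-- VALUE_F emits a row with value 0 for every window, even if the range holds no data
SELECT _wstart, AVG(current) FROM meters
  WHERE ts >= '2023-01-01 00:00:00' AND ts < '2023-01-02 00:00:00'
  INTERVAL(1h) FILL(VALUE_F, 0);
```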
:::info
1. A huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000.
......@@ -105,7 +115,7 @@ SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
When using time windows, note the following:
- The window length for aggregation depends on the value of INTERVAL. The minimum interval is 10 ms. You can configure a window as an offset from UTC 0:00. The offset cannot be smaler than the interval. You can use SLIDING to specify the length of time that the window moves forward.
- The window length for aggregation depends on the value of INTERVAL. The minimum interval is 10 ms. You can configure a window as an offset from UTC 0:00. The offset cannot be smaller than the interval. You can use SLIDING to specify the length of time that the window moves forward.
Please note that the `timezone` parameter should be configured to be the same value in the `taos.cfg` configuration file on client side and server side.
- The result set is in ascending order of timestamp when you aggregate by time window.
......@@ -146,6 +156,26 @@ If the time interval between two continuous rows are within the time interval sp
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
```
### Event Window
An event window is defined by a start condition and a close condition. The window starts when the `start_trigger_condition` is evaluated to true and closes when the `end_trigger_condition` is evaluated to true. `start_trigger_condition` and `end_trigger_condition` can be any conditional expressions supported by TDengine and can include multiple columns.
An event window may contain only a single row of data, when that row meets both the `start_trigger_condition` and the `end_trigger_condition`.
The window is treated as invalid or non-existent if the `end_trigger_condition` can't be met; a window that can't be closed produces no output.
If an event window query is performed on a supertable, TDengine consolidates the data of all child tables into a single timeline and then performs the event window query on it.
If you want to perform an event window query on the result set of a subquery, the result set of the subquery should be ordered by timestamp and include a timestamp column.
For example, the diagram below illustrates the event windows generated by the following query:
```sql
select _wstart, _wend, count(*) from t event_window start with c1 > 0 end with c2 < 10
```
![Event Window Illustration](./event_window.webp)
### Examples
A table of intelligent meters can be created by the SQL statement below:
......
......@@ -55,7 +55,7 @@ description: This document describes the JSON data type in TDengine.
4. Tag Operations
The value of a JSON tag can be altered. Please note that the full JSON will be overriden when doing this.
The value of a JSON tag can be altered. Please note that the full JSON will be overridden when doing this.
The name of a JSON tag can be altered.
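For example, a sketch assuming a hypothetical subtable `jt1` with a JSON tag `jtag` (note that the whole JSON value is replaced):

```sql
ALTER TABLE jt1 SET TAG jtag = '{"k1": "v1", "k2": 7}';
```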
......
......@@ -179,6 +179,20 @@ Provides information about standard tables and subtables.
| 5 | tag_type | BINARY(64) | Tag type |
| 6 | tag_value | BINARY(16384) | Tag value |
## INS_COLUMNS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | ---------------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | table_type | BINARY(21) | Table type |
| 4 | col_name | BINARY(64) | Column name |
| 5 | col_type | BINARY(32) | Column type |
| 6 | col_length | INT | Column length |
| 7 | col_precision | INT | Column precision |
| 8 | col_scale | INT | Column scale |
| 9 | col_nullable | INT | Column nullable |
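For example, a sketch that queries column metadata for a hypothetical supertable `meters` in database `test`:

```sql
SELECT col_name, col_type, col_length
  FROM information_schema.ins_columns
  WHERE db_name = 'test' AND table_name = 'meters';
```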
## INS_USERS
Provides information about TDengine users.
......@@ -274,9 +288,9 @@ Provides dnode configuration information.
| 1 | stream_name | BINARY(64) | Stream name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
| 4 | status | BIANRY(20) | Current status |
| 4 | status | BINARY(20) | Current status |
| 5 | source_db | BINARY(64) | Source database |
| 6 | target_db | BIANRY(64) | Target database |
| 6 | target_db | BINARY(64) | Target database |
| 7 | target_table | BINARY(192) | Target table |
| 8 | watermark | BIGINT | Watermark (see stream processing documentation). It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation). It should be noted that `trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
......@@ -4,7 +4,7 @@ sidebar_label: SHOW Statement
description: This document describes how to use the SHOW statement in TDengine.
---
`SHOW` command can be used to get brief system information. To get details about metatadata, information, and status in the system, please use `select` to query the tables in database `INFORMATION_SCHEMA`.
`SHOW` command can be used to get brief system information. To get details about metadata, information, and status in the system, please use `select` to query the tables in database `INFORMATION_SCHEMA`.
## SHOW APPS
......@@ -86,10 +86,10 @@ SHOW FUNCTIONS;
Shows all user-defined functions in the system.
## SHOW LICENSE
## SHOW LICENCES
```sql
SHOW LICENSE;
SHOW LICENCES;
SHOW GRANTS;
```
......@@ -308,9 +308,11 @@ Query OK, 24 row(s) in set (0.002444s)
</code></pre>
</details>
The above show the block distribution percentage according to the number of rows in each block. In the above example, we can get below information:
- `_block_dist: 3483 ||||||||||||||||| 1 (20.00%)` means there is one block whose rows is between 3,483 and 3,681.
- `_block_dist: 3881 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 4 (80.00%)` means there are 4 blocks whose rows is between 3,881 and 4,096. - The number of blocks whose rows fall in other range is zero.
The above shows the block distribution percentage according to the number of rows in each block. From the above example, we can get the following information:
- `_block_dist: 3483 ||||||||||||||||| 1 (20.00%)` means there is one block whose row count is between 3,483 and 3,681.
- `_block_dist: 3881 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 4 (80.00%)` means there are 4 blocks whose row count is between 3,881 and 4,096. The number of blocks whose row count falls in any other range is zero.
Note that only the information about the data blocks in the data file will be displayed here, and the information about the data in the stt file will not be displayed.
## SHOW TAGS
......@@ -359,7 +361,7 @@ Shows the working configuration of the parameters that must be the same on each
SHOW [db_name.]VGROUPS;
```
Shows information about all vgroups in the system or about the vgroups for a specified database.
Shows information about all vgroups in the current database.
## SHOW VNODES
......
......@@ -27,7 +27,7 @@ The following data types can be used in the schema for standard tables.
| - | :------- | :-------- | :------- |
| 1 | ALTER ACCOUNT | Deprecated| This Enterprise Edition-only statement has been removed. It returns the error "This statement is no longer supported."
| 2 | ALTER ALL DNODES | Added | Modifies the configuration of all dnodes.
| 3 | ALTER DATABASE | Modified | Deprecated<ul><li>QUORUM: Specified the required number of confirmations. STRICT is now used to specify strong or weak consistency. The STRICT parameter cannot be modified. </li><li>BLOCKS: Specified the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns. </li><li>CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST. </li><li>COMP: Cannot be modified. <br/>Added</li><li>CACHEMODEL: Specifies whether to cache the latest subtable data. </li><li>CACHESIZE: Specifies the size of the cache for the newest subtable data. </li><li>WAL_FSYNC_PERIOD: Replaces the FSYNC parameter. </li><li>WAL_LEVEL: Replaces the WAL parameter. <br/>Modified</li><li>REPLICA: Cannot be modified. </li><li>KEEP: Now supports units. </li></ul>
| 3 | ALTER DATABASE | Modified | Deprecated<ul><li>QUORUM: Specified the required number of confirmations. TDengine 3.0 provides strict consistency by default and doesn't allow changing to weak consistency. </li><li>BLOCKS: Specified the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns. </li><li>CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST. </li><li>COMP: Cannot be modified. <br/>Added</li><li>CACHEMODEL: Specifies whether to cache the latest subtable data. </li><li>CACHESIZE: Specifies the size of the cache for the newest subtable data. </li><li>WAL_FSYNC_PERIOD: Replaces the FSYNC parameter. </li><li>WAL_LEVEL: Replaces the WAL parameter. </li><li>WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. </li><li>WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. <br/>Modified</li><li>REPLICA: Cannot be modified. </li><li>KEEP: Now supports units. </li></ul>
| 4 | ALTER STABLE | Modified | Deprecated<ul><li>CHANGE TAG: Modified the name of a tag. Replaced by RENAME TAG. <br/>Added</li><li>RENAME TAG: Replaces CHANGE TAG. </li><li>COMMENT: Specifies comments for a supertable. </li></ul>
| 5 | ALTER TABLE | Modified | Deprecated<ul><li>CHANGE TAG: Modified the name of a tag. Replaced by RENAME TAG. <br/>Added</li><li>RENAME TAG: Replaces CHANGE TAG. </li><li>COMMENT: Specifies comments for a standard table. </li><li>TTL: Specifies the time-to-live for a standard table. </li></ul>
| 6 | ALTER USER | Modified | Deprecated<ul><li>PRIVILEGE: Specified user permissions. Replaced by GRANT and REVOKE. <br/>Added</li><li>ENABLE: Enables or disables a user. </li><li>SYSINFO: Specifies whether a user can query system information. </li></ul>
......
......@@ -15,14 +15,14 @@ About details of installing TDengine, please refer to [Installation Guide](../../
## Uninstall
<Tabs>
<TabItem label="Uninstall apt-get" value="aptremove">
<TabItem label="Uninstall by apt-get" value="aptremove">
Apt-get package of TDengine can be uninstalled as below:
The TDengine package installed via apt-get can be uninstalled as below:
```bash
$ sudo apt-get remove tdengine
Reading package lists... Done
Building dependency tree
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
tdengine
......@@ -35,7 +35,7 @@ TDengine is removed successfully!
```
Apt-get package of taosTools can be uninstalled as below:
If you have installed taos-tools, please uninstall it before uninstalling TDengine. The uninstall command is as follows:
```
$ sudo apt remove taostools
......@@ -111,8 +111,20 @@ taos tools is uninstalled successfully!
```
</TabItem>
<TabItem label="Windows uninstall" value="windows">
Run C:\TDengine\unins000.exe to uninstall TDengine on a Windows system.
</TabItem>
<TabItem label="Mac uninstall" value="mac">
TDengine can be uninstalled as below:
```
$ rmtaos
TDengine is removed successfully!
```
</TabItem>
</Tabs>
......@@ -150,13 +162,13 @@ There are two aspects in upgrade operation: upgrade installation package and upg
To upgrade a package, follow the steps mentioned previously to first uninstall the old version then install the new version.
Upgrading a running server is much more complex. First please check the version number of the old version and the new version. The version number of TDengine consists of 4 sections, only if the first 3 sections match can the old version be upgraded to the new version. The steps of upgrading a running server are as below:
Upgrading a running server is much more complex. First please check the version number of the old version and the new version. The version number of TDengine consists of 4 sections, only if the first 2 sections match can the old version be upgraded to the new version. The steps of upgrading a running server are as below:
- Stop inserting data
- Make sure all data is persisted to disk
- Make sure all data is persisted to disk; you can use the command `flush database` to ensure this
- Stop the cluster of TDengine
- Uninstall old version and install new version
- Start the cluster of TDengine
- Execute simple queries, such as the ones executed prior to installing the new package, to make sure there is no data loss
- Execute simple queries, such as the ones executed prior to installing the new package, to make sure there is no data loss
- Run some simple data insertion statements to make sure the cluster works well
- Restore business services
......
......@@ -18,14 +18,8 @@ To achieve absolutely no data loss, set wal_level to 2 and wal_fsync_period to 0
## Disaster Recovery
TDengine uses replication to provide high availability.
TDengine provides disaster recovery by using taosX to replicate data between two TDengine clusters deployed in two distant data centers. Assume there are two TDengine clusters, A and B, where A is the source and B is the target, and A takes the workload of writing and querying. You can deploy `taosX` in the data center where cluster A resides; `taosX` consumes the data written into cluster A and writes it into cluster B. If the data center of cluster A is disrupted by a disaster, you can switch to cluster B to take the workload of data writing and querying, and deploy a `taosX` in the data center of cluster B to replicate data from cluster B to cluster A once cluster A has been recovered, or to another cluster C if cluster A has not been recovered.
A TDengine cluster is managed by mnodes. You can configure up to three mnodes to ensure high availability. The data replication between mnode replicas is performed in a synchronous way to guarantee metadata consistency.
You can use the data replication feature of `taosX` to build more complicated disaster recovery solutions.
The number of replicas for time series data in TDengine is associated with each database. There can be many databases in a cluster and each database can be configured with a different number of replicas. When creating a database, the parameter `replica` is used to specify the number of replicas. To achieve high availability, set `replica` to 3.
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create a table.
As long as the dnodes of a TDengine cluster are deployed on different physical machines and the replica number is higher than 1, high availability can be achieved without any other assistance. For disaster recovery, dnodes of a TDengine cluster should be deployed in geographically different data centers.
Alternatively, you can use taosX to synchronize the data from one TDengine cluster to another cluster in a remote location. However, taosX is only available in TDengine enterprise version, for more information please contact tdengine.com.
taosX is only provided in TDengine Enterprise Edition; for more details, please contact business@tdengine.com.
......@@ -68,7 +68,7 @@ The following return value results indicate that the verification passed.
## HTTP request URL format
```text
http://<fqdn>:<port>/rest/sql/[db_name][?tz=timezone]
http://<fqdn>:<port>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
```
Parameter Description:
......@@ -77,6 +77,7 @@ Parameter Description:
- port: httpPort configuration item in the configuration file, default is 6041.
- db_name: Optional parameter that specifies the default database name for the executed SQL command.
- tz: Optional parameter that specifies the timezone of the returned time, following the IANA Time Zone rules, e.g. `America/New_York`.
- req_id: Optional parameter that specifies the request id for tracing.
For example, `http://h1.taos.com:6041/rest/sql/test` is a URL to `h1.taos.com:6041` and sets the default database name to `test`.
......@@ -99,13 +100,13 @@ The HTTP request's BODY is a complete SQL command, and the data table in the SQL
Use `curl` to initiate an HTTP request with a custom authentication method, with the following syntax.
```bash
curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone]
curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
```
or
```bash
curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone]
curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
```
where `TOKEN` is the string after Base64 encoding of `{username}:{password}`, e.g. `root:taosdata` is encoded as `cm9vdDp0YW9zZGF0YQ==`.
......@@ -114,14 +115,41 @@ where `TOKEN` is the string after Base64 encoding of `{username}:{password}`, e.
### HTTP Response Code
| **Response Code** | **Description** |
|-------------------|----------------|
| 200 | Success. (Also used for C interface errors.) |
| 400 | Parameter error |
| 401 | Authentication failure |
| 404 | Interface not found |
| 500 | Internal error |
| 503 | Insufficient system resources |
Starting from `TDengine 3.0.3.0`, `taosAdapter` provides a configuration parameter `httpCodeServerError` to set whether to return a non-200 HTTP status code when the C interface returns an error.
| **Description** | **httpCodeServerError false** | **httpCodeServerError true** |
|--------------------|-----------------------------------|---------------------------------------|
| taos_errno() returns 0 | 200 | 200 |
| taos_errno() returns non-0 | 200 (except authentication error) | 500 (except authentication error and 400/502 error) |
| Parameter error | 400 (only handle HTTP request URL parameter error) | 400 (handle HTTP request URL parameter error and taosd return error) |
| Authentication error | 401 | 401 |
| Interface does not exist | 404 | 404 |
| Cluster unavailable error | 502 | 502 |
| Insufficient system resources | 503 | 503 |
The C error codes that return http code 400 are:
- TSDB_CODE_TSC_SQL_SYNTAX_ERROR (0x0216)
- TSDB_CODE_TSC_LINE_SYNTAX_ERROR (0x021B)
- TSDB_CODE_PAR_SYNTAX_ERROR (0x2600)
- TSDB_CODE_TDB_TIMESTAMP_OUT_OF_RANGE (0x060B)
- TSDB_CODE_TSC_VALUE_OUT_OF_RANGE (0x0224)
- TSDB_CODE_PAR_INVALID_FILL_TIME_RANGE (0x263B)
The error codes that return http code 401 are:
- TSDB_CODE_MND_USER_ALREADY_EXIST (0x0350)
- TSDB_CODE_MND_USER_NOT_EXIST (0x0351)
- TSDB_CODE_MND_INVALID_USER_FORMAT (0x0352)
- TSDB_CODE_MND_INVALID_PASS_FORMAT (0x0353)
- TSDB_CODE_MND_NO_USER_FROM_CONN (0x0354)
- TSDB_CODE_MND_TOO_MANY_USERS (0x0355)
- TSDB_CODE_MND_INVALID_ALTER_OPER (0x0356)
- TSDB_CODE_MND_AUTH_FAILURE (0x0357)
The error codes that return http code 403 are:
- TSDB_CODE_RPC_SOMENODE_NOT_CONNECTED (0x0020)
### HTTP body structure
......@@ -269,7 +297,6 @@ Response body:
```json
{
"status": "succ",
"code": 0,
"desc": "/KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04"
}
......@@ -355,6 +382,130 @@ Response body:
}
```
## Differences in the REST API between TDengine 2.x and 3.0
### URI
| URI | TDengine 2.x | TDengine 3.0 |
| :--------------------| :------------------: | :--------------------------------------------------: |
| /rest/sql | Supported | Supported (with different response code and body) |
| /rest/sqlt | Supported | No longer supported |
| /rest/sqlutc | Supported | No longer supported |
### HTTP code
| HTTP code | TDengine 2.x | TDengine 3.0 | note |
| :--------------------| :------------------: | :----------: | :-----------------------------------: |
| 200 | Supported | Supported | Success or taosc returns an error |
| 400 | Not supported | Supported | Parameter error |
| 401 | Not supported | Supported | Authentication failure |
| 404 | Supported | Supported | URI does not exist |
| 500 | Not supported | Supported | Internal error |
| 503 | Supported | Supported | Insufficient system resources |
### Response body
#### REST response body returned from TDengine 2.x
```JSON
{
"status": "succ",
"head": [
"name",
"created_time",
"ntables",
"vgroups",
"replica",
"quorum",
"days",
"keep1,keep2,keep(D)",
"cache(MB)",
"blocks",
"minrows",
"maxrows",
"wallevel",
"fsync",
"comp",
"precision",
"status"
],
"data": [
[
"log",
"2020-09-02 17:23:00.039",
4,
1,
1,
1,
10,
"30,30,30",
1,
3,
100,
4096,
1,
3000,
2,
"us",
"ready"
]
],
"rows": 1
}
```
#### REST response body returned from TDengine 3.0
```JSON
{
"code": 0,
"column_meta": [
[
"name",
"VARCHAR",
64
],
[
"ntables",
"BIGINT",
8
],
[
"status",
"VARCHAR",
10
]
],
"data": [
[
"information_schema",
16,
"ready"
],
[
"performance_schema",
9,
"ready"
]
],
"rows": 2
}
```
## Reference
[taosAdapter](/reference/taosadapter/)
......@@ -176,6 +176,14 @@ The base API is used to do things like create database connections and provide a
Set the current default database to `db`.
- `int taos_get_current_db(TAOS *taos, char *database, int len, int *required)`
  - The caller allocates the buffer `database` and passes its size in `len`; on success the current database name is copied into `database`.
  - If the database name cannot be fully stored in `database` (including the case of truncation), -1 is returned, and the user can then call `taos_errstr(NULL)` to get the error message.
  - If `database == NULL` or `len <= 0`, an error is returned, and the space required to store the database name (including the terminating '\0') is written to the variable `required`.
  - If `len` is less than the space required to store the database name (including the terminating '\0'), an error is returned, and the truncated data stored in `database` ends with '\0'.
  - If `len` is greater than or equal to the space required to store the database name (including the terminating '\0'), 0 is returned, and the database name, ending with '\0', is stored in `database`.
- `void taos_close(TAOS *taos)`
Closes the connection, where `taos` is the handle returned by `taos_connect()`.
......@@ -404,5 +412,17 @@ In addition to writing data using the SQL method or the parameter binding API, w
Note that the timestamp resolution parameter only takes effect when the protocol type is `SML_LINE_PROTOCOL`.
For OpenTSDB's text protocol, timestamp resolution follows its official resolution rules - time precision is determined by the number of characters contained in the timestamp.
**Supported Versions**
This feature interface is supported from version 2.3.0.0.
Other schemaless-related interfaces:
- `TAOS_RES *taos_schemaless_insert_with_reqid(TAOS *taos, char *lines[], int numLines, int protocol, int precision, int64_t reqid)`
- `TAOS_RES *taos_schemaless_insert_raw(TAOS *taos, char *lines, int len, int32_t *totalRows, int protocol, int precision)`
- `TAOS_RES *taos_schemaless_insert_raw_with_reqid(TAOS *taos, char *lines, int len, int32_t *totalRows, int protocol, int precision, int64_t reqid)`
- `TAOS_RES *taos_schemaless_insert_ttl(TAOS *taos, char *lines[], int numLines, int protocol, int precision, int32_t ttl)`
- `TAOS_RES *taos_schemaless_insert_ttl_with_reqid(TAOS *taos, char *lines[], int numLines, int protocol, int precision, int32_t ttl, int64_t reqid)`
- `TAOS_RES *taos_schemaless_insert_raw_ttl(TAOS *taos, char *lines, int len, int32_t *totalRows, int protocol, int precision, int32_t ttl)`
- `TAOS_RES *taos_schemaless_insert_raw_ttl_with_reqid(TAOS *taos, char *lines, int len, int32_t *totalRows, int protocol, int precision, int32_t ttl, int64_t reqid)`
**Description**
- The above seven interfaces are extension interfaces, mainly used to pass the `ttl` and `reqid` parameters; they can be used as needed.
- Interfaces with `_raw` pass the data through the parameters `lines` and `len`, in order to solve the problem that data containing '\0' is truncated by the original interface. The `totalRows` pointer returns the number of parsed data rows.
- Interfaces with `_ttl` can pass the `ttl` parameter to control the TTL expiration time of the table.
- Interfaces with `_reqid` can trace the entire call chain by passing the `reqid` parameter.
......@@ -300,7 +300,7 @@ stmt.executeUpdate("create table if not exists tb (ts timestamp, temperature int
> **Note**: If you do not use `use db` to specify the database, all subsequent operations on the table need to add the database name as a prefix, such as db.tb.
### 插入数据
### Insert data
```java
// insert data
......@@ -725,7 +725,7 @@ consumer.close()
For more information, see [Data Subscription](../../../develop/tmq).
### Usage examples
#### Full Sample Code
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
......
......@@ -120,7 +120,7 @@ _taosSql_ implements Go's `database/sql/driver` interface via cgo. You can use t
Use `taosSql` as `driverName` and a correct [DSN](#DSN) as `dataSourceName`. The DSN supports the following parameters.
* configPath specifies the `taos.cfg` directory
* cfg specifies the `taos.cfg` directory
For example:
......
......@@ -39,7 +39,7 @@ The Rust Connector is still under rapid development and is not guaranteed to be
* Install the Rust development toolchain
* If using the native connection, please install the TDengine client driver. Please refer to [install client driver](/reference/connector#install-client-driver)
# Add taos dependency
### Add taos dependency
Depending on the connection method, add the [taos][taos] dependency in your Rust project as follows:
......@@ -282,7 +282,7 @@ In the application code, use `pool.get()? ` to get a connection object [Taos].
let taos = pool.get()?;
```
# Connectors
### Connectors
The [Taos][struct.Taos] object provides an API to perform operations on multiple databases.
......
......@@ -10,10 +10,11 @@ import TabItem from "@theme/TabItem";
`taospy` is the official Python connector for TDengine. taospy provides a rich API that makes it easy for Python applications to use TDengine. `taospy` wraps both the [native interface](/reference/connector/cpp) and [REST interface](/reference/rest-api) of TDengine, which correspond to the `taos` and `taosrest` modules of the `taospy` package, respectively.
In addition to wrapping the native and REST interfaces, `taospy` also provides a set of programming interfaces that conforms to the [Python Data Access Specification (PEP 249)](https://peps.python.org/pep-0249/). It is easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).
The direct connection to the server using the native interface provided by the client driver is referred to hereinafter as a "native connection"; the connection to the server using the REST interface provided by taosAdapter is referred to hereinafter as a "REST connection".
`taos-ws-py` is an optional package that enables connecting to TDengine over WebSocket.
The source code for the Python connector is hosted on [GitHub](https://github.com/taosdata/taos-connector-python).
The direct connection to the server using the native interface provided by the client driver is referred to hereinafter as a "native connection"; the connection to the server using the REST or WebSocket interface provided by taosAdapter is referred to hereinafter as a "REST connection" or "WebSocket connection".
The source code for the Python connector is hosted on [GitHub](https://github.com/taosdata/taos-connector-python).
## Supported platforms
- The [supported platforms](/reference/connector/#supported-platforms) for the native connection are the same as the ones supported by the TDengine client.
......@@ -32,7 +33,7 @@ We recommend using the latest version of `taospy`, regardless of the version of
### Preparation
1. Install Python. The recent taospy package requires Python 3.6+. The earlier versions of taospy require Python 3.7+. The taos-ws-py package requires Python 3.7+. If Python is not available on your system, refer to the [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) to install it.
1. Install Python. The recent taospy package requires Python 3.6.2+. The earlier versions of taospy require Python 3.7+. The taos-ws-py package requires Python 3.7+. If Python is not available on your system, refer to the [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) to install it.
2. Install [pip](https://pypi.org/project/pip/). In most cases, the Python installer comes with the pip utility. If not, please refer to [pip documentation](https://pip.pypa.io/en/stable/installation/) to install it.
If you use a native connection, you will also need to [Install Client Driver](/reference/connector#Install-Client-Driver). The client install package includes the TDengine client dynamic link library (`libtaos.so` or `taos.dll`) and the TDengine CLI.
......@@ -114,6 +115,15 @@ For REST connections, verifying that the `taosrest` module can be imported succe
import taosrest
```
</TabItem>
<TabItem value="ws" label="WebSocket connection">
For WebSocket connections, verifying that the `taosws` module can be imported successfully can be done in the Python Interactive Shell by typing:
```python
import taosws
```
</TabItem>
</Tabs>
......@@ -182,6 +192,28 @@ If the test is successful, it will output the server version information, e.g.
}
```
</TabItem>
<TabItem value="ws" label="WebSocket connection" groupId="connect">
For a WebSocket connection, make sure the cluster and the taosAdapter component are running. This can be tested using the following `curl` command.
```
curl -i -N -d "show databases" -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -H "Connection: Upgrade" -H "Upgrade: websocket" -H "Host: <FQDN>:<PORT>" -H "Origin: http://<FQDN>:<PORT>" http://<FQDN>:<PORT>/rest/sql
```
The FQDN above is the FQDN of the machine running taosAdapter, and PORT is the port on which taosAdapter is listening (default `6041`).
If the test is successful, it will output the server version information, e.g.
```json
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Tue, 21 Mar 2023 09:29:17 GMT
Transfer-Encoding: chunked
{"status":"succ","head":["server_version()"],"column_meta":[["server_version()",8,8]],"data":[["2.6.0.27"]],"rows":1}
```
</TabItem>
</Tabs>
......@@ -228,6 +260,16 @@ All arguments to the `connect()` function are optional keyword arguments. The fo
- `password`: TDengine user password. The default is `taosdata`.
- `timeout`: HTTP request timeout. Enter a value in seconds. The default is `socket._GLOBAL_DEFAULT_TIMEOUT`. Usually, no configuration is needed.
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
```python
{{#include docs/examples/python/connect_websocket_examples.py:connect}}
```
The parameter of `connect()` is the URL of TDengine; the protocol is `taosws` or `ws`.
</TabItem>
</Tabs>
......@@ -298,7 +340,95 @@ The `RestClient` class is a direct wrapper for the [REST API](/reference/rest-ap
For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
```python
{{#include docs/examples/python/connect_websocket_examples.py:basic}}
```
- `conn.execute`: used to execute arbitrary SQL statements; returns the number of rows affected.
- `conn.query`: used to execute SQL queries; returns the query results.
</TabItem>
</Tabs>
### Usage with req_id
By using the optional req_id parameter, you can specify a request ID that can be used for tracing.
<Tabs defaultValue="rest">
<TabItem value="native" label="native connection">
##### TaosConnection class
The `TaosConnection` class contains both an implementation of the PEP249 Connection interface (e.g., the `cursor()` method and the `close()` method) and many extensions (e.g., the `execute()`, `query()`, `schemaless_insert()`, and `subscribe()` methods).
```python title="execute method"
{{#include docs/examples/python/connection_usage_native_reference_with_req_id.py:insert}}
```
```python title="query method"
{{#include docs/examples/python/connection_usage_native_reference_with_req_id.py:query}}
```
:::tip
The queried results can only be fetched once. For example, only one of `fetch_all()` and `fetch_all_into_dict()` can be used in the example above. Repeated fetches will result in an empty list.
:::
##### Use of TaosResult class
In the above example of using the `TaosConnection` class, we have shown two ways to get the result of a query: `fetch_all()` and `fetch_all_into_dict()`. In addition, `TaosResult` also provides methods to iterate through the result set by rows (`rows_iter`) or by data blocks (`blocks_iter`). Using these two methods will be more efficient in scenarios where the query has a large amount of data.
```python title="blocks_iter method"
{{#include docs/examples/python/result_set_with_req_id_examples.py}}
```
##### Use of the TaosCursor class
The `TaosConnection` class and the `TaosResult` class already implement all the functionality of the native interface. If you are familiar with the interfaces in the PEP249 specification, you can also use the methods provided by the `TaosCursor` class.
```python title="Use of TaosCursor"
{{#include docs/examples/python/cursor_usage_native_reference_with_req_id.py}}
```
:::note
The TaosCursor class uses native connections for write and query operations. In a client-side multi-threaded scenario, this cursor instance must remain exclusive to a single thread and cannot be shared across threads; otherwise, the returned results will contain errors.
:::
</TabItem>
<TabItem value="rest" label="REST connection">
##### Use of TaosRestCursor class
The `TaosRestCursor` class is an implementation of the PEP249 Cursor interface.
```python title="Use of TaosRestCursor"
{{#include docs/examples/python/connect_rest_with_req_id_examples.py:basic}}
```
- `cursor.execute`: Used to execute arbitrary SQL statements.
- `cursor.rowcount` : For write operations, returns the number of successful rows written. For query operations, returns the number of rows in the result set.
- `cursor.description` : Returns the description of the field. Please refer to [TaosRestCursor](https://docs.taosdata.com/api/taospy/taosrest/cursor.html) for the specific format of the description information.
##### Use of the RestClient class
The `RestClient` class is a direct wrapper for the [REST API](/reference/rest-api). It contains only a `sql()` method for executing arbitrary SQL statements and returning the result.
```python title="Use of RestClient"
{{#include docs/examples/python/rest_client_with_req_id_example.py}}
```
For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
```python
{{#include docs/examples/python/connect_websocket_with_req_id_examples.py:basic}}
```
- `conn.execute`: used to execute arbitrary SQL statements; returns the number of rows affected.
- `conn.query`: used to execute SQL queries; returns the query results.
</TabItem>
</Tabs>
......@@ -319,6 +449,13 @@ For a more detailed description of the `sql()` method, please refer to [RestClie
{{#include docs/examples/python/conn_rest_pandas.py}}
```
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
```python
{{#include docs/examples/python/conn_websocket_pandas.py}}
```
</TabItem>
</Tabs>
......
......@@ -94,7 +94,7 @@ In this scenario, modifying your project file is required in order to copy the W
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" GeneratePathProperty="true" />
</ItemGroup>
<Target Name="copyDLLDepency" BeforeTargets="BeforeBuild">
<Target Name="copyDLLDependency" BeforeTargets="BeforeBuild">
<ItemGroup>
<DepDLLFiles Include="$(PkgTDengine_Connector)\runtimes\**\*.*" />
</ItemGroup>
......
......@@ -87,7 +87,7 @@ In this section a few sample programs which use TDengine PHP connector to access
> Any error would throw exception: `TDengine\Exception\TDengineException`
### Establish Conection
### Establish Connection
<details>
<summary>Establish Connection</summary>
......
label: "connector"
\ No newline at end of file
label: "Connector"
......@@ -14,7 +14,7 @@ import PkgListV3 from "/components/PkgListV3";
Once the package is unzipped, you will see the following files in the directory:
- _install_client.sh_: install script
- _taos.tar.gz_: client driver package
- _package.tar.gz_: client driver package
- _driver_: TDengine client driver
- _examples_: some example programs of different programming languages (C/C#/go/JDBC/MATLAB/python/R)
You can run `install_client.sh` to install it.
......
......@@ -11,7 +11,7 @@ import PkgListV3 from "/components/PkgListV3";
The default installation path is C:\TDengine, including the following files (directories).
- _taos.exe_: TDengine CLI command-line program
- _taosadapter.exe_: server-side executable that provides RESTful services and accepts writing requests from a variety of other softwares
- _taosadapter.exe_: server-side executable that provides RESTful services and accepts writing requests from a variety of other software
- _taosBenchmark.exe_: TDengine testing tool
- _cfg_: configuration file directory
- _driver_: client driver dynamic link library
......
......@@ -61,7 +61,7 @@ The different database framework specifications for various programming language
| **Connection Management** | Support | Support | Support | Support | Support | Support |
| **Regular Query** | Support | Support | Support | Support | Support | Support |
| **Parameter Binding** | Not Supported | Not Supported | Support | Support | Not Supported | Support |
| **Subscription (TMQ) ** | Not Supported | Support | Support | Not Supported | Not Supported | Support |
| **Subscription (TMQ)** | Support | Support | Support | Not Supported | Not Supported | Support |
| **Schemaless** | Not Supported | Not Supported | Not Supported | Not Supported | Not Supported | Not Supported |
| **Bulk Pulling (based on WebSocket) ** | Support | Support | Support | Support | Support | Support |
| **DataFrame** | Not Supported | Support | Not Supported | Not Supported | Not Supported | Not Supported |
......
......@@ -58,9 +58,9 @@ Usage of taosAdapter:
--collectd.enable enable collectd. Env "TAOS_ADAPTER_COLLECTD_ENABLE" (default true)
--collectd.password string collectd password. Env "TAOS_ADAPTER_COLLECTD_PASSWORD" (default "taosdata")
--collectd.port int collectd server port. Env "TAOS_ADAPTER_COLLECTD_PORT" (default 6045)
--collectd.ttl int collectd data ttl. Env "TAOS_ADAPTER_COLLECTD_TTL"
--collectd.user string collectd user. Env "TAOS_ADAPTER_COLLECTD_USER" (default "root")
--collectd.worker int collectd write worker. Env "TAOS_ADAPTER_COLLECTD_WORKER" (default 10)
--collectd.ttl int collectd data ttl. Env "TAOS_ADAPTER_COLLECTD_TTL" (default 0, means no ttl)
-c, --config string config path default /etc/taos/taosadapter.toml
--cors.allowAllOrigins cors allow all origins. Env "TAOS_ADAPTER_CORS_ALLOW_ALL_ORIGINS" (default true)
--cors.allowCredentials cors allow credentials. Env "TAOS_ADAPTER_CORS_ALLOW_Credentials"
......@@ -68,8 +68,9 @@ Usage of taosAdapter:
--cors.allowOrigins stringArray cors allow origins. Env "TAOS_ADAPTER_ALLOW_ORIGINS"
--cors.allowWebSockets cors allow WebSockets. Env "TAOS_ADAPTER_CORS_ALLOW_WebSockets"
--cors.exposeHeaders stringArray cors expose headers. Env "TAOS_ADAPTER_Expose_Headers"
--debug enable debug mode. Env "TAOS_ADAPTER_DEBUG"
--debug enable debug mode. Env "TAOS_ADAPTER_DEBUG" (default true)
--help Print this help message and exit
--httpCodeServerError Use a non-200 http status code when taosd returns an error. Env "TAOS_ADAPTER_HTTP_CODE_SERVER_ERROR"
--influxdb.enable enable influxdb. Env "TAOS_ADAPTER_INFLUXDB_ENABLE" (default true)
--log.enableRecordHttpSql whether to record http sql. Env "TAOS_ADAPTER_LOG_ENABLE_RECORD_HTTP_SQL"
--log.path string log path. Env "TAOS_ADAPTER_LOG_PATH" (default "/var/log/taos")
......@@ -80,14 +81,17 @@ Usage of taosAdapter:
--log.sqlRotationSize string record sql log rotation size(KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_SQL_ROTATION_SIZE" (default "1GB")
--log.sqlRotationTime duration record sql log rotation time. Env "TAOS_ADAPTER_LOG_SQL_ROTATION_TIME" (default 24h0m0s)
--logLevel string log level (panic fatal error warn warning info debug trace). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
--monitor.collectDuration duration Set monitor duration. Env "TAOS_MONITOR_COLLECT_DURATION" (default 3s)
--monitor.identity string The identity of the current instance, or 'hostname:port' if it is empty. Env "TAOS_MONITOR_IDENTITY"
--monitor.incgroup Whether running in cgroup. Env "TAOS_MONITOR_INCGROUP"
      --monitor.password string                     TDengine password. Env "TAOS_MONITOR_PASSWORD" (default "taosdata")
      --monitor.pauseAllMemoryThreshold float       Memory percentage threshold for pause all. Env "TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD" (default 80)
--monitor.pauseQueryMemoryThreshold float Memory percentage threshold for pause query. Env "TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD" (default 70)
--monitor.user string TDengine user. Env "TAOS_MONITOR_USER" (default "root")
--monitor.writeInterval duration Set write to TDengine interval. Env "TAOS_MONITOR_WRITE_INTERVAL" (default 30s)
--monitor.writeToTD Whether write metrics to TDengine. Env "TAOS_MONITOR_WRITE_TO_TD"
--monitor.collectDuration duration Set monitor duration. Env "TAOS_ADAPTER_MONITOR_COLLECT_DURATION" (default 3s)
--monitor.disable Whether to disable monitoring. Env "TAOS_ADAPTER_MONITOR_DISABLE"
--monitor.disableCollectClientIP Whether to disable collecting clientIP. Env "TAOS_ADAPTER_MONITOR_DISABLE_COLLECT_CLIENT_IP"
--monitor.identity string The identity of the current instance, or 'hostname:port' if it is empty. Env "TAOS_ADAPTER_MONITOR_IDENTITY"
--monitor.incgroup Whether running in cgroup. Env "TAOS_ADAPTER_MONITOR_INCGROUP"
--monitor.password string TDengine password. Env "TAOS_ADAPTER_MONITOR_PASSWORD" (default "taosdata")
--monitor.pauseAllMemoryThreshold float Memory percentage threshold for pause all. Env "TAOS_ADAPTER_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD" (default 80)
--monitor.pauseQueryMemoryThreshold float Memory percentage threshold for pause query. Env "TAOS_ADAPTER_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD" (default 70)
--monitor.user string TDengine user. Env "TAOS_ADAPTER_MONITOR_USER" (default "root")
--monitor.writeInterval duration Set write to TDengine interval. Env "TAOS_ADAPTER_MONITOR_WRITE_INTERVAL" (default 30s)
--monitor.writeToTD Whether write metrics to TDengine. Env "TAOS_ADAPTER_MONITOR_WRITE_TO_TD"
--node_exporter.caCertFile string node_exporter ca cert file path. Env "TAOS_ADAPTER_NODE_EXPORTER_CA_CERT_FILE"
--node_exporter.certFile string node_exporter cert file path. Env "TAOS_ADAPTER_NODE_EXPORTER_CERT_FILE"
--node_exporter.db string node_exporter db name. Env "TAOS_ADAPTER_NODE_EXPORTER_DB" (default "node_exporter")
......@@ -100,9 +104,9 @@ Usage of taosAdapter:
--node_exporter.keyFile string node_exporter cert key file path. Env "TAOS_ADAPTER_NODE_EXPORTER_KEY_FILE"
--node_exporter.password string node_exporter password. Env "TAOS_ADAPTER_NODE_EXPORTER_PASSWORD" (default "taosdata")
--node_exporter.responseTimeout duration node_exporter response timeout. Env "TAOS_ADAPTER_NODE_EXPORTER_RESPONSE_TIMEOUT" (default 5s)
--node_exporter.ttl int node_exporter data ttl. Env "TAOS_ADAPTER_NODE_EXPORTER_TTL"
--node_exporter.urls strings node_exporter urls. Env "TAOS_ADAPTER_NODE_EXPORTER_URLS" (default [http://localhost:9100])
--node_exporter.user string node_exporter user. Env "TAOS_ADAPTER_NODE_EXPORTER_USER" (default "root")
--node_exporter.ttl int node_exporter data ttl. Env "TAOS_ADAPTER_NODE_EXPORTER_TTL" (default 0, means no ttl)
--opentsdb.enable enable opentsdb. Env "TAOS_ADAPTER_OPENTSDB_ENABLE" (default true)
--opentsdb_telnet.batchSize int opentsdb_telnet batch size. Env "TAOS_ADAPTER_OPENTSDB_TELNET_BATCH_SIZE" (default 1)
--opentsdb_telnet.dbs strings opentsdb_telnet db names. Env "TAOS_ADAPTER_OPENTSDB_TELNET_DBS" (default [opentsdb_telnet,collectd_tsdb,icinga2_tsdb,tcollector_tsdb])
......@@ -112,11 +116,11 @@ Usage of taosAdapter:
--opentsdb_telnet.password string opentsdb_telnet password. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PASSWORD" (default "taosdata")
--opentsdb_telnet.ports ints opentsdb telnet tcp port. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PORTS" (default [6046,6047,6048,6049])
--opentsdb_telnet.tcpKeepAlive enable tcp keep alive. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TCP_KEEP_ALIVE"
--opentsdb_telnet.ttl int opentsdb_telnet data ttl. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TTL"
--opentsdb_telnet.user string opentsdb_telnet user. Env "TAOS_ADAPTER_OPENTSDB_TELNET_USER" (default "root")
--opentsdb_telnet.ttl int opentsdb_telnet data ttl. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TTL" (default 0, means no ttl)
--pool.idleTimeout duration Set idle connection timeout. Env "TAOS_ADAPTER_POOL_IDLE_TIMEOUT" (default 1h0m0s)
--pool.maxConnect int max connections to taosd. Env "TAOS_ADAPTER_POOL_MAX_CONNECT" (default 4000)
--pool.maxIdle int max idle connections to taosd. Env "TAOS_ADAPTER_POOL_MAX_IDLE" (default 4000)
--pool.idleTimeout duration Set idle connection timeout. Env "TAOS_ADAPTER_POOL_IDLE_TIMEOUT"
--pool.maxConnect int max connections to taosd. Env "TAOS_ADAPTER_POOL_MAX_CONNECT"
--pool.maxIdle int max idle connections to taosd. Env "TAOS_ADAPTER_POOL_MAX_IDLE"
-P, --port int http port. Env "TAOS_ADAPTER_PORT" (default 6041)
--prometheus.enable enable prometheus. Env "TAOS_ADAPTER_PROMETHEUS_ENABLE" (default true)
--restfulRowLimit int restful returns the maximum number of rows (-1 means no limit). Env "TAOS_ADAPTER_RESTFUL_ROW_LIMIT" (default -1)
......@@ -133,9 +137,9 @@ Usage of taosAdapter:
--statsd.port int statsd server port. Env "TAOS_ADAPTER_STATSD_PORT" (default 6044)
--statsd.protocol string statsd protocol [tcp or udp]. Env "TAOS_ADAPTER_STATSD_PROTOCOL" (default "udp")
--statsd.tcpKeepAlive enable tcp keep alive. Env "TAOS_ADAPTER_STATSD_TCP_KEEP_ALIVE"
--statsd.ttl int statsd data ttl. Env "TAOS_ADAPTER_STATSD_TTL"
--statsd.user string statsd user. Env "TAOS_ADAPTER_STATSD_USER" (default "root")
--statsd.worker int statsd write worker. Env "TAOS_ADAPTER_STATSD_WORKER" (default 10)
--statsd.ttl int statsd data ttl. Env "TAOS_ADAPTER_STATSD_TTL" (default 0, means no ttl)
--taosConfigDir string load taos client config path. Env "TAOS_ADAPTER_TAOS_CONFIG_FILE"
--version Print the version and exit
```
......@@ -324,6 +328,10 @@ This parameter controls the number of results returned by the following interfac
- `http://<fqdn>:6041/rest/sql`
- `http://<fqdn>:6041/prometheus/v1/remote_read/:db`
## Configure HTTP return codes

taosAdapter uses the parameter `httpCodeServerError` to set whether to return a non-200 HTTP status code when the C interface returns an error. When set to true, different HTTP status codes are returned according to the error code returned by C. For details, see the HTTP Response Code chapter of the [RESTful API](https://docs.tdengine.com/reference/rest-api/).
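As a hedged sketch (the SQL statement is deliberately invalid, the FQDN is a placeholder, and the exact status code depends on the underlying error), the effect can be observed with a plain REST request:

```shell
curl -u root:taosdata -d "select * from no_such_db.t1" http://<fqdn>:6041/rest/sql
# With httpCodeServerError enabled, the response carries a 4xx/5xx status code
# derived from the taosd error instead of the default HTTP 200.
```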
## Troubleshooting
You can check the taosAdapter running status with the `systemctl status taosadapter` command.
......
......@@ -208,7 +208,10 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
Keep trying if failed to insert, default is no. Available with v3.0.9+.
- **-z/--trying-interval <NUMBER\>** :
Specify interval between keep trying insert. Valid value is a postive number. Only valid when keep trying be enabled. Available with v3.0.9+.
Specify interval between keep trying insert. Valid value is a positive number. Only valid when keep trying be enabled. Available with v3.0.9+.
- **-v/--vgroups <NUMBER\>** :
Specify the number of vgroups when creating a database; only valid for server version 3.0+
- **-V/--version** :
Show version information only. Users should not use it with other parameters.
......@@ -239,7 +242,15 @@ The parameters listed in this section apply to all function modes.
- ** keep_trying ** : Keep trying if failed to insert, default is no. Available with v3.0.9+.
- ** trying_interval ** : Specify interval between keep trying insert. Valid value is a postive number. Only valid when keep trying be enabled. Available with v3.0.9+.
- **trying_interval**: Specify the interval between insertion retries. The valid value is a positive number. Only valid when keep trying is enabled. Available with v3.0.9+.

- **childtable_from and childtable_to**: specify the range of child tables to create. The range is [childtable_from, childtable_to).

- **continue_if_fail**: allows the user to specify the reaction when an insertion fails (see the sketch after this list).
  - "continue_if_fail": "no" // taosBenchmark exits if an insertion fails; this is the default behavior.
  - "continue_if_fail": "yes" // taosBenchmark warns the user about a failed insertion but continues with the next record.
  - "continue_if_fail": "smart" // taosBenchmark tries to create the non-existent child table if an insertion fails.
#### Database related configuration parameters
......@@ -352,7 +363,7 @@ The configuration parameters for specifying super table tag columns and data col
- **min**: The minimum value of the column/label of the data type. The generated value will be greater than or equal to the minimum value.
- **max**: The maximum value of the column/label of the data type. The generated value will less than the maxium value.
- **max**: The maximum value of the column/label of the data type. The generated value will less than the maximum value.
- **values**: The value field of the nchar/binary column/label, which will be chosen randomly from the values.
......@@ -392,11 +403,11 @@ See [General Configuration Parameters](#General Configuration Parameters) for de
#### Configuration parameters for executing the specified query statement
The configuration parameters for querying the sub-tables or the normal tables are set in `specified_table_query`.
The configuration parameters for querying the specified table (it can be a super table, a sub-table or a normal table) are set in `specified_table_query`.
- **query_interval** : The query interval in seconds, the default value is 0.
- **threads**: The number of threads to execute the query SQL, the default value is 1.
- **threads/concurrent**: The number of threads used to execute the query SQL; the default value is 1.
- **sqls**:
- **sql**: the SQL command to be executed.
......@@ -423,9 +434,9 @@ The configuration parameters of the super table query are set in `super_table_qu
#### Configuration parameters for executing the specified subscription statement
The configuration parameters for subscribing to a sub-table or a generic table are set in `specified_table_query`.
The configuration parameters for subscribing to a specified table (it can be a super table, a sub-table or a generic table) are set in `specified_table_query`.
- **threads**: The number of threads to execute SQL, default is 1.
- **threads/concurrent**: The number of threads used to execute the SQL; the default is 1.
- **interval**: The time interval to execute the subscription, in seconds, default is 0.
......
......@@ -1590,7 +1590,7 @@
},
{
"datasource": "${DS_TDENGINE}",
"description": "taosd max memery last 10 minutes",
"description": "taosd max memory last 10 minutes",
"fieldConfig": {
"defaults": {
"color": {
......@@ -1919,7 +1919,7 @@
},
{
"datasource": "${DS_TDENGINE}",
"description": "taosd max memery last 10 minutes",
"description": "taosd max memory last 10 minutes",
"fieldConfig": {
"defaults": {
"color": {
......@@ -1977,7 +1977,7 @@
},
{
"datasource": "${DS_TDENGINE}",
"description": "taosd max memery last 10 minutes",
"description": "taosd max memory last 10 minutes",
"fieldConfig": {
"defaults": {
"color": {
......@@ -2825,7 +2825,7 @@
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Requets Count per Minutes $fqdn",
"title": "Requests Count per Minutes $fqdn",
"tooltip": {
"shared": true,
"sort": 0,
......
......@@ -1566,7 +1566,7 @@
},
{
"datasource": "${ds}",
"description": "taosd max memery last 10 minutes",
"description": "taosd max memory last 10 minutes",
"fieldConfig": {
"defaults": {
"color": {
......@@ -1933,7 +1933,7 @@
},
{
"datasource": "${ds}",
"description": "taosd max memery last 10 minutes",
"description": "taosd max memory last 10 minutes",
"fieldConfig": {
"defaults": {
"color": {
......@@ -2000,7 +2000,7 @@
},
{
"datasource": "${ds}",
"description": "taosd max memery last 10 minutes",
"description": "taosd max memory last 10 minutes",
"fieldConfig": {
"defaults": {
"color": {
......@@ -2961,7 +2961,7 @@
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Requets Count per Minutes $fqdn",
"title": "Requests Count per Minutes $fqdn",
"tooltip": {
"shared": true,
"sort": 0,
......@@ -3355,4 +3355,4 @@
"title": "TDengine",
"uid": "tdengine",
"version": 8
}
\ No newline at end of file
}
......@@ -186,7 +186,7 @@
},
{
"datasource": "TDengine",
"description": "taosd max memery last 10 minutes",
"description": "taosd max memory last 10 minutes",
"gridPos": {
"h": 6,
"w": 8,
......@@ -253,7 +253,7 @@
],
"timeFrom": null,
"timeShift": null,
"title": "taosd memery",
"title": "taosd memory",
"type": "gauge"
},
{
......
......@@ -61,12 +61,14 @@ And many more parameters.
- -c CONFIGDIR: Specify the directory where configuration file exists. The default is `/etc/taos`, and the default name of the configuration file in this directory is `taos.cfg`
- -C: Print the configuration parameters of `taos.cfg` in the default directory or specified by -c
- -d DATABASE: Specify the database to use when connecting to the server
- -E dsn: connect to the TDengine Cloud or a server that provides WebSocket connections
- -f FILE: Execute the SQL script file in non-interactive mode. Note that each SQL statement in the script file must occupy a single line.
- -k: Test the operational status of the server. 0: unavailable; 1: network ok; 2: service ok; 3: service degraded; 4: exiting
- -l PKTLEN: Test package size to be used for network testing
- -n NETROLE: test scope for network connection test, default is `client`. The value can be `client` or `server`.
- -N PKTNUM: Number of packets used for network testing
- -r: output the timestamp format as unsigned 64-bits integer (uint64_t in C language)
- -R: Use RESTful mode when connecting
- -s COMMAND: execute SQL commands in non-interactive mode
- -t: Test the boot status of the server. The statuses of -k apply.
- -w DISPLAYWIDTH: Specify the number of columns of the server display.
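For example (assuming a locally reachable server and a hypothetical database name), the status-check and non-interactive options above can be used as follows:

```shell
taos -k                          # probe the server; prints one of the status codes listed above
taos -d power -s "show tables;"  # connect to database `power` and run a single SQL command
```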
......
......@@ -29,7 +29,7 @@ taos -C
taos --dump-config
```
# Configuration Parameters
## Configuration Parameters
:::note
The parameters are described in this document by the effect that they have on the system.
......@@ -83,7 +83,7 @@ The parameters described in this document by the effect that they have on the sy
| :------- | :----------- | :----------------------------------------------- | :--------------------------------------------------------------------------------------------- |
| TCP | 6030 | Communication between client and server. In a multi-node cluster, communication between nodes. serverPort |
| TCP | 6041 | REST connection between client and server | Prior to 2.4.0.0: serverPort+11; After 2.4.0.0 refer to [taosAdapter](/reference/taosadapter/) |
| TCP | 6043 | Service Port of TaosKeeper | The parameter of TaosKeeper |
| TCP | 6043 | Service Port of taosKeeper | The parameter of taosKeeper |
| TCP | 6044 | Data access port for StatsD | Configurable through taosAdapter parameters. |
| UDP | 6045 | Data access for statsd | Configurable through taosAdapter parameters. |
| TCP | 6060 | Port of Monitoring Service in Enterprise version | |
......@@ -99,6 +99,9 @@ The parameters described in this document by the effect that they have on the sy
## Monitoring Parameters
:::note
Please note that `taoskeeper` needs to be installed and running to create the `log` database and receive the metrics sent by `taosd` as the full monitoring solution.
### monitor
| Attribute | Description |
......@@ -599,7 +602,7 @@ The charset that takes effect is UTF-8.
| Applicable | Client only |
| Meaning | Whether schemaless columns are consistently ordered; deprecated and discarded since 3.0.3.0 |
| Value Range | 0: not consistent; 1: consistent. |
| Default | 1 |
| Default | 0 |
## Compress Parameters
......
......@@ -24,7 +24,7 @@ All executable files of TDengine are in the _/usr/local/taos/bin_ directory by d
- _taosdump_: data import and export tool
- _taosBenchmark_: TDengine testing tool
- _remove.sh_: script to uninstall TDengine, please execute it carefully, link to the **rmtaos** command in the /usr/bin directory. Will remove the TDengine installation directory `/usr/local/taos`, but will keep `/etc/taos`, `/var/lib/taos`, `/var/log/taos`
- _taosadapter_: server-side executable that provides RESTful services and accepts writing requests from a variety of other softwares
- _taosadapter_: server-side executable that provides RESTful services and accepts writing requests from a variety of other software
- _TDinsight.sh_: script to download TDinsight and install it
- _set_core.sh_: script for setting up the system to generate core dump files for easy debugging
- _taosd-dump-cfg.gdb_: gdb script to facilitate debugging of taosd.
......
......@@ -3,13 +3,11 @@ title: Schemaless Writing
description: This document describes how to use the schemaless write component of TDengine.
---
In IoT applications, data is collected for many purposes such as intelligent control, business analysis, device monitoring and so on. Due to changes in business or functional requirements or changes in device hardware, the application logic and even the data collected may change. Schemaless writing automatically creates storage structures for your data as it is being written to TDengine, so that you do not need to create supertables in advance. When necessary, schemaless writing
will automatically add the required columns to ensure that the data written by the user is stored correctly.
In IoT applications, data is collected for many purposes such as intelligent control, business analysis, device monitoring and so on. Due to changes in business or functional requirements or changes in device hardware, the application logic and even the data collected may change. Schemaless writing automatically creates storage structures for your data as it is being written to TDengine, so that you do not need to create supertables in advance. When necessary, schemaless writing will automatically add the required columns to ensure that the data written by the user is stored correctly.
The schemaless writing method creates super tables and their corresponding subtables. These are completely indistinguishable from the super tables and subtables created directly via SQL. You can write data directly to them via SQL statements. Note that the names of tables created by schemaless writing are based on fixed mapping rules for tag values, so they are not explicitly ideographic and they lack readability.
Tips:
The schemaless write will automatically create a table. You do not need to create a table manually, or an unknown error may occur.
Note: Schemaless writing creates tables automatically. Creating tables manually is not supported with schemaless writing.
## Schemaless Writing Line Protocol
......@@ -50,8 +48,7 @@ In the schemaless writing data line protocol, each data item in the field_set ne
- `t`, `T`, `true`, `True`, `TRUE`, `f`, `F`, `false`, and `False` will be handled directly as BOOL types.
For example, the following data rows write c1 column as 3 (BIGINT), c2 column as false (BOOL), c3 column
as "passit" (BINARY), c4 column as 4 (DOUBLE), and the primary key timestamp as 1626006833639000000 to child table with the t1 label as "3" (NCHAR), the t2 label as "4" (NCHAR), and the t3 label as "t3" (NCHAR) and the super table named `st`.
For example, the following string indicates that the one row of data is written to the st supertable with the t1 tag as "3" (NCHAR), the t2 tag as "4" (NCHAR), and the t3 tag as "t3" (NCHAR); the c1 column is 3 (BIGINT), the c2 column is false (BOOL), the c3 column is "passit" (BINARY), the c4 column is 4 (DOUBLE), and the primary key timestamp is 1626006833639000000.
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
......@@ -69,23 +66,31 @@ Schemaless writes process row data according to the following principles.
"measurement,tag_key1=tag_value1,tag_key2=tag_value2"
```
:::tip
Note that tag_key1 and tag_key2 do not follow the original order in which the user entered the tags, but are sorted in ascending string order by tag name. Therefore, tag_key1 is not necessarily the first tag entered in the line protocol.
The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t_" is a fixed prefix that every table generated by this mapping relationship has.
The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t\_" is a fixed prefix that every table generated by this mapping relationship has.
:::
You can configure smlChildTableName in taos.cfg to specify table names, for example, `smlChildTableName=tname`. You can insert `st,tname=cpu1,t1=4 c1=3 1626006833639000000` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
2. If the super table obtained by parsing the line protocol does not exist, this super table is created.
**Important:** Manually creating supertables for schemaless writing is not supported. Schemaless writing creates appropriate supertables automatically.
3. If the subtable obtained by parsing the line protocol does not exist, schemaless writing creates the subtable according to the subtable name determined in step 1 or 2.
4. If the specified tag or regular column in the data row does not exist, the corresponding tag or regular column is added to the super table (only incremental).
5. If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to
NULL.
5. If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to NULL.
6. For BINARY or NCHAR columns, if the length of the value provided in a data row exceeds the column type limit, the maximum length of characters allowed to be stored in the column is automatically increased (only incremented and not decremented) to ensure complete preservation of the data.
7. Errors encountered throughout the processing will interrupt the writing process and return an error code.
8. It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat in taos.cfg to false. Otherwise, data will be written out of order and a database error will occur.(smlDataFormat in taos.cfg default to false after version of 3.0.1.3, discarded since 3.0.3.0)
:::tip
All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed
16KB. See [TDengine SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area.
8. It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat in taos.cfg to false. Otherwise, data will be written out of order and a database error will occur.
Note: TDengine 3.0.3.0 and later automatically detect whether order is consistent. This parameter is no longer used.
:::tip
All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed 48 KB and the total length of a tag value cannot exceed 16 KB. See [TDengine SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area.
:::
## Time resolution recognition
......@@ -114,8 +119,7 @@ In OpenTSDB file and JSON protocol modes, the precision of the timestamp is dete
## Data Model Mapping
This section describes how data in line protocol is mapped to a schema. The data measurement in each line is mapped to a
supertable name. The tag name in tag_set is the tag name in the schema, and the name in field_set is the column name in the schema. The following example shows how data is mapped:
This section describes how data in InfluxDB line protocol is mapped to a schema. The data measurement in each line is mapped to a supertable name. The tag name in tag_set is the tag name in the schema, and the name in field_set is the column name in the schema. The following example shows how data is mapped:
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
......@@ -131,7 +135,7 @@ create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 bigint) ta
This section describes the impact on the schema caused by different data being written.
If you use line protocol to write to a specific tag field and then later change the field type, a schema error will ocur. This triggers an error on the write API. This is shown as follows:
If you use line protocol to write to a specific tag field and then later change the field type, a schema error will occur. This triggers an error on the write API. This is shown as follows:
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
......@@ -160,7 +164,7 @@ The preceding data includes a new entry, c6, with type binary(6). When this occu
TDengine guarantees the idempotency of data writes. This means that you can repeatedly call the API to perform write operations with bad data. However, TDengine does not guarantee the atomicity of multi-row writes. In a multi-row write, some data may be written successfully and other data unsuccessfully.
##: Error Codes
## Error Codes
The TSDB_CODE_TSC_LINE_SYNTAX_ERROR indicates an error in the schemaless writing component.
This error occurs when writing text. For other errors, schemaless writing uses the standard TDengine error codes
......
......@@ -4,23 +4,24 @@ title: taosKeeper
description: This document describes how to use taosKeeper, a tool for exporting TDengine monitoring metrics.
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
## Introduction
taosKeeper is a tool for TDengine that exports monitoring metrics. With taosKeeper, you can easily monitor the operational status of your TDengine deployment. taosKeeper uses the TDengine REST API. It is not necessary to install TDengine Client to use taosKeeper.
## Installation
<!-- There are two ways to install taosKeeper: -->
There are two ways to install taosKeeper:
Methods of installing taosKeeper:
<!--- Installing the official TDengine installer will automatically install taosKeeper. Please refer to [TDengine installation](/operation/pkg-install) for details. -->
- You can compile taosKeeper separately and install it. Please refer to the [taosKeeper](https://github.com/taosdata/taoskeeper) repository for details. -->
You can compile taosKeeper separately and install it. Please refer to the [taosKeeper](https://github.com/taosdata/taoskeeper) repository for details.
- Installing the official TDengine installer will automatically install taosKeeper. Please refer to [TDengine installation](/operation/pkg-install) for details.
## Run
- You can compile taosKeeper separately and install it. Please refer to the [taosKeeper](https://github.com/taosdata/taoskeeper) repository for details.
## Configuration and Launch
### Configuration and running methods
### Configuration
taosKeeper is executed from the terminal of the operating system. It supports three configuration methods: [Command-line arguments](#command-line-arguments-in-detail), [environment variables](#environment-variable-in-detail) and [configuration file](#configuration-file-parameters-in-detail). The precedence is command-line arguments, then environment variables, then the configuration file.
......@@ -33,28 +34,81 @@ monitorFqdn localhost # taoskeeper's FQDN
For more information, see [TDengine Monitoring Configuration](../config/#monitoring).
### Command-Line Parameters
### Quick Launch
You can use command-line parameters to run taosKeeper and control its behavior:
<Tabs>
<TabItem label="Linux" value="linux">
```shell
$ taosKeeper
After the installation is complete, run the following command to start the taoskeeper service:
```bash
systemctl start taoskeeper
```
### Environment variable
You can use Environment variable to run taosKeeper and control its behavior:
Run the following command to confirm that taoskeeper is running normally:
```shell
$ export TAOS_KEEPER_TDENGINE_HOST=192.168.64.3
$ taoskeeper
```bash
systemctl status taoskeeper
```
Output similar to the following indicates that taoskeeper is running normally:
```
Active: active (running)
```
Output similar to the following indicates that taoskeeper has not started successfully:
```
Active: inactive (dead)
```
You can run `taoskeeper -h` for more details.
The following `systemctl` commands can help you manage taoskeeper service:
- Start taoskeeper Server: `systemctl start taoskeeper`
- Stop taoskeeper Server: `systemctl stop taoskeeper`
- Restart taoskeeper Server: `systemctl restart taoskeeper`
- Check taoskeeper Server status: `systemctl status taoskeeper`
:::info
- The `systemctl` command requires _root_ privileges. If you are not logged in as the _root_ user, use the `sudo` command.
- The `systemctl stop taoskeeper` command will instantly stop taoskeeper Server.
- If your system does not include `systemd`, you can run `/usr/local/taos/bin/taoskeeper` to start taoskeeper manually.
:::
</TabItem>
### Configuration File
<TabItem label="macOS" value="macos">
You can quickly launch taosKeeper with the following commands. If you do not specify a configuration file, `/etc/taos/keeper.toml` is used by default. If this file does not specify configurations, the default values are used.
After the installation is complete, run `launchctl start com.tdengine.taoskeeper` to start taoskeeper Server.
The following `launchctl` commands can help you manage taoskeeper service:
- Start taoskeeper Server: `sudo launchctl start com.tdengine.taoskeeper`
- Stop taoskeeper Server: `sudo launchctl stop com.tdengine.taoskeeper`
- Check taoskeeper Server status: `sudo launchctl list | grep taoskeeper`
:::info
- Please use `sudo` to run `launchctl` to manage _com.tdengine.taoskeeper_ with administrator privileges.
- The administrator privilege is required for service management to enhance security.
- Troubleshooting:
- The first column returned by the command `launchctl list | grep taoskeeper` is the PID of the program. If it's `-`, that means the taoskeeper service is not running.
- If the service is abnormal, please check the `launchd.log` file from the system log.
:::
</TabItem>
</Tabs>
#### Launch With Configuration File
You can quickly launch taosKeeper with the following commands. If you do not specify a configuration file, `/etc/taos/keeper.toml` is used by default. If this file does not specify configurations, the default values are used.
```shell
$ taoskeeper -c <keeper config file>
......@@ -132,19 +186,36 @@ $ curl http://127.0.0.1:6043/metrics
Sample result set (excerpt):
```shell
# HELP taos_cluster_info_connections_total
# HELP taos_cluster_info_connections_total
# TYPE taos_cluster_info_connections_total counter
taos_cluster_info_connections_total{cluster_id="5981392874047724755"} 16
# HELP taos_cluster_info_dbs_total
# HELP taos_cluster_info_dbs_total
# TYPE taos_cluster_info_dbs_total counter
taos_cluster_info_dbs_total{cluster_id="5981392874047724755"} 2
# HELP taos_cluster_info_dnodes_alive
# HELP taos_cluster_info_dnodes_alive
# TYPE taos_cluster_info_dnodes_alive counter
taos_cluster_info_dnodes_alive{cluster_id="5981392874047724755"} 1
# HELP taos_cluster_info_dnodes_total
# HELP taos_cluster_info_dnodes_total
# TYPE taos_cluster_info_dnodes_total counter
taos_cluster_info_dnodes_total{cluster_id="5981392874047724755"} 1
# HELP taos_cluster_info_first_ep
# HELP taos_cluster_info_first_ep
# TYPE taos_cluster_info_first_ep gauge
taos_cluster_info_first_ep{cluster_id="5981392874047724755",value="hlb:6030"} 1
```
\ No newline at end of file
```
### check_health
```
$ curl -i http://127.0.0.1:6043/check_health
```
Response:
```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Mon, 03 Apr 2023 07:20:38 GMT
Content-Length: 19
{"version":"1.0.0"}
```
......@@ -31,7 +31,7 @@ The default database name written by taosAdapter is `statsd`. To specify a diffe
### Configuring StatsD
To use StatsD, you need to download its [source code](https://github.com/statsd/statsd). Please refer to the example file `exampleConfig.js` in the root directory of the source download to modify the configuration file. In <taosAdpater's host\>, please fill in the domain name or IP address of the server running taosAdapter, and <port for StatsD\>, please fill in the port where taosAdapter receives StatsD data (default is 6044).
To use StatsD, you need to download its [source code](https://github.com/statsd/statsd). Please refer to the example file `exampleConfig.js` in the root directory of the source download to modify the configuration file. In <taosAdapter's host\>, please fill in the domain name or IP address of the server running taosAdapter, and <port for StatsD\>, please fill in the port where taosAdapter receives StatsD data (default is 6044).
```
backends section add ". /backends/repeater"
......
......@@ -77,7 +77,7 @@ sudo -u grafana grafana-cli plugins install tdengine-datasource
You can also download zip files from [GitHub](https://github.com/taosdata/grafanaplugin/releases/tag/latest) or [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) and install manually. The commands are as follows:
```bash
GF_VERSION=3.2.7
GF_VERSION=3.3.1
# from GitHub
wget https://github.com/taosdata/grafanaplugin/releases/download/v$GF_VERSION/tdengine-datasource-$GF_VERSION.zip
# from Grafana
......
......@@ -77,7 +77,7 @@ Development: false
### Install from source code
```
git clone https://github.com/taosdata/kafka-connect-tdengine.git
git clone --branch 3.0 https://github.com/taosdata/kafka-connect-tdengine.git
cd kafka-connect-tdengine
mvn clean package
unzip -d $CONFLUENT_HOME/share/java/ target/components/packages/taosdata-kafka-connect-tdengine-*.zip
......
......@@ -28,4 +28,4 @@ SHOW MNODES;
The end point and role/status (leader, follower, candidate, offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode. Without at least one mnode, the cluster cannot work.
From TDengine 3.0.0, RAFT procotol is used to guarantee the high availability, so the number of mnodes is should be 1 or 3.
From TDengine 3.0.0, the RAFT protocol is used to guarantee high availability, so the number of mnodes should be 1 or 3.
......@@ -14,8 +14,8 @@ create database db0 vgroups 100;
The proper value of `vgroups` depends on available system resources. Assuming there is only one database to be created in the system, the number of `vgroups` is determined by the available resources from all dnodes. In principle, more vgroups can be created if you have more CPU and memory. Disk I/O is another important factor to consider: once disk I/O becomes the bottleneck, more vgroups may degrade system performance significantly. If multiple databases are to be created in the system, the total number of `vgroups` across all databases depends on the available system resources. Distribute vgroups among these databases carefully; you need to consider the number of tables, data writing frequency, and size of each data row for all these databases. A recommended practice is to first choose a starting number for `vgroups`, for example double the number of CPU cores, then adjust and optimize system configurations to find the best setting for `vgroups`, and finally distribute these vgroups among the databases.
Furthermode, TDengine distributes the vgroups of each database equally among all dnodes. In case of replica 3, the distrubtion is even more complex, TDengine tries its best to prevent any dnode from becoming a bottleneck.
Furthermore, TDengine distributes the vgroups of each database equally among all dnodes. In the case of replica 3, the distribution is even more complex, and TDengine tries its best to prevent any dnode from becoming a bottleneck.
TDengine utilizes the above methods to achieve load balance in a cluster and thus higher throughput.
Once the load balance is achieved, after some operations like deleting tables or droping databases, the load across all dnodes may become inbalanced, the method of rebalance will be provided in later versions. However, even without explicit rebalancing, TDengine will try its best to achieve new balance without manual interfering when a new database is created.
\ No newline at end of file
Once load balance is achieved, operations such as deleting tables or dropping databases may cause the load across all dnodes to become imbalanced; a rebalance method will be provided in later versions. However, even without explicit rebalancing, TDengine will try its best to achieve a new balance without manual intervention when a new database is created.
......@@ -67,7 +67,7 @@ sudo systemctl start telegraf
Log in to the Grafana interface using a web browser at `IP:3000`, with the system's initial username and password being `admin/admin`.
Click on the gear icon on the left and select `Plugins`, you should find the TDengine data source plugin icon.
Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json`, download the dashboard JSON file and import it. You will then see the dashboard in the following screen.
Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v3.json` (for TDengine 3.0; for TDengine 2.x, please use `telegraf-dashboard-v2.json`), download the dashboard JSON file and import it. You will then see the dashboard in the following screen.
![TDengine Database IT-DevOps-Solutions-telegraf-dashboard](./IT-DevOps-Solutions-telegraf-dashboard.webp)
......
......@@ -10,6 +10,22 @@ For TDengine 2.x installation packages by version, please visit [here](https://w
import Release from "/components/ReleaseV3";
## 3.0.3.2
<Release type="tdengine" version="3.0.3.2" />
## 3.0.3.1
<Release type="tdengine" version="3.0.3.1" />
## 3.0.3.0
<Release type="tdengine" version="3.0.3.0" />
## 3.0.2.6
<Release type="tdengine" version="3.0.2.6" />
......
......@@ -10,6 +10,22 @@ For other historical version installers, please visit [here](https://www.taosdat
import Release from "/components/ReleaseV3";
## 2.4.11
<Release type="tools" version="2.4.11" />
## 2.4.10
<Release type="tools" version="2.4.10" />
## 2.4.9
<Release type="tools" version="2.4.9" />
## 2.4.8
<Release type="tools" version="2.4.8" />
## 2.4.6
<Release type="tools" version="2.4.6" />
......
......@@ -70,7 +70,7 @@ static int32_t init_env() {
taos_free_result(pRes);
// create database
pRes = taos_query(pConn, "create database tmqdb");
pRes = taos_query(pConn, "create database tmqdb wal_retention_period 3600");
if (taos_errno(pRes) != 0) {
printf("error in create tmqdb, reason:%s\n", taos_errstr(pRes));
return -1;
......
......@@ -48,7 +48,7 @@ namespace TDengineExample
static void PrepareDatabase(IntPtr conn)
{
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test");
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test WAL_RETENTION_PERIOD 3600");
if (TDengine.ErrorNo(res) != 0)
{
throw new Exception("failed to create database, reason: " + TDengine.Error(res));
......
......@@ -54,7 +54,7 @@ namespace TDengineExample
static void PrepareDatabase(IntPtr conn)
{
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test");
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test WAL_RETENTION_PERIOD 3600");
if (TDengine.ErrorNo(res) != 0)
{
throw new Exception("failed to create database, reason: " + TDengine.Error(res));
......
......@@ -58,7 +58,7 @@ namespace TDengineExample
static void PrepareDatabase(IntPtr conn)
{
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test");
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test WAL_RETENTION_PERIOD 3600");
if (TDengine.ErrorNo(res) != 0)
{
throw new Exception("failed to create database, reason: " + TDengine.Error(res));
......
......@@ -11,7 +11,7 @@ namespace TDengineExample
IntPtr conn = GetConnection();
try
{
IntPtr res = TDengine.Query(conn, "CREATE DATABASE power");
IntPtr res = TDengine.Query(conn, "CREATE DATABASE power WAL_RETENTION_PERIOD 3600");
CheckRes(conn, res, "failed to create database");
res = TDengine.Query(conn, "USE power");
CheckRes(conn, res, "failed to change database");
......
......@@ -76,7 +76,7 @@ namespace TDengineExample
static void PrepareSTable()
{
IntPtr res = TDengine.Query(conn, "CREATE DATABASE power");
IntPtr res = TDengine.Query(conn, "CREATE DATABASE power WAL_RETENTION_PERIOD 3600");
CheckResPtr(res, "failed to create database");
res = TDengine.Query(conn, "USE power");
CheckResPtr(res, "failed to change database");
......
......@@ -15,7 +15,7 @@ func main() {
panic(err)
}
defer db.Close()
_, err = db.Exec("create database if not exists example_tmq")
_, err = db.Exec("create database if not exists example_tmq wal_retention_period 3600")
if err != nil {
panic(err)
}
......
......@@ -35,7 +35,7 @@ public class SubscribeDemo {
try (Statement statement = connection.createStatement()) {
statement.executeUpdate("drop topic if exists " + TOPIC);
statement.executeUpdate("drop database if exists " + DB_NAME);
statement.executeUpdate("create database " + DB_NAME);
statement.executeUpdate("create database " + DB_NAME + " wal_retention_period 3600");
statement.executeUpdate("use " + DB_NAME);
statement.executeUpdate(
"CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT) TAGS (`groupid` INT, `location` BINARY(24))");
......
......@@ -35,7 +35,7 @@ public class WebsocketSubscribeDemo {
Statement statement = connection.createStatement()) {
statement.executeUpdate("drop topic if exists " + TOPIC);
statement.executeUpdate("drop database if exists " + DB_NAME);
statement.executeUpdate("create database " + DB_NAME);
statement.executeUpdate("create database " + DB_NAME + " wal_retention_period 3600");
statement.executeUpdate("use " + DB_NAME);
statement.executeUpdate(
"CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT) TAGS (`groupid` INT, `location` BINARY(24))");
......
......@@ -36,28 +36,17 @@ public class DataBaseMonitor {
stmt.execute("CREATE STABLE test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)");
}
public Long count() throws SQLException {
if (!stmt.isClosed()) {
ResultSet result = stmt.executeQuery("SELECT count(*) from test.meters");
public long count() throws SQLException {
try (ResultSet result = stmt.executeQuery("SELECT count(*) from test.meters")) {
result.next();
return result.getLong(1);
}
return null;
}
/**
* show test.stables;
*
* name | created_time | columns | tags | tables |
* ============================================================================================
* meters | 2022-07-20 08:39:30.902 | 4 | 2 | 620000 |
*/
public Long getTableCount() throws SQLException {
if (!stmt.isClosed()) {
ResultSet result = stmt.executeQuery("show test.stables");
public long getTableCount() throws SQLException {
try (ResultSet result = stmt.executeQuery("select count(*) from information_schema.ins_tables where db_name = 'test';")) {
result.next();
return result.getLong(5);
return result.getLong(1);
}
return null;
}
}
\ No newline at end of file
......@@ -42,7 +42,7 @@ public class SQLWriter {
/**
* Maximum SQL length.
*/
private int maxSQLLength;
private int maxSQLLength = 800_000;
/**
* Map from table name to column values. For example:
......@@ -81,14 +81,6 @@ public class SQLWriter {
conn = getConnection();
stmt = conn.createStatement();
stmt.execute("use test");
ResultSet rs = stmt.executeQuery("show variables");
while (rs.next()) {
String configName = rs.getString(1);
if ("maxSQLLength".equals(configName)) {
maxSQLLength = Integer.parseInt(rs.getString(2));
logger.info("maxSQLLength={}", maxSQLLength);
}
}
}
/**
......@@ -149,7 +141,7 @@ public class SQLWriter {
} catch (SQLException e) {
// convert to error code defined in taoserror.h
int errorCode = e.getErrorCode() & 0xffff;
if (errorCode == 0x362 || errorCode == 0x218) {
if (errorCode == 0x2603) {
// Table does not exist
createTables();
executeSQL(sql);
......
import pandas
from sqlalchemy import create_engine, text
import taos
taos_conn = taos.connect()
taos_conn.execute('drop database if exists power')
taos_conn.execute('create database if not exists power wal_retention_period 3600')
taos_conn.execute("use power")
taos_conn.execute(
"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)")
# insert data
taos_conn.execute("""INSERT INTO power.d1001 USING power.meters TAGS('California.SanFrancisco', 2)
VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000)
('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS('California.SanFrancisco', 3)
VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS('California.LosAngeles', 2)
VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS('California.LosAngeles', 3)
VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""")
engine = create_engine("taosws://root:taosdata@localhost:6041")
conn = engine.connect()
df: pandas.DataFrame = pandas.read_sql(text("SELECT * FROM power.meters"), conn)
conn.close()
# print index
print(df.index)
# print data type of element in ts column
print(type(df.ts[0]))
print(df.head(3))
# output:
# RangeIndex(start=0, stop=8, step=1)
# <class 'pandas._libs.tslibs.timestamps.Timestamp'>
# ts current ... location groupid
# 0 2018-10-03 14:38:05.000 10.3 ... California.SanFrancisco 2
# 1 2018-10-03 14:38:15.000 12.6 ... California.SanFrancisco 2
# 2 2018-10-03 14:38:16.800 12.3 ... California.SanFrancisco 2
# ANCHOR: connect
from taosrest import connect, TaosRestConnection, TaosRestCursor
conn = connect(url="http://localhost:6041",
user="root",
password="taosdata",
timeout=30)
# ANCHOR_END: connect
# ANCHOR: basic
# create STable
cursor = conn.cursor()
cursor.execute("DROP DATABASE IF EXISTS power", req_id=1)
cursor.execute("CREATE DATABASE power", req_id=2)
cursor.execute(
"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)", req_id=3)
# insert data
cursor.execute("""INSERT INTO power.d1001 USING power.meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS('California.SanFrancisco', 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS('California.LosAngeles', 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS('California.LosAngeles', 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""", req_id=4)
print("inserted row count:", cursor.rowcount)
# query data
cursor.execute("SELECT * FROM power.meters LIMIT 3", req_id=5)
# get total rows
print("queried row count:", cursor.rowcount)
# get column names from cursor
column_names = [meta[0] for meta in cursor.description]
# get rows
data = cursor.fetchall()
print(column_names)
for row in data:
print(row)
# output:
# inserted row count: 8
# queried row count: 3
# ['ts', 'current', 'voltage', 'phase', 'location', 'groupid']
# [datetime.datetime(2018, 10, 3, 14, 38, 5, 500000, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 11.8, 221, 0.28, 'california.losangeles', 2]
# [datetime.datetime(2018, 10, 3, 14, 38, 16, 600000, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 13.4, 223, 0.29, 'california.losangeles', 2]
# [datetime.datetime(2018, 10, 3, 14, 38, 5, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 10.8, 223, 0.29, 'california.losangeles', 3]
# ANCHOR_END: basic
# ANCHOR: connect
import taosws
conn = taosws.connect("taosws://root:taosdata@localhost:6041")
# ANCHOR_END: connect
# ANCHOR: basic
conn.execute("drop database if exists connwspy")
conn.execute("create database if not exists connwspy wal_retention_period 3600")
conn.execute("use connwspy")
conn.execute("create table if not exists stb (ts timestamp, c1 int) tags (t1 int)")
conn.execute("create table if not exists tb1 using stb tags (1)")
conn.execute("insert into tb1 values (now, 1)")
conn.execute("insert into tb1 values (now, 2)")
conn.execute("insert into tb1 values (now, 3)")
r = conn.execute("select * from stb")
result = conn.query("select * from stb")
num_of_fields = result.field_count
print(num_of_fields)
for row in result:
print(row)
# output:
# 3
# ('2023-02-28 15:56:13.329 +08:00', 1, 1)
# ('2023-02-28 15:56:13.333 +08:00', 2, 1)
# ('2023-02-28 15:56:13.337 +08:00', 3, 1)
# ANCHOR: connect
import taosws
conn = taosws.connect("taosws://root:taosdata@localhost:6041")
# ANCHOR_END: connect
# ANCHOR: basic
conn.execute("drop database if exists connwspy", req_id=1)
conn.execute("create database if not exists connwspy", req_id=2)
conn.execute("use connwspy", req_id=3)
conn.execute("create table if not exists stb (ts timestamp, c1 int) tags (t1 int)", req_id=4)
conn.execute("create table if not exists tb1 using stb tags (1)", req_id=5)
conn.execute("insert into tb1 values (now, 1)", req_id=6)
conn.execute("insert into tb1 values (now, 2)", req_id=7)
conn.execute("insert into tb1 values (now, 3)", req_id=8)
r = conn.execute("select * from stb", req_id=9)
result = conn.query("select * from stb", req_id=10)
num_of_fields = result.field_count
print(num_of_fields)
for row in result:
print(row)
# output:
# 3
# ('2023-02-28 15:56:13.329 +08:00', 1, 1)
# ('2023-02-28 15:56:13.333 +08:00', 2, 1)
# ('2023-02-28 15:56:13.337 +08:00', 3, 1)
import taos
# ANCHOR: insert
conn = taos.connect()
# Execute a sql, ignore the result set, just get affected rows. It's useful for DDL and DML statement.
conn.execute("DROP DATABASE IF EXISTS test", req_id=1)
conn.execute("CREATE DATABASE test", req_id=2)
# change database. same as execute "USE db"
conn.select_db("test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)", req_id=3)
affected_row = conn.execute("INSERT INTO t1 USING weather TAGS(1) VALUES (now, 23.5) (now+1m, 23.5) (now+2m, 24.4)", req_id=4)
print("affected_row", affected_row)
# output:
# affected_row 3
# ANCHOR_END: insert
# ANCHOR: query
# Execute a sql and get its result set. It's useful for SELECT statement
result = conn.query("SELECT * from weather", req_id=5)
# Get fields from result
fields = result.fields
for field in fields:
print(field) # {name: ts, type: 9, bytes: 8}
# output:
# {name: ts, type: 9, bytes: 8}
# {name: temperature, type: 6, bytes: 4}
# {name: location, type: 4, bytes: 4}
# Get data from result as list of tuple
data = result.fetch_all()
print(data)
# output:
# [(datetime.datetime(2022, 4, 27, 9, 4, 25, 367000), 23.5, 1), (datetime.datetime(2022, 4, 27, 9, 5, 25, 367000), 23.5, 1), (datetime.datetime(2022, 4, 27, 9, 6, 25, 367000), 24.399999618530273, 1)]
# Or get data from result as a list of dict
# map_data = result.fetch_all_into_dict()
# print(map_data)
# output:
# [{'ts': datetime.datetime(2022, 4, 27, 9, 1, 15, 343000), 'temperature': 23.5, 'location': 1}, {'ts': datetime.datetime(2022, 4, 27, 9, 2, 15, 343000), 'temperature': 23.5, 'location': 1}, {'ts': datetime.datetime(2022, 4, 27, 9, 3, 15, 343000), 'temperature': 24.399999618530273, 'location': 1}]
# ANCHOR_END: query
conn.close()
\ No newline at end of file
import taos
conn = taos.connect()
cursor = conn.cursor()
cursor.execute("DROP DATABASE IF EXISTS test", req_id=1)
cursor.execute("CREATE DATABASE test", req_id=2)
cursor.execute("USE test", req_id=3)
cursor.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)", req_id=4)
for i in range(1000):
location = str(i % 10)
tb = "t" + location
cursor.execute(f"INSERT INTO {tb} USING weather TAGS({location}) VALUES (now+{i}a, 23.5) (now+{i + 1}a, 23.5)", req_id=5+i)
cursor.execute("SELECT count(*) FROM weather", req_id=1005)
data = cursor.fetchall()
print("count:", data[0][0])
cursor.execute("SELECT tbname, * FROM weather LIMIT 2", req_id=1006)
col_names = [meta[0] for meta in cursor.description]
print(col_names)
rows = cursor.fetchall()
print(rows)
cursor.close()
conn.close()
# output:
# count: 2000
# ['tbname', 'ts', 'temperature', 'location']
# [('t0', datetime.datetime(2022, 4, 27, 14, 54, 24, 392000), 23.5, 0), ('t0', datetime.datetime(2022, 4, 27, 14, 54, 24, 393000), 23.5, 0)]
......@@ -5,7 +5,7 @@ LOCATIONS = ['California.SanFrancisco', 'California.LosAngles', 'California.SanD
'California.PaloAlto', 'California.Campbell', 'California.MountainView', 'California.Sunnyvale',
'California.SantaClara', 'California.Cupertino']
CREATE_DATABASE_SQL = 'create database if not exists {} keep 365 duration 10 buffer 16 wal_level 1'
CREATE_DATABASE_SQL = 'create database if not exists {} keep 365 duration 10 buffer 16 wal_level 1 wal_retention_period 3600'
USE_DATABASE_SQL = 'use {}'
DROP_TABLE_SQL = 'drop table if exists meters'
DROP_DATABASE_SQL = 'drop database if exists {}'
......
from taosrest import RestClient
client = RestClient("http://localhost:6041", user="root", password="taosdata")
res: dict = client.sql("SELECT ts, current FROM power.meters LIMIT 1", req_id=1)
print(res)
# output:
# {'status': 'succ', 'head': ['ts', 'current'], 'column_meta': [['ts', 9, 8], ['current', 6, 4]], 'data': [[datetime.datetime(2018, 10, 3, 14, 38, 5, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 10.3]], 'rows': 1}
import taos
conn = taos.connect()
conn.execute("DROP DATABASE IF EXISTS test", req_id=1)
conn.execute("CREATE DATABASE test", req_id=2)
conn.select_db("test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)", req_id=3)
# prepare data
for i in range(2000):
location = str(i % 10)
tb = "t" + location
conn.execute(f"INSERT INTO {tb} USING weather TAGS({location}) VALUES (now+{i}a, 23.5) (now+{i + 1}a, 23.5)", req_id=4+i)
result: taos.TaosResult = conn.query("SELECT * FROM weather", req_id=2004)
block_index = 0
blocks: taos.TaosBlocks = result.blocks_iter()
for rows, length in blocks:
print("block ", block_index, " length", length)
print("first row in this block:", rows[0])
block_index += 1
conn.close()
# possible output:
# block 0 length 1200
# first row in this block: (datetime.datetime(2022, 4, 27, 15, 14, 52, 46000), 23.5, 0)
# block 1 length 1200
# first row in this block: (datetime.datetime(2022, 4, 27, 15, 14, 52, 76000), 23.5, 3)
# block 2 length 1200
# first row in this block: (datetime.datetime(2022, 4, 27, 15, 14, 52, 99000), 23.5, 6)
# block 3 length 400
# first row in this block: (datetime.datetime(2022, 4, 27, 15, 14, 52, 122000), 23.5, 9)
......@@ -6,7 +6,7 @@ def init_tmq_env(db, topic):
conn = taos.connect()
conn.execute("drop topic if exists {}".format(topic))
conn.execute("drop database if exists {}".format(db))
conn.execute("create database if not exists {}".format(db))
conn.execute("create database if not exists {} wal_retention_period 3600".format(db))
conn.select_db(db)
conn.execute(
"create stable if not exists stb1 (ts timestamp, c1 int, c2 float, c3 varchar(16)) tags(t1 int, t3 varchar(16))")
......
......@@ -149,7 +149,7 @@ TDengine recommends using the name of the data collection point (such as d1001 in the table above) to name the table
3. A subtable always belongs to a supertable, while a regular table does not belong to any supertable
4. A regular table cannot be converted into a subtable, nor can a subtable be converted into a regular table.
The relationship between a supertable and the subtables created from it is reflected in the following:
The relationship between a supertable and the subtables created from it is reflected in the following:
1. A supertable contains multiple subtables that share the same metric schema but carry different tag values.
2. The data or tag schema cannot be modified through a subtable; schema changes made to the supertable take effect immediately for all its subtables.
......
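To make the supertable/subtable relationship above concrete, here is a minimal SQL sketch; the schema and tag values are illustrative, modeled on the `meters` examples used elsewhere in these docs:

```sql
-- The supertable defines one schema and tag structure shared by all collection points
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT) TAGS (location BINARY(64), groupId INT);
-- Each subtable is one collection point: same columns, its own tag values
CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);
CREATE TABLE d1002 USING meters TAGS ('California.LosAngeles', 3);
```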
......@@ -178,7 +178,7 @@ Active: inactive (dead)
:::
## TDengine Command Line (CLI)
**TDengine Command Line (CLI)**
To make it easy to check the status of TDengine and run various ad hoc database queries, TDengine provides a command-line application, taos (hereinafter the TDengine CLI). To enter the TDengine command line, simply run `taos` in a terminal.
......@@ -186,9 +186,9 @@ Active: inactive (dead)
<TabItem label="Windows 系统" value="windows">
安装后,在 `C:\TDengine` 目录下,运行 `taosd.exe` 来启动 TDengine 服务进程。
安装后,可以在拥有管理员权限的 cmd 窗口执行 `sc start taosd``C:\TDengine` 目录下,运行 `taosd.exe` 来启动 TDengine 服务进程。
## TDengine 命令行(CLI)
**TDengine 命令行(CLI)**
为便于检查 TDengine 的状态,执行数据库(Database)的各种即席(Ad Hoc)查询,TDengine 提供一命令行应用程序(以下简称为 TDengine CLI)taos。要进入 TDengine 命令行,您只要在终端执行 `taos` 即可。
......@@ -196,24 +196,26 @@ Active: inactive (dead)
<TabItem label="macOS 系统" value="macos">
安装后,在应用程序目录下,双击 TDengine 图标来启动程序,也可以运行 `launchctl start com.tdengine.taosd` 来启动 TDengine 服务进程。
安装后,在应用程序目录下,双击 TDengine 图标来启动程序,也可以运行 `sudo launchctl start com.tdengine.taosd` 来启动 TDengine 服务进程。
如下 `launchctl` 命令可以帮助你管理 TDengine 服务:
如下 `launchctl` 命令用于管理 TDengine 服务:
- 启动服务进程:`launchctl start com.tdengine.taosd`
- 启动服务进程:`sudo launchctl start com.tdengine.taosd`
- 停止服务进程:`launchctl stop com.tdengine.taosd`
- 停止服务进程:`sudo launchctl stop com.tdengine.taosd`
- 查看服务状态:`launchctl list | grep taosd`
- 查看服务状态:`sudo launchctl list | grep taosd`
:::info
- The `launchctl` command does not require administrator privileges; do not prefix it with `sudo`.
- The first item returned by the `launchctl list | grep taosd` command is the program's PID; if it is `-`, the TDengine service is not running.
- Managing `com.tdengine.taosd` with `launchctl` requires administrator privileges; always prefix the command with `sudo` for better security.
- The first column returned by the `sudo launchctl list | grep taosd` command is the PID of the `taosd` program; if it is `-`, the TDengine service is not running.
- Troubleshooting:
- If the service behaves abnormally, check the system log `launchd.log` or the `taosdlog` logs under `/var/log/taos` for more information.
:::
## TDengine Command Line (CLI)
**TDengine Command Line (CLI)**
To make it easy to check the status of TDengine and run various ad hoc database queries, TDengine provides a command-line application, taos (hereinafter the TDengine CLI). To enter the TDengine command line, run taos.exe in the C:\TDengine directory from a Windows terminal.
......
......@@ -4,7 +4,7 @@ description: 'Set up a TDengine environment quickly and experience its efficient writing and querying'
---
import xiaot from './xiaot.webp'
import xiaot_new from './xiaot-new.webp'
import xiaot_new from './xiaot-03.webp'
import channel from './channel.webp'
import official_account from './official-account.webp'
......@@ -19,17 +19,6 @@ import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
## Learn the TDengine Knowledge Map
The TDengine knowledge map covers the various TDengine knowledge points and reveals the invocation relationships and data flows among the conceptual entities. Studying the knowledge map will help you quickly master the TDengine knowledge system.
<figure>
<center>
<a href="pathname:///img/tdengine-map.svg" target="_blank"><img src="/img/tdengine-map.svg" width="80%" /></a>
<figcaption>Figure 1. TDengine Knowledge Map</figcaption>
</center>
</figure>
## Join the Official TDengine Community
Scan the QR code below with WeChat to learn about the latest TDengine technology and to discuss IoT big data applications, TDengine usage questions, tips, and more with the community.
......
......@@ -52,7 +52,7 @@ CREATE TABLE d1004 USING meters TAGS ("California.LosAngeles", 3);
### Create a Stream
```sql
create stream current_stream into current_stream_output_stb as select _wstart as start, _wend as wend, max(current) as max_current from meters where voltage <= 220 interval (5s);
create stream current_stream trigger at_once into current_stream_output_stb as select _wstart as wstart, _wend as wend, max(current) as max_current from meters where voltage <= 220 interval (5s);
```
### Insert Data
......@@ -70,8 +70,8 @@ insert into d1004 values("2018-10-03 14:38:06.500", 11.50000, 221, 0.35000);
### Query to Observe the Results
```sql
taos> select start, wend, max_current from current_stream_output_stb;
start | wend | max_current |
taos> select wstart, wend, max_current from current_stream_output_stb;
wstart | wend | max_current |
===========================================================================
2018-10-03 14:38:05.000 | 2018-10-03 14:38:10.000 | 10.30000 |
2018-10-03 14:38:15.000 | 2018-10-03 14:38:20.000 | 12.60000 |
......@@ -89,7 +89,7 @@ Query OK, 2 rows in database (0.018762s)
### Create a Stream
```sql
create stream power_stream into power_stream_output_stb as select ts, concat_ws(".", location, tbname) as meter_location, current*voltage*cos(phase) as active_power, current*voltage*sin(phase) as reactive_power from meters partition by tbname;
create stream power_stream trigger at_once into power_stream_output_stb as select ts, concat_ws(".", location, tbname) as meter_location, current*voltage*cos(phase) as active_power, current*voltage*sin(phase) as reactive_power from meters partition by tbname;
```
### Insert Data
......
......@@ -7,6 +7,7 @@ title: Data Subscription
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import Java from "./_sub_java.mdx";
import JavaWS from "./_sub_java_ws.mdx";
import Python from "./_sub_python.mdx";
import Go from "./_sub_go.mdx";
import Rust from "./_sub_rust.mdx";
......@@ -24,6 +25,7 @@ import CDemo from "./_sub_c.mdx";
This document does not cover the basics of message queues themselves; please research that background on your own if needed.
Note: By default, data is consumed from the WAL. If the WAL has been deleted, the consumed data will be incomplete; in that case you can set the parameter experimental.snapshot.enable to true to fetch all data from the TSDB, but then the consumption order of the data can no longer be guaranteed. It is therefore recommended to set a WAL retention policy appropriate to your consumption behavior, so that all data can be subscribed to from the WAL.
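As a hedged sketch of the retention policy recommended above (the database and topic names are hypothetical; the option matches the WAL_RETENTION_PERIOD database option documented elsewhere in this commit):

```sql
-- Retain WAL files for at least one hour so subscribers can consume all data
CREATE DATABASE tmqdb WAL_RETENTION_PERIOD 3600;
CREATE TOPIC topic_db AS DATABASE tmqdb;
```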
## Main Data Structures and APIs
The TMQ subscription APIs and data structures for the various languages are as follows:
......@@ -282,17 +284,17 @@ CREATE TOPIC topic_name AS DATABASE db_name;
| Parameter | Type | Description | Remarks |
| :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- |
| `td.connect.ip` | string | Used to create the connection; same as `taos_connect` | |
| `td.connect.user` | string | Used to create the connection; same as `taos_connect` | |
| `td.connect.pass` | string | Used to create the connection; same as `taos_connect` | |
| `td.connect.port` | integer | Used to create the connection; same as `taos_connect` | |
| `td.connect.ip` | string | Used to create the connection; same as `taos_connect` | Only used when establishing a native connection |
| `td.connect.user` | string | Used to create the connection; same as `taos_connect` | Only used when establishing a native connection |
| `td.connect.pass` | string | Used to create the connection; same as `taos_connect` | Only used when establishing a native connection |
| `td.connect.port` | integer | Used to create the connection; same as `taos_connect` | Only used when establishing a native connection |
| `group.id` | string | Consumer group ID; consumers in the same group share consumption progress | **Required**. Maximum length: 192. |
| `client.id` | string | Client ID | Maximum length: 192. |
| `auto.offset.reset` | enum | Initial position of the consumer group's subscription | Options: `earliest` (default), `latest`, `none` |
| `enable.auto.commit` | boolean | Whether to enable automatic offset commits | Valid values: `true`, `false`. |
| `auto.commit.interval.ms` | integer | Interval in milliseconds for automatically committing consumption offsets | Default: 5000 ms |
| `experimental.snapshot.enable` | boolean | Whether to allow consuming data from the TSDB | Experimental feature; disabled by default |
| `msg.with.table.name` | boolean | Whether to allow parsing the table name from the message; not applicable to column subscriptions (for a column subscription, tbname can be written into the subquery as a column) | |
| `auto.offset.reset` | enum | Initial position of the consumer group's subscription | <br />`earliest`: default; subscribe from the beginning; <br/>`latest`: subscribe only from the latest data; <br/>`none`: cannot subscribe without a committed offset |
| `enable.auto.commit` | boolean | Whether to enable automatic offset commits; true: commits are automatic and the client application need not commit; false: the client application must commit itself | Default: true |
| `auto.commit.interval.ms` | integer | Interval for automatically committing consumption offsets, in milliseconds | Default: 5000 |
| `experimental.snapshot.enable` | boolean | Whether to allow consuming data from the TSDB. When disabled, only data still in the WAL under the WAL retention policy can be consumed; when enabled, data that has been deleted from the WAL but persisted to the TSDB can also be consumed | Experimental feature; disabled by default |
| `msg.with.table.name` | boolean | Whether to allow parsing the table name from the message; not applicable to column subscriptions (for a column subscription, tbname can be written into the subquery as a column) | Disabled by default |
The configuration method for each programming language is as follows:
......@@ -804,7 +806,14 @@ SHOW SUBSCRIPTIONS;
</TabItem>
<TabItem label="Java" value="java">
<Java />
<Tabs defaultValue="native">
<TabItem value="native" label="本地连接">
<Java />
</TabItem>
<TabItem value="ws" label="WebSocket 连接">
<JavaWS />
</TabItem>
</Tabs>
</TabItem>
<TabItem label="Go" value="Go">
......
......@@ -65,11 +65,11 @@ int32_t aggfn_init() {
}
// aggregate start function. The intermediate value or the state(@interBuf) is initialized in this function. The function name shall be concatenation of udf name and _start suffix
// @param interbuf intermediate value to intialize
// @param interbuf intermediate value to initialize
// @return error number defined in taoserror.h
int32_t aggfn_start(SUdfInterBuf* interBuf) {
// initialize intermediate value in interBuf
return TSDB_CODE_SUCESS;
return TSDB_CODE_SUCCESS;
}
// aggregate reduce function. This function aggregates the old state(@interbuf) and one data block(inputBlock) and outputs a new state(@newInterBuf).
......@@ -231,7 +231,7 @@ bit_add implements the bitwise AND of multiple columns. If there is only one column, that column is returned.
</details>
### Aggregate Function Example [l2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/l2norm.c)
### Aggregate Function Example 1: Numeric Return Value [l2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/l2norm.c)
l2norm computes the L2 norm over all data in the input column: each value is squared, the squares are summed, and the square root of the sum is taken.
......@@ -243,3 +243,29 @@ l2norm computes the L2 norm over all data in the input column: each value is squared, then
```
</details>
### Aggregate Function Example 2: String Return Value [max_vol](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/max_vol.c)
max_vol finds the maximum voltage among several input voltage columns and returns a composite string value consisting of the device ID + the (row, column) position of the maximum voltage + the maximum voltage value.
Create the table:
```bash
create table battery(ts timestamp, vol1 float, vol2 float, vol3 float, deviceId varchar(16));
```
Create the user-defined function:
```bash
create aggregate function max_vol as '/root/udf/libmaxvol.so' outputtype binary(64) bufsize 10240 language 'C';
```
Use the user-defined function:
```bash
select max_vol(vol1,vol2,vol3,deviceid) from battery;
```
<details>
<summary>max_vol.c</summary>
```c
{{#include tests/script/sh/max_vol.c}}
```
</details>
\ No newline at end of file
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<Tabs defaultValue="native">
<TabItem value="native" label="本地连接">
```java
{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
```
......@@ -12,20 +6,3 @@ import TabItem from '@theme/TabItem';
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```
</TabItem>
<TabItem value="ws" label="WebSocket 连接">
```java
{{#include docs/examples/java/src/main/java/com/taos/example/WebsocketSubscribeDemo.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```
</TabItem>
</Tabs>
```java
{{#include docs/examples/java/src/main/java/com/taos/example/WebsocketSubscribeDemo.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```
\ No newline at end of file
......@@ -69,7 +69,7 @@ curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" \
## HTTP Request Format
```text
http://<fqdn>:<port>/rest/sql/[db_name][?tz=timezone]
http://<fqdn>:<port>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
```
Parameter description:
......@@ -78,6 +78,7 @@ http://<fqdn>:<port>/rest/sql/[db_name][?tz=timezone]
- port: the httpPort configuration item in the configuration file; the default is 6041.
- db_name: optional parameter specifying the default database name for the SQL statements being executed.
- tz: optional parameter specifying the time zone of the returned timestamps, following IANA Time Zone rules, e.g. `America/New_York`.
- req_id: optional parameter specifying a request ID, which can be used for tracing.
For example: `http://h1.taos.com:6041/rest/sql/test` points to the URL at `h1.taos.com:6041` and sets the default database name to `test`.
......@@ -100,13 +101,13 @@ The BODY of the HTTP request is a complete SQL statement; the data in the SQL statement
Use `curl` to initiate an HTTP request with custom authentication, with the following syntax:
```bash
curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone]
curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
```
Or:
```bash
curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone]
curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
```
Here, `TOKEN` is the Base64-encoded form of `{username}:{password}`; for example, `root:taosdata` encodes to `cm9vdDp0YW9zZGF0YQ==`.
......@@ -115,14 +116,41 @@ curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timez
### HTTP Response Codes
| **response code** | **Description** |
|-------------------|----------------|
| 200 | Success, or an error returned by the C interface |
| 400 | Parameter error |
| 401 | Authentication failure |
| 404 | Endpoint does not exist |
| 500 | Internal error |
| 503 | Insufficient system resources |
Starting with `TDengine 3.0.3.0`, `taosAdapter` provides the configuration parameter `httpCodeServerError` to control whether a non-200 HTTP status code is returned when the C interface returns an error.
| **Description** | **httpCodeServerError false** | **httpCodeServerError true** |
|--------------------|-------------------------------|---------------------------------------|
| taos_errno() returns 0 | 200 | 200 |
| taos_errno() returns non-zero | 200 (except authentication errors) | 500 (except authentication errors and 400/502 errors) |
| Parameter error | 400 (only HTTP request URL parameter errors) | 400 (HTTP request URL parameter errors and errors returned by taosd) |
| Authentication error | 401 | 401 |
| Endpoint does not exist | 404 | 404 |
| Cluster unavailable | 502 | 502 |
| Insufficient system resources | 503 | 503 |
The C error codes that return 400 are:
- TSDB_CODE_TSC_SQL_SYNTAX_ERROR ( 0x0216)
- TSDB_CODE_TSC_LINE_SYNTAX_ERROR (0x021B)
- TSDB_CODE_PAR_SYNTAX_ERROR (0x2600)
- TSDB_CODE_TDB_TIMESTAMP_OUT_OF_RANGE (0x060B)
- TSDB_CODE_TSC_VALUE_OUT_OF_RANGE (0x0224)
- TSDB_CODE_PAR_INVALID_FILL_TIME_RANGE (0x263B)
The error codes that return 401 are:
- TSDB_CODE_MND_USER_ALREADY_EXIST (0x0350)
- TSDB_CODE_MND_USER_NOT_EXIST ( 0x0351)
- TSDB_CODE_MND_INVALID_USER_FORMAT (0x0352)
- TSDB_CODE_MND_INVALID_PASS_FORMAT (0x0353)
- TSDB_CODE_MND_NO_USER_FROM_CONN (0x0354)
- TSDB_CODE_MND_TOO_MANY_USERS (0x0355)
- TSDB_CODE_MND_INVALID_ALTER_OPER (0x0356)
- TSDB_CODE_MND_AUTH_FAILURE (0x0357)
The error codes that return 403 are:
- TSDB_CODE_RPC_SOMENODE_NOT_CONNECTED (0x0020)
### HTTP Body Structure
......@@ -270,7 +298,6 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
```json
{
"status": "succ",
"code": 0,
"desc": "/KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04"
}
......@@ -356,6 +383,133 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
}
```
## REST API Differences Between TDengine 2.x and 3.0
### URI
| URI | TDengine 2.x | TDengine 3.0 |
| :--------------------| :------------------: | :--------------------------------------------------: |
| /rest/sql | Supported | Supported (response codes and message body differ) |
| /rest/sqlt | Supported | No longer supported |
| /rest/sqlutc | Supported | No longer supported |
### HTTP code
| HTTP code | TDengine 2.x | TDengine 3.0 | Remarks |
| :--------------------| :------------------: | :----------: | :-----------------------------------: |
| 200 | Supported | Supported | Success, or an error returned by the taosc interface |
| 400 | Not supported | Supported | Parameter error |
| 401 | Not supported | Supported | Authentication failure |
| 404 | Supported | Supported | Endpoint does not exist |
| 500 | Not supported | Supported | Internal error |
| 503 | Supported | Supported | Insufficient system resources |
### Response Codes and Message Body
#### TDengine 2.x Response Code and Message Body
```JSON
{
"status": "succ",
"head": [
"name",
"created_time",
"ntables",
"vgroups",
"replica",
"quorum",
"days",
"keep1,keep2,keep(D)",
"cache(MB)",
"blocks",
"minrows",
"maxrows",
"wallevel",
"fsync",
"comp",
"precision",
"status"
],
"data": [
[
"log",
"2020-09-02 17:23:00.039",
4,
1,
1,
1,
10,
"30,30,30",
1,
3,
100,
4096,
1,
3000,
2,
"us",
"ready"
]
],
"rows": 1
}
```
#### TDengine 3.0 Response Code and Message Body
```JSON
{
"code": 0,
"column_meta": [
[
"name",
"VARCHAR",
64
],
[
"ntables",
"BIGINT",
8
],
[
"status",
"VARCHAR",
10
]
],
"data": [
[
"information_schema",
16,
"ready"
],
[
"performance_schema",
9,
"ready"
]
],
"rows": 2
}
```
## References
[taosAdapter](/reference/taosadapter/)
......@@ -263,6 +263,14 @@ int taos_print_row(char *str, TAOS_ROW row, TAOS_FIELD *fields, int num_fields)
- `int taos_select_db(TAOS *taos, const char *db)`
Sets the current default database to `db`.
- `int taos_get_current_db(TAOS *taos, char *database, int len, int *required)`
- database and len are a buffer allocated by the caller; the current db name is copied into database.
- Whenever the db name is not copied into database correctly (including truncation), an error is returned with return value -1, and the user can call taos_errstr(NULL) to obtain the error message.
- If database == NULL or len <= 0, an error is returned, and required holds the space needed to store the db name (including the trailing '\0').
- If len is smaller than the space needed to store the db name (including the trailing '\0'), an error is returned, and database holds the truncated data, terminated with '\0'.
- If len is greater than or equal to the space needed to store the db name (including the trailing '\0'), 0 is returned for success, and database holds the db name terminated with '\0'.
- `void taos_close(TAOS *taos)`
......@@ -302,7 +310,7 @@ int taos_print_row(char *str, TAOS_ROW row, TAOS_FIELD *fields, int num_fields)
- `TAOS_FIELD *taos_fetch_fields(TAOS_RES *res)`
Gets the attributes of each column in the query result set (column name, data type, and column length); used together with `taos_num_fileds()`, it can parse the tuple (one row) of data returned by `taos_fetch_row()`. The structure of `TAOS_FIELD` is as follows:
Gets the attributes of each column in the query result set (column name, data type, and column length); used together with `taos_num_fields()`, it can parse the tuple (one row) of data returned by `taos_fetch_row()`. The structure of `TAOS_FIELD` is as follows:
```c
typedef struct taosField {
......@@ -493,5 +501,17 @@ All TDengine asynchronous APIs use a non-blocking call model. Applications may use multiple
Note that the timestamp resolution parameter takes effect only when the protocol type is `SML_LINE_PROTOCOL`.
For the OpenTSDB text protocol, timestamp parsing follows its official rules: the time precision is determined by the number of characters in the timestamp.
**Supported Versions**
This interface has been supported since version 2.3.0.0.
**Other schemaless-related interfaces**
- `TAOS_RES *taos_schemaless_insert_with_reqid(TAOS *taos, char *lines[], int numLines, int protocol, int precision, int64_t reqid)`
- `TAOS_RES *taos_schemaless_insert_raw(TAOS *taos, char *lines, int len, int32_t *totalRows, int protocol, int precision)`
- `TAOS_RES *taos_schemaless_insert_raw_with_reqid(TAOS *taos, char *lines, int len, int32_t *totalRows, int protocol, int precision, int64_t reqid)`
- `TAOS_RES *taos_schemaless_insert_ttl(TAOS *taos, char *lines[], int numLines, int protocol, int precision, int32_t ttl)`
- `TAOS_RES *taos_schemaless_insert_ttl_with_reqid(TAOS *taos, char *lines[], int numLines, int protocol, int precision, int32_t ttl, int64_t reqid)`
- `TAOS_RES *taos_schemaless_insert_raw_ttl(TAOS *taos, char *lines, int len, int32_t *totalRows, int protocol, int precision, int32_t ttl)`
- `TAOS_RES *taos_schemaless_insert_raw_ttl_with_reqid(TAOS *taos, char *lines, int len, int32_t *totalRows, int protocol, int precision, int32_t ttl, int64_t reqid)`
**Notes**
- The seven interfaces above are extensions, mainly for passing the ttl and reqid parameters with schemaless writes; use them as needed.
- The interfaces with _raw pass the data via the lines pointer and the length len, to solve the problem of data containing '\0' being truncated by the original interfaces. The totalRows pointer returns the number of parsed data rows.
- The interfaces with _ttl pass the ttl parameter to control the TTL expiration time of the created tables.
- The interfaces with _reqid pass the reqid parameter to trace the whole call chain.
......@@ -17,7 +17,7 @@ import TabItem from '@theme/TabItem';
- JDBC native connection: the Java application uses TSDBDriver on physical node 1 (pnode1) to call the client driver (libtaos.so or taos.dll) APIs directly, sending write and query requests to the taosd instance on physical node 2 (pnode2).
- JDBC REST connection: the Java application uses RestfulDriver to wrap the SQL into a REST request and send it to the REST server (taosAdapter) on physical node 2, which requests taosd and returns the result.
A REST connection does not depend on the TDengine client driver, so it is cross-platform and more convenient and flexible, but its performance is about 30% lower than that of a native connection.
A REST connection does not depend on the TDengine client driver, so it is cross-platform and more convenient and flexible.
:::info
TDengine's JDBC driver implementation stays as consistent as possible with relational database drivers, but the usage scenarios and technical characteristics of TDengine differ from those of relational databases, so `taos-jdbcdriver` also differs somewhat from traditional JDBC drivers. Note the following when using it:
......@@ -728,7 +728,7 @@ consumer.close()
For details, see [Data Subscription](../../../develop/tmq).
### Usage examples:
#### Complete Example
<Tabs defaultValue="native">
<TabItem value="native" label="原生连接">
......
......@@ -122,7 +122,7 @@ _taosSql_ implements Go's `database/sql/driver` interface via cgo. Simply
Use `taosSql` as the `driverName` and a valid [DSN](#DSN) as the `dataSourceName`. The DSN supports the following parameters:
* configPath specifies the taos.cfg directory
* cfg specifies the taos.cfg directory
Example:
......
......@@ -10,7 +10,9 @@ import TabItem from "@theme/TabItem";
`taospy` is the official Python connector for TDengine. `taospy` provides a rich API that makes it easy for Python applications to use TDengine. `taospy` wraps both the [native interface](../cpp) and the [REST interface](../rest-api) of TDengine, corresponding to the `taos` and `taosrest` modules of the `taospy` package, respectively.
In addition to wrapping the native and REST interfaces, `taospy` also provides a programming interface that conforms to the [Python Database API Specification (PEP 249)](https://peps.python.org/pep-0249/). This makes it easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).
A connection established directly with the server using the native interface of the client driver is hereinafter referred to as a "native connection"; a connection established with the server using the REST interface provided by taosAdapter is hereinafter referred to as a "REST connection".
`taos-ws-py` is an optional Python connector package for connecting to TDengine over WebSocket.
A connection established directly with the server using the native interface of the client driver is hereinafter referred to as a "native connection"; a connection established with the server using the REST or WebSocket interface provided by taosAdapter is hereinafter referred to as a "REST connection" or "WebSocket connection".
The source code of the Python connector is hosted on [GitHub](https://github.com/taosdata/taos-connector-python).
......@@ -115,6 +117,15 @@ import taos
import taosrest
```
</TabItem>
<TabItem value="ws" label="WebSocket 连接">
对于 WebSocket 连接,只需验证是否能成功导入 `taosws` 模块。可在 Python 交互式 Shell 中输入:
```python
import taosws
```
</TabItem>
</Tabs>
......@@ -183,6 +194,27 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
}
```
</TabItem>
<TabItem value="ws" label="WebSocket 连接" groupId="connect">
对于 WebSocket 连接, 除了确保集群已经启动,还要确保 taosAdapter 组件已经启动。可以使用如下 curl 命令测试:
```
curl -i -N -d "show databases" -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -H "Connection: Upgrade" -H "Upgrade: websocket" -H "Host: <FQDN>:<PORT>" -H "Origin: http://<FQDN>:<PORT>" http://<FQDN>:<PORT>/rest/sql
```
In the command above, FQDN is the FQDN of the machine running taosAdapter, and PORT is the listening port configured for taosAdapter, 6041 by default.
If the test succeeds, the server version information is printed, for example:
```json
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Tue, 21 Mar 2023 09:29:17 GMT
Transfer-Encoding: chunked
{"status":"succ","head":["server_version()"],"column_meta":[["server_version()",8,8]],"data":[["2.6.0.27"]],"rows":1}
```
</TabItem>
</Tabs>
......@@ -229,6 +261,16 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
- `password`: TDengine user password. The default is taosdata.
- `timeout`: HTTP request timeout in seconds. The default is `socket._GLOBAL_DEFAULT_TIMEOUT`. Usually does not need to be configured.
</TabItem>
<TabItem value="websocket" label="WebSocket 连接">
```python
{{#include docs/examples/python/connect_websocket_examples.py:connect}}
```
The `connect()` function takes a connection URL as its argument; the protocol is `taosws` or `ws`.
</TabItem>
</Tabs>
......@@ -298,8 +340,94 @@ The TaosCursor class uses a native connection for writes and queries. In a multithreaded client
```
For a more detailed introduction to the `sql()` method, see [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
</TabItem>
<TabItem value="websocket" label="WebSocket 连接">
```python
{{#include docs/examples/python/connect_websocket_examples.py:basic}}
```
- `conn.execute`: executes any SQL statement and returns the number of affected rows
- `conn.query`: executes a query SQL statement and returns the query result
</TabItem>
</Tabs>
### Using req_id
The optional req_id parameter specifies a request ID, which can be used for tracing.
<Tabs defaultValue="rest">
<TabItem value="native" label="原生连接">
##### TaosConnection 类的使用
`TaosConnection` 类既包含对 PEP249 Connection 接口的实现(如:`cursor`方法和 `close` 方法),也包含很多扩展功能(如: `execute`、 `query`、`schemaless_insert` 和 `subscribe` 方法。
```python title="execute 方法"
{{#include docs/examples/python/connection_usage_native_reference_with_req_id.py:insert}}
```
```python title="query 方法"
{{#include docs/examples/python/connection_usage_native_reference_with_req_id.py:query}}
```
:::tip
The query result can be fetched only once. For example, in the example above, only one of `fetch_all()` and `fetch_all_into_dict()` can be used; fetching again returns an empty list.
:::
##### Using the TaosResult Class
In the `TaosConnection` usage examples above, two ways of fetching query results have already been shown: `fetch_all()` and `fetch_all_into_dict()`. In addition, `TaosResult` provides methods to iterate over the result set by row (`rows_iter`) or by data block (`blocks_iter`). Using these two methods is more efficient when the amount of queried data is large.
```python title="blocks_iter method"
{{#include docs/examples/python/result_set_with_req_id_examples.py}}
```
##### Using the TaosCursor Class
The `TaosConnection` and `TaosResult` classes already implement all the functionality of the native interface. If you are familiar with the interfaces in the PEP 249 specification, you can also use the methods provided by the `TaosCursor` class.
```python title="Using TaosCursor"
{{#include docs/examples/python/cursor_usage_native_reference_with_req_id.py}}
```
:::note
The TaosCursor class uses a native connection for writes and queries. In a multithreaded client scenario, a cursor instance must remain dedicated to one thread; it cannot be shared across threads, or the returned results may be incorrect.
:::
</TabItem>
<TabItem value="rest" label="REST 连接">
##### TaosRestCursor 类的使用
`TaosRestCursor` 类是对 PEP249 Cursor 接口的实现。
```python title="TaosRestCursor 的使用"
{{#include docs/examples/python/connect_rest_with_req_id_examples.py:basic}}
```
- `cursor.execute`: executes any SQL statement.
- `cursor.rowcount`: for write operations, returns the number of records written successfully; for query operations, returns the number of rows in the result set.
- `cursor.description`: returns field descriptions. For the specific format, see [TaosRestCursor](https://docs.taosdata.com/api/taospy/taosrest/cursor.html).
##### Using the RestClient Class
The `RestClient` class is a direct wrapper around the [REST API](../rest-api). It contains only a `sql()` method for executing arbitrary SQL statements and returning the result.
```python title="Using RestClient"
{{#include docs/examples/python/rest_client_with_req_id_example.py}}
```
For a more detailed introduction to the `sql()` method, see [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
</TabItem>
<TabItem value="websocket" label="WebSocket 连接">
```python
{{#include docs/examples/python/connect_websocket_with_req_id_examples.py:basic}}
```
- `conn.execute`: executes any SQL statement and returns the number of affected rows
- `conn.query`: executes a query SQL statement and returns the query result
</TabItem>
</Tabs>
......@@ -320,6 +448,13 @@ The TaosCursor class uses a native connection for writes and queries. In a multithreaded client
{{#include docs/examples/python/conn_rest_pandas.py}}
```
</TabItem>
<TabItem value="websocket" label="WebSocket 连接">
```python
{{#include docs/examples/python/conn_websocket_pandas.py}}
```
</TabItem>
</Tabs>
......@@ -335,15 +470,17 @@ The TaosCursor class uses a native connection for writes and queries. In a multithreaded client
```python
{{#include docs/examples/python/tmq_example.py}}
```
</TabItem>
<TabItem value="rest" label="websocket 连接">
<TabItem value="websocket" label="WebSocket 连接">
除了原生的连接方式,Python 连接器还支持通过 websocket 订阅 TMQ 数据。
```python
{{#include docs/examples/python/tmq_websocket_example.py}}
```
</TabItem>
</Tabs>
......@@ -366,7 +503,7 @@ The TaosCursor class uses a native connection for writes and queries. In a multithreaded client
```python
{{#include docs/examples/python/handle_exception.py}}
```
### About Nanoseconds
Because Python's support for nanoseconds is still incomplete (see the links below), the current implementation returns an integer at nanosecond precision rather than the datetime type returned for ms and us. Application developers need to handle this themselves; using pandas' to_datetime() is recommended. If Python one day fully supports nanoseconds, the Python connector may revise the related interfaces.
......
......@@ -96,7 +96,7 @@ dotnet add package TDengine.Connector
<ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" GeneratePathProperty="true" />
</ItemGroup>
<Target Name="copyDLLDepency" BeforeTargets="BeforeBuild">
<Target Name="copyDLLDependency" BeforeTargets="BeforeBuild">
<ItemGroup>
<DepDLLFiles Include="$(PkgTDengine_Connector)\runtimes\**\*.*" />
</ItemGroup>
......
......@@ -14,7 +14,7 @@ import PkgListV3 from "/components/PkgListV3";
After extracting the package, you will see the following files (directories) in the extraction directory:
- _ install_client.sh_: installation script for the application driver
- _ taos.tar.gz_: application driver installation package
- _ package.tar.gz_: application driver installation package
- _ driver_: TDengine application driver
- _examples_: sample programs in various programming languages (c/C#/go/JDBC/MATLAB/python/R)
Run install_client.sh to install.
......
......@@ -60,7 +60,7 @@ TDengine version updates often add new features; the connectors in the list
| **Connection management** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Regular queries** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Parameter binding** | Not yet supported | Not yet supported | Supported | Supported | Not yet supported | Supported |
| **Data subscription (TMQ)** | Not yet supported | Supported | Supported | Not yet supported | Not yet supported | Supported |
| **Data subscription (TMQ)** | Supported | Supported | Supported | Not yet supported | Not yet supported | Supported |
| **Schemaless** | Not yet supported | Not yet supported | Not yet supported | Not yet supported | Not yet supported | Not yet supported |
| **Bulk fetching (WebSocket-based)** | Supported | Supported | Supported | Supported | Supported | Supported |
| **DataFrame** | Not supported | Supported | Not supported | Not supported | Not supported | Not supported |
......
......@@ -102,7 +102,7 @@ spec:
# Must set if you want a cluster.
- name: TAOS_FIRST_EP
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
# TAOS_FQND should always be setted in k8s env.
# TAOS_FQND should always be set in k8s env.
- name: TAOS_FQDN
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
volumeMounts:
......
......@@ -35,8 +35,8 @@ database_option: {
| TABLE_SUFFIX value
| TSDB_PAGESIZE value
| WAL_RETENTION_PERIOD value
| WAL_ROLL_PERIOD value
| WAL_RETENTION_SIZE value
| WAL_ROLL_PERIOD value
| WAL_SEGMENT_SIZE value
}
```
......@@ -75,11 +75,10 @@ database_option: {
- TABLE_PREFIX: the length of the prefix to ignore when the internal storage engine assigns a VNODE to store a table's data based on its table name.
- TABLE_SUFFIX: the length of the suffix to ignore when the internal storage engine assigns a VNODE to store a table's data based on its table name.
- TSDB_PAGESIZE: the page size of the time-series storage engine in a VNODE, in KB. The default is 4 KB; the range is 1 to 16384, i.e., 1 KB to 16 MB.
- WAL_RETENTION_PERIOD: the maximum extra retention period for WAL files whose logs have already been consumed by data subscription. The unit is seconds. The default is 0, meaning no extra retention; -1 means extra retention with no upper time limit.
- WAL_RETENTION_SIZE: the maximum extra cumulative size of WAL files whose logs have already been consumed by data subscription. The unit is KB. The default is 0, meaning no extra retention; -1 means extra retention with no upper size limit.
- WAL_RETENTION_PERIOD: the maximum extra duration for which WAL log files are retained for data subscription. WAL log cleanup is not affected by the consumption state of subscription clients. The unit is seconds. The default is 0, meaning no retention for subscriptions. Before creating a subscription, set an appropriate retention period first.
- WAL_RETENTION_SIZE: the maximum extra cumulative size of WAL log files retained for data subscription. The unit is KB. The default is 0, meaning no upper limit on cumulative size.
- WAL_ROLL_PERIOD: the WAL file rotation interval, in seconds. After a WAL file is created and written, a new WAL file is created automatically once this period elapses. The default is 0, meaning a new file is created only when the TSDB persists data to disk.
- WAL_SEGMENT_SIZE: the size of a single WAL file, in KB. When the file currently being written exceeds this limit, a new WAL file is created automatically. The default is 0, meaning a new file is created only when the TSDB persists data to disk.
### Database Creation Example
```sql
......@@ -179,6 +178,14 @@ TRIM DATABASE db_name;
Deletes expired data and reorganizes data according to the tiered-storage configuration.
## Flush In-Memory Data to Disk
```sql
FLUSH DATABASE db_name;
```
Flushes the data in memory to disk. Executing this command before shutting down a node avoids data replay after restart and speeds up the startup process.
## Adjust the Distribution of VNODEs in a VGROUP
```sql
......@@ -194,3 +201,11 @@ BALANCE VGROUP
```
Automatically adjusts the distribution of vnodes across all vgroups in the cluster, which is equivalent to load-balancing the cluster's data at the vnode level.
## Check Database Working Status
```sql
SHOW db_name.ALIVE;
```
Queries the availability of database db_name. Return values: 0: unavailable; 1: fully available; 2: partially available (some of the database's VNODEs are available while others are not).
......@@ -13,12 +13,11 @@ create_definition:
col_name column_definition
column_definition:
type_name [COMMENT 'string_value']
type_name
```
**Usage Notes**
- The maximum number of columns in a supertable is 4096; note that this count includes the TAG columns. The minimum is 3: one timestamp primary key, one TAG column, and one data column.
- When creating a table, comments can be attached to columns or tags.
- The TAGS clause specifies the tag columns of the supertable, which must follow these conventions:
- A TIMESTAMP column in TAGS requires an explicit value when data is written; arithmetic expressions such as NOW + 10s are not yet supported.
- TAGS column names must not duplicate other column names.
......
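A minimal sketch of the conventions above (the column and tag names are illustrative):

```sql
-- Minimal supertable: one timestamp primary key, one data column, one tag column
CREATE STABLE meters (ts TIMESTAMP, current FLOAT) TAGS (groupId INT);
-- Invalid: a TAGS column name duplicates a data column name
-- CREATE STABLE meters2 (ts TIMESTAMP, current FLOAT) TAGS (current INT);
```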
---
sidebar_label: Tag Index
title: Tag Index
description: Use tag indexes to improve query performance
---
## Introduction
Before TDengine version 3.0.3.0 (exclusive), an index is created on the first TAG column by default, and indexes cannot be added dynamically to other columns. Starting with version 3.0.3.0, indexes can be added dynamically to other TAG columns. The index automatically created on the first TAG column is enabled by default in queries, and users cannot intervene in it in any way. Proper use of indexes can effectively improve query performance.
## Syntax
The syntax for creating an index is as follows
```sql
CREATE INDEX index_name ON tbl_name (tagColName
```
Here `index_name` is the index name, `tbl_name` is the supertable name, and `tagColName` is the name of the tag column on which to build the index. The type of `tagColName` is unrestricted; an index can be built on a tag column of any type.
The syntax for dropping an index is as follows
```sql
DROP INDEX index_name
```
Here `index_name` is the name of an existing index; if the index does not exist, the command fails but has no other effect on the system.
To view the indexes that already exist in the system
```sql
SELECT * FROM information_schema.INS_INDEXES
```
You can also add filter conditions to the query above to narrow the scope, as in the sketch below.
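Putting the statements together, a hedged end-to-end sketch (the supertable, tag, and index names are hypothetical):

```sql
CREATE STABLE meters (ts TIMESTAMP, current FLOAT) TAGS (location BINARY(24), groupId INT);
-- Add an index on the second tag column; the first tag column is indexed automatically
CREATE INDEX idx_groupid ON meters (groupId);
-- Filters using =, >, >=, <, <= on groupId can now benefit from the index
SELECT COUNT(*) FROM meters WHERE groupId = 2;
DROP INDEX idx_groupid;
```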
## Usage Notes
1. A well-used index improves data filtering efficiency. The currently supported filter operators are `=`, `>`, `>=`, `<`, `<=`. If these operators are used in the query filter conditions, the index can markedly improve query efficiency; if other operators are used, the index has no effect and query efficiency is unchanged. More operators will be added gradually.
2. Only one index can be created per tag column; creating a duplicate index raises an error.
3. An index can be created on only one tag column at a time; indexes cannot be created on multiple tags simultaneously.
4. The name of every index in the system, regardless of its type, must be unique.
5. There is no limit on the number of indexes, but each additional index adds metadata to the system; too many indexes reduce the efficiency of metadata access and thus degrade overall system performance, so avoid adding unnecessary indexes.
6. Indexes cannot be created on regular tables or subtables.
7. If a tag column has few unique values, building an index on it is not recommended, as the benefit is minimal.
\ No newline at end of file
......@@ -31,15 +31,17 @@ select max(current) from meters partition by location interval(10m)
## Windowed Queries
TDengine supports aggregation queries partitioned by time window. For example, if a temperature sensor collects data every second but the average temperature over every 10 minutes is needed, a window clause can be used to obtain the required result. A window clause partitions the queried data set into window subsets for aggregation. There are three kinds of windows: time window, status window, and session window; time windows are further divided into sliding and tumbling time windows. The windowed query syntax is as follows:
TDengine supports aggregation queries partitioned by time window. For example, if a temperature sensor collects data every second but the average temperature over every 10 minutes is needed, a window clause can be used to obtain the required result. A window clause partitions the queried data set into window subsets for aggregation. There are four kinds of windows: time window, status window, session window, and event window; time windows are further divided into sliding and tumbling time windows.
The window clause syntax is as follows:
```sql
SELECT select_list FROM tb_name
[WHERE where_condition]
[SESSION(ts_col, tol_val)]
[STATE_WINDOW(col)]
[INTERVAL(interval [, offset]) [SLIDING sliding]]
[FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
window_clause: {
SESSION(ts_col, tol_val)
| STATE_WINDOW(col)
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [FILL(fill_mod_and_val)]
| EVENT_WINDOW START WITH start_trigger_condition END WITH end_trigger_condition
}
```
The specific restrictions in the above syntax are as follows
......@@ -67,6 +69,16 @@ The FILL clause specifies the fill mode when data is missing in a window interval. The fill
5. LINEAR fill: fill by linear interpolation based on the nearest non-NULL values before and after. For example: FILL(LINEAR).
6. NEXT fill: fill with the next non-NULL value. For example: FILL(NEXT).
Among the fill modes above, apart from NONE (which fills nothing by default), the FILL clause is ignored if there is no data in the entire queried time range; that is, no fill data is produced and the query result is empty. This behavior is reasonable for some modes (PREV, NEXT, LINEAR), because with no data there is nothing to derive fill values from. For other modes (NULL, VALUE), fill values could in theory be produced, and whether to output them depends on the application's requirements. To meet the needs of applications that require forced filling of data or NULL, without breaking the behavioral compatibility of the existing fill modes, two new fill modes were added starting with version 3.0.3.0:
7. NULL_F: force-fill NULL values
8. VALUE_F: force-fill VALUE values
The differences between the NULL, NULL_F, VALUE, and VALUE_F fill modes in different scenarios are as follows:
- INTERVAL clause: NULL_F and VALUE_F are forced fill modes; NULL and VALUE are non-forced modes. Here the semantics of each mode match its name.
- INTERVAL clause in stream processing: NULL_F behaves the same as NULL, and VALUE_F the same as VALUE; all are non-forced modes. That is, INTERVAL in stream processing has no forced mode.
- INTERP clause: NULL behaves the same as NULL_F, and VALUE the same as VALUE_F; all are forced modes. That is, INTERP has no non-forced mode.
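A hedged sketch of the difference (the time range is hypothetical; `meters` follows the schema used elsewhere in these docs):

```sql
-- If no rows fall inside the queried range, FILL(NULL) produces no rows at all
SELECT _wstart, AVG(current) FROM meters
  WHERE ts >= '2022-01-01 00:00:00' AND ts < '2022-01-01 01:00:00'
  INTERVAL(10m) FILL(NULL);
-- FILL(NULL_F) forces one NULL-filled row per empty window over the same range
SELECT _wstart, AVG(current) FROM meters
  WHERE ts >= '2022-01-01 00:00:00' AND ts < '2022-01-01 01:00:00'
  INTERVAL(10m) FILL(NULL_F);
```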
:::info
1. The FILL clause may generate a large amount of filled output; be sure to specify the time range of the query. For each query, the system returns no more than 10 million interpolated results.
......@@ -138,6 +150,24 @@ SELECT tbname, _wstart, CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
```
### Event Window
An event window is delimited by a start condition and an end condition: the window opens when start_trigger_condition is met and closes when end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any conditional expressions supported by TDengine and may involve different columns.
An event window may contain only a single data row: if one row satisfies both start_trigger_condition and end_trigger_condition while no window is currently open, that row forms a window by itself.
An event window that cannot be closed does not form a window and is not output: if some data satisfies start_trigger_condition and a window opens, but no subsequent data satisfies end_trigger_condition, the window cannot be closed; that data does not constitute a window and is not output.
If an event-window query is run directly on a supertable, TDengine consolidates the supertable's data into a single timeline and then computes the event windows.
If you need an event-window query on the result set of a subquery, the subquery's result set must satisfy the timeline-output requirement and must be able to output a valid timestamp column.
Taking the following SQL statement as an example, the event-window partitioning is shown in the figure:
```sql
select _wstart, _wend, count(*) from t event_window start with c1 > 0 end with c2 < 10
```
![TDengine Database event window diagram](./event_window.webp)
### Timestamp Pseudocolumns
In windowed aggregate query results, if the SQL statement does not specify a timestamp column for the output, the final result does not automatically include the window's time information. To output the time-window information corresponding to the aggregate results, use the timestamp-related pseudocolumns in the SELECT clause: window start time (\_WSTART), window end time (\_WEND), window duration (\_WDURATION), as well as the overall-query pseudocolumns: query window start time (\_QSTART) and query window end time (\_QEND). Note that the window start and end times are both closed bounds, and the window duration is a value in the data's current time resolution; for example, if the database's time resolution is milliseconds, a value of 500 in the results means the window's duration is 500 milliseconds (500 ms).
......