Commit 44e3b115 authored by slzhou

Merge branch '3.0' of github.com:taosdata/TDengine into szhou/python-udf

@@ -40,7 +40,7 @@ def check_docs() {
 sh '''
 cd ${WKC}
 git reset --hard
-git clean -fxd
+git clean -f
 rm -rf examples/rust/
 git remote prune origin
 git fetch
@@ -86,7 +86,7 @@ def pre_test(){
 git fetch
 cd ${WKC}
 git reset --hard
-git clean -fxd
+git clean -f
 rm -rf examples/rust/
 git remote prune origin
 git fetch
@@ -201,7 +201,7 @@ def pre_test_win(){
 '''
 bat '''
 cd %WIN_COMMUNITY_ROOT%
-git clean -fxd
+git clean -f
 git reset --hard
 git remote prune origin
 git fetch
......
@@ -365,6 +365,6 @@ Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to the
 For more information about TDengine, you can follow us on social media and join our Discord server:
 - [Discord](https://discord.com/invite/VZdSuUg4pS)
-- [Twitter](https://twitter.com/TaosData)
+- [Twitter](https://twitter.com/TDengineDB)
 - [LinkedIn](https://www.linkedin.com/company/tdengine/)
-- [YouTube](https://www.youtube.com/channel/UCmp-1U6GS_3V3hjir6Uq5DQ)
+- [YouTube](https://www.youtube.com/@tdengine)
@@ -204,7 +204,7 @@ group vnodeProcessReqs()
 s -> s:
 note right
 save the requests in log store
-and wait for comfirmation or
+and wait for confirmation or
 other cases
 end note
@@ -236,7 +236,7 @@ s -> s: syncAppendReqToLogStore()
 s -> v: walWrite()
 alt has meta req
-<- s: comfirmation
+<- s: confirmation
 else
 s -> v: vnodeApplyReqs()
 end
......
@@ -123,11 +123,11 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
 ## Comparison with other databases
-- [Writing Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/23/4975.html)
+- [Writing Performance Comparison of TDengine and InfluxDB](https://tdengine.com/performance-comparison-of-tdengine-and-influxdb/)
-- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/24/5120.html)
+- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/query-performance-comparison-test-report-tdengine-vs-influxdb/)
-- [TDengine vs OpenTSDB](https://tdengine.com/2019/09/12/710.html)
+- [TDengine vs OpenTSDB](https://tdengine.com/performance-tdengine-vs-opentsdb/)
-- [TDengine vs Cassandra](https://tdengine.com/2019/09/12/708.html)
+- [TDengine vs Cassandra](https://tdengine.com/performance-tdengine-vs-cassandra/)
-- [TDengine vs InfluxDB](https://tdengine.com/2019/09/12/706.html)
+- [TDengine vs InfluxDB](https://tdengine.com/performance-tdengine-vs-influxdb/)
 ## More readings
 - [Introduction to Time-Series Database](https://tdengine.com/tsdb/)
......
@@ -28,7 +28,7 @@ From the perspective of application program, you need to consider:
 - Writing to known existing tables is more efficient than writing to uncertain tables in automatic creating mode because the latter needs to check whether the table exists or not before actually writing data into it.
 - Writing in SQL is more efficient than writing in schemaless mode because schemaless writing creates tables automatically and may alter table schemas.
-Application programs need to take care of the above factors and try to take advantage of them. The application progam should write to single table in each write batch. The batch size needs to be tuned to a proper value on a specific system. The number of concurrent connections needs to be tuned to a proper value too to achieve the best writing throughput.
+Application programs need to take care of the above factors and try to take advantage of them. The application program should write to a single table in each write batch. The batch size needs to be tuned to a proper value on a specific system. The number of concurrent connections also needs to be tuned to achieve the best writing throughput.
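For reference, a minimal sketch of the single-table batching pattern described above, assuming the `power.meters` schema used elsewhere in this documentation and the Python `taos` connector; the batch size is a placeholder to be tuned per system:
```python
import taos

BATCH_SIZE = 500  # placeholder; tune on the target system

conn = taos.connect()  # assumes a local TDengine with default credentials
conn.execute("CREATE DATABASE IF NOT EXISTS power")
conn.execute("CREATE STABLE IF NOT EXISTS power.meters "
             "(ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) "
             "TAGS (location BINARY(64), groupId INT)")
# writing to a known, pre-created table avoids the existence check of auto-create mode
conn.execute("CREATE TABLE IF NOT EXISTS power.d1001 "
             "USING power.meters TAGS('California.SanFrancisco', 2)")

rows = [("2018-10-03 14:38:05.000", 10.3, 219, 0.31),
        ("2018-10-03 14:38:15.000", 12.6, 218, 0.33)]
# one INSERT statement per batch, with all rows targeting the same table
values = " ".join(f"('{ts}', {c}, {v}, {p})" for ts, c, v, p in rows[:BATCH_SIZE])
conn.execute(f"INSERT INTO power.d1001 VALUES {values}")
conn.close()
```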
### Data Source
......
@@ -65,11 +65,11 @@ int32_t aggfn_init() {
 }
 // aggregate start function. The intermediate value or the state(@interBuf) is initialized in this function. The function name shall be the concatenation of the udf name and the _start suffix
-// @param interbuf intermediate value to intialize
+// @param interbuf intermediate value to initialize
 // @return error number defined in taoserror.h
 int32_t aggfn_start(SUdfInterBuf* interBuf) {
 // initialize intermediate value in interBuf
-return TSDB_CODE_SUCESS;
+return TSDB_CODE_SUCCESS;
 }
 // aggregate reduce function. This function aggregates the old state(@interbuf) and one data block(inputBlock) and outputs a new state(@newInterBuf).
......
@@ -34,7 +34,7 @@ column_definition:
 SHOW STABLES [LIKE tb_name_wildcard];
 ```
-The preceding SQL statement shows all supertables in the current TDengine database, including the name, creation time, number of columns, number of tags, and number of subtabels for each supertable.
+The preceding SQL statement shows all supertables in the current TDengine database, including the name, creation time, number of columns, number of tags, and number of subtables for each supertable.
 ### View the CREATE Statement for a Supertable
......
@@ -248,7 +248,7 @@ You can also use the NULLS keyword to specify the position of null values. Ascen
 The LIMIT keyword controls the number of results that are displayed. You can also use the OFFSET keyword to specify the result to display first. `LIMIT` and `OFFSET` are executed after `ORDER BY` in the query execution. You can include an offset in a LIMIT clause. For example, LIMIT 5 OFFSET 2 can also be written LIMIT 2, 5. Both of these clauses display the third through the seventh results.
-In a statement that includes a PARTITON BY clause, the LIMIT keyword is performed on each partition, not on the entire set of results.
+In a statement that includes a PARTITION BY clause, the LIMIT keyword is performed on each partition, not on the entire set of results.
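For reference, a short sketch of the two equivalent spellings from the paragraph above, assuming the `power.meters` data used elsewhere in this documentation and the Python `taos` connector:
```python
import taos

conn = taos.connect()
# both statements skip the first 2 rows and return the next 5 (rows 3 through 7)
r1 = conn.query("SELECT * FROM power.meters ORDER BY ts LIMIT 5 OFFSET 2").fetch_all()
r2 = conn.query("SELECT * FROM power.meters ORDER BY ts LIMIT 2, 5").fetch_all()
assert r1 == r2
```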
## SLIMIT
......
@@ -666,13 +666,13 @@ If you input a specific column, the number of non-null values in the column is r
 ELAPSED(ts_primary_key [, time_unit])
 ```
-**Description**: `elapsed` function can be used to calculate the continuous time length in which there is valid data. If it's used with `INTERVAL` clause, the returned result is the calcualted time length within each time window. If it's used without `INTERVAL` caluse, the returned result is the calculated time length within the specified time range. Please be noted that the return value of `elapsed` is the number of `time_unit` in the calculated time length.
+**Description**: The `elapsed` function calculates the continuous time length in which there is valid data. If it's used with an `INTERVAL` clause, the returned result is the calculated time length within each time window. If it's used without an `INTERVAL` clause, the returned result is the calculated time length within the specified time range. Please note that the return value of `elapsed` is the number of `time_unit` in the calculated time length.
 **Return value type**: Double if the input value is not NULL;
 **Applicable data type**: TIMESTAMP
-**Applicable tables**: table, STable, outter in nested query
+**Applicable tables**: table, STable, outer in nested query
 **Explanations**
 - `ts_primary_key` parameter can only be the first column of a table, i.e. timestamp primary key.
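For reference, a hedged sketch of `elapsed` over the smart meters data assumed elsewhere in this documentation; the table and column names are assumptions:
```python
import taos

conn = taos.connect()
# continuous time length covered by valid data, counted in hours (the time_unit),
# computed per 1-day window because of the INTERVAL clause
rows = conn.query(
    "SELECT ELAPSED(ts, 1h) FROM power.d1001 "
    "WHERE ts >= '2018-10-03 00:00:00' AND ts < '2018-10-05 00:00:00' "
    "INTERVAL(1d)"
).fetch_all()
print(rows)
```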
@@ -754,7 +754,7 @@ HYPERLOGLOG(expr)
 **Description**:
 The cardinal number of a specific column is returned by using the hyperloglog algorithm. The benefit of using the hyperloglog algorithm is that the memory usage is under control when the data volume is huge.
-However, when the data volume is very small, the result may be not accurate, it's recommented to use `select count(data) from (select unique(col) as data from table)` in this case.
+However, when the data volume is very small, the result may not be accurate; it's recommended to use `select count(data) from (select unique(col) as data from table)` in this case.
 **Return value type**: Integer
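For reference, a sketch contrasting the approximate and exact counts described above; the `power.meters` table is an assumption carried over from the examples in this documentation:
```python
import taos

conn = taos.connect()
# memory-bounded approximate distinct count, suited to huge data volumes
approx = conn.query("SELECT HYPERLOGLOG(voltage) FROM power.meters").fetch_all()
# the exact form the text recommends when the data volume is small
exact = conn.query(
    "SELECT COUNT(data) FROM (SELECT UNIQUE(voltage) AS data FROM power.meters)"
).fetch_all()
print(approx, exact)
```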
@@ -801,7 +801,7 @@ PERCENTILE(expr, p [, p1] ...)
 **Description**: The value whose rank in a specific column matches the specified percentage. If such a value matching the specified percentage doesn't exist in the column, an interpolation value will be returned.
-**Return value type**: This function takes 2 minumum and 11 maximum parameters, and it can simultaneously return 10 percentiles at most. If 2 parameters are given, a single percentile is returned and the value type is DOUBLE.
+**Return value type**: This function takes a minimum of 2 and a maximum of 11 parameters, and it can simultaneously return 10 percentiles at most. If 2 parameters are given, a single percentile is returned and the value type is DOUBLE.
 If more than 2 parameters are given, the return value type is a VARCHAR string, the format of which is a JSON ARRAY containing all return values.
 **Applicable column types**: Numeric
@@ -811,7 +811,7 @@ PERCENTILE(expr, p [, p1] ...)
 **More explanations**:
 - _p_ is in range [0,100]; when _p_ is 0, the result is the same as the MIN function, and when _p_ is 100, the result is the same as the MAX function.
-- When calculating multiple percentiles of a specific column, a single PERCENTILE function with multiple parameters is adviced, as this can largely reduce the query response time.
+- When calculating multiple percentiles of a specific column, a single PERCENTILE function with multiple parameters is advised, as this can largely reduce the query response time.
 For example, using SELECT percentile(col, 90, 95, 99) FROM table will perform better than SELECT percentile(col, 90), percentile(col, 95), percentile(col, 99) from table.
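For reference, a sketch of the single-call form recommended above, assuming the subtable `power.d1001` from the smart meters example:
```python
import taos

conn = taos.connect()
# one scan computes all three percentiles; with more than 2 parameters the
# result is returned as a VARCHAR JSON array
rows = conn.query(
    "SELECT PERCENTILE(current, 90, 95, 99) FROM power.d1001").fetch_all()
print(rows)
```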
## Selection Functions
@@ -884,6 +884,15 @@ INTERP(expr)
 - Pseudocolumn `_irowts` can be used along with `INTERP` to return the timestamps associated with interpolation points (supported after version 3.0.2.0).
 - Pseudocolumn `_isfilled` can be used along with `INTERP` to indicate whether the results are original records or data points generated by the interpolation algorithm (supported after version 3.0.3.0).
+**Example**
+- We use the smart meters example used in this documentation to illustrate how to use the INTERP function.
+- We want to downsample every 1 hour and use a linear fill for missing values. Note the order in which the "partition by" clause and the "range", "every" and "fill" parameters are used.
+```sql
+SELECT _irowts, INTERP(current) FROM test.meters PARTITION BY TBNAME RANGE('2017-07-22 00:00:00','2017-07-24 12:25:00') EVERY(1h) FILL(LINEAR)
+```
### LAST
```sql
......
@@ -21,7 +21,7 @@ part_list can be any scalar expression, such as a column, constant, scalar funct
 A PARTITION BY clause is processed as follows:
 - The PARTITION BY clause must occur after the WHERE clause
-- The PARTITION BY caluse partitions the data according to the specified dimentions, then perform computation on each partition. The performed computation is determined by the rest of the statement - a window clause, GROUP BY clause, or SELECT clause.
+- The PARTITION BY clause partitions the data according to the specified dimensions, then performs computation on each partition. The performed computation is determined by the rest of the statement - a window clause, GROUP BY clause, or SELECT clause.
 - The PARTITION BY clause can be used together with a window clause or GROUP BY clause. In this case, the window or GROUP BY clause takes effect on every partition. For example, the following statement partitions the table by the location tag, performs downsampling over a 10 minute window, and returns the maximum value:
 ```sql
@@ -105,7 +105,7 @@ SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
 When using time windows, note the following:
-- The window length for aggregation depends on the value of INTERVAL. The minimum interval is 10 ms. You can configure a window as an offset from UTC 0:00. The offset cannot be smaler than the interval. You can use SLIDING to specify the length of time that the window moves forward.
+- The window length for aggregation depends on the value of INTERVAL. The minimum interval is 10 ms. You can configure a window as an offset from UTC 0:00. The offset cannot be smaller than the interval. You can use SLIDING to specify the length of time that the window moves forward.
 Please note that the `timezone` parameter should be configured to be the same value in the `taos.cfg` configuration file on client side and server side.
 - The result set is in ascending order of timestamp when you aggregate by time window.
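For reference, a hedged sketch of a window query consistent with the notes above, assuming the `temp_tb_1` table used in this section; `_wstart` returns each window's start time:
```python
import taos

conn = taos.connect()
# 10-minute windows sliding forward every 5 minutes; SLIDING must not exceed INTERVAL
rows = conn.query(
    "SELECT _wstart, COUNT(*) FROM temp_tb_1 INTERVAL(10m) SLIDING(5m)"
).fetch_all()
print(rows)  # ascending by window start timestamp
```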
......
@@ -55,7 +55,7 @@ description: This document describes the JSON data type in TDengine.
 4. Tag Operations
-The value of a JSON tag can be altered. Please note that the full JSON will be overriden when doing this.
+The value of a JSON tag can be altered. Please note that the full JSON will be overridden when doing this.
 The name of a JSON tag can be altered.
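For reference, a hedged sketch of overriding a JSON tag value; the database, table, and tag names are hypothetical:
```python
import taos

conn = taos.connect()
conn.execute("CREATE DATABASE IF NOT EXISTS jdb")
conn.execute("CREATE STABLE IF NOT EXISTS jdb.jst (ts TIMESTAMP, v INT) TAGS (info JSON)")
conn.execute("""CREATE TABLE IF NOT EXISTS jdb.jtb1 USING jdb.jst TAGS ('{"k1": "v1"}')""")
# SET TAG replaces the full JSON document, not a single key inside it
conn.execute("""ALTER TABLE jdb.jtb1 SET TAG info = '{"k1": "v2", "k2": "v3"}'""")
```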
......
@@ -274,9 +274,9 @@ Provides dnode configuration information.
 | 1 | stream_name | BINARY(64) | Stream name |
 | 2 | create_time | TIMESTAMP | Creation time |
 | 3 | sql | BINARY(1024) | SQL statement used to create the stream |
-| 4 | status | BIANRY(20) | Current status |
+| 4 | status | BINARY(20) | Current status |
 | 5 | source_db | BINARY(64) | Source database |
-| 6 | target_db | BIANRY(64) | Target database |
+| 6 | target_db | BINARY(64) | Target database |
 | 7 | target_table | BINARY(192) | Target table |
 | 8 | watermark | BIGINT | Watermark (see stream processing documentation). It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
 | 9 | trigger | INT | Method of triggering the result push (see stream processing documentation). It should be noted that `trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
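For reference, a hedged sketch of reading this table with the keyword escaping the last two rows call for; the `information_schema.ins_streams` location is an assumption based on TDengine 3.0 naming:
```python
import taos

conn = taos.connect()
# `watermark` and `trigger` are TDengine keywords, so they are backtick-escaped
rows = conn.query(
    "SELECT stream_name, status, `watermark`, `trigger` "
    "FROM information_schema.ins_streams"
).fetch_all()
print(rows)
```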
@@ -4,7 +4,7 @@ sidebar_label: SHOW Statement
 description: This document describes how to use the SHOW statement in TDengine.
 ---
-`SHOW` command can be used to get brief system information. To get details about metatadata, information, and status in the system, please use `select` to query the tables in database `INFORMATION_SCHEMA`.
+The `SHOW` command can be used to get brief system information. To get details about metadata, information, and status in the system, please use `select` to query the tables in the database `INFORMATION_SCHEMA`.
 ## SHOW APPS
......
@@ -68,7 +68,7 @@ The following return value results indicate that the verification passed.
 ## HTTP request URL format
 ```text
-http://<fqdn>:<port>/rest/sql/[db_name][?tz=timezone]
+http://<fqdn>:<port>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
 ```
 Parameter Description:
@@ -77,6 +77,7 @@ Parameter Description:
 - port: httpPort configuration item in the configuration file, default is 6041.
 - db_name: Optional parameter that specifies the default database name for the executed SQL command.
 - tz: Optional parameter that specifies the timezone of the returned time, following the IANA Time Zone rules, e.g. `America/New_York`.
+- req_id: Optional parameter that specifies the request id for tracing.
 For example, `http://h1.taos.com:6041/rest/sql/test` is a URL to `h1.taos.com:6041` and sets the default database name to `test`.
@@ -99,13 +100,13 @@ The HTTP request's BODY is a complete SQL command, and the data table in the SQL
 Use `curl` to initiate an HTTP request with a custom authentication method, with the following syntax.
 ```bash
-curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone]
+curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
 ```
 or
 ```bash
-curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone]
+curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
 ```
 where `TOKEN` is the string after Base64 encoding of `{username}:{password}`, e.g. `root:taosdata` is encoded as `cm9vdDp0YW9zZGF0YQ==`.
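For reference, the same request issued from Python with the new `req_id` parameter; the `requests` usage and the id value are illustrative assumptions:
```python
import requests

# equivalent to the curl calls above; req_id tags this request for tracing
resp = requests.post(
    "http://localhost:6041/rest/sql/test",
    params={"tz": "America/New_York", "req_id": 1},
    data="select server_version()",
    auth=("root", "taosdata"),  # sent as the Authorization: Basic <TOKEN> header
)
print(resp.json())
```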
......
@@ -300,7 +300,7 @@ stmt.executeUpdate("create table if not exists tb (ts timestamp, temperature int
 > **Note**: If you do not use `use db` to specify the database, all subsequent operations on the table need to add the database name as a prefix, such as db.tb.
-### 插入数据
+### Insert data
 ```java
 // insert data
......
@@ -39,7 +39,7 @@ The Rust Connector is still under rapid development and is not guaranteed to be
 * Install the Rust development toolchain
 * If using the native connection, please install the TDengine client driver. Please refer to [install client driver](/reference/connector#install-client-driver)
-# Add taos dependency
+### Add taos dependency
 Depending on the connection method, add the [taos][taos] dependency in your Rust project as follows:
@@ -282,7 +282,7 @@ In the application code, use `pool.get()?` to get a connection object [Taos].
 let taos = pool.get()?;
 ```
-# Connectors
+### Connectors
 The [Taos][struct.Taos] object provides an API to perform operations on multiple databases.
......
@@ -228,6 +228,16 @@ All arguments to the `connect()` function are optional keyword arguments. The fo
 - `password`: TDengine user password. The default is `taosdata`.
 - `timeout`: HTTP request timeout. Enter a value in seconds. The default is `socket._GLOBAL_DEFAULT_TIMEOUT`. Usually, no configuration is needed.
+</TabItem>
+<TabItem value="websocket" label="WebSocket connection">
+```python
+{{#include docs/examples/python/connect_websocket_examples.py:connect}}
+```
+The parameter of `connect()` is the URL of TDengine, and the protocol is `taosws` or `ws`.
 </TabItem>
 </Tabs>
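For reference, a minimal sketch of the WebSocket connection established by the include above; the URL uses the documentation's default host and credentials:
```python
import taosws

conn = taosws.connect("taosws://root:taosdata@localhost:6041")
result = conn.query("select server_version()")
for row in result:
    print(row)
```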
@@ -298,7 +308,15 @@ The `RestClient` class is a direct wrapper for the [REST API](/reference/rest-ap
 For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
+</TabItem>
+<TabItem value="websocket" label="WebSocket connection">
+```python
+{{#include docs/examples/python/connect_websocket_examples.py:basic}}
+```
+- `conn.execute`: used to execute arbitrary SQL statements; returns the number of affected rows.
+- `conn.query`: used to execute query SQL statements; returns the query results.
 </TabItem>
 </Tabs>
@@ -319,6 +337,13 @@ For a more detailed description of the `sql()` method, please refer to [RestClie
 {{#include docs/examples/python/conn_rest_pandas.py}}
 ```
+</TabItem>
+<TabItem value="websocket" label="WebSocket connection">
+```python
+{{#include docs/examples/python/conn_websocket_pandas.py}}
+```
 </TabItem>
 </Tabs>
......
@@ -94,7 +94,7 @@ In this scenario, modifying your project file is required in order to copy the W
 <ItemGroup>
 <PackageReference Include="TDengine.Connector" Version="3.0.*" GeneratePathProperty="true" />
 </ItemGroup>
-<Target Name="copyDLLDepency" BeforeTargets="BeforeBuild">
+<Target Name="copyDLLDependency" BeforeTargets="BeforeBuild">
 <ItemGroup>
 <DepDLLFiles Include="$(PkgTDengine_Connector)\runtimes\**\*.*" />
 </ItemGroup>
......
@@ -87,7 +87,7 @@ In this section a few sample programs which use TDengine PHP connector to access
 > Any error would throw exception: `TDengine\Exception\TDengineException`
-### Establish Conection
+### Establish Connection
 <details>
 <summary>Establish Connection</summary>
......
@@ -11,7 +11,7 @@ import PkgListV3 from "/components/PkgListV3";
 The default installation path is C:\TDengine, including the following files (directories).
 - _taos.exe_: TDengine CLI command-line program
-- _taosadapter.exe_: server-side executable that provides RESTful services and accepts writing requests from a variety of other softwares
+- _taosadapter.exe_: server-side executable that provides RESTful services and accepts writing requests from a variety of other software
 - _taosBenchmark.exe_: TDengine testing tool
 - _cfg_: configuration file directory
 - _driver_: client driver dynamic link library
......
@@ -208,7 +208,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
 Keep trying if failed to insert, default is no. Available with v3.0.9+.
 - **-z/--trying-interval <NUMBER\>** :
-Specify interval between keep trying insert. Valid value is a postive number. Only valid when keep trying be enabled. Available with v3.0.9+.
+Specify the interval between insert retries. The valid value is a positive number. Only valid when keep trying is enabled. Available with v3.0.9+.
 - **-V/--version** :
 Show version information only. Users should not use it with other parameters.
@@ -239,7 +239,7 @@ The parameters listed in this section apply to all function modes.
 - **keep_trying**: Keep trying if failed to insert, default is no. Available with v3.0.9+.
-- **trying_interval**: Specify interval between keep trying insert. Valid value is a postive number. Only valid when keep trying be enabled. Available with v3.0.9+.
+- **trying_interval**: Specify the interval between insert retries. The valid value is a positive number. Only valid when keep trying is enabled. Available with v3.0.9+.
#### Database related configuration parameters
@@ -352,7 +352,7 @@ The configuration parameters for specifying super table tag columns and data col
 - **min**: The minimum value of the column/label of the data type. The generated value will be equal to or greater than the minimum value.
-- **max**: The maximum value of the column/label of the data type. The generated value will less than the maxium value.
+- **max**: The maximum value of the column/label of the data type. The generated value will be less than the maximum value.
 - **values**: The value field of the nchar/binary column/label, which will be chosen randomly from the values.
......
@@ -1590,7 +1590,7 @@
 },
 {
 "datasource": "${DS_TDENGINE}",
-"description": "taosd max memery last 10 minutes",
+"description": "taosd max memory last 10 minutes",
 "fieldConfig": {
 "defaults": {
 "color": {
@@ -1919,7 +1919,7 @@
 },
 {
 "datasource": "${DS_TDENGINE}",
-"description": "taosd max memery last 10 minutes",
+"description": "taosd max memory last 10 minutes",
 "fieldConfig": {
 "defaults": {
 "color": {
@@ -1977,7 +1977,7 @@
 },
 {
 "datasource": "${DS_TDENGINE}",
-"description": "taosd max memery last 10 minutes",
+"description": "taosd max memory last 10 minutes",
 "fieldConfig": {
 "defaults": {
 "color": {
@@ -2825,7 +2825,7 @@
 "timeFrom": null,
 "timeRegions": [],
 "timeShift": null,
-"title": "Requets Count per Minutes $fqdn",
+"title": "Requests Count per Minutes $fqdn",
 "tooltip": {
 "shared": true,
 "sort": 0,
......
@@ -1566,7 +1566,7 @@
 },
 {
 "datasource": "${ds}",
-"description": "taosd max memery last 10 minutes",
+"description": "taosd max memory last 10 minutes",
 "fieldConfig": {
 "defaults": {
 "color": {
@@ -1933,7 +1933,7 @@
 },
 {
 "datasource": "${ds}",
-"description": "taosd max memery last 10 minutes",
+"description": "taosd max memory last 10 minutes",
 "fieldConfig": {
 "defaults": {
 "color": {
@@ -2000,7 +2000,7 @@
 },
 {
 "datasource": "${ds}",
-"description": "taosd max memery last 10 minutes",
+"description": "taosd max memory last 10 minutes",
 "fieldConfig": {
 "defaults": {
 "color": {
@@ -2961,7 +2961,7 @@
 "timeFrom": null,
 "timeRegions": [],
 "timeShift": null,
-"title": "Requets Count per Minutes $fqdn",
+"title": "Requests Count per Minutes $fqdn",
 "tooltip": {
 "shared": true,
 "sort": 0,
......
@@ -186,7 +186,7 @@
 },
 {
 "datasource": "TDengine",
-"description": "taosd max memery last 10 minutes",
+"description": "taosd max memory last 10 minutes",
 "gridPos": {
 "h": 6,
 "w": 8,
@@ -253,7 +253,7 @@
 ],
 "timeFrom": null,
 "timeShift": null,
-"title": "taosd memery",
+"title": "taosd memory",
 "type": "gauge"
 },
 {
......
@@ -29,7 +29,7 @@ taos -C
 taos --dump-config
 ```
-# Configuration Parameters
+## Configuration Parameters
 :::note
 The parameters in this document are described by the effect that they have on the system.
......
@@ -24,7 +24,7 @@ All executable files of TDengine are in the _/usr/local/taos/bin_ directory by d
 - _taosdump_: data import and export tool
 - _taosBenchmark_: TDengine testing tool
 - _remove.sh_: script to uninstall TDengine; please execute it carefully. It is linked to the **rmtaos** command in the /usr/bin directory, and will remove the TDengine installation directory `/usr/local/taos` but keep `/etc/taos`, `/var/lib/taos`, `/var/log/taos`.
-- _taosadapter_: server-side executable that provides RESTful services and accepts writing requests from a variety of other softwares
+- _taosadapter_: server-side executable that provides RESTful services and accepts writing requests from a variety of other software
 - _TDinsight.sh_: script to download TDinsight and install it
 - _set_core.sh_: script for setting up the system to generate core dump files for easy debugging
 - _taosd-dump-cfg.gdb_: script to facilitate debugging of taosd's gdb execution.
......
@@ -131,7 +131,7 @@ create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 bigint) ta
 This section describes the impact on the schema caused by different data being written.
-If you use line protocol to write to a specific tag field and then later change the field type, a schema error will ocur. This triggers an error on the write API. This is shown as follows:
+If you use line protocol to write to a specific tag field and then later change the field type, a schema error will occur. This triggers an error on the write API. This is shown as follows:
 ```json
 st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
......
@@ -31,7 +31,7 @@ The default database name written by taosAdapter is `statsd`. To specify a diffe
 ### Configuring StatsD
-To use StatsD, you need to download its [source code](https://github.com/statsd/statsd). Please refer to the example file `exampleConfig.js` in the root directory of the source download to modify the configuration file. In <taosAdpater's host\>, please fill in the domain name or IP address of the server running taosAdapter, and <port for StatsD\>, please fill in the port where taosAdapter receives StatsD data (default is 6044).
+To use StatsD, you need to download its [source code](https://github.com/statsd/statsd). Please refer to the example file `exampleConfig.js` in the root directory of the source download to modify the configuration file. For <taosAdapter's host\>, fill in the domain name or IP address of the server running taosAdapter; for <port for StatsD\>, fill in the port where taosAdapter receives StatsD data (default is 6044).
 ```
 backends section add "./backends/repeater"
......
@@ -28,4 +28,4 @@ SHOW MNODES;
 The end point and role/status (leader, follower, candidate, offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode. Without at least one mnode, the cluster cannot work.
-From TDengine 3.0.0, RAFT procotol is used to guarantee the high availability, so the number of mnodes is should be 1 or 3.
+From TDengine 3.0.0, the RAFT protocol is used to guarantee high availability, so the number of mnodes should be 1 or 3.
@@ -14,8 +14,8 @@ create database db0 vgroups 100;
 The proper value of `vgroups` depends on available system resources. Assuming there is only one database to be created in the system, then the number of `vgroups` is determined by the available resources from all dnodes. In principle more vgroups can be created if you have more CPU and memory. Disk I/O is another important factor to consider. Once the bottleneck shows on disk I/O, more vgroups may downgrade the system performance significantly. If multiple databases are to be created in the system, then the total number of `vgroups` of all the databases depends on the available system resources. Care is needed when distributing vgroups among these databases: you need to consider the number of tables, data writing frequency, and the size of each data row for all these databases. A recommended practice is to first choose a starting number for `vgroups`, for example double the number of CPU cores, then try to adjust and optimize system configurations to find the best setting for `vgroups`, then distribute these vgroups among databases.
-Furthermode, TDengine distributes the vgroups of each database equally among all dnodes. In case of replica 3, the distrubtion is even more complex, TDengine tries its best to prevent any dnode from becoming a bottleneck.
+Furthermore, TDengine distributes the vgroups of each database equally among all dnodes. In case of replica 3, the distribution is even more complex; TDengine tries its best to prevent any dnode from becoming a bottleneck.
 TDengine utilizes the above ways to achieve load balance in a cluster, and finally achieve higher throughput.
-Once the load balance is achieved, after some operations like deleting tables or droping databases, the load across all dnodes may become inbalanced, the method of rebalance will be provided in later versions. However, even without explicit rebalancing, TDengine will try its best to achieve new balance without manual interfering when a new database is created.
+Once load balance is achieved, operations like deleting tables or dropping databases may make the load across all dnodes imbalanced; a rebalancing method will be provided in later versions. However, even without explicit rebalancing, TDengine will try its best to achieve a new balance without manual intervention when a new database is created.
\ No newline at end of file
@@ -67,7 +67,7 @@ sudo systemctl start telegraf
 Log in to the Grafana interface using a web browser at `IP:3000`, with the system's initial username and password being `admin/admin`.
 Click on the gear icon on the left and select `Plugins`; you should find the TDengine data source plugin icon.
-Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v3.json` (for Tdengine 3.0. for TDengine 2.x, please use `telegraf-dashboard-v2.json`), download the dashboard JSON file and import it. You will then see the dashboard in the following screen.
+Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v3.json` (for TDengine 3.0; for TDengine 2.x, please use `telegraf-dashboard-v2.json`), download the dashboard JSON file and import it. You will then see the dashboard in the following screen.
 ![TDengine Database IT-DevOps-Solutions-telegraf-dashboard](./IT-DevOps-Solutions-telegraf-dashboard.webp)
......
import pandas
from sqlalchemy import create_engine, text
import taos
taos_conn = taos.connect()
taos_conn.execute('drop database if exists power')
taos_conn.execute('create database if not exists power')
taos_conn.execute("use power")
taos_conn.execute(
"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)")
# insert data
taos_conn.execute("""INSERT INTO power.d1001 USING power.meters TAGS('California.SanFrancisco', 2)
VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000)
('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS('California.SanFrancisco', 3)
VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS('California.LosAngeles', 2)
VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS('California.LosAngeles', 3)
VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""")
engine = create_engine("taosws://root:taosdata@localhost:6041")
conn = engine.connect()
df: pandas.DataFrame = pandas.read_sql(text("SELECT * FROM power.meters"), conn)
conn.close()
# print index
print(df.index)
# print data type of element in ts column
print(type(df.ts[0]))
print(df.head(3))
# output:
# RangeIndex(start=0, stop=8, step=1)
# <class 'pandas._libs.tslibs.timestamps.Timestamp'>
# ts current ... location groupid
# 0 2018-10-03 14:38:05.000 10.3 ... California.SanFrancisco 2
# 1 2018-10-03 14:38:15.000 12.6 ... California.SanFrancisco 2
# 2 2018-10-03 14:38:16.800 12.3 ... California.SanFrancisco 2
# ANCHOR: connect
import taosws
conn = taosws.connect("taosws://root:taosdata@localhost:6041")
# ANCHOR_END: connect
# ANCHOR: basic
conn.execute("drop database if exists connwspy")
conn.execute("create database if not exists connwspy")
conn.execute("use connwspy")
conn.execute("create table if not exists stb (ts timestamp, c1 int) tags (t1 int)")
conn.execute("create table if not exists tb1 using stb tags (1)")
conn.execute("insert into tb1 values (now, 1)")
conn.execute("insert into tb1 values (now, 2)")
conn.execute("insert into tb1 values (now, 3)")
r = conn.execute("select * from stb")
result = conn.query("select * from stb")
num_of_fields = result.field_count
print(num_of_fields)
for row in result:
print(row)
# output:
# 3
# ('2023-02-28 15:56:13.329 +08:00', 1, 1)
# ('2023-02-28 15:56:13.333 +08:00', 2, 1)
# ('2023-02-28 15:56:13.337 +08:00', 3, 1)
@@ -149,7 +149,7 @@ TDengine recommends using the name of the data collection point (such as d1001 in the table above) as the table name
 3. A subtable always belongs to a supertable, but a regular table does not belong to any supertable
 4. A regular table cannot be converted into a subtable, and a subtable cannot be converted into a regular table.
 The relationship between a supertable and the subtables created from it is as follows:
 1. A supertable contains multiple subtables, which have the same metric schema but different tag values.
 2. The data or tag schema cannot be adjusted through a subtable; schema modifications on the supertable take effect immediately for all its subtables.
......
@@ -289,9 +289,9 @@ CREATE TOPIC topic_name AS DATABASE db_name;
 | `td.connect.port` | integer | Used to create the connection; same as `taos_connect` | |
 | `group.id` | string | Consumer group ID; consumers in the same group share consumption progress | **Required**. Maximum length: 192. |
 | `client.id` | string | Client ID | Maximum length: 192. |
-| `auto.offset.reset` | enum | Initial position subscribed by the consumer group | Options: `earliest` (default), `latest`, `none` |
+| `auto.offset.reset` | enum | Initial position subscribed by the consumer group | <br />`earliest`: default; subscribe from the beginning; <br/>`latest`: subscribe only from the latest data; <br/>`none`: cannot subscribe without a committed offset |
 | `enable.auto.commit` | boolean | Whether to enable automatic offset commit | Valid values: `true`, `false`. |
 | `auto.commit.interval.ms` | integer | Interval in milliseconds at which consumed offsets are automatically committed | Default: 5000 ms |
 | `enable.heartbeat.background` | boolean | Enable background heartbeat; when enabled, the consumer does not go offline even if it does not poll messages for a long time | Enabled by default |
 | `experimental.snapshot.enable` | boolean | Whether to allow consuming data from the TSDB | Experimental feature, disabled by default |
 | `msg.with.table.name` | boolean | Whether to allow parsing the table name from the message; not applicable to column subscriptions (with column subscriptions, `tbname` can be written into the subquery as a column) | |
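For reference, a hedged sketch of a consumer configured with the parameters above, following the taospy 3.x `Consumer` style used in this documentation's TMQ examples; the topic name is a placeholder:
```python
from taos.tmq import Consumer

consumer = Consumer({
    "group.id": "g1",                 # required; progress is shared within the group
    "auto.offset.reset": "earliest",  # subscribe from the beginning
    "enable.auto.commit": "true",
    "auto.commit.interval.ms": "5000",
})
consumer.subscribe(["topic_name"])  # placeholder topic
try:
    msg = consumer.poll(1)  # wait up to 1 second for a message
    if msg and not msg.error():
        for block in msg.value():
            print(block.fetchall())
finally:
    consumer.close()
```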
......
@@ -65,11 +65,11 @@ int32_t aggfn_init() {
 }
 // aggregate start function. The intermediate value or the state(@interBuf) is initialized in this function. The function name shall be the concatenation of the udf name and the _start suffix
-// @param interbuf intermediate value to intialize
+// @param interbuf intermediate value to initialize
 // @return error number defined in taoserror.h
 int32_t aggfn_start(SUdfInterBuf* interBuf) {
 // initialize intermediate value in interBuf
-return TSDB_CODE_SUCESS;
+return TSDB_CODE_SUCCESS;
 }
 // aggregate reduce function. This function aggregates the old state(@interbuf) and one data block(inputBlock) and outputs a new state(@newInterBuf).
......
@@ -69,7 +69,7 @@ curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" \
 ## HTTP Request Format
 ```text
-http://<fqdn>:<port>/rest/sql/[db_name][?tz=timezone]
+http://<fqdn>:<port>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
 ```
 Parameter description:
@@ -78,6 +78,7 @@ http://<fqdn>:<port>/rest/sql/[db_name][?tz=timezone]
 - port: the httpPort configuration item in the configuration file; the default is 6041.
 - db_name: optional parameter that specifies the default database name for the SQL statements being executed.
 - tz: optional parameter that specifies the timezone of the returned time, following the IANA Time Zone rules, e.g. `America/New_York`.
+- req_id: optional parameter that specifies the request id, which can be used for tracing.
 For example, `http://h1.taos.com:6041/rest/sql/test` points to `h1.taos.com:6041` and sets the default database name to `test`.
@@ -100,13 +101,13 @@ The BODY of the HTTP request is a complete SQL statement, and the data tables in the SQL
 Use `curl` to initiate an HTTP request with a custom authentication method, with the following syntax:
 ```bash
-curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone]
+curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
 ```
 or,
 ```bash
-curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone]
+curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
 ```
 where `TOKEN` is the Base64-encoded string of `{username}:{password}`; for example, `root:taosdata` encodes to `cm9vdDp0YW9zZGF0YQ==`.
......
@@ -302,7 +302,7 @@ int taos_print_row(char *str, TAOS_ROW row, TAOS_FIELD *fields, int num_fields)
 - `TAOS_FIELD *taos_fetch_fields(TAOS_RES *res)`
-Gets the properties of each column of the query result set (column name, data type, length); used together with `taos_num_fileds()` to parse the data of a tuple (one row) returned by `taos_fetch_row()`. The structure of `TAOS_FIELD` is as follows:
+Gets the properties of each column of the query result set (column name, data type, length); used together with `taos_num_fields()` to parse the data of a tuple (one row) returned by `taos_fetch_row()`. The structure of `TAOS_FIELD` is as follows:
 ```c
 typedef struct taosField {
......
@@ -229,6 +229,16 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
 - `password`: TDengine user password. The default is taosdata.
 - `timeout`: HTTP request timeout in seconds. The default is `socket._GLOBAL_DEFAULT_TIMEOUT`. Usually, no configuration is needed.
+</TabItem>
+<TabItem value="websocket" label="WebSocket connection">
+```python
+{{#include docs/examples/python/connect_websocket_examples.py:connect}}
+```
+The parameter of the `connect()` function is the connection URL, with protocol `taosws` or `ws`.
 </TabItem>
 </Tabs>
@@ -298,8 +308,15 @@ The TaosCursor class uses the native connection for write and query operations. In a multithreaded client
 ```
 For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
+</TabItem>
+<TabItem value="websocket" label="WebSocket connection">
+```python
+{{#include docs/examples/python/connect_websocket_examples.py:basic}}
+```
+- `conn.execute`: used to execute arbitrary SQL statements; returns the number of affected rows.
+- `conn.query`: used to execute query SQL statements; returns the query results.
 </TabItem>
 </Tabs>
@@ -320,6 +337,13 @@ The TaosCursor class uses the native connection for write and query operations. In a multithreaded client
 {{#include docs/examples/python/conn_rest_pandas.py}}
 ```
+</TabItem>
+<TabItem value="websocket" label="WebSocket connection">
+```python
+{{#include docs/examples/python/conn_websocket_pandas.py}}
+```
 </TabItem>
 </Tabs>
@@ -335,15 +359,17 @@ The TaosCursor class uses the native connection for write and query operations. In a multithreaded client
 ```python
 {{#include docs/examples/python/tmq_example.py}}
 ```
 </TabItem>
-<TabItem value="rest" label="websocket connection">
+<TabItem value="websocket" label="WebSocket connection">
 In addition to the native connection, the Python connector also supports subscribing to TMQ data via WebSocket.
 ```python
 {{#include docs/examples/python/tmq_websocket_example.py}}
 ```
 </TabItem>
 </Tabs>
@@ -366,7 +392,7 @@ The TaosCursor class uses the native connection for write and query operations. In a multithreaded client
 ```python
 {{#include docs/examples/python/handle_exception.py}}
 ```
-``
### About nanoseconds (nanosecond)
Because Python's support for nanoseconds is currently incomplete (see the links below), the current implementation returns an integer at nanosecond precision instead of the datetime type returned for ms and us precision. Application developers need to handle this themselves; using pandas' to_datetime() is recommended. If Python officially gains full nanosecond support in the future, the Python connector may modify the relevant interfaces.
......
...@@ -96,7 +96,7 @@ dotnet add package TDengine.Connector ...@@ -96,7 +96,7 @@ dotnet add package TDengine.Connector
<ItemGroup> <ItemGroup>
<PackageReference Include="TDengine.Connector" Version="3.0.*" GeneratePathProperty="true" /> <PackageReference Include="TDengine.Connector" Version="3.0.*" GeneratePathProperty="true" />
</ItemGroup> </ItemGroup>
<Target Name="copyDLLDepency" BeforeTargets="BeforeBuild"> <Target Name="copyDLLDependency" BeforeTargets="BeforeBuild">
<ItemGroup> <ItemGroup>
<DepDLLFiles Include="$(PkgTDengine_Connector)\runtimes\**\*.*" /> <DepDLLFiles Include="$(PkgTDengine_Connector)\runtimes\**\*.*" />
</ItemGroup> </ItemGroup>
......
...@@ -102,7 +102,7 @@ spec: ...@@ -102,7 +102,7 @@ spec:
# Must set if you want a cluster. # Must set if you want a cluster.
- name: TAOS_FIRST_EP - name: TAOS_FIRST_EP
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)" value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
# TAOS_FQDN should always be setted in k8s env. # TAOS_FQDN should always be set in k8s env.
- name: TAOS_FQDN - name: TAOS_FQDN
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local" value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
volumeMounts: volumeMounts:
......
...@@ -274,9 +274,9 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数 ...@@ -274,9 +274,9 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
| 1 | stream_name | BINARY(64) | stream name | | 1 | stream_name | BINARY(64) | stream name |
| 2 | create_time | TIMESTAMP | creation time | | 2 | create_time | TIMESTAMP | creation time |
| 3 | sql | BINARY(1024) | SQL statement provided when the stream was created | | 3 | sql | BINARY(1024) | SQL statement provided when the stream was created |
| 4 | status | BIANRY(20) | current status of the stream | | 4 | status | BINARY(20) | current status of the stream |
| 5 | source_db | BINARY(64) | source database | | 5 | source_db | BINARY(64) | source database |
| 6 | target_db | BIANRY(64) | target database | | 6 | target_db | BINARY(64) | target database |
| 7 | target_table | BINARY(192) | target table the stream writes to | | 7 | target_table | BINARY(192) | target table the stream writes to |
| 8 | watermark | BIGINT | watermark; see stream processing in the SQL manual. Note that `watermark` is a TDengine keyword and must be escaped with ` when used as a column name. | | 8 | watermark | BIGINT | watermark; see stream processing in the SQL manual. Note that `watermark` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 9 | trigger | INT | mode in which computation results are pushed; see stream processing in the SQL manual. Note that `trigger` is a TDengine keyword and must be escaped with ` when used as a column name. | | 9 | trigger | INT | mode in which computation results are pushed; see stream processing in the SQL manual. Note that `trigger` is a TDengine keyword and must be escaped with ` when used as a column name. |
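Because `watermark` and `trigger` are keywords, queries against this table need backtick escaping; a sketch through the Python connector (the table name `information_schema.ins_streams` follows the 3.0 naming and should be treated as an assumption here):

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")
cursor = conn.cursor()
# `watermark` and `trigger` are TDengine keywords, hence the backticks.
cursor.execute(
    "SELECT stream_name, status, `watermark`, `trigger` "
    "FROM information_schema.ins_streams"
)
for row in cursor.fetchall():
    print(row)
conn.close()
```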
...@@ -1590,7 +1590,7 @@ ...@@ -1590,7 +1590,7 @@
}, },
{ {
"datasource": "${DS_TDENGINE}", "datasource": "${DS_TDENGINE}",
"description": "taosd max memery last 10 minutes", "description": "taosd max memory last 10 minutes",
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"color": { "color": {
...@@ -1919,7 +1919,7 @@ ...@@ -1919,7 +1919,7 @@
}, },
{ {
"datasource": "${DS_TDENGINE}", "datasource": "${DS_TDENGINE}",
"description": "taosd max memery last 10 minutes", "description": "taosd max memory last 10 minutes",
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"color": { "color": {
...@@ -1977,7 +1977,7 @@ ...@@ -1977,7 +1977,7 @@
}, },
{ {
"datasource": "${DS_TDENGINE}", "datasource": "${DS_TDENGINE}",
"description": "taosd max memery last 10 minutes", "description": "taosd max memory last 10 minutes",
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"color": { "color": {
...@@ -2825,7 +2825,7 @@ ...@@ -2825,7 +2825,7 @@
"timeFrom": null, "timeFrom": null,
"timeRegions": [], "timeRegions": [],
"timeShift": null, "timeShift": null,
"title": "Requets Count per Minutes $fqdn", "title": "Requests Count per Minutes $fqdn",
"tooltip": { "tooltip": {
"shared": true, "shared": true,
"sort": 0, "sort": 0,
......
...@@ -1566,7 +1566,7 @@ ...@@ -1566,7 +1566,7 @@
}, },
{ {
"datasource": "${ds}", "datasource": "${ds}",
"description": "taosd max memery last 10 minutes", "description": "taosd max memory last 10 minutes",
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"color": { "color": {
...@@ -1933,7 +1933,7 @@ ...@@ -1933,7 +1933,7 @@
}, },
{ {
"datasource": "${ds}", "datasource": "${ds}",
"description": "taosd max memery last 10 minutes", "description": "taosd max memory last 10 minutes",
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"color": { "color": {
...@@ -2000,7 +2000,7 @@ ...@@ -2000,7 +2000,7 @@
}, },
{ {
"datasource": "${ds}", "datasource": "${ds}",
"description": "taosd max memery last 10 minutes", "description": "taosd max memory last 10 minutes",
"fieldConfig": { "fieldConfig": {
"defaults": { "defaults": {
"color": { "color": {
...@@ -2961,7 +2961,7 @@ ...@@ -2961,7 +2961,7 @@
"timeFrom": null, "timeFrom": null,
"timeRegions": [], "timeRegions": [],
"timeShift": null, "timeShift": null,
"title": "Requets Count per Minutes $fqdn", "title": "Requests Count per Minutes $fqdn",
"tooltip": { "tooltip": {
"shared": true, "shared": true,
"sort": 0, "sort": 0,
......
...@@ -186,7 +186,7 @@ ...@@ -186,7 +186,7 @@
}, },
{ {
"datasource": "TDengine", "datasource": "TDengine",
"description": "taosd max memery last 10 minutes", "description": "taosd max memory last 10 minutes",
"gridPos": { "gridPos": {
"h": 6, "h": 6,
"w": 8, "w": 8,
...@@ -253,7 +253,7 @@ ...@@ -253,7 +253,7 @@
], ],
"timeFrom": null, "timeFrom": null,
"timeShift": null, "timeShift": null,
"title": "taosd memery", "title": "taosd memory",
"type": "gauge" "type": "gauge"
}, },
{ {
......
...@@ -29,7 +29,7 @@ taos -C ...@@ -29,7 +29,7 @@ taos -C
taos --dump-config taos --dump-config
``` ```
# Detailed list of configuration parameters ## Detailed list of configuration parameters
:::note :::note
This section covers the product's configuration parameters. Parameters that apply to the server are categorized by their effect on product behavior; some of them also apply to the client, while a small number apply only to the client and are grouped separately. This section covers the product's configuration parameters. Parameters that apply to the server are categorized by their effect on product behavior; some of them also apply to the client, while a small number apply only to the client and are grouped separately.
......
...@@ -31,7 +31,7 @@ deleteTimings = true ...@@ -31,7 +31,7 @@ deleteTimings = true
### Configuring StatsD ### Configuring StatsD
To use StatsD, download its [source code](https://github.com/statsd/statsd). Modify its configuration by referring to the sample file `exampleConfig.js` in the root directory of the downloaded source. Fill in <taosAdpater's host\> with the domain name or IP address of the server running taosAdapter, and <port for StatsD\> with the port on which taosAdapter receives StatsD data (6044 by default). To use StatsD, download its [source code](https://github.com/statsd/statsd). Modify its configuration by referring to the sample file `exampleConfig.js` in the root directory of the downloaded source. Fill in <taosAdapter's host\> with the domain name or IP address of the server running taosAdapter, and <port for StatsD\> with the port on which taosAdapter receives StatsD data (6044 by default).
``` ```
Add "./backends/repeater" to the backends section Add "./backends/repeater" to the backends section
......
...@@ -205,144 +205,145 @@ ...@@ -205,144 +205,145 @@
#define TK_OUTPUTTYPE 187 #define TK_OUTPUTTYPE 187
#define TK_AGGREGATE 188 #define TK_AGGREGATE 188
#define TK_BUFSIZE 189 #define TK_BUFSIZE 189
#define TK_STREAM 190 #define TK_LANGUAGE 190
#define TK_INTO 191 #define TK_STREAM 191
#define TK_TRIGGER 192 #define TK_INTO 192
#define TK_AT_ONCE 193 #define TK_TRIGGER 193
#define TK_WINDOW_CLOSE 194 #define TK_AT_ONCE 194
#define TK_IGNORE 195 #define TK_WINDOW_CLOSE 195
#define TK_EXPIRED 196 #define TK_IGNORE 196
#define TK_FILL_HISTORY 197 #define TK_EXPIRED 197
#define TK_UPDATE 198 #define TK_FILL_HISTORY 198
#define TK_SUBTABLE 199 #define TK_UPDATE 199
#define TK_KILL 200 #define TK_SUBTABLE 200
#define TK_CONNECTION 201 #define TK_KILL 201
#define TK_TRANSACTION 202 #define TK_CONNECTION 202
#define TK_BALANCE 203 #define TK_TRANSACTION 203
#define TK_VGROUP 204 #define TK_BALANCE 204
#define TK_MERGE 205 #define TK_VGROUP 205
#define TK_REDISTRIBUTE 206 #define TK_MERGE 206
#define TK_SPLIT 207 #define TK_REDISTRIBUTE 207
#define TK_DELETE 208 #define TK_SPLIT 208
#define TK_INSERT 209 #define TK_DELETE 209
#define TK_NULL 210 #define TK_INSERT 210
#define TK_NK_QUESTION 211 #define TK_NULL 211
#define TK_NK_ARROW 212 #define TK_NK_QUESTION 212
#define TK_ROWTS 213 #define TK_NK_ARROW 213
#define TK_QSTART 214 #define TK_ROWTS 214
#define TK_QEND 215 #define TK_QSTART 215
#define TK_QDURATION 216 #define TK_QEND 216
#define TK_WSTART 217 #define TK_QDURATION 217
#define TK_WEND 218 #define TK_WSTART 218
#define TK_WDURATION 219 #define TK_WEND 219
#define TK_IROWTS 220 #define TK_WDURATION 220
#define TK_ISFILLED 221 #define TK_IROWTS 221
#define TK_CAST 222 #define TK_ISFILLED 222
#define TK_NOW 223 #define TK_CAST 223
#define TK_TODAY 224 #define TK_NOW 224
#define TK_TIMEZONE 225 #define TK_TODAY 225
#define TK_CLIENT_VERSION 226 #define TK_TIMEZONE 226
#define TK_SERVER_VERSION 227 #define TK_CLIENT_VERSION 227
#define TK_SERVER_STATUS 228 #define TK_SERVER_VERSION 228
#define TK_CURRENT_USER 229 #define TK_SERVER_STATUS 229
#define TK_CASE 230 #define TK_CURRENT_USER 230
#define TK_END 231 #define TK_CASE 231
#define TK_WHEN 232 #define TK_END 232
#define TK_THEN 233 #define TK_WHEN 233
#define TK_ELSE 234 #define TK_THEN 234
#define TK_BETWEEN 235 #define TK_ELSE 235
#define TK_IS 236 #define TK_BETWEEN 236
#define TK_NK_LT 237 #define TK_IS 237
#define TK_NK_GT 238 #define TK_NK_LT 238
#define TK_NK_LE 239 #define TK_NK_GT 239
#define TK_NK_GE 240 #define TK_NK_LE 240
#define TK_NK_NE 241 #define TK_NK_GE 241
#define TK_MATCH 242 #define TK_NK_NE 242
#define TK_NMATCH 243 #define TK_MATCH 243
#define TK_CONTAINS 244 #define TK_NMATCH 244
#define TK_IN 245 #define TK_CONTAINS 245
#define TK_JOIN 246 #define TK_IN 246
#define TK_INNER 247 #define TK_JOIN 247
#define TK_SELECT 248 #define TK_INNER 248
#define TK_DISTINCT 249 #define TK_SELECT 249
#define TK_WHERE 250 #define TK_DISTINCT 250
#define TK_PARTITION 251 #define TK_WHERE 251
#define TK_BY 252 #define TK_PARTITION 252
#define TK_SESSION 253 #define TK_BY 253
#define TK_STATE_WINDOW 254 #define TK_SESSION 254
#define TK_EVENT_WINDOW 255 #define TK_STATE_WINDOW 255
#define TK_START 256 #define TK_EVENT_WINDOW 256
#define TK_SLIDING 257 #define TK_START 257
#define TK_FILL 258 #define TK_SLIDING 258
#define TK_VALUE 259 #define TK_FILL 259
#define TK_VALUE_F 260 #define TK_VALUE 260
#define TK_NONE 261 #define TK_VALUE_F 261
#define TK_PREV 262 #define TK_NONE 262
#define TK_NULL_F 263 #define TK_PREV 263
#define TK_LINEAR 264 #define TK_NULL_F 264
#define TK_NEXT 265 #define TK_LINEAR 265
#define TK_HAVING 266 #define TK_NEXT 266
#define TK_RANGE 267 #define TK_HAVING 267
#define TK_EVERY 268 #define TK_RANGE 268
#define TK_ORDER 269 #define TK_EVERY 269
#define TK_SLIMIT 270 #define TK_ORDER 270
#define TK_SOFFSET 271 #define TK_SLIMIT 271
#define TK_LIMIT 272 #define TK_SOFFSET 272
#define TK_OFFSET 273 #define TK_LIMIT 273
#define TK_ASC 274 #define TK_OFFSET 274
#define TK_NULLS 275 #define TK_ASC 275
#define TK_ABORT 276 #define TK_NULLS 276
#define TK_AFTER 277 #define TK_ABORT 277
#define TK_ATTACH 278 #define TK_AFTER 278
#define TK_BEFORE 279 #define TK_ATTACH 279
#define TK_BEGIN 280 #define TK_BEFORE 280
#define TK_BITAND 281 #define TK_BEGIN 281
#define TK_BITNOT 282 #define TK_BITAND 282
#define TK_BITOR 283 #define TK_BITNOT 283
#define TK_BLOCKS 284 #define TK_BITOR 284
#define TK_CHANGE 285 #define TK_BLOCKS 285
#define TK_COMMA 286 #define TK_CHANGE 286
#define TK_CONCAT 287 #define TK_COMMA 287
#define TK_CONFLICT 288 #define TK_CONCAT 288
#define TK_COPY 289 #define TK_CONFLICT 289
#define TK_DEFERRED 290 #define TK_COPY 290
#define TK_DELIMITERS 291 #define TK_DEFERRED 291
#define TK_DETACH 292 #define TK_DELIMITERS 292
#define TK_DIVIDE 293 #define TK_DETACH 293
#define TK_DOT 294 #define TK_DIVIDE 294
#define TK_EACH 295 #define TK_DOT 295
#define TK_FAIL 296 #define TK_EACH 296
#define TK_FILE 297 #define TK_FAIL 297
#define TK_FOR 298 #define TK_FILE 298
#define TK_GLOB 299 #define TK_FOR 299
#define TK_ID 300 #define TK_GLOB 300
#define TK_IMMEDIATE 301 #define TK_ID 301
#define TK_IMPORT 302 #define TK_IMMEDIATE 302
#define TK_INITIALLY 303 #define TK_IMPORT 303
#define TK_INSTEAD 304 #define TK_INITIALLY 304
#define TK_ISNULL 305 #define TK_INSTEAD 305
#define TK_KEY 306 #define TK_ISNULL 306
#define TK_MODULES 307 #define TK_KEY 307
#define TK_NK_BITNOT 308 #define TK_MODULES 308
#define TK_NK_SEMI 309 #define TK_NK_BITNOT 309
#define TK_NOTNULL 310 #define TK_NK_SEMI 310
#define TK_OF 311 #define TK_NOTNULL 311
#define TK_PLUS 312 #define TK_OF 312
#define TK_PRIVILEGE 313 #define TK_PLUS 313
#define TK_RAISE 314 #define TK_PRIVILEGE 314
#define TK_REPLACE 315 #define TK_RAISE 315
#define TK_RESTRICT 316 #define TK_REPLACE 316
#define TK_ROW 317 #define TK_RESTRICT 317
#define TK_SEMI 318 #define TK_ROW 318
#define TK_STAR 319 #define TK_SEMI 319
#define TK_STATEMENT 320 #define TK_STAR 320
#define TK_STRICT 321 #define TK_STATEMENT 321
#define TK_STRING 322 #define TK_STRICT 322
#define TK_TIMES 323 #define TK_STRING 323
#define TK_VALUES 324 #define TK_TIMES 324
#define TK_VARIABLE 325 #define TK_VALUES 325
#define TK_VIEW 326 #define TK_VARIABLE 326
#define TK_WAL 327 #define TK_VIEW 327
#define TK_WAL 328
#define TK_NK_SPACE 600 #define TK_NK_SPACE 600
#define TK_NK_COMMENT 601 #define TK_NK_COMMENT 601
......
...@@ -430,6 +430,7 @@ typedef struct SCreateFunctionStmt { ...@@ -430,6 +430,7 @@ typedef struct SCreateFunctionStmt {
char libraryPath[PATH_MAX]; char libraryPath[PATH_MAX];
SDataType outputDt; SDataType outputDt;
int32_t bufSize; int32_t bufSize;
int8_t language;
} SCreateFunctionStmt; } SCreateFunctionStmt;
typedef struct SDropFunctionStmt { typedef struct SDropFunctionStmt {
......
...@@ -46,9 +46,9 @@ rd /s /Q C:\TDengine ...@@ -46,9 +46,9 @@ rd /s /Q C:\TDengine
cmake --install . cmake --install .
if not %errorlevel% == 0 ( call :RUNFAILED build x64 failed & exit /b 1) if not %errorlevel% == 0 ( call :RUNFAILED build x64 failed & exit /b 1)
cd %package_dir% cd %package_dir%
iscc /DMyAppInstallName="%packagServerName_x64%" /DMyAppVersion="%2" /DMyAppExcludeSource="" tools\tdengine.iss /O..\release iscc /DMyAppInstallName="%packagServerName_x64%" /DMyAppVersion="%2" /DCusName="TDengine" /DCusPrompt="taos" /DMyAppExcludeSource="" tools\tdengine.iss /O..\release
if not %errorlevel% == 0 ( call :RUNFAILED package %packagServerName_x64% failed & exit /b 1) if not %errorlevel% == 0 ( call :RUNFAILED package %packagServerName_x64% failed & exit /b 1)
iscc /DMyAppInstallName="%packagClientName_x64%" /DMyAppVersion="%2" /DMyAppExcludeSource="taosd.exe" tools\tdengine.iss /O..\release iscc /DMyAppInstallName="%packagClientName_x64%" /DMyAppVersion="%2" /DCusName="TDengine" /DCusPrompt="taos" /DMyAppExcludeSource="taosd.exe" tools\tdengine.iss /O..\release
if not %errorlevel% == 0 ( call :RUNFAILED package %packagClientName_x64% failed & exit /b 1) if not %errorlevel% == 0 ( call :RUNFAILED package %packagClientName_x64% failed & exit /b 1)
goto EXIT0 goto EXIT0
......
...@@ -212,7 +212,7 @@ SNode* createExplainStmt(SAstCreateContext* pCxt, bool analyze, SNode* pOptions, ...@@ -212,7 +212,7 @@ SNode* createExplainStmt(SAstCreateContext* pCxt, bool analyze, SNode* pOptions,
SNode* createDescribeStmt(SAstCreateContext* pCxt, SNode* pRealTable); SNode* createDescribeStmt(SAstCreateContext* pCxt, SNode* pRealTable);
SNode* createResetQueryCacheStmt(SAstCreateContext* pCxt); SNode* createResetQueryCacheStmt(SAstCreateContext* pCxt);
SNode* createCreateFunctionStmt(SAstCreateContext* pCxt, bool ignoreExists, bool aggFunc, const SToken* pFuncName, SNode* createCreateFunctionStmt(SAstCreateContext* pCxt, bool ignoreExists, bool aggFunc, const SToken* pFuncName,
const SToken* pLibPath, SDataType dataType, int32_t bufSize); const SToken* pLibPath, SDataType dataType, int32_t bufSize, const SToken* pLanguage);
SNode* createDropFunctionStmt(SAstCreateContext* pCxt, bool ignoreNotExists, const SToken* pFuncName); SNode* createDropFunctionStmt(SAstCreateContext* pCxt, bool ignoreNotExists, const SToken* pFuncName);
SNode* createStreamOptions(SAstCreateContext* pCxt); SNode* createStreamOptions(SAstCreateContext* pCxt);
SNode* createCreateStreamStmt(SAstCreateContext* pCxt, bool ignoreExists, SToken* pStreamName, SNode* pRealTable, SNode* createCreateStreamStmt(SAstCreateContext* pCxt, bool ignoreExists, SToken* pStreamName, SNode* pRealTable,
......
...@@ -531,7 +531,7 @@ explain_options(A) ::= explain_options(B) RATIO NK_FLOAT(C). ...@@ -531,7 +531,7 @@ explain_options(A) ::= explain_options(B) RATIO NK_FLOAT(C).
/************************************************ create/drop function ************************************************/ /************************************************ create/drop function ************************************************/
cmd ::= CREATE agg_func_opt(A) FUNCTION not_exists_opt(F) function_name(B) cmd ::= CREATE agg_func_opt(A) FUNCTION not_exists_opt(F) function_name(B)
AS NK_STRING(C) OUTPUTTYPE type_name(D) bufsize_opt(E). { pCxt->pRootNode = createCreateFunctionStmt(pCxt, F, A, &B, &C, D, E); } AS NK_STRING(C) OUTPUTTYPE type_name(D) bufsize_opt(E) language_opt(G). { pCxt->pRootNode = createCreateFunctionStmt(pCxt, F, A, &B, &C, D, E, &G); }
cmd ::= DROP FUNCTION exists_opt(B) function_name(A). { pCxt->pRootNode = createDropFunctionStmt(pCxt, B, &A); } cmd ::= DROP FUNCTION exists_opt(B) function_name(A). { pCxt->pRootNode = createDropFunctionStmt(pCxt, B, &A); }
%type agg_func_opt { bool } %type agg_func_opt { bool }
...@@ -544,6 +544,11 @@ agg_func_opt(A) ::= AGGREGATE. ...@@ -544,6 +544,11 @@ agg_func_opt(A) ::= AGGREGATE.
bufsize_opt(A) ::= . { A = 0; } bufsize_opt(A) ::= . { A = 0; }
bufsize_opt(A) ::= BUFSIZE NK_INTEGER(B). { A = taosStr2Int32(B.z, NULL, 10); } bufsize_opt(A) ::= BUFSIZE NK_INTEGER(B). { A = taosStr2Int32(B.z, NULL, 10); }
%type language_opt { SToken }
%destructor language_opt { }
language_opt(A) ::= . { A = nil_token; }
language_opt(A) ::= LANGUAGE NK_STRING(B). { A = B; }
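The net effect of the new `language_opt` rule is an optional trailing clause on CREATE FUNCTION: omitted, it defaults to a C dynamic library; `LANGUAGE 'python'` selects the Python script type. A sketch of the two accepted forms issued through the Python connector (the file paths are placeholders):

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")

# No LANGUAGE clause: defaults to TSDB_FUNC_SCRIPT_BIN_LIB (a C shared library).
conn.execute("CREATE FUNCTION udf1 AS '/tmp/libudf1.so' OUTPUTTYPE INT")

# LANGUAGE 'python': mapped to TSDB_FUNC_SCRIPT_PYTHON by convertUdfLanguageType().
conn.execute(
    "CREATE AGGREGATE FUNCTION IF NOT EXISTS udf2 AS '/tmp/udf2.py' "
    "OUTPUTTYPE DOUBLE BUFSIZE 8 LANGUAGE 'python'"
)
conn.close()
```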
/************************************************ create/drop stream **************************************************/ /************************************************ create/drop stream **************************************************/
cmd ::= CREATE STREAM not_exists_opt(E) stream_name(A) stream_options(B) INTO cmd ::= CREATE STREAM not_exists_opt(E) stream_name(A) stream_options(B) INTO
full_table_name(C) col_list_opt(H) tag_def_or_ref_opt(F) subtable_opt(G) full_table_name(C) col_list_opt(H) tag_def_or_ref_opt(F) subtable_opt(G)
......
...@@ -1779,13 +1779,29 @@ SNode* createResetQueryCacheStmt(SAstCreateContext* pCxt) { ...@@ -1779,13 +1779,29 @@ SNode* createResetQueryCacheStmt(SAstCreateContext* pCxt) {
return pStmt; return pStmt;
} }
static int32_t convertUdfLanguageType(SAstCreateContext* pCxt, const SToken* pLanguageToken, int8_t* pLanguage) {
if (TK_NK_NIL == pLanguageToken->type || 0 == strncasecmp(pLanguageToken->z + 1, "c", pLanguageToken->n - 2)) {
*pLanguage = TSDB_FUNC_SCRIPT_BIN_LIB;
} else if (0 == strncasecmp(pLanguageToken->z + 1, "python", pLanguageToken->n - 2)) {
*pLanguage = TSDB_FUNC_SCRIPT_PYTHON;
} else {
pCxt->errCode = generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_SYNTAX_ERROR,
"udf programming language supports c and python");
}
return pCxt->errCode;
}
SNode* createCreateFunctionStmt(SAstCreateContext* pCxt, bool ignoreExists, bool aggFunc, const SToken* pFuncName, SNode* createCreateFunctionStmt(SAstCreateContext* pCxt, bool ignoreExists, bool aggFunc, const SToken* pFuncName,
const SToken* pLibPath, SDataType dataType, int32_t bufSize) { const SToken* pLibPath, SDataType dataType, int32_t bufSize, const SToken* pLanguage) {
CHECK_PARSER_STATUS(pCxt); CHECK_PARSER_STATUS(pCxt);
if (pLibPath->n <= 2) { if (pLibPath->n <= 2) {
pCxt->errCode = TSDB_CODE_PAR_SYNTAX_ERROR; pCxt->errCode = TSDB_CODE_PAR_SYNTAX_ERROR;
return NULL; return NULL;
} }
int8_t language = 0;
if (TSDB_CODE_SUCCESS != convertUdfLanguageType(pCxt, pLanguage, &language)) {
return NULL;
}
SCreateFunctionStmt* pStmt = (SCreateFunctionStmt*)nodesMakeNode(QUERY_NODE_CREATE_FUNCTION_STMT); SCreateFunctionStmt* pStmt = (SCreateFunctionStmt*)nodesMakeNode(QUERY_NODE_CREATE_FUNCTION_STMT);
CHECK_OUT_OF_MEM(pStmt); CHECK_OUT_OF_MEM(pStmt);
pStmt->ignoreExists = ignoreExists; pStmt->ignoreExists = ignoreExists;
...@@ -1794,6 +1810,7 @@ SNode* createCreateFunctionStmt(SAstCreateContext* pCxt, bool ignoreExists, bool ...@@ -1794,6 +1810,7 @@ SNode* createCreateFunctionStmt(SAstCreateContext* pCxt, bool ignoreExists, bool
COPY_STRING_FORM_STR_TOKEN(pStmt->libraryPath, pLibPath); COPY_STRING_FORM_STR_TOKEN(pStmt->libraryPath, pLibPath);
pStmt->outputDt = dataType; pStmt->outputDt = dataType;
pStmt->bufSize = bufSize; pStmt->bufSize = bufSize;
pStmt->language = language;
return (SNode*)pStmt; return (SNode*)pStmt;
} }
......
...@@ -124,6 +124,7 @@ static SKeyword keywordTable[] = { ...@@ -124,6 +124,7 @@ static SKeyword keywordTable[] = {
{"JSON", TK_JSON}, {"JSON", TK_JSON},
{"KEEP", TK_KEEP}, {"KEEP", TK_KEEP},
{"KILL", TK_KILL}, {"KILL", TK_KILL},
{"LANGUAGE", TK_LANGUAGE},
{"LAST", TK_LAST}, {"LAST", TK_LAST},
{"LAST_ROW", TK_LAST_ROW}, {"LAST_ROW", TK_LAST_ROW},
{"LICENCES", TK_LICENCES}, {"LICENCES", TK_LICENCES},
......
...@@ -6358,7 +6358,7 @@ static int32_t translateCreateFunction(STranslateContext* pCxt, SCreateFunctionS ...@@ -6358,7 +6358,7 @@ static int32_t translateCreateFunction(STranslateContext* pCxt, SCreateFunctionS
strcpy(req.name, pStmt->funcName); strcpy(req.name, pStmt->funcName);
req.igExists = pStmt->ignoreExists; req.igExists = pStmt->ignoreExists;
req.funcType = pStmt->isAgg ? TSDB_FUNC_TYPE_AGGREGATE : TSDB_FUNC_TYPE_SCALAR; req.funcType = pStmt->isAgg ? TSDB_FUNC_TYPE_AGGREGATE : TSDB_FUNC_TYPE_SCALAR;
req.scriptType = TSDB_FUNC_SCRIPT_BIN_LIB; req.scriptType = pStmt->language;
req.outputType = pStmt->outputDt.type; req.outputType = pStmt->outputDt.type;
req.outputLen = pStmt->outputDt.bytes; req.outputLen = pStmt->outputDt.bytes;
req.bufSize = pStmt->bufSize; req.bufSize = pStmt->bufSize;
......
This diff is collapsed.
...@@ -13,6 +13,8 @@ ...@@ -13,6 +13,8 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>. * along with this program. If not, see <http://www.gnu.org/licenses/>.
*/ */
#include <fstream>
#include "parTestUtil.h" #include "parTestUtil.h"
using namespace std; using namespace std;
...@@ -381,7 +383,8 @@ TEST_F(ParserInitialCTest, createDnode) { ...@@ -381,7 +383,8 @@ TEST_F(ParserInitialCTest, createDnode) {
} }
/* /*
* CREATE [AGGREGATE] FUNCTION [IF NOT EXISTS] func_name AS library_path OUTPUTTYPE type_name [BUFSIZE value] * CREATE [AGGREGATE] FUNCTION [IF NOT EXISTS] func_name
* AS library_path OUTPUTTYPE type_name [BUFSIZE value] [LANGUAGE value]
*/ */
TEST_F(ParserInitialCTest, createFunction) { TEST_F(ParserInitialCTest, createFunction) {
useDb("root", "test"); useDb("root", "test");
...@@ -389,12 +392,13 @@ TEST_F(ParserInitialCTest, createFunction) { ...@@ -389,12 +392,13 @@ TEST_F(ParserInitialCTest, createFunction) {
SCreateFuncReq expect = {0}; SCreateFuncReq expect = {0};
auto setCreateFuncReq = [&](const char* pUdfName, int8_t outputType, int32_t outputBytes = 0, auto setCreateFuncReq = [&](const char* pUdfName, int8_t outputType, int32_t outputBytes = 0,
int8_t funcType = TSDB_FUNC_TYPE_SCALAR, int8_t igExists = 0, int32_t bufSize = 0) { int8_t funcType = TSDB_FUNC_TYPE_SCALAR, int8_t igExists = 0, int32_t bufSize = 0,
int8_t language = TSDB_FUNC_SCRIPT_BIN_LIB) {
memset(&expect, 0, sizeof(SCreateFuncReq)); memset(&expect, 0, sizeof(SCreateFuncReq));
strcpy(expect.name, pUdfName); strcpy(expect.name, pUdfName);
expect.igExists = igExists; expect.igExists = igExists;
expect.funcType = funcType; expect.funcType = funcType;
expect.scriptType = TSDB_FUNC_SCRIPT_BIN_LIB; expect.scriptType = language;
expect.outputType = outputType; expect.outputType = outputType;
expect.outputLen = outputBytes > 0 ? outputBytes : tDataTypes[outputType].bytes; expect.outputLen = outputBytes > 0 ? outputBytes : tDataTypes[outputType].bytes;
expect.bufSize = bufSize; expect.bufSize = bufSize;
...@@ -412,13 +416,25 @@ TEST_F(ParserInitialCTest, createFunction) { ...@@ -412,13 +416,25 @@ TEST_F(ParserInitialCTest, createFunction) {
ASSERT_EQ(req.outputType, expect.outputType); ASSERT_EQ(req.outputType, expect.outputType);
ASSERT_EQ(req.outputLen, expect.outputLen); ASSERT_EQ(req.outputLen, expect.outputLen);
ASSERT_EQ(req.bufSize, expect.bufSize); ASSERT_EQ(req.bufSize, expect.bufSize);
tFreeSCreateFuncReq(&req);
}); });
struct udfFile {
udfFile(const std::string& filename) : path_(filename) {
std::ofstream file(filename, std::ios::binary);
file << 123 << "abc" << '\n';
file.close();
}
~udfFile() { remove(path_.c_str()); }
std::string path_;
} udffile("udf");
setCreateFuncReq("udf1", TSDB_DATA_TYPE_INT); setCreateFuncReq("udf1", TSDB_DATA_TYPE_INT);
// run("CREATE FUNCTION udf1 AS './build/lib/libudf1.so' OUTPUTTYPE INT"); run("CREATE FUNCTION udf1 AS 'udf' OUTPUTTYPE INT");
setCreateFuncReq("udf2", TSDB_DATA_TYPE_DOUBLE, 0, TSDB_FUNC_TYPE_AGGREGATE, 1, 8); setCreateFuncReq("udf2", TSDB_DATA_TYPE_DOUBLE, 0, TSDB_FUNC_TYPE_AGGREGATE, 1, 8, TSDB_FUNC_SCRIPT_PYTHON);
// run("CREATE AGGREGATE FUNCTION IF NOT EXISTS udf2 AS './build/lib/libudf2.so' OUTPUTTYPE DOUBLE BUFSIZE 8"); run("CREATE AGGREGATE FUNCTION IF NOT EXISTS udf2 AS 'udf' OUTPUTTYPE DOUBLE BUFSIZE 8 LANGUAGE 'python'");
} }
/* /*
......
...@@ -85,3 +85,9 @@ python3 fast_write_example.py ...@@ -85,3 +85,9 @@ python3 fast_write_example.py
pip3 install kafka-python pip3 install kafka-python
python3 kafka_example_consumer.py python3 kafka_example_consumer.py
# 21
pip3 install taos-ws-py
python3 conn_websocket_pandas.py
# 22
python3 connect_websocket_examples.py
...@@ -51,10 +51,24 @@ else ...@@ -51,10 +51,24 @@ else
REP_DIR=/home/TDinternal REP_DIR=/home/TDinternal
REP_REAL_PATH=$WORKDIR/TDinternal REP_REAL_PATH=$WORKDIR/TDinternal
REP_MOUNT_PARAM=$REP_REAL_PATH:/home/TDinternal REP_MOUNT_PARAM=$REP_REAL_PATH:/home/TDinternal
fi fi
date date
docker run \ docker run \
-v $REP_MOUNT_PARAM \ -v $REP_MOUNT_PARAM \
-v /root/.cargo/registry:/root/.cargo/registry \
-v /root/.cargo/git:/root/.cargo/git \
-v /root/go/pkg/mod:/root/go/pkg/mod \
-v /root/.cache/go-build:/root/.cache/go-build \
-v ${REP_REAL_PATH}/enterprise/src/plugins/taosx/target:${REP_DIR}/enterprise/src/plugins/taosx/target \
-v ${REP_REAL_PATH}/community/tools/taosws-rs/target:${REP_DIR}/community/tools/taosws-rs/target \
-v ${REP_REAL_PATH}/community/contrib/cJson/:${REP_DIR}/community/contrib/cJson \
-v ${REP_REAL_PATH}/community/contrib/googletest/:${REP_DIR}/community/contrib/googletest \
-v ${REP_REAL_PATH}/community/contrib/cpp-stub/:${REP_DIR}/community/contrib/cpp-stub \
-v ${REP_REAL_PATH}/community/contrib/libuv/:${REP_DIR}/community/contrib/libuv \
-v ${REP_REAL_PATH}/community/contrib/lz4/:${REP_DIR}/community/contrib/lz4 \
-v ${REP_REAL_PATH}/community/contrib/zlib/:${REP_DIR}/community/contrib/zlib \
-v ${REP_REAL_PATH}/community/contrib/jemalloc/:${REP_DIR}/community/contrib/jemalloc \
--rm --ulimit core=-1 taos_test:v1.0 sh -c "pip uninstall taospy -y;pip3 install taospy==2.7.2;cd $REP_DIR;rm -rf debug;mkdir -p debug;cd debug;cmake .. -DBUILD_HTTP=false -DBUILD_TOOLS=true -DBUILD_TEST=true -DWEBSOCKET=true -DBUILD_TAOSX=true;make -j || exit 1" --rm --ulimit core=-1 taos_test:v1.0 sh -c "pip uninstall taospy -y;pip3 install taospy==2.7.2;cd $REP_DIR;rm -rf debug;mkdir -p debug;cd debug;cmake .. -DBUILD_HTTP=false -DBUILD_TOOLS=true -DBUILD_TEST=true -DWEBSOCKET=true -DBUILD_TAOSX=true;make -j || exit 1"
if [[ -d ${WORKDIR}/debugNoSan ]] ;then if [[ -d ${WORKDIR}/debugNoSan ]] ;then
...@@ -70,6 +84,19 @@ mv ${REP_REAL_PATH}/debug ${WORKDIR}/debugNoSan ...@@ -70,6 +84,19 @@ mv ${REP_REAL_PATH}/debug ${WORKDIR}/debugNoSan
date date
docker run \ docker run \
-v $REP_MOUNT_PARAM \ -v $REP_MOUNT_PARAM \
-v /root/.cargo/registry:/root/.cargo/registry \
-v /root/.cargo/git:/root/.cargo/git \
-v /root/go/pkg/mod:/root/go/pkg/mod \
-v /root/.cache/go-build:/root/.cache/go-build \
-v ${REP_REAL_PATH}/enterprise/src/plugins/taosx/target:${REP_DIR}/enterprise/src/plugins/taosx/target \
-v ${REP_REAL_PATH}/community/tools/taosws-rs/target:${REP_DIR}/community/tools/taosws-rs/target \
-v ${REP_REAL_PATH}/community/contrib/cJson/:${REP_DIR}/community/contrib/cJson \
-v ${REP_REAL_PATH}/community/contrib/googletest/:${REP_DIR}/community/contrib/googletest \
-v ${REP_REAL_PATH}/community/contrib/cpp-stub/:${REP_DIR}/community/contrib/cpp-stub \
-v ${REP_REAL_PATH}/community/contrib/libuv/:${REP_DIR}/community/contrib/libuv \
-v ${REP_REAL_PATH}/community/contrib/lz4/:${REP_DIR}/community/contrib/lz4 \
-v ${REP_REAL_PATH}/community/contrib/zlib/:${REP_DIR}/community/contrib/zlib \
-v ${REP_REAL_PATH}/community/contrib/jemalloc/:${REP_DIR}/community/contrib/jemalloc \
--rm --ulimit core=-1 taos_test:v1.0 sh -c "pip uninstall taospy -y;pip3 install taospy==2.7.2;cd $REP_DIR;rm -rf debug;mkdir -p debug;cd debug;cmake .. -DBUILD_HTTP=false -DBUILD_TOOLS=true -DBUILD_TEST=true -DWEBSOCKET=true -DBUILD_SANITIZER=1 -DTOOLS_SANITIZE=true -DTOOLS_BUILD_TYPE=Debug -DBUILD_TAOSX=true;make -j || exit 1 " --rm --ulimit core=-1 taos_test:v1.0 sh -c "pip uninstall taospy -y;pip3 install taospy==2.7.2;cd $REP_DIR;rm -rf debug;mkdir -p debug;cd debug;cmake .. -DBUILD_HTTP=false -DBUILD_TOOLS=true -DBUILD_TEST=true -DWEBSOCKET=true -DBUILD_SANITIZER=1 -DTOOLS_SANITIZE=true -DTOOLS_BUILD_TYPE=Debug -DBUILD_TAOSX=true;make -j || exit 1 "
mv ${REP_REAL_PATH}/debug ${WORKDIR}/debugSan mv ${REP_REAL_PATH}/debug ${WORKDIR}/debugSan
......