Commit c71600ba authored by Alex Duan

Merge branch '3.0' into feat/TD-17777-V30

@@ -41,7 +41,7 @@ TDengine is an open-source, high-performance, cloud-native time-series database (Time-Series...
TDengine can currently be installed and run on Linux, Windows, and other platforms. Applications on any OS can also connect to the server-side taosd through the RESTful interface provided by taosAdapter. The supported CPU architectures are X64/ARM64, with MIPS64, Alpha64, ARM32, RISC-V, and other architectures to follow.
You can install TDengine from source code, from a [container](https://docs.taosdata.com/get-started/docker/), from an [installation package](https://docs.taosdata.com/get-started/package/), or on [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide only applies to installing from source.
TDengine also provides a set of auxiliary tools, taosTools, which currently contains taosBenchmark (formerly named taosdemo) and taosdump. By default, taosTools is not built together with TDengine; you can pass `cmake .. -DBUILD_TOOLS=true` when building TDengine to build taosTools as well.
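As a minimal build sketch (assuming a Linux host with git, cmake, and a C/C++ toolchain already installed; the clone step is an assumption, the build directory and cmake flag come from this guide):

```bash
# Build TDengine together with taosTools from source.
git clone https://github.com/taosdata/TDengine.git
cd TDengine
mkdir -p debug && cd debug
cmake .. -DBUILD_TOOLS=true   # also build taosBenchmark and taosdump
make
```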
@@ -104,6 +104,12 @@ sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgco...
sudo yum config-manager --set-enabled Powertools
```
### macOS
```
brew install argp-standalone pkgconfig
```
### Setting up the Go development environment
TDengine includes several components developed in Go, such as taosAdapter. Please refer to the official documentation at golang.org to set up your Go development environment.
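A hedged sketch of a typical setup check follows; the Go version expectation and the module proxy are assumptions for illustration, not requirements stated in this change:

```bash
# Verify the Go toolchain and (optionally) configure a module proxy.
go version                                   # Go 1.14+ is typically expected
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct  # optional proxy for restricted networks
```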
@@ -210,14 +216,14 @@ cmake .. -G "NMake Makefiles"
nmake
```
### On macOS

Install the Xcode command line tools and cmake. Xcode 11.4+ is required on Catalina and Big Sur.

```bash
mkdir debug && cd debug
cmake .. && cmake --build .
```
# Installing

@@ -263,6 +269,24 @@ nmake install
sudo make install
```
You can learn more about the directories and files created on your system in [Directory Structure](https://docs.taosdata.com/reference/directory/).

Installing from source code also configures service management for TDengine. You can also choose to [install from an installation package](https://docs.taosdata.com/get-started/package/).

Once installation succeeds, you can start the TDengine service by double-clicking the TDengine icon in Applications, or by starting it from a terminal:

```bash
launchctl start taosd
```

You can then use the TDengine CLI to connect to the TDengine service. In a terminal, enter:

```bash
taos
```

If the TDengine CLI connects to the server successfully, it prints a welcome message and version information; otherwise, it prints an error message.
## Quick Run

If you do not want to run TDengine as a service, you can run it directly from the terminal. That is, after the build completes, execute the following command (on Windows the generated executable has an .exe suffix, e.g. taosd.exe):
......
@@ -19,7 +19,7 @@ English | [简体中文](README-CN.md) | [Learn more about TSDB](https://tdengin...

# What is TDengine?

TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:

- **[High Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.
@@ -105,6 +105,12 @@ If the PowerTools installation fails, you can try to use:
sudo yum config-manager --set-enabled powertools
```
### macOS
```
brew install argp-standalone pkgconfig
```
### Setup golang environment

TDengine includes a few components, such as taosAdapter, developed in the Go language. Please refer to the golang.org official documentation for golang environment setup.
@@ -213,14 +219,14 @@ cmake .. -G "NMake Makefiles"
nmake
```
### On macOS platform

Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.

```shell
mkdir debug && cd debug
cmake .. && cmake --build .
```
# Installing

@@ -258,7 +264,7 @@ After building successfully, TDengine can be installed by:
nmake install
```
## On macOS platform

After building successfully, TDengine can be installed by:

@@ -266,7 +272,24 @@ After building successfully, TDengine can be installed by:

```bash
sudo make install
```
Users can find more information about directories installed on the system in the [directory and files](https://docs.tdengine.com/reference/directory/) section.

Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.tdengine.com/get-started/package/) instead.

To start the service after installation, double-click the TDengine icon in /Applications, or, in a terminal, use:

```bash
launchctl start taosd
```

Then users can use the TDengine CLI to connect to the TDengine server. In a terminal, use:

```bash
taos
```

If the TDengine CLI connects to the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
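As a quick, hedged sketch of the whole flow on macOS (the `show databases` query is only an illustrative check, not part of this change):

```bash
# Minimal end-to-end check; assumes TDengine was installed via `sudo make install`.
launchctl start taosd                 # start the service
launchctl list | grep taosd           # a numeric PID in the first column means taosd is running
taos -s "show databases;"             # run a one-off query through the TDengine CLI
```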
## Quick Run
......
@@ -2,7 +2,7 @@
IF (DEFINED VERNUMBER)
  SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
  SET(TD_VER_NUMBER "3.0.1.4")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)
......
@@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
        GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
        GIT_TAG 85179e9
        SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
        BINARY_DIR ""
        #BUILD_IN_SOURCE TRUE
......
@@ -7,7 +7,7 @@ import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import PkgListV3 from "/components/PkgListV3";

This document describes how to install TDengine on Linux/Windows/macOS and perform queries and inserts.

- The easiest way to explore TDengine is through [TDengine Cloud](http://cloud.tdengine.com).
- To get started with TDengine on Docker, see [Quick Install on Docker](../../get-started/docker).

@@ -17,7 +17,7 @@ The full package of TDengine includes the TDengine Server (`taosd`), TDengine Cl...

The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download the Lite package that includes only `taosd` and the C/C++ connector.

The TDengine Community Edition is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on x64 Windows and x64/m1 macOS.
## Installation

@@ -111,6 +111,13 @@ Note: TDengine only supports Windows Server 2016/2019 and Windows 10/11 on the W...
<PkgListV3 type={3}/>
2. Run the downloaded package to install TDengine.
</TabItem>
<TabItem label="macOS" value="macos">
1. Download the macOS installation package.
<PkgListV3 type={7}/>
2. Run the downloaded package to install TDengine.
</TabItem>
</Tabs>
@@ -178,12 +185,33 @@ The following `systemctl` commands can help you manage TDengine service:

After the installation is complete, run `C:\TDengine\taosd.exe` to start TDengine Server.
</TabItem>
<TabItem label="macOS" value="macos">
After the installation is complete, double-click TDengine in the /Applications folder to start the program, or run `launchctl start taosd` to start TDengine Server.
The following `launchctl` commands can help you manage TDengine service:
- Start TDengine Server: `launchctl start taosd`
- Stop TDengine Server: `launchctl stop taosd`
- Check TDengine Server status: `launchctl list | grep taosd`
:::info
- The `launchctl` command does not require _root_ privileges. You don't need to use the `sudo` command.
- The first column in the output of `launchctl list | grep taosd` is the PID of the program; if it is `-`, the TDengine service is not running.
:::
</TabItem>
</Tabs>

## Command Line Interface (CLI)

You can use the TDengine CLI to monitor your TDengine deployment and execute ad hoc queries. To open the CLI, you can execute `taos` in the Linux/macOS terminal where TDengine is installed, or you can run `taos.exe` in the `C:\TDengine` directory of the Windows terminal where TDengine is installed to start the TDengine command line.
```bash
taos
@@ -213,13 +241,13 @@ SELECT * FROM t;
Query OK, 2 row(s) in set (0.003128s)
```

You can also monitor the deployment status, add and remove user accounts, and manage running instances. You can run the TDengine CLI on Linux, Windows, or macOS machines. For more information, see [TDengine CLI](../../reference/taos-shell/).
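As a hedged sketch of such ad hoc administration (the user name and password below are hypothetical placeholders, not taken from this change):

```bash
# Illustrative only: run administrative statements non-interactively through the CLI.
taos -s "SHOW DNODES;"                                      # check deployment status
taos -s "CREATE USER example_user PASS 'example_pass';"     # add a user account
taos -s "DROP USER example_user;"                           # remove it again
```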
## Test data insert performance

After your TDengine Server is running normally, you can run the taosBenchmark utility to test its performance:

Start TDengine service and execute `taosBenchmark` (formerly named `taosdemo`) in a terminal.

```bash
taosBenchmark
......
@@ -64,10 +64,10 @@ taos> use test;
Database changed.

taos> show stables;
              name              |
=================================
 meters.current                 |
 meters.voltage                 |
Query OK, 2 row(s) in set (0.002544s)
taos> select tbname, * from `meters.current`;
......
@@ -81,10 +81,10 @@ taos> use test;
Database changed.

taos> show stables;
              name              |
=================================
 meters.current                 |
 meters.voltage                 |
Query OK, 2 row(s) in set (0.001954s)

taos> select * from `meters.current`;
......
@@ -11,7 +11,7 @@ SELECT {DATABASE() | CLIENT_VERSION() | SERVER_VERSION() | SERVER_STATUS() | NOW...
SELECT [DISTINCT] select_list
    from_clause
    [WHERE condition]
    [partition_by_clause]
    [window_clause]
    [group_by_clause]
    [order_by_clause]

@@ -52,6 +52,9 @@ window_clause: {
  | STATE_WINDOW(col)
  | INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]

partition_by_clause:
    PARTITION BY expr [, expr] ...

group_by_clause:
    GROUP BY expr [, expr] ... HAVING condition
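As a hedged illustration of the new `partition_by_clause` (the `test.meters` super table and its `location` tag are the usual taosBenchmark sample data, assumed here rather than taken from this diff):

```bash
# Illustrative only: PARTITION BY computes the aggregate separately for each tag value.
taos -s "SELECT location, AVG(current) FROM test.meters PARTITION BY location;"
```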
......
@@ -109,7 +109,7 @@ TDengine's JDBC URL specification format is:

For establishing connections, native connections differ slightly from REST connections.

<Tabs defaultValue="rest">
<TabItem value="native" label="native connection">

```java
......
@@ -113,7 +113,7 @@ username:password@protocol(address)/dbname?param=value
```

### Connecting via connector

<Tabs defaultValue="rest">
<TabItem value="native" label="native connection">

_taosSql_ implements Go's `database/sql/driver` interface via cgo. You can use the [`database/sql`](https://golang.org/pkg/database/sql/) interface by simply introducing the driver.
......
@@ -55,26 +55,28 @@ taos = "*"

</TabItem>

<TabItem value="rest" label="Websocket only">

In `cargo.toml`, add [taos][taos] and enable the ws feature:

```toml
[dependencies]
taos = { version = "*", default-features = false, features = ["ws"] }
```

</TabItem>

<TabItem value="native" label="native connection only">

In `cargo.toml`, add [taos][taos] and enable the native feature:

```toml
[dependencies]
taos = { version = "*", default-features = false, features = ["native"] }
```

</TabItem>
</Tabs>

## Establishing a connection
......
@@ -80,7 +80,7 @@ pip3 install git+https://github.com/taosdata/taos-connector-python.git

### Verify

<Tabs defaultValue="rest">
<TabItem value="native" label="native connection">

For native connection, you need to verify that both the client driver and the Python connector itself are installed correctly. The client driver and Python connector have been installed properly if you can successfully import the `taos` module. In the Python Interactive Shell, you can type.
@@ -118,7 +118,7 @@ Requirement already satisfied: taospy in c:\users\username\appdata\local\program...

Before establishing a connection with the connector, we recommend testing the connectivity of the local TDengine CLI to the TDengine cluster.

<Tabs defaultValue="rest">
<TabItem value="native" label="native connection">

Ensure that the TDengine instance is up and that the FQDN of the machines in the cluster (the FQDN defaults to hostname if you are starting a standalone version) can be resolved locally, by testing with the `ping` command.
@@ -173,7 +173,7 @@ If the test is successful, it will output the server version information, e.g.

The following example code assumes that TDengine is installed locally and that the default configuration is used for both FQDN and serverPort.

<Tabs defaultValue="rest">
<TabItem value="native" label="native connection" groupId="connect">

```python
@@ -219,7 +219,7 @@ All arguments to the `connect()` function are optional keyword arguments. The fo...

### Basic Usage

<Tabs defaultValue="rest">
<TabItem value="native" label="native connection">

##### TaosConnection class
@@ -289,7 +289,7 @@ For a more detailed description of the `sql()` method, please refer to [RestClie...

### Used with pandas

<Tabs defaultValue="rest">
<TabItem value="native" label="native connection">

```python
......
@@ -85,7 +85,7 @@ If using ARM64 Node.js on Windows 10 ARM, you must add "Visual C++ compilers and...

### Install via npm

<Tabs defaultValue="install_rest">
<TabItem value="install_native" label="Install native connector">

```bash
@@ -124,7 +124,7 @@ node nodejsChecker.js host=localhost

Please choose to use one of the connectors.

<Tabs defaultValue="rest">
<TabItem value="native" label="native connection">

Install and import the `@tdengine/client` package.
......
@@ -97,7 +97,7 @@ dotnet add exmaple.csproj reference src/TDengine.csproj

## Establish a Connection

<Tabs defaultValue="rest">
<TabItem value="native" label="Native Connection">

@@ -173,7 +173,7 @@ ws://localhost:6041/test

#### SQL Write

<Tabs defaultValue="rest">
<TabItem value="native" label="Native Connection">

@@ -204,7 +204,7 @@ ws://localhost:6041/test

#### Parameter Binding

<Tabs defaultValue="rest">
<TabItem value="native" label="Native Connection">

@@ -227,7 +227,7 @@ ws://localhost:6041/test

#### Synchronous Query

<Tabs defaultValue="rest">
<TabItem value="native" label="Native Connection">
......
@@ -4,11 +4,11 @@ Execute TDengine CLI program `taos` directly from the Linux shell to connect to...
$ taos

taos> show databases;
              name              |
=================================
 information_schema             |
 performance_schema             |
 db                             |
Query OK, 3 rows in database (0.019154s)

taos>
......
@@ -2,12 +2,11 @@ Go to the `C:\TDengine` directory from `cmd` and execute TDengine CLI program `t...

```text
taos> show databases;
              name              |
=================================
 information_schema             |
 performance_schema             |
 test                           |
Query OK, 3 rows in database (0.123000s)

taos>
......
@@ -196,7 +196,8 @@ Support InfluxDB query parameters as follows.
- `u` TDengine user name
- `p` TDengine password

Note: InfluxDB token authorization is not supported at present. Only Basic authorization and query parameter validation are supported.

Example:
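The example added by this change, shown as a runnable command (it writes one InfluxDB line-protocol record through taosAdapter using the documented default credentials):

```bash
# Write one record into database `test` via the InfluxDB-compatible endpoint.
curl --request POST "http://127.0.0.1:6041/influxdb/v1/write?db=test" \
  --user "root:taosdata" \
  --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
```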
### OpenTSDB
......
@@ -25,10 +25,11 @@ The TDengine client taos can be executed in this container to access TDengine us...
$ docker exec -it tdengine taos

taos> show databases;
              name              |
=================================
 information_schema             |
 performance_schema             |
Query OK, 2 row(s) in set (0.002843s)
```

The TDengine server running in the container uses the container's hostname to establish a connection. Using TDengine CLI or various connectors (such as JDBC-JNI) to access the TDengine inside the container from outside the container is more complicated. So the above is the simplest way to access the TDengine service in the container and is suitable for some simple scenarios. Please refer to the next section if you want to access the TDengine service in the container from outside the container using TDengine CLI or various connectors for complex scenarios.
......
@@ -51,5 +51,6 @@ port: 8125

Start StatsD after adding the following (assuming the config file is modified to config.js)

```
npm install
node stats.js config.js &
```
@@ -22,5 +22,4 @@ An example is as follows.
   username = "root"
   password = "taosdata"
   data_format = "influx"
```
@@ -30,21 +30,20 @@ After restarting Prometheus, you can refer to the following example to verify th...
```
taos> show databases;
              name              |
=================================
 information_schema             |
 performance_schema             |
 prometheus_data                |
Query OK, 3 row(s) in set (0.000585s)

taos> use prometheus_data;
Database changed.

taos> show stables;
              name              |
=================================
 metrics                        |
Query OK, 1 row(s) in set (0.000487s)

taos> select * from metrics limit 10;
@@ -89,3 +88,7 @@ VALUE TIMESTAMP
```

:::note

- TDengine automatically creates unique IDs (sub-table names) according to its naming rules.

:::
@@ -15,6 +15,7 @@ To write Telegraf data to TDengine requires the following preparations.
- The TDengine cluster is deployed and functioning properly
- taosAdapter is installed and running properly. Please refer to the [taosAdapter manual](/reference/taosadapter) for details.
- Telegraf has been installed. Please refer to the [official documentation](https://docs.influxdata.com/telegraf/v1.22/install/) for Telegraf installation.
- By default, Telegraf collects measurements of the running status of the current system. You can enable additional [input plugins](https://docs.influxdata.com/telegraf/v1.22/plugins/) to feed data in [other formats](https://docs.influxdata.com/telegraf/v1.24/data_formats/input/) into Telegraf and forward it to TDengine, as sketched below.
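A hedged sketch of enabling extra input plugins (the plugin names are examples only; `telegraf --sample-config` and the filter flags are standard Telegraf options, not part of this change):

```bash
# Regenerate a Telegraf config with a few example input plugins enabled and the
# http output (used to forward data to taosAdapter), then restart the service.
telegraf --sample-config \
  --input-filter cpu:mem:disk:net \
  --output-filter http > /etc/telegraf/telegraf.conf
sudo systemctl restart telegraf
```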
## Configuration steps

<Telegraf />
@@ -31,26 +32,27 @@ Use TDengine CLI to verify Telegraf correctly writing data to TDengine and read...
```
taos> show databases;
              name              |
=================================
 information_schema             |
 performance_schema             |
 telegraf                       |
Query OK, 3 rows in database (0.010568s)

taos> use telegraf;
Database changed.

taos> show stables;
              name              |
=================================
 swap                           |
 cpu                            |
 system                         |
 diskio                         |
 kernel                         |
 mem                            |
 processes                      |
 disk                           |
Query OK, 8 row(s) in set (0.000521s)

taos> select * from telegraf.system limit 10;
@@ -65,3 +67,11 @@ taos> select * from telegraf.system limit 10;
                             |
Query OK, 3 row(s) in set (0.013269s)
```
:::note
- TDengine takes InfluxDB-format data and automatically creates unique IDs (sub-table names) according to its naming rules.
You can set the `smlChildTableName` parameter to generate specific table names if needed; the inserted data must then carry the corresponding field.
For example, add `smlChildTableName=tname` to the taos.cfg file and insert the line `st,tname=cpu1,t1=4 c1=3 1626006833639000000`; the sub-table will be named cpu1. If multiple lines have the same tname but different tag_sets, the tag_set of the first line is used to create the table automatically and the tag_sets of the other lines are ignored. See [TDengine Schemaless](/reference/schemaless/#Schemaless-Line-Protocol); a hedged example follows this note.
:::
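A minimal sketch of the behavior described above, assuming taosAdapter is listening on the default port 6041, `smlChildTableName=tname` has been added to taos.cfg, and the `telegraf` database from this walkthrough is reused:

```bash
# Write one InfluxDB line-protocol record through taosAdapter.
# With smlChildTableName=tname in taos.cfg, the resulting sub-table is named cpu1.
curl --request POST "http://127.0.0.1:6041/influxdb/v1/write?db=telegraf" \
  --user "root:taosdata" \
  --data-binary "st,tname=cpu1,t1=4 c1=3 1626006833639000000"
```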
@@ -32,28 +32,29 @@ Use the TDengine CLI to verify that collectd's data is written to TDengine and c...
```
taos> show databases;
              name              |
=================================
 information_schema             |
 performance_schema             |
 collectd                       |
Query OK, 3 row(s) in set (0.003266s)

taos> use collectd;
Database changed.

taos> show stables;
              name              |
=================================
 load_1                         |
 memory_value                   |
 df_value                       |
 load_2                         |
 load_0                         |
 interface_1                    |
 irq_value                      |
 interface_0                    |
 entropy_value                  |
 swap_value                     |
Query OK, 10 row(s) in set (0.002236s)
taos> select * from collectd.memory_value limit 10;

@@ -72,3 +73,7 @@ taos> select * from collectd.memory_value limit 10;
Query OK, 10 row(s) in set (0.010348s)
```
:::note
- TDengine automatically creates unique IDs (sub-table names) according to its naming rules.
:::
@@ -26,7 +26,7 @@ Start StatsD:
```
$ node stats.js config.js &
[1] 8546
$ 20 Apr 09:54:41 - [8546] reading config file: config.js
20 Apr 09:54:41 - server is up INFO
```
@@ -40,19 +40,20 @@ Use the TDengine CLI to verify that StatsD data is written to TDengine and can r...
```
taos> show databases;
              name              |
=================================
 information_schema             |
 performance_schema             |
 statsd                         |
Query OK, 3 row(s) in set (0.003142s)

taos> use statsd;
Database changed.

taos> show stables;
              name              |
=================================
 foo                            |
Query OK, 1 row(s) in set (0.002161s)
taos> select * from foo;

@@ -63,3 +64,8 @@ Query OK, 1 row(s) in set (0.004179s)

taos>
```
:::note
- TDengine automatically creates unique IDs (sub-table names) according to its naming rules.
:::
@@ -36,39 +36,45 @@ After waiting about 10 seconds, use the TDengine CLI to query TDengine to verify...
```
taos> show databases;
              name              |
=================================
 information_schema             |
 performance_schema             |
 icinga2                        |
Query OK, 3 row(s) in set (0.001867s)

taos> use icinga2;
Database changed.

taos> show stables;
              name              |
=================================
 icinga.service.users.state_... |
 icinga.service.users.acknow... |
 icinga.service.procs.downti... |
 icinga.service.users.users     |
 icinga.service.procs.procs_min |
 icinga.service.users.users_min |
 icinga.check.max_check_atte... |
 icinga.service.procs.state_... |
 icinga.service.procs.procs_... |
 icinga.service.users.users_... |
 icinga.check.latency           |
 icinga.service.procs.procs_... |
 icinga.service.users.downti... |
 icinga.service.users.users_... |
 icinga.service.users.reachable |
 icinga.service.procs.procs     |
 icinga.service.procs.acknow... |
 icinga.service.procs.state     |
 icinga.service.procs.reachable |
 icinga.check.current_attempt   |
 icinga.check.execution_time    |
 icinga.service.users.state     |
Query OK, 22 row(s) in set (0.002317s)
```
:::note
- TDengine automatically creates unique IDs (sub-table names) according to its naming rules.
:::
@@ -33,35 +33,41 @@ Wait for a few seconds and then use the TDengine CLI to query whether the corres...
```
taos> show databases;
              name              |
=================================
 information_schema             |
 performance_schema             |
 tcollector                     |
Query OK, 3 rows in database (0.001647s)

taos> use tcollector;
Database changed.

taos> show stables;
              name              |
=================================
 proc.meminfo.hugepages_rsvd    |
 proc.meminfo.directmap1g       |
 proc.meminfo.vmallocchunk      |
 proc.meminfo.hugepagesize      |
 tcollector.reader.lines_dro... |
 proc.meminfo.sunreclaim        |
 proc.stat.ctxt                 |
 proc.meminfo.swaptotal         |
 proc.uptime.total              |
 tcollector.collector.lines_... |
 proc.meminfo.vmallocused       |
 proc.meminfo.memavailable      |
 sys.numa.foreign_allocs        |
 proc.meminfo.committed_as      |
 proc.vmstat.pswpin             |
 proc.meminfo.cmafree           |
 proc.meminfo.mapped            |
 proc.vmstat.pgmajfault         |
 ...
```
:::note
- TDengine automatically creates unique IDs (sub-table names) according to its naming rules.
:::
@@ -60,7 +60,6 @@ For the configuration method, add the following text to `/etc/telegraf/telegraf....
   username = "<TDengine's username>"
   password = "<TDengine's password>"
   data_format = "influx"
```

Then restart telegraf:
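The exact restart command is truncated in this diff; as a hedged sketch for a systemd-based host it would typically be:

```bash
# Assumes Telegraf runs as a systemd service; adjust for your init system.
sudo systemctl restart telegraf
```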
......
@@ -6,6 +6,10 @@ description: TDengine release history, Release Notes and download links.
import Release from "/components/ReleaseV3";
## 3.0.1.4
<Release type="tdengine" version="3.0.1.4" />
## 3.0.1.3

<Release type="tdengine" version="3.0.1.3" />
......
@@ -6,6 +6,10 @@ description: taosTools release history, Release Notes, download links.
import Release from "/components/ReleaseV3";
## 2.2.4
<Release type="tools" version="2.2.4" />
## 2.2.3

<Release type="tools" version="2.2.3" />
......
@@ -10,11 +10,11 @@ import PkgListV3 from "/components/PkgListV3";

You can [try TDengine immediately with Docker](../../get-started/docker/). If you want to contribute code to TDengine or are interested in its internals, please refer to our [TDengine GitHub repository](https://github.com/taosdata/TDengine) to download the source code, build, and install it.

The full TDengine package includes the server (taosd), the application driver (taosc), taosAdapter (which integrates with third-party systems and provides the RESTful interface), the command-line program (CLI, taos), and several tools. Currently taosdump and TDinsight can only be installed and run on Linux; support for Windows, macOS, and other systems will follow. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](../../connector/rest-api/) through [taosAdapter](../../reference/taosadapter/).

For convenience, the standard server installation package includes taosd, taosAdapter, taosc, taos, taosdump, taosBenchmark, the TDinsight installation script, and sample code; if you only need the server program and C/C++ client support, you can also download just the Lite package.

On Linux, the TDengine Community Edition provides Deb and RPM installation packages; choose the one that fits your environment. Deb supports Debian/Ubuntu and derivatives, and RPM supports CentOS/RHEL/SUSE and derivatives. We also provide a tar.gz package for enterprise users, and online installation via `apt-get` is supported. Note that the RPM and Deb packages do not include `taosdump` and the TDinsight installation script; these tools are obtained by installing the taosTools package. TDengine also provides installation packages for Windows x64 and macOS x64/m1.
## Installation

@@ -110,6 +110,13 @@ The apt-get method only applies to Debian or Ubuntu systems.
<PkgListV3 type={3}/>
2. Run the executable to install TDengine.

</TabItem>
<TabItem label="macOS installation" value="macos">

1. Download the pkg installer from the list;
   <PkgListV3 type={7}/>
2. Run the executable to install TDengine.

</TabItem>
</Tabs>
@@ -177,12 +184,33 @@ Active: inactive (dead)

After installation, run `taosd.exe` in the `C:\TDengine` directory to start the TDengine server process.
</TabItem>
<TabItem label="macOS" value="macos">

After installation, double-click the TDengine icon in the Applications folder to start the program, or run `launchctl start taosd` to start the TDengine server process.

The following `launchctl` commands can help you manage the TDengine service:

- Start the service: `launchctl start taosd`
- Stop the service: `launchctl stop taosd`
- Check the service status: `launchctl list | grep taosd`

:::info

- The `launchctl` command does not require administrator privileges; do not prefix it with `sudo`.
- The first column returned by `launchctl list | grep taosd` is the program's PID; if it is `-`, the TDengine service is not running.
:::
</TabItem>
</Tabs>

## TDengine Command Line Interface (CLI)

To make it easy to check TDengine's status and run ad hoc queries against databases, TDengine provides a command-line application, the TDengine CLI (taos). To enter the TDengine command line, simply run `taos` in a Linux or macOS terminal where TDengine is installed, or run taos.exe in the C:\TDengine directory of a Windows terminal where TDengine is installed.
```bash
taos
@@ -212,13 +240,13 @@ SELECT * FROM t;
Query OK, 2 row(s) in set (0.003128s)
```

Besides executing SQL statements, system administrators can use the TDengine CLI to check system status and add or remove user accounts. The TDengine CLI together with the application driver can also be installed and run independently on any machine; see [TDengine Command Line](../../reference/taos-shell/) for details.
## Experience insertion speed with taosBenchmark

You can use TDengine's built-in tool taosBenchmark to quickly experience TDengine's write speed.

Start the TDengine service, then run `taosBenchmark` (formerly named `taosdemo`) in a terminal:

```bash
$ taosBenchmark
@@ -249,7 +277,7 @@ SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;

Query the total number of records where location = "California.SanFrancisco":

```sql
SELECT COUNT(*) FROM test.meters WHERE location = "California.SanFrancisco";
```

Query the average, maximum, and minimum values of all records where groupId = 10:
......
@@ -67,6 +67,10 @@ meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0...

</TabItem>
</Tabs>

## SQL query example

- `meters` is the name of the super table that the data was inserted into.
- You can filter data by the super table's tags. For example, to query the data where `location=California.LosAngeles,groupid=2`, use the following SQL:

```sql
select * from meters where location="California.LosAngeles" and groupid=2
```
@@ -66,10 +66,10 @@ taos> use test;
Database changed.

taos> show stables;
              name              |
=================================
 meters.current                 |
 meters.voltage                 |
Query OK, 2 row(s) in set (0.002544s)

taos> select tbname, * from `meters.current`;
...@@ -81,6 +81,10 @@ taos> select tbname, * from `meters.current`; ...@@ -81,6 +81,10 @@ taos> select tbname, * from `meters.current`;
t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.250 | 12.600000000 | 2 | California.SanFrancisco | t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.250 | 12.600000000 | 2 | California.SanFrancisco |
Query OK, 4 row(s) in set (0.005399s) Query OK, 4 row(s) in set (0.005399s)
``` ```
## 查询示例:
想要查询 location=California.LosAngeles groupid=3 的数据,可以通过如下sql: ## SQL查询示例
select * from `meters.voltage` where location="California.LosAngeles" and groupid=3 - `meters.current` 是插入数据的超级表名
- 可以通过超级表的 tag 来过滤数据,比如要查询 `location=California.LosAngeles groupid=3` 的数据,可以使用如下 SQL:
```sql
select * from `meters.current` where location="California.LosAngeles" and groupid=3
```
...@@ -82,10 +82,10 @@ taos> use test; ...@@ -82,10 +82,10 @@ taos> use test;
Database changed. Database changed.
taos> show stables; taos> show stables;
name | created_time | columns | tags | tables | name |
============================================================================================ =================================
meters.current | 2022-03-29 16:05:25.193 | 2 | 2 | 1 | meters.current |
meters.voltage | 2022-03-29 16:05:25.200 | 2 | 2 | 1 | meters.voltage |
Query OK, 2 row(s) in set (0.001954s) Query OK, 2 row(s) in set (0.001954s)
taos> select * from `meters.current`; taos> select * from `meters.current`;
...@@ -96,6 +96,9 @@ taos> select * from `meters.current`; ...@@ -96,6 +96,9 @@ taos> select * from `meters.current`;
Query OK, 2 row(s) in set (0.004076s) Query OK, 2 row(s) in set (0.004076s)
``` ```
## 查询示例 ## SQL查询示例
想要查询"tags": {"location": "California.LosAngeles", "groupid": 1} 的数据,可以通过如下sql: - `meters.voltage` 是插入数据的超级表名
- 可以通过超级表的 tag 来过滤数据,比如要查询 `location=California.LosAngeles groupid=1` 的数据,可以使用如下 SQL:
```sql
select * from `meters.voltage` where location="California.LosAngeles" and groupid=1 select * from `meters.voltage` where location="California.LosAngeles" and groupid=1
```
...@@ -70,7 +70,7 @@ insert into d1004 values("2018-10-03 14:38:06.500", 11.50000, 221, 0.35000); ...@@ -70,7 +70,7 @@ insert into d1004 values("2018-10-03 14:38:06.500", 11.50000, 221, 0.35000);
### 查询以观察结果 ### 查询以观察结果
```sql ```sql
taos> select start, end, max_current from current_stream_output_stb; taos> select start, wend, max_current from current_stream_output_stb;
start | wend | max_current | start | wend | max_current |
=========================================================================== ===========================================================================
2018-10-03 14:38:05.000 | 2018-10-03 14:38:10.000 | 10.30000 | 2018-10-03 14:38:05.000 | 2018-10-03 14:38:10.000 | 10.30000 |
......
...@@ -74,7 +74,7 @@ http://<fqdn>:<port>/rest/sql/[db_name] ...@@ -74,7 +74,7 @@ http://<fqdn>:<port>/rest/sql/[db_name]
参数说明: 参数说明:
- fqnd: 集群中的任一台主机 FQDN 或 IP 地址。 - fqdn: 集群中的任一台主机 FQDN 或 IP 地址。
- port: 配置文件中 httpPort 配置项,缺省为 6041。 - port: 配置文件中 httpPort 配置项,缺省为 6041。
- db_name: 可选参数,指定本次所执行的 SQL 语句的默认数据库库名。 - db_name: 可选参数,指定本次所执行的 SQL 语句的默认数据库库名。
......
...@@ -109,7 +109,7 @@ TDengine 的 JDBC URL 规范格式为: ...@@ -109,7 +109,7 @@ TDengine 的 JDBC URL 规范格式为:
对于建立连接,原生连接与 REST 连接有细微不同。 对于建立连接,原生连接与 REST 连接有细微不同。
<Tabs defaultValue="native"> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接"> <TabItem value="native" label="原生连接">
```java ```java
......
...@@ -114,7 +114,7 @@ username:password@protocol(address)/dbname?param=value ...@@ -114,7 +114,7 @@ username:password@protocol(address)/dbname?param=value
``` ```
### 使用连接器进行连接 ### 使用连接器进行连接
<Tabs defaultValue="native"> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接"> <TabItem value="native" label="原生连接">
_taosSql_ 通过 cgo 实现了 Go `database/sql/driver` 接口。只需要引入驱动就可以使用 [`database/sql`](https://golang.org/pkg/database/sql/) 的接口。 _taosSql_ 通过 cgo 实现了 Go `database/sql/driver` 接口。只需要引入驱动就可以使用 [`database/sql`](https://golang.org/pkg/database/sql/) 的接口。
......
...@@ -55,23 +55,24 @@ taos = "*" ...@@ -55,23 +55,24 @@ taos = "*"
</TabItem> </TabItem>
<TabItem value="native" label="仅原生连接"> <TabItem value="rest" label="仅 Websocket">
在 `Cargo.toml` 文件中添加 [taos][taos],并启用 `native` 特性: 在 `Cargo.toml` 文件中添加 [taos][taos],并启用 `ws` 特性。
```toml ```toml
[dependencies] [dependencies]
taos = { version = "*", default-features = false, features = ["native"] } taos = { version = "*", default-features = false, features = ["ws"] }
``` ```
</TabItem> </TabItem>
<TabItem value="rest" label="仅 Websocket">
在 `Cargo.toml` 文件中添加 [taos][taos],并启用 `ws` 特性。 <TabItem value="native" label="仅原生连接">
在 `Cargo.toml` 文件中添加 [taos][taos],并启用 `native` 特性:
```toml ```toml
[dependencies] [dependencies]
taos = { version = "*", default-features = false, features = ["ws"] } taos = { version = "*", default-features = false, features = ["native"] }
``` ```
</TabItem> </TabItem>
......
...@@ -80,7 +80,7 @@ pip3 install git+https://github.com/taosdata/taos-connector-python.git ...@@ -80,7 +80,7 @@ pip3 install git+https://github.com/taosdata/taos-connector-python.git
### 安装验证 ### 安装验证
<Tabs groupId="connect" default="native"> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接"> <TabItem value="native" label="原生连接">
对于原生连接,需要验证客户端驱动和 Python 连接器本身是否都正确安装。如果能成功导入 `taos` 模块,则说明已经正确安装了客户端驱动和 Python 连接器。可在 Python 交互式 Shell 中输入: 对于原生连接,需要验证客户端驱动和 Python 连接器本身是否都正确安装。如果能成功导入 `taos` 模块,则说明已经正确安装了客户端驱动和 Python 连接器。可在 Python 交互式 Shell 中输入:
...@@ -118,7 +118,7 @@ Requirement already satisfied: taospy in c:\users\username\appdata\local\program ...@@ -118,7 +118,7 @@ Requirement already satisfied: taospy in c:\users\username\appdata\local\program
在用连接器建立连接之前,建议先测试本地 TDengine CLI 到 TDengine 集群的连通性。 在用连接器建立连接之前,建议先测试本地 TDengine CLI 到 TDengine 集群的连通性。
<Tabs> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接"> <TabItem value="native" label="原生连接">
请确保 TDengine 集群已经启动, 且集群中机器的 FQDN (如果启动的是单机版,FQDN 默认为 hostname)在本机能够解析, 可用 `ping` 命令进行测试: 请确保 TDengine 集群已经启动, 且集群中机器的 FQDN (如果启动的是单机版,FQDN 默认为 hostname)在本机能够解析, 可用 `ping` 命令进行测试:
...@@ -173,7 +173,7 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()" ...@@ -173,7 +173,7 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
以下示例代码假设 TDengine 安装在本机, 且 FQDN 和 serverPort 都使用了默认配置。 以下示例代码假设 TDengine 安装在本机, 且 FQDN 和 serverPort 都使用了默认配置。
<Tabs> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接" groupId="connect"> <TabItem value="native" label="原生连接" groupId="connect">
```python ```python
...@@ -219,7 +219,7 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()" ...@@ -219,7 +219,7 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
### 基本使用 ### 基本使用
<Tabs default="native" groupId="connect"> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接"> <TabItem value="native" label="原生连接">
##### TaosConnection 类的使用 ##### TaosConnection 类的使用
...@@ -289,7 +289,7 @@ TaosCursor 类使用原生连接进行写入、查询操作。在客户端多线 ...@@ -289,7 +289,7 @@ TaosCursor 类使用原生连接进行写入、查询操作。在客户端多线
### 与 pandas 一起使用 ### 与 pandas 一起使用
<Tabs default="native" groupId="connect"> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接"> <TabItem value="native" label="原生连接">
```python ```python
......
...@@ -85,7 +85,7 @@ REST 连接器支持所有能运行 Node.js 的平台。 ...@@ -85,7 +85,7 @@ REST 连接器支持所有能运行 Node.js 的平台。
### 使用 npm 安装 ### 使用 npm 安装
<Tabs defaultValue="install_native"> <Tabs defaultValue="install_rest">
<TabItem value="install_native" label="安装原生连接器"> <TabItem value="install_native" label="安装原生连接器">
```bash ```bash
...@@ -124,7 +124,7 @@ node nodejsChecker.js host=localhost ...@@ -124,7 +124,7 @@ node nodejsChecker.js host=localhost
请选择使用一种连接器。 请选择使用一种连接器。
<Tabs defaultValue="native"> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接"> <TabItem value="native" label="原生连接">
安装并引用 `@tdengine/client` 包。 安装并引用 `@tdengine/client` 包。
......
...@@ -35,7 +35,7 @@ import CSAsyncQuery from "../07-develop/04-query-data/_cs_async.mdx" ...@@ -35,7 +35,7 @@ import CSAsyncQuery from "../07-develop/04-query-data/_cs_async.mdx"
## 支持的功能特性 ## 支持的功能特性
<Tabs defaultValue="native"> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接"> <TabItem value="native" label="原生连接">
...@@ -96,7 +96,7 @@ dotnet add exmaple.csproj reference src/TDengine.csproj ...@@ -96,7 +96,7 @@ dotnet add exmaple.csproj reference src/TDengine.csproj
## 建立连接 ## 建立连接
<Tabs defaultValue="native"> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接"> <TabItem value="native" label="原生连接">
...@@ -171,7 +171,7 @@ namespace TDengineExample ...@@ -171,7 +171,7 @@ namespace TDengineExample
#### SQL 写入 #### SQL 写入
<Tabs defaultValue="native"> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接"> <TabItem value="native" label="原生连接">
...@@ -203,7 +203,7 @@ namespace TDengineExample ...@@ -203,7 +203,7 @@ namespace TDengineExample
#### 参数绑定 #### 参数绑定
<Tabs defaultValue="native"> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接"> <TabItem value="native" label="原生连接">
...@@ -227,7 +227,7 @@ namespace TDengineExample ...@@ -227,7 +227,7 @@ namespace TDengineExample
#### 同步查询 #### 同步查询
<Tabs defaultValue="native"> <Tabs defaultValue="rest">
<TabItem value="native" label="原生连接"> <TabItem value="native" label="原生连接">
......
...@@ -4,11 +4,11 @@ ...@@ -4,11 +4,11 @@
$ taos $ taos
taos> show databases; taos> show databases;
name | create_time | vgroups | ntables | replica | strict | duration | keep | buffer | pagesize | pages | minrows | maxrows | comp | precision | status | retention | single_stable | cachemodel | cachesize | wal_level | wal_fsync_period | wal_retention_period | wal_retention_size | wal_roll_period | wal_seg_size | name |
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================= =================================
information_schema | NULL | NULL | 14 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ready | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | information_schema |
performance_schema | NULL | NULL | 3 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ready | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | performance_schema |
db | 2022-08-04 14:14:49.385 | 2 | 4 | 1 | off | 14400m | 5254560m,5254560m,5254560m | 96 | 4 | 256 | 100 | 4096 | 2 | ms | ready | NULL | false | none | 1 | 1 | 3000 | 0 | 0 | 0 | 0 | db |
Query OK, 3 rows in database (0.019154s) Query OK, 3 rows in database (0.019154s)
taos> taos>
......
...@@ -12,7 +12,7 @@ SELECT {DATABASE() | CLIENT_VERSION() | SERVER_VERSION() | SERVER_STATUS() | NOW ...@@ -12,7 +12,7 @@ SELECT {DATABASE() | CLIENT_VERSION() | SERVER_VERSION() | SERVER_STATUS() | NOW
SELECT [DISTINCT] select_list SELECT [DISTINCT] select_list
from_clause from_clause
[WHERE condition] [WHERE condition]
[PARTITION BY tag_list] [partition_by_clause]
[window_clause] [window_clause]
[group_by_clause] [group_by_clause]
[order_by_clause] [order_by_clause]
...@@ -53,6 +53,9 @@ window_clause: { ...@@ -53,6 +53,9 @@ window_clause: {
| STATE_WINDOW(col) | STATE_WINDOW(col)
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)] | INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
partition_by_clause:
PARTITION BY expr [, expr] ...
group_by_clause: group_by_clause:
GROUP BY expr [, expr] ... HAVING condition GROUP BY expr [, expr] ... HAVING condition
......
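下面给出一个按上述语法组合 partition_by_clause 与 window_clause 的示意查询(假设存在文档智能电表场景中的超级表 meters,包含 location 标签和 current 列):

```sql
SELECT _wstart, location, AVG(current)
FROM meters
WHERE ts >= NOW - 1h
PARTITION BY location
INTERVAL(10m);
```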
...@@ -4,9 +4,9 @@ title: 特色查询 ...@@ -4,9 +4,9 @@ title: 特色查询
description: TDengine 提供的时序数据特有的查询功能 description: TDengine 提供的时序数据特有的查询功能
--- ---
TDengine 是专为时序数据而研发的大数据平台,存储和计算都针对时序数据的特定进行了量身定制,在支持标准 SQL 的基础之上,还提供了一系列贴合时序业务场景的特色查询语法,极大的方便时序场景的应用开发 TDengine 在支持标准 SQL 的基础之上,还提供了一系列满足时序业务场景需求的特色查询语法,这些语法能够为时序场景的应用的开发带来极大的便利
TDengine 提供的特色查询包括数据切分查询和窗口切分查询。 TDengine 提供的特色查询包括数据切分查询和时间窗口切分查询。
## 数据切分查询 ## 数据切分查询
...@@ -31,7 +31,7 @@ select max(current) from meters partition by location interval(10m) ...@@ -31,7 +31,7 @@ select max(current) from meters partition by location interval(10m)
## 窗口切分查询 ## 窗口切分查询
TDengine 支持按时间窗口切分方式进行聚合结果查询,比如温度传感器每秒采集一次数据,但需查询每隔 10 分钟的温度平均值。这种场景下可以使用窗口子句来获得需要的查询结果。窗口子句用于针对查询的数据集合按照窗口切分成为查询子集并进行聚合,窗口包含时间窗口(time window)、状态窗口(status window)、会话窗口(session window)三种窗口。其中时间窗口又可划分为滑动时间窗口和翻转时间窗口。窗口切分查询语法如下: TDengine 支持按时间窗口切分方式进行聚合结果查询,比如温度传感器每秒采集一次数据,但需查询每隔 10 分钟的温度平均值。这种场景下可以使用窗口子句来获得需要的查询结果。窗口子句用于针对查询的数据集合按照窗口切分成为查询子集并进行聚合,窗口包含时间窗口(time window)、状态窗口(status window)、会话窗口(session window)三种窗口。其中时间窗口又可划分为滑动时间窗口和翻转时间窗口。窗口切分查询语法如下:
```sql ```sql
SELECT select_list FROM tb_name SELECT select_list FROM tb_name
...@@ -132,6 +132,10 @@ SELECT * FROM (SELECT COUNT(*) AS cnt, FIRST(ts) AS fst, status FROM temp_tb_1 S ...@@ -132,6 +132,10 @@ SELECT * FROM (SELECT COUNT(*) AS cnt, FIRST(ts) AS fst, status FROM temp_tb_1 S
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val); SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
``` ```
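上述语法中的 tol_val 表示会话窗口的时间间隔容忍值,下面给出一个取具体值的示意(沿用文档中的 temp_tb_1 示例表,容忍间隔取 12 秒):

```sql
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, 12s);
```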
### 时间戳伪列
窗口聚合查询结果中,如果 SQL 语句中没有指定输出查询结果中的时间戳列,那么最终结果中不会自动包含窗口的时间列信息。如果需要在结果中输出聚合结果所对应的时间窗口信息,需要在 SELECT 子句中使用时间戳相关的伪列: 时间窗口起始时间 (\_WSTART), 时间窗口结束时间 (\_WEND), 时间窗口持续时间 (\_WDURATION), 以及查询整体窗口相关的伪列: 查询窗口起始时间(\_QSTART) 和查询窗口结束时间(\_QEND)。需要注意的是时间窗口起始时间和结束时间均是闭区间,时间窗口持续时间是数据当前时间分辨率下的数值。例如,如果当前数据库的时间分辨率是毫秒,那么结果中 500 就表示当前时间窗口的持续时间是 500毫秒 (500 ms)。
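下面是一个使用时间戳伪列的简单示意(沿用下文智能电表超级表 meters 的定义):

```sql
-- 输出每个时间窗口的起始时间、结束时间、持续时间以及窗口内的记录条数
SELECT _wstart, _wend, _wduration, COUNT(*)
FROM meters
WHERE ts >= NOW - 1d
INTERVAL(10m);
```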
### 示例 ### 示例
智能电表的建表语句如下: 智能电表的建表语句如下:
...@@ -143,8 +147,10 @@ CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS ...@@ -143,8 +147,10 @@ CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS
针对智能电表采集的数据,以 10 分钟为一个阶段,计算过去 24 小时的电流数据的平均值、最大值、电流的中位数。如果没有计算值,用前一个非 NULL 值填充。使用的查询语句如下: 针对智能电表采集的数据,以 10 分钟为一个阶段,计算过去 24 小时的电流数据的平均值、最大值、电流的中位数。如果没有计算值,用前一个非 NULL 值填充。使用的查询语句如下:
``` ```
SELECT AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters SELECT _WSTART, _WEND, AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters
WHERE ts>=NOW-1d and ts<=now WHERE ts>=NOW-1d and ts<=now
INTERVAL(10m) INTERVAL(10m)
FILL(PREV); FILL(PREV);
``` ```
...@@ -189,7 +189,7 @@ AllowWebSockets ...@@ -189,7 +189,7 @@ AllowWebSockets
/influxdb/v1/write /influxdb/v1/write
``` ```
支持 InfluxDB 查询参数如下: 支持 InfluxDB 参数如下:
- `db` 指定 TDengine 使用的数据库名 - `db` 指定 TDengine 使用的数据库名
- `precision` TDengine 使用的时间精度 - `precision` TDengine 使用的时间精度
...@@ -197,7 +197,7 @@ AllowWebSockets ...@@ -197,7 +197,7 @@ AllowWebSockets
- `p` TDengine 密码 - `p` TDengine 密码
注意: 目前不支持 InfluxDB 的 token 验证方式,仅支持 Basic 验证和查询参数验证。 注意: 目前不支持 InfluxDB 的 token 验证方式,仅支持 Basic 验证和查询参数验证。
示例: curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
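写入成功后,可以在 TDengine CLI 中确认数据已经落库。下面是一个示意(库名 test 与超级表名 measurement 均来自上面示例中的写入参数):

```sql
USE test;
SHOW STABLES;
SELECT * FROM measurement;
```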
### OpenTSDB ### OpenTSDB
您可以使用任何支持 http 协议的客户端访问 Restful 接口地址 `http://<fqdn>:6041/<APIEndPoint>` 来写入 OpenTSDB 兼容格式的数据到 TDengine。EndPoint 如下: 您可以使用任何支持 http 协议的客户端访问 Restful 接口地址 `http://<fqdn>:6041/<APIEndPoint>` 来写入 OpenTSDB 兼容格式的数据到 TDengine。EndPoint 如下:
......
...@@ -51,5 +51,6 @@ port: 8125 ...@@ -51,5 +51,6 @@ port: 8125
增加如下内容后启动 StatsD(假设配置文件修改为 config.js)。 增加如下内容后启动 StatsD(假设配置文件修改为 config.js)。
``` ```
npm install
node stats.js config.js & node stats.js config.js &
``` ```
...@@ -22,6 +22,5 @@ ...@@ -22,6 +22,5 @@
username = "root" username = "root"
password = "taosdata" password = "taosdata"
data_format = "influx" data_format = "influx"
influx_max_line_bytes = 250
``` ```
...@@ -29,21 +29,20 @@ Prometheus 提供了 `remote_write` 和 `remote_read` 接口来利用其它数 ...@@ -29,21 +29,20 @@ Prometheus 提供了 `remote_write` 和 `remote_read` 接口来利用其它数
### 使用 TDengine CLI 查询写入数据 ### 使用 TDengine CLI 查询写入数据
``` ```
taos> show databases; taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status | name |
==================================================================================================================================================================================================================================================================================== =================================
test | 2022-04-12 08:07:58.756 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready | information_schema |
log | 2022-04-20 07:19:50.260 | 2 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready | performance_schema |
prometheus_data | 2022-04-20 07:21:09.202 | 158 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready | prometheus_data |
db | 2022-04-15 06:37:08.512 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready | Query OK, 3 row(s) in set (0.000585s)
Query OK, 4 row(s) in set (0.000585s)
taos> use prometheus_data; taos> use prometheus_data;
Database changed. Database changed.
taos> show stables; taos> show stables;
name | created_time | columns | tags | tables | name |
============================================================================================ =================================
metrics | 2022-04-20 07:21:09.209 | 2 | 1 | 1389 | metrics |
Query OK, 1 row(s) in set (0.000487s) Query OK, 1 row(s) in set (0.000487s)
taos> select * from metrics limit 10; taos> select * from metrics limit 10;
...@@ -88,3 +87,7 @@ VALUE TIMESTAMP ...@@ -88,3 +87,7 @@ VALUE TIMESTAMP
``` ```
:::note
- TDengine 默认生成的子表名是根据规则生成的唯一 ID 值。
:::
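如需查看这些自动生成的子表名,可以借助 tbname 伪列查询,例如(示意):

```sql
SELECT tbname, * FROM prometheus_data.metrics LIMIT 5;
```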
...@@ -16,6 +16,7 @@ Telegraf 是一款十分流行的指标采集开源软件。在数据采集和 ...@@ -16,6 +16,7 @@ Telegraf 是一款十分流行的指标采集开源软件。在数据采集和
- TDengine 集群已经部署并正常运行 - TDengine 集群已经部署并正常运行
- taosAdapter 已经安装并正常运行。具体细节请参考 [taosAdapter 的使用手册](/reference/taosadapter) - taosAdapter 已经安装并正常运行。具体细节请参考 [taosAdapter 的使用手册](/reference/taosadapter)
- Telegraf 已经安装。安装 Telegraf 请参考[官方文档](https://docs.influxdata.com/telegraf/v1.22/install/) - Telegraf 已经安装。安装 Telegraf 请参考[官方文档](https://docs.influxdata.com/telegraf/v1.22/install/)
- Telegraf 默认采集系统运行状态数据。通过启用[输入插件](https://docs.influxdata.com/telegraf/v1.22/plugins/),可以将[其他格式](https://docs.influxdata.com/telegraf/v1.24/data_formats/input/)的数据采集到 Telegraf 中,再写入 TDengine。
## 配置步骤 ## 配置步骤
<Telegraf /> <Telegraf />
...@@ -32,26 +33,27 @@ sudo systemctl restart telegraf ...@@ -32,26 +33,27 @@ sudo systemctl restart telegraf
``` ```
taos> show databases; taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status | name |
==================================================================================================================================================================================================================================================================================== =================================
telegraf | 2022-04-20 08:47:53.488 | 22 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready | information_schema |
log | 2022-04-20 07:19:50.260 | 9 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready | performance_schema |
Query OK, 2 row(s) in set (0.002401s) telegraf |
Query OK, 3 rows in database (0.010568s)
taos> use telegraf; taos> use telegraf;
Database changed. Database changed.
taos> show stables; taos> show stables;
name | created_time | columns | tags | tables | name |
============================================================================================ =================================
swap | 2022-04-20 08:47:53.532 | 7 | 1 | 1 | swap |
cpu | 2022-04-20 08:48:03.488 | 11 | 2 | 5 | cpu |
system | 2022-04-20 08:47:53.512 | 8 | 1 | 1 | system |
diskio | 2022-04-20 08:47:53.550 | 12 | 2 | 15 | diskio |
kernel | 2022-04-20 08:47:53.503 | 6 | 1 | 1 | kernel |
mem | 2022-04-20 08:47:53.521 | 35 | 1 | 1 | mem |
processes | 2022-04-20 08:47:53.555 | 12 | 1 | 1 | processes |
disk | 2022-04-20 08:47:53.541 | 8 | 5 | 2 | disk |
Query OK, 8 row(s) in set (0.000521s) Query OK, 8 row(s) in set (0.000521s)
taos> select * from telegraf.system limit 10; taos> select * from telegraf.system limit 10;
...@@ -66,3 +68,11 @@ taos> select * from telegraf.system limit 10; ...@@ -66,3 +68,11 @@ taos> select * from telegraf.system limit 10;
| |
Query OK, 3 row(s) in set (0.013269s) Query OK, 3 row(s) in set (0.013269s)
``` ```
:::note
- TDengine 接收 influxdb 格式数据默认生成的子表名是根据规则生成的唯一 ID 值。
用户如需指定生成的表名,可以在 taos.cfg 里配置 smlChildTableName 参数;只要能够控制输入数据的格式,即可利用 TDengine 的这一功能指定生成的表名。
举例如下:配置 smlChildTableName=tname,插入数据为 st,tname=cpu1,t1=4 c1=3 1626006833639000000,则创建的表名为 cpu1。如果多行数据的 tname 相同,但后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他行的 tag_set 会被忽略。详见 [TDengine 无模式写入参考指南](/reference/schemaless/#无模式写入行协议)。
:::
...@@ -32,28 +32,29 @@ sudo systemctl restart collectd ...@@ -32,28 +32,29 @@ sudo systemctl restart collectd
``` ```
taos> show databases; taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status | name |
==================================================================================================================================================================================================================================================================================== =================================
collectd | 2022-04-20 09:27:45.460 | 95 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready | information_schema |
log | 2022-04-20 07:19:50.260 | 11 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready | performance_schema |
Query OK, 2 row(s) in set (0.003266s) collectd |
Query OK, 3 row(s) in set (0.003266s)
taos> use collectd; taos> use collectd;
Database changed. Database changed.
taos> show stables; taos> show stables;
name | created_time | columns | tags | tables | name |
============================================================================================ =================================
load_1 | 2022-04-20 09:27:45.492 | 2 | 2 | 1 | load_1 |
memory_value | 2022-04-20 09:27:45.463 | 2 | 3 | 6 | memory_value |
df_value | 2022-04-20 09:27:45.463 | 2 | 4 | 25 | df_value |
load_2 | 2022-04-20 09:27:45.501 | 2 | 2 | 1 | load_2 |
load_0 | 2022-04-20 09:27:45.485 | 2 | 2 | 1 | load_0 |
interface_1 | 2022-04-20 09:27:45.488 | 2 | 3 | 12 | interface_1 |
irq_value | 2022-04-20 09:27:45.476 | 2 | 3 | 31 | irq_value |
interface_0 | 2022-04-20 09:27:45.480 | 2 | 3 | 12 | interface_0 |
entropy_value | 2022-04-20 09:27:45.473 | 2 | 2 | 1 | entropy_value |
swap_value | 2022-04-20 09:27:45.477 | 2 | 3 | 5 | swap_value |
Query OK, 10 row(s) in set (0.002236s) Query OK, 10 row(s) in set (0.002236s)
taos> select * from collectd.memory_value limit 10; taos> select * from collectd.memory_value limit 10;
...@@ -72,3 +73,7 @@ taos> select * from collectd.memory_value limit 10; ...@@ -72,3 +73,7 @@ taos> select * from collectd.memory_value limit 10;
Query OK, 10 row(s) in set (0.010348s) Query OK, 10 row(s) in set (0.010348s)
``` ```
:::note
- TDengine 默认生成的子表名是根据规则生成的唯一 ID 值。
:::
...@@ -27,7 +27,7 @@ StatsD 是汇总和总结应用指标的一个简单的守护进程,近些年 ...@@ -27,7 +27,7 @@ StatsD 是汇总和总结应用指标的一个简单的守护进程,近些年
``` ```
$ node stats.js config.js & $ node stats.js config.js &
[1] 8546 [1] 8546
$ 20 Apr 09:54:41 - [8546] reading config file: exampleConfig.js $ 20 Apr 09:54:41 - [8546] reading config file: config.js
20 Apr 09:54:41 - server is up INFO 20 Apr 09:54:41 - server is up INFO
``` ```
...@@ -41,19 +41,20 @@ $ echo "foo:1|c" | nc -u -w0 127.0.0.1 8125 ...@@ -41,19 +41,20 @@ $ echo "foo:1|c" | nc -u -w0 127.0.0.1 8125
``` ```
taos> show databases; taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status | name |
==================================================================================================================================================================================================================================================================================== =================================
log | 2022-04-20 07:19:50.260 | 11 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready | information_schema |
statsd | 2022-04-20 09:54:51.220 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready | performance_schema |
Query OK, 2 row(s) in set (0.003142s) statsd |
Query OK, 3 row(s) in set (0.003142s)
taos> use statsd; taos> use statsd;
Database changed. Database changed.
taos> show stables; taos> show stables;
name | created_time | columns | tags | tables | name |
============================================================================================ =================================
foo | 2022-04-20 09:54:51.234 | 2 | 1 | 1 | foo |
Query OK, 1 row(s) in set (0.002161s) Query OK, 1 row(s) in set (0.002161s)
taos> select * from foo; taos> select * from foo;
...@@ -64,3 +65,8 @@ Query OK, 1 row(s) in set (0.004179s) ...@@ -64,3 +65,8 @@ Query OK, 1 row(s) in set (0.004179s)
taos> taos>
``` ```
:::note
- TDengine 默认生成的子表名是根据规则生成的唯一 ID 值。
:::
...@@ -37,39 +37,46 @@ sudo systemctl restart icinga2 ...@@ -37,39 +37,46 @@ sudo systemctl restart icinga2
``` ```
taos> show databases; taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status | name |
==================================================================================================================================================================================================================================================================================== =================================
log | 2022-04-20 07:19:50.260 | 11 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready | information_schema |
icinga2 | 2022-04-20 12:11:39.697 | 13 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready | performance_schema |
Query OK, 2 row(s) in set (0.001867s) icinga2 |
Query OK, 3 row(s) in set (0.001867s)
taos> use icinga2; taos> use icinga2;
Database changed. Database changed.
taos> show stables; taos> show stables;
name | created_time | columns | tags | tables | name |
============================================================================================ =================================
icinga.service.users.state_... | 2022-04-20 12:11:39.726 | 2 | 1 | 1 | icinga.service.users.state_... |
icinga.service.users.acknow... | 2022-04-20 12:11:39.756 | 2 | 1 | 1 | icinga.service.users.acknow... |
icinga.service.procs.downti... | 2022-04-20 12:11:44.541 | 2 | 1 | 1 | icinga.service.procs.downti... |
icinga.service.users.users | 2022-04-20 12:11:39.770 | 2 | 1 | 1 | icinga.service.users.users |
icinga.service.procs.procs_min | 2022-04-20 12:11:44.599 | 2 | 1 | 1 | icinga.service.procs.procs_min |
icinga.service.users.users_min | 2022-04-20 12:11:39.809 | 2 | 1 | 1 | icinga.service.users.users_min |
icinga.check.max_check_atte... | 2022-04-20 12:11:39.847 | 2 | 3 | 2 | icinga.check.max_check_atte... |
icinga.service.procs.state_... | 2022-04-20 12:11:44.522 | 2 | 1 | 1 | icinga.service.procs.state_... |
icinga.service.procs.procs_... | 2022-04-20 12:11:44.576 | 2 | 1 | 1 | icinga.service.procs.procs_... |
icinga.service.users.users_... | 2022-04-20 12:11:39.796 | 2 | 1 | 1 | icinga.service.users.users_... |
icinga.check.latency | 2022-04-20 12:11:39.869 | 2 | 3 | 2 | icinga.check.latency |
icinga.service.procs.procs_... | 2022-04-20 12:11:44.588 | 2 | 1 | 1 | icinga.service.procs.procs_... |
icinga.service.users.downti... | 2022-04-20 12:11:39.746 | 2 | 1 | 1 | icinga.service.users.downti... |
icinga.service.users.users_... | 2022-04-20 12:11:39.783 | 2 | 1 | 1 | icinga.service.users.users_... |
icinga.service.users.reachable | 2022-04-20 12:11:39.736 | 2 | 1 | 1 | icinga.service.users.reachable |
icinga.service.procs.procs | 2022-04-20 12:11:44.565 | 2 | 1 | 1 | icinga.service.procs.procs |
icinga.service.procs.acknow... | 2022-04-20 12:11:44.554 | 2 | 1 | 1 | icinga.service.procs.acknow... |
icinga.service.procs.state | 2022-04-20 12:11:44.509 | 2 | 1 | 1 | icinga.service.procs.state |
icinga.service.procs.reachable | 2022-04-20 12:11:44.532 | 2 | 1 | 1 | icinga.service.procs.reachable |
icinga.check.current_attempt | 2022-04-20 12:11:39.825 | 2 | 3 | 2 | icinga.check.current_attempt |
icinga.check.execution_time | 2022-04-20 12:11:39.898 | 2 | 3 | 2 | icinga.check.execution_time |
icinga.service.users.state | 2022-04-20 12:11:39.704 | 2 | 1 | 1 | icinga.service.users.state |
Query OK, 22 row(s) in set (0.002317s) Query OK, 22 row(s) in set (0.002317s)
``` ```
:::note
- TDengine 默认生成的子表名是根据规则生成的唯一 ID 值。
:::
...@@ -34,35 +34,42 @@ sudo systemctl restart taosadapter ...@@ -34,35 +34,42 @@ sudo systemctl restart taosadapter
``` ```
taos> show databases; taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status | name |
==================================================================================================================================================================================================================================================================================== =================================
tcollector | 2022-04-20 12:44:49.604 | 88 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready | information_schema |
log | 2022-04-20 07:19:50.260 | 11 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready | performance_schema |
Query OK, 2 row(s) in set (0.002679s) tcollector |
Query OK, 3 rows in database (0.001647s)
taos> use tcollector; taos> use tcollector;
Database changed. Database changed.
taos> show stables; taos> show stables;
name | created_time | columns | tags | tables | name |
============================================================================================ =================================
proc.meminfo.hugepages_rsvd | 2022-04-20 12:44:53.945 | 2 | 1 | 1 | proc.meminfo.hugepages_rsvd |
proc.meminfo.directmap1g | 2022-04-20 12:44:54.110 | 2 | 1 | 1 | proc.meminfo.directmap1g |
proc.meminfo.vmallocchunk | 2022-04-20 12:44:53.724 | 2 | 1 | 1 | proc.meminfo.vmallocchunk |
proc.meminfo.hugepagesize | 2022-04-20 12:44:54.004 | 2 | 1 | 1 | proc.meminfo.hugepagesize |
tcollector.reader.lines_dro... | 2022-04-20 12:44:49.675 | 2 | 1 | 1 | tcollector.reader.lines_dro... |
proc.meminfo.sunreclaim | 2022-04-20 12:44:53.437 | 2 | 1 | 1 | proc.meminfo.sunreclaim |
proc.stat.ctxt | 2022-04-20 12:44:55.363 | 2 | 1 | 1 | proc.stat.ctxt |
proc.meminfo.swaptotal | 2022-04-20 12:44:53.158 | 2 | 1 | 1 | proc.meminfo.swaptotal |
proc.uptime.total | 2022-04-20 12:44:52.813 | 2 | 1 | 1 | proc.uptime.total |
tcollector.collector.lines_... | 2022-04-20 12:44:49.895 | 2 | 2 | 51 | tcollector.collector.lines_... |
proc.meminfo.vmallocused | 2022-04-20 12:44:53.704 | 2 | 1 | 1 | proc.meminfo.vmallocused |
proc.meminfo.memavailable | 2022-04-20 12:44:52.939 | 2 | 1 | 1 | proc.meminfo.memavailable |
sys.numa.foreign_allocs | 2022-04-20 12:44:57.929 | 2 | 2 | 1 | sys.numa.foreign_allocs |
proc.meminfo.committed_as | 2022-04-20 12:44:53.639 | 2 | 1 | 1 | proc.meminfo.committed_as |
proc.vmstat.pswpin | 2022-04-20 12:44:54.177 | 2 | 1 | 1 | proc.vmstat.pswpin |
proc.meminfo.cmafree | 2022-04-20 12:44:53.865 | 2 | 1 | 1 | proc.meminfo.cmafree |
proc.meminfo.mapped | 2022-04-20 12:44:53.349 | 2 | 1 | 1 | proc.meminfo.mapped |
proc.vmstat.pgmajfault | 2022-04-20 12:44:54.251 | 2 | 1 | 1 | proc.vmstat.pgmajfault |
... ...
``` ```
:::note
- TDengine 默认生成的子表名是根据规则生成的唯一 ID 值。
:::
...@@ -61,7 +61,6 @@ IT 运维监测数据通常都是对时间特性比较敏感的数据,例如 ...@@ -61,7 +61,6 @@ IT 运维监测数据通常都是对时间特性比较敏感的数据,例如
username = "<TDengine's username>" username = "<TDengine's username>"
password = "<TDengine's password>" password = "<TDengine's password>"
data_format = "influx" data_format = "influx"
influx_max_line_bytes = 250
``` ```
然后重启 Telegraf: 然后重启 Telegraf:
......
...@@ -6,6 +6,10 @@ description: TDengine 发布历史、Release Notes 及下载链接 ...@@ -6,6 +6,10 @@ description: TDengine 发布历史、Release Notes 及下载链接
import Release from "/components/ReleaseV3"; import Release from "/components/ReleaseV3";
## 3.0.1.4
<Release type="tdengine" version="3.0.1.4" />
## 3.0.1.3 ## 3.0.1.3
<Release type="tdengine" version="3.0.1.3" /> <Release type="tdengine" version="3.0.1.3" />
......
...@@ -6,6 +6,10 @@ description: taosTools 的发布历史、Release Notes 和下载链接 ...@@ -6,6 +6,10 @@ description: taosTools 的发布历史、Release Notes 和下载链接
import Release from "/components/ReleaseV3"; import Release from "/components/ReleaseV3";
## 2.2.4
<Release type="tools" version="2.2.4" />
## 2.2.3 ## 2.2.3
<Release type="tools" version="2.2.3" /> <Release type="tools" version="2.2.3" />
......
...@@ -177,6 +177,7 @@ typedef struct SSDataBlock { ...@@ -177,6 +177,7 @@ typedef struct SSDataBlock {
enum { enum {
FETCH_TYPE__DATA = 1, FETCH_TYPE__DATA = 1,
FETCH_TYPE__META, FETCH_TYPE__META,
FETCH_TYPE__SEP,
FETCH_TYPE__NONE, FETCH_TYPE__NONE,
}; };
......
...@@ -27,6 +27,7 @@ ...@@ -27,6 +27,7 @@
extern "C" { extern "C" {
#endif #endif
typedef struct SBuffer SBuffer;
typedef struct SSchema SSchema; typedef struct SSchema SSchema;
typedef struct STColumn STColumn; typedef struct STColumn STColumn;
typedef struct STSchema STSchema; typedef struct STSchema STSchema;
...@@ -56,6 +57,18 @@ const static uint8_t BIT2_MAP[4][4] = {{0b00000000, 0b00000001, 0b00000010, 0}, ...@@ -56,6 +57,18 @@ const static uint8_t BIT2_MAP[4][4] = {{0b00000000, 0b00000001, 0b00000010, 0},
#define SET_BIT2(p, i, v) ((p)[(i) >> 2] = (p)[(i) >> 2] & N1(BIT2_MAP[(i)&3][3]) | BIT2_MAP[(i)&3][(v)]) #define SET_BIT2(p, i, v) ((p)[(i) >> 2] = (p)[(i) >> 2] & N1(BIT2_MAP[(i)&3][3]) | BIT2_MAP[(i)&3][(v)])
#define GET_BIT2(p, i) (((p)[(i) >> 2] >> BIT2_MAP[(i)&3][3]) & ((uint8_t)3)) #define GET_BIT2(p, i) (((p)[(i) >> 2] >> BIT2_MAP[(i)&3][3]) & ((uint8_t)3))
// SBuffer ================================
struct SBuffer {
int64_t nBuf;
uint8_t *pBuf;
};
#define tBufferCreate() \
(SBuffer) { .nBuf = 0, .pBuf = NULL }
void tBufferDestroy(SBuffer *pBuffer);
int32_t tBufferInit(SBuffer *pBuffer, int64_t size);
int32_t tBufferPut(SBuffer *pBuffer, const void *pData, int64_t nData);
// STSchema ================================ // STSchema ================================
int32_t tTSchemaCreate(int32_t sver, SSchema *pSchema, int32_t nCols, STSchema **ppTSchema); int32_t tTSchemaCreate(int32_t sver, SSchema *pSchema, int32_t nCols, STSchema **ppTSchema);
void tTSchemaDestroy(STSchema *pTSchema); void tTSchemaDestroy(STSchema *pTSchema);
...@@ -162,17 +175,7 @@ struct STSRowBuilder { ...@@ -162,17 +175,7 @@ struct STSRowBuilder {
struct SValue { struct SValue {
union { union {
int8_t i8; // TSDB_DATA_TYPE_BOOL||TSDB_DATA_TYPE_TINYINT int64_t val;
uint8_t u8; // TSDB_DATA_TYPE_UTINYINT
int16_t i16; // TSDB_DATA_TYPE_SMALLINT
uint16_t u16; // TSDB_DATA_TYPE_USMALLINT
int32_t i32; // TSDB_DATA_TYPE_INT
uint32_t u32; // TSDB_DATA_TYPE_UINT
int64_t i64; // TSDB_DATA_TYPE_BIGINT
uint64_t u64; // TSDB_DATA_TYPE_UBIGINT
TSKEY ts; // TSDB_DATA_TYPE_TIMESTAMP
float f; // TSDB_DATA_TYPE_FLOAT
double d; // TSDB_DATA_TYPE_DOUBLE
struct { struct {
uint32_t nData; uint32_t nData;
uint8_t *pData; uint8_t *pData;
......
...@@ -227,110 +227,111 @@ ...@@ -227,110 +227,111 @@
#define TK_WEND 209 #define TK_WEND 209
#define TK_WDURATION 210 #define TK_WDURATION 210
#define TK_IROWTS 211 #define TK_IROWTS 211
#define TK_CAST 212 #define TK_QTAGS 212
#define TK_NOW 213 #define TK_CAST 213
#define TK_TODAY 214 #define TK_NOW 214
#define TK_TIMEZONE 215 #define TK_TODAY 215
#define TK_CLIENT_VERSION 216 #define TK_TIMEZONE 216
#define TK_SERVER_VERSION 217 #define TK_CLIENT_VERSION 217
#define TK_SERVER_STATUS 218 #define TK_SERVER_VERSION 218
#define TK_CURRENT_USER 219 #define TK_SERVER_STATUS 219
#define TK_COUNT 220 #define TK_CURRENT_USER 220
#define TK_LAST_ROW 221 #define TK_COUNT 221
#define TK_CASE 222 #define TK_LAST_ROW 222
#define TK_END 223 #define TK_CASE 223
#define TK_WHEN 224 #define TK_END 224
#define TK_THEN 225 #define TK_WHEN 225
#define TK_ELSE 226 #define TK_THEN 226
#define TK_BETWEEN 227 #define TK_ELSE 227
#define TK_IS 228 #define TK_BETWEEN 228
#define TK_NK_LT 229 #define TK_IS 229
#define TK_NK_GT 230 #define TK_NK_LT 230
#define TK_NK_LE 231 #define TK_NK_GT 231
#define TK_NK_GE 232 #define TK_NK_LE 232
#define TK_NK_NE 233 #define TK_NK_GE 233
#define TK_MATCH 234 #define TK_NK_NE 234
#define TK_NMATCH 235 #define TK_MATCH 235
#define TK_CONTAINS 236 #define TK_NMATCH 236
#define TK_IN 237 #define TK_CONTAINS 237
#define TK_JOIN 238 #define TK_IN 238
#define TK_INNER 239 #define TK_JOIN 239
#define TK_SELECT 240 #define TK_INNER 240
#define TK_DISTINCT 241 #define TK_SELECT 241
#define TK_WHERE 242 #define TK_DISTINCT 242
#define TK_PARTITION 243 #define TK_WHERE 243
#define TK_BY 244 #define TK_PARTITION 244
#define TK_SESSION 245 #define TK_BY 245
#define TK_STATE_WINDOW 246 #define TK_SESSION 246
#define TK_SLIDING 247 #define TK_STATE_WINDOW 247
#define TK_FILL 248 #define TK_SLIDING 248
#define TK_VALUE 249 #define TK_FILL 249
#define TK_NONE 250 #define TK_VALUE 250
#define TK_PREV 251 #define TK_NONE 251
#define TK_LINEAR 252 #define TK_PREV 252
#define TK_NEXT 253 #define TK_LINEAR 253
#define TK_HAVING 254 #define TK_NEXT 254
#define TK_RANGE 255 #define TK_HAVING 255
#define TK_EVERY 256 #define TK_RANGE 256
#define TK_ORDER 257 #define TK_EVERY 257
#define TK_SLIMIT 258 #define TK_ORDER 258
#define TK_SOFFSET 259 #define TK_SLIMIT 259
#define TK_LIMIT 260 #define TK_SOFFSET 260
#define TK_OFFSET 261 #define TK_LIMIT 261
#define TK_ASC 262 #define TK_OFFSET 262
#define TK_NULLS 263 #define TK_ASC 263
#define TK_ABORT 264 #define TK_NULLS 264
#define TK_AFTER 265 #define TK_ABORT 265
#define TK_ATTACH 266 #define TK_AFTER 266
#define TK_BEFORE 267 #define TK_ATTACH 267
#define TK_BEGIN 268 #define TK_BEFORE 268
#define TK_BITAND 269 #define TK_BEGIN 269
#define TK_BITNOT 270 #define TK_BITAND 270
#define TK_BITOR 271 #define TK_BITNOT 271
#define TK_BLOCKS 272 #define TK_BITOR 272
#define TK_CHANGE 273 #define TK_BLOCKS 273
#define TK_COMMA 274 #define TK_CHANGE 274
#define TK_COMPACT 275 #define TK_COMMA 275
#define TK_CONCAT 276 #define TK_COMPACT 276
#define TK_CONFLICT 277 #define TK_CONCAT 277
#define TK_COPY 278 #define TK_CONFLICT 278
#define TK_DEFERRED 279 #define TK_COPY 279
#define TK_DELIMITERS 280 #define TK_DEFERRED 280
#define TK_DETACH 281 #define TK_DELIMITERS 281
#define TK_DIVIDE 282 #define TK_DETACH 282
#define TK_DOT 283 #define TK_DIVIDE 283
#define TK_EACH 284 #define TK_DOT 284
#define TK_FAIL 285 #define TK_EACH 285
#define TK_FILE 286 #define TK_FAIL 286
#define TK_FOR 287 #define TK_FILE 287
#define TK_GLOB 288 #define TK_FOR 288
#define TK_ID 289 #define TK_GLOB 289
#define TK_IMMEDIATE 290 #define TK_ID 290
#define TK_IMPORT 291 #define TK_IMMEDIATE 291
#define TK_INITIALLY 292 #define TK_IMPORT 292
#define TK_INSTEAD 293 #define TK_INITIALLY 293
#define TK_ISNULL 294 #define TK_INSTEAD 294
#define TK_KEY 295 #define TK_ISNULL 295
#define TK_NK_BITNOT 296 #define TK_KEY 296
#define TK_NK_SEMI 297 #define TK_NK_BITNOT 297
#define TK_NOTNULL 298 #define TK_NK_SEMI 298
#define TK_OF 299 #define TK_NOTNULL 299
#define TK_PLUS 300 #define TK_OF 300
#define TK_PRIVILEGE 301 #define TK_PLUS 301
#define TK_RAISE 302 #define TK_PRIVILEGE 302
#define TK_REPLACE 303 #define TK_RAISE 303
#define TK_RESTRICT 304 #define TK_REPLACE 304
#define TK_ROW 305 #define TK_RESTRICT 305
#define TK_SEMI 306 #define TK_ROW 306
#define TK_STAR 307 #define TK_SEMI 307
#define TK_STATEMENT 308 #define TK_STAR 308
#define TK_STRING 309 #define TK_STATEMENT 309
#define TK_TIMES 310 #define TK_STRING 310
#define TK_UPDATE 311 #define TK_TIMES 311
#define TK_VALUES 312 #define TK_UPDATE 312
#define TK_VARIABLE 313 #define TK_VALUES 313
#define TK_VIEW 314 #define TK_VARIABLE 314
#define TK_WAL 315 #define TK_VIEW 315
#define TK_WAL 316
#define TK_NK_SPACE 300 #define TK_NK_SPACE 300
#define TK_NK_COMMENT 301 #define TK_NK_COMMENT 301
......
...@@ -333,10 +333,10 @@ typedef struct tDataTypeDescriptor { ...@@ -333,10 +333,10 @@ typedef struct tDataTypeDescriptor {
char *name; char *name;
int64_t minValue; int64_t minValue;
int64_t maxValue; int64_t maxValue;
int32_t (*compFunc)(const char *const input, int32_t inputSize, const int32_t nelements, char *const output, int32_t (*compFunc)(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
int32_t outputSize, char algorithm, char *const buffer, int32_t bufferSize); int32_t nBuf);
int32_t (*decompFunc)(const char *const input, int32_t compressedSize, const int32_t nelements, char *const output, int32_t (*decompFunc)(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
int32_t outputSize, char algorithm, char *const buffer, int32_t bufferSize); int32_t nBuf);
void (*statisFunc)(int8_t bitmapMode, const void *pBitmap, const void *pData, int32_t numofrow, int64_t *min, void (*statisFunc)(int8_t bitmapMode, const void *pBitmap, const void *pData, int32_t numofrow, int64_t *min,
int64_t *max, int64_t *sum, int16_t *minindex, int16_t *maxindex, int16_t *numofnull); int64_t *max, int64_t *sum, int16_t *minindex, int16_t *maxindex, int16_t *numofnull);
} tDataTypeDescriptor; } tDataTypeDescriptor;
...@@ -356,7 +356,6 @@ void operateVal(void *dst, void *s1, void *s2, int32_t optr, int32_t type); ...@@ -356,7 +356,6 @@ void operateVal(void *dst, void *s1, void *s2, int32_t optr, int32_t type);
void *getDataMin(int32_t type); void *getDataMin(int32_t type);
void *getDataMax(int32_t type); void *getDataMax(int32_t type);
#ifdef __cplusplus #ifdef __cplusplus
} }
#endif #endif
......
...@@ -29,13 +29,13 @@ typedef void* DataSinkHandle; ...@@ -29,13 +29,13 @@ typedef void* DataSinkHandle;
struct SRpcMsg; struct SRpcMsg;
struct SSubplan; struct SSubplan;
typedef int32_t (*localFetchFp)(void *, uint64_t, uint64_t, uint64_t, int64_t, int32_t, void**, SArray*); typedef int32_t (*localFetchFp)(void*, uint64_t, uint64_t, uint64_t, int64_t, int32_t, void**, SArray*);
typedef struct { typedef struct {
void *handle; void* handle;
bool localExec; bool localExec;
localFetchFp fp; localFetchFp fp;
SArray *explainRes; SArray* explainRes;
} SLocalFetch; } SLocalFetch;
typedef struct { typedef struct {
...@@ -51,9 +51,9 @@ typedef struct { ...@@ -51,9 +51,9 @@ typedef struct {
bool initTqReader; bool initTqReader;
int32_t numOfVgroups; int32_t numOfVgroups;
void* sContext; // SSnapContext* void* sContext; // SSnapContext*
void* pStateBackend; void* pStateBackend;
} SReadHandle; } SReadHandle;
// in queue mode, data streams are separated by msg // in queue mode, data streams are separated by msg
...@@ -136,6 +136,7 @@ int32_t qGetQueryTableSchemaVersion(qTaskInfo_t tinfo, char* dbName, char* table ...@@ -136,6 +136,7 @@ int32_t qGetQueryTableSchemaVersion(qTaskInfo_t tinfo, char* dbName, char* table
* @param handle * @param handle
* @return * @return
*/ */
int32_t qExecTaskOpt(qTaskInfo_t tinfo, SArray* pResList, uint64_t* useconds, bool* hasMore, SLocalFetch *pLocal); int32_t qExecTaskOpt(qTaskInfo_t tinfo, SArray* pResList, uint64_t* useconds, bool* hasMore, SLocalFetch *pLocal);
int32_t qExecTask(qTaskInfo_t tinfo, SSDataBlock** pBlock, uint64_t* useconds); int32_t qExecTask(qTaskInfo_t tinfo, SSDataBlock** pBlock, uint64_t* useconds);
...@@ -195,6 +196,8 @@ int32_t qStreamPrepareTsdbScan(qTaskInfo_t tinfo, uint64_t uid, int64_t ts); ...@@ -195,6 +196,8 @@ int32_t qStreamPrepareTsdbScan(qTaskInfo_t tinfo, uint64_t uid, int64_t ts);
int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subType); int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subType);
int32_t qStreamScanMemData(qTaskInfo_t tinfo, const SSubmitReq* pReq);
int32_t qStreamExtractOffset(qTaskInfo_t tinfo, STqOffsetVal* pOffset); int32_t qStreamExtractOffset(qTaskInfo_t tinfo, STqOffsetVal* pOffset);
SMqMetaRsp* qStreamExtractMetaMsg(qTaskInfo_t tinfo); SMqMetaRsp* qStreamExtractMetaMsg(qTaskInfo_t tinfo);
......
...@@ -120,6 +120,7 @@ typedef enum EFunctionType { ...@@ -120,6 +120,7 @@ typedef enum EFunctionType {
FUNCTION_TYPE_WEND, FUNCTION_TYPE_WEND,
FUNCTION_TYPE_WDURATION, FUNCTION_TYPE_WDURATION,
FUNCTION_TYPE_IROWTS, FUNCTION_TYPE_IROWTS,
FUNCTION_TYPE_TAGS,
// internal function // internal function
FUNCTION_TYPE_SELECT_VALUE = 3750, FUNCTION_TYPE_SELECT_VALUE = 3750,
......
...@@ -206,12 +206,6 @@ void indexJsonRebuild(SIndexJson* idx, void* iter); ...@@ -206,12 +206,6 @@ void indexJsonRebuild(SIndexJson* idx, void* iter);
**/ **/
bool indexJsonIsRebuild(SIndexJson* idx); bool indexJsonIsRebuild(SIndexJson* idx);
/*
* init index env
*
*/
void indexInit();
/* index filter */ /* index filter */
typedef struct SIndexMetaArg { typedef struct SIndexMetaArg {
void* metaEx; void* metaEx;
...@@ -225,6 +219,12 @@ typedef enum { SFLT_NOT_INDEX, SFLT_COARSE_INDEX, SFLT_ACCURATE_INDEX } SIdxFltS ...@@ -225,6 +219,12 @@ typedef enum { SFLT_NOT_INDEX, SFLT_COARSE_INDEX, SFLT_ACCURATE_INDEX } SIdxFltS
SIdxFltStatus idxGetFltStatus(SNode* pFilterNode); SIdxFltStatus idxGetFltStatus(SNode* pFilterNode);
int32_t doFilterTag(SNode* pFilterNode, SIndexMetaArg* metaArg, SArray* result, SIdxFltStatus* status); int32_t doFilterTag(SNode* pFilterNode, SIndexMetaArg* metaArg, SArray* result, SIdxFltStatus* status);
/*
* init index env
*
*/
void indexInit(int32_t threads);
/* /*
* destroy index env * destroy index env
* *
......
...@@ -27,9 +27,10 @@ extern "C" { ...@@ -27,9 +27,10 @@ extern "C" {
#define LIST_LENGTH(l) (NULL != (l) ? (l)->length : 0) #define LIST_LENGTH(l) (NULL != (l) ? (l)->length : 0)
#define FOREACH(node, list) \ #define FOREACH(node, list) \
for (SListCell *cell = (NULL != (list) ? (list)->pHead : NULL), *pNext; \ for (SListCell* cell = (NULL != (list) ? (list)->pHead : NULL), *pNext; \
(NULL != cell ? (node = cell->pNode, pNext = cell->pNext, true) : (node = NULL, pNext = NULL, false)); cell = pNext) (NULL != cell ? (node = cell->pNode, pNext = cell->pNext, true) : (node = NULL, pNext = NULL, false)); \
cell = pNext)
#define REPLACE_NODE(newNode) cell->pNode = (SNode*)(newNode) #define REPLACE_NODE(newNode) cell->pNode = (SNode*)(newNode)
...@@ -192,6 +193,7 @@ typedef enum ENodeType { ...@@ -192,6 +193,7 @@ typedef enum ENodeType {
QUERY_NODE_SHOW_TABLE_DISTRIBUTED_STMT, QUERY_NODE_SHOW_TABLE_DISTRIBUTED_STMT,
QUERY_NODE_SHOW_LOCAL_VARIABLES_STMT, QUERY_NODE_SHOW_LOCAL_VARIABLES_STMT,
QUERY_NODE_SHOW_SCORES_STMT, QUERY_NODE_SHOW_SCORES_STMT,
QUERY_NODE_SHOW_TABLE_TAGS_STMT,
QUERY_NODE_KILL_CONNECTION_STMT, QUERY_NODE_KILL_CONNECTION_STMT,
QUERY_NODE_KILL_QUERY_STMT, QUERY_NODE_KILL_QUERY_STMT,
QUERY_NODE_KILL_TRANSACTION_STMT, QUERY_NODE_KILL_TRANSACTION_STMT,
......
...@@ -33,6 +33,7 @@ typedef struct { ...@@ -33,6 +33,7 @@ typedef struct {
TTB* pFuncStateDb; TTB* pFuncStateDb;
TTB* pFillStateDb; // todo refactor TTB* pFillStateDb; // todo refactor
TXN txn; TXN txn;
int32_t number;
} SStreamState; } SStreamState;
SStreamState* streamStateOpen(char* path, SStreamTask* pTask, bool specPath); SStreamState* streamStateOpen(char* path, SStreamTask* pTask, bool specPath);
...@@ -42,7 +43,8 @@ int32_t streamStateCommit(SStreamState* pState); ...@@ -42,7 +43,8 @@ int32_t streamStateCommit(SStreamState* pState);
int32_t streamStateAbort(SStreamState* pState); int32_t streamStateAbort(SStreamState* pState);
typedef struct { typedef struct {
TBC* pCur; TBC* pCur;
int64_t number;
} SStreamStateCur; } SStreamStateCur;
int32_t streamStateFuncPut(SStreamState* pState, const STupleKey* key, const void* value, int32_t vLen); int32_t streamStateFuncPut(SStreamState* pState, const STupleKey* key, const void* value, int32_t vLen);
...@@ -52,6 +54,8 @@ int32_t streamStateFuncDel(SStreamState* pState, const STupleKey* key); ...@@ -52,6 +54,8 @@ int32_t streamStateFuncDel(SStreamState* pState, const STupleKey* key);
int32_t streamStatePut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen); int32_t streamStatePut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen);
int32_t streamStateGet(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen); int32_t streamStateGet(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen);
int32_t streamStateDel(SStreamState* pState, const SWinKey* key); int32_t streamStateDel(SStreamState* pState, const SWinKey* key);
int32_t streamStateClear(SStreamState* pState);
void streamStateSetNumber(SStreamState* pState, int32_t number);
int32_t streamStateFillPut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen); int32_t streamStateFillPut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen);
int32_t streamStateFillGet(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen); int32_t streamStateFillGet(SStreamState* pState, const SWinKey* key, void** pVal, int32_t* pVLen);
...@@ -63,6 +67,7 @@ void streamFreeVal(void* val); ...@@ -63,6 +67,7 @@ void streamFreeVal(void* val);
SStreamStateCur* streamStateGetCur(SStreamState* pState, const SWinKey* key); SStreamStateCur* streamStateGetCur(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateGetAndCheckCur(SStreamState* pState, SWinKey* key); SStreamStateCur* streamStateGetAndCheckCur(SStreamState* pState, SWinKey* key);
SStreamStateCur* streamStateSeekKeyNext(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateFillSeekKeyNext(SStreamState* pState, const SWinKey* key); SStreamStateCur* streamStateFillSeekKeyNext(SStreamState* pState, const SWinKey* key);
SStreamStateCur* streamStateFillSeekKeyPrev(SStreamState* pState, const SWinKey* key); SStreamStateCur* streamStateFillSeekKeyPrev(SStreamState* pState, const SWinKey* key);
void streamStateFreeCur(SStreamStateCur* pCur); void streamStateFreeCur(SStreamStateCur* pCur);
...@@ -70,6 +75,7 @@ void streamStateFreeCur(SStreamStateCur* pCur); ...@@ -70,6 +75,7 @@ void streamStateFreeCur(SStreamStateCur* pCur);
int32_t streamStateGetGroupKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen); int32_t streamStateGetGroupKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen);
int32_t streamStateGetKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen); int32_t streamStateGetKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen);
int32_t streamStateGetFirst(SStreamState* pState, SWinKey* key);
int32_t streamStateSeekFirst(SStreamState* pState, SStreamStateCur* pCur); int32_t streamStateSeekFirst(SStreamState* pState, SStreamStateCur* pCur);
int32_t streamStateSeekLast(SStreamState* pState, SStreamStateCur* pCur); int32_t streamStateSeekLast(SStreamState* pState, SStreamStateCur* pCur);
......
...@@ -132,7 +132,7 @@ typedef struct SSyncFSM { ...@@ -132,7 +132,7 @@ typedef struct SSyncFSM {
void (*FpRollBackCb)(struct SSyncFSM* pFsm, const SRpcMsg* pMsg, SFsmCbMeta cbMeta); void (*FpRollBackCb)(struct SSyncFSM* pFsm, const SRpcMsg* pMsg, SFsmCbMeta cbMeta);
void (*FpRestoreFinishCb)(struct SSyncFSM* pFsm); void (*FpRestoreFinishCb)(struct SSyncFSM* pFsm);
void (*FpReConfigCb)(struct SSyncFSM* pFsm, const SRpcMsg* pMsg, SReConfigCbMeta cbMeta); void (*FpReConfigCb)(struct SSyncFSM* pFsm, const SRpcMsg* pMsg, SReConfigCbMeta *cbMeta);
void (*FpLeaderTransferCb)(struct SSyncFSM* pFsm, const SRpcMsg* pMsg, SFsmCbMeta cbMeta); void (*FpLeaderTransferCb)(struct SSyncFSM* pFsm, const SRpcMsg* pMsg, SFsmCbMeta cbMeta);
void (*FpBecomeLeaderCb)(struct SSyncFSM* pFsm); void (*FpBecomeLeaderCb)(struct SSyncFSM* pFsm);
......
...@@ -38,9 +38,9 @@ extern "C" { ...@@ -38,9 +38,9 @@ extern "C" {
#define TD_LOG_DIR_PATH "C:\\TDengine\\log\\" #define TD_LOG_DIR_PATH "C:\\TDengine\\log\\"
#elif defined(_TD_DARWIN_64) #elif defined(_TD_DARWIN_64)
#define TD_TMP_DIR_PATH "/tmp/taosd/" #define TD_TMP_DIR_PATH "/tmp/taosd/"
#define TD_CFG_DIR_PATH "/usr/local/etc/taos/" #define TD_CFG_DIR_PATH "/etc/taos/"
#define TD_DATA_DIR_PATH "/usr/local/var/lib/taos/" #define TD_DATA_DIR_PATH "/var/lib/taos/"
#define TD_LOG_DIR_PATH "/usr/local/var/log/taos/" #define TD_LOG_DIR_PATH "/var/log/taos/"
#else #else
#define TD_TMP_DIR_PATH "/tmp/" #define TD_TMP_DIR_PATH "/tmp/"
#define TD_CFG_DIR_PATH "/etc/taos/" #define TD_CFG_DIR_PATH "/etc/taos/"
......
...@@ -565,6 +565,7 @@ int32_t* taosGetErrno(); ...@@ -565,6 +565,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_PAR_GET_META_ERROR TAOS_DEF_ERROR_CODE(0, 0x2662) #define TSDB_CODE_PAR_GET_META_ERROR TAOS_DEF_ERROR_CODE(0, 0x2662)
#define TSDB_CODE_PAR_NOT_UNIQUE_TABLE_ALIAS TAOS_DEF_ERROR_CODE(0, 0x2663) #define TSDB_CODE_PAR_NOT_UNIQUE_TABLE_ALIAS TAOS_DEF_ERROR_CODE(0, 0x2663)
#define TSDB_CODE_PAR_NOT_SUPPORT_JOIN TAOS_DEF_ERROR_CODE(0, 0x2664) #define TSDB_CODE_PAR_NOT_SUPPORT_JOIN TAOS_DEF_ERROR_CODE(0, 0x2664)
#define TSDB_CODE_PAR_INVALID_TAGS_PC TAOS_DEF_ERROR_CODE(0, 0x2665)
#define TSDB_CODE_PAR_INTERNAL_ERROR TAOS_DEF_ERROR_CODE(0, 0x26FF) #define TSDB_CODE_PAR_INTERNAL_ERROR TAOS_DEF_ERROR_CODE(0, 0x26FF)
//planner //planner
......
...@@ -23,8 +23,8 @@ extern "C" { ...@@ -23,8 +23,8 @@ extern "C" {
#endif #endif
#define ENCODE_LIMIT (((uint8_t)1) << 7) #define ENCODE_LIMIT (((uint8_t)1) << 7)
#define ZIGZAGE(T, v) ((u##T)((v) >> (sizeof(T) * 8 - 1))) ^ (((u##T)(v)) << 1) // zigzag encode #define ZIGZAGE(T, v) (((u##T)((v) >> (sizeof(T) * 8 - 1))) ^ (((u##T)(v)) << 1)) // zigzag encode
#define ZIGZAGD(T, v) ((v) >> 1) ^ -((T)((v)&1)) // zigzag decode #define ZIGZAGD(T, v) (((v) >> 1) ^ -((T)((v)&1))) // zigzag decode
/* ------------------------ LEGACY CODES ------------------------ */ /* ------------------------ LEGACY CODES ------------------------ */
#if 1 #if 1
...@@ -70,7 +70,7 @@ static FORCE_INLINE int32_t taosEncodeFixedBool(void **buf, bool value) { ...@@ -70,7 +70,7 @@ static FORCE_INLINE int32_t taosEncodeFixedBool(void **buf, bool value) {
} }
static FORCE_INLINE void *taosDecodeFixedBool(const void *buf, bool *value) { static FORCE_INLINE void *taosDecodeFixedBool(const void *buf, bool *value) {
*value = ( (((int8_t *)buf)[0] == 0) ? false : true ); *value = ((((int8_t *)buf)[0] == 0) ? false : true);
return POINTER_SHIFT(buf, sizeof(int8_t)); return POINTER_SHIFT(buf, sizeof(int8_t));
} }
......
...@@ -51,287 +51,12 @@ extern "C" { ...@@ -51,287 +51,12 @@ extern "C" {
#define HEAD_MODE(x) x % 2 #define HEAD_MODE(x) x % 2
#define HEAD_ALGO(x) x / 2 #define HEAD_ALGO(x) x / 2
extern int32_t tsCompressINTImp(const char *const input, const int32_t nelements, char *const output, const char type);
extern int32_t tsDecompressINTImp(const char *const input, const int32_t nelements, char *const output,
const char type);
extern int32_t tsCompressBoolImp(const char *const input, const int32_t nelements, char *const output);
extern int32_t tsDecompressBoolImp(const char *const input, const int32_t nelements, char *const output);
extern int32_t tsCompressStringImp(const char *const input, int32_t inputSize, char *const output, int32_t outputSize);
extern int32_t tsDecompressStringImp(const char *const input, int32_t compressedSize, char *const output,
int32_t outputSize);
extern int32_t tsCompressTimestampImp(const char *const input, const int32_t nelements, char *const output);
extern int32_t tsDecompressTimestampImp(const char *const input, const int32_t nelements, char *const output);
extern int32_t tsCompressDoubleImp(const char *const input, const int32_t nelements, char *const output);
extern int32_t tsDecompressDoubleImp(const char *const input, const int32_t nelements, char *const output);
extern int32_t tsCompressFloatImp(const char *const input, const int32_t nelements, char *const output);
extern int32_t tsDecompressFloatImp(const char *const input, const int32_t nelements, char *const output);
// lossy
extern int32_t tsCompressFloatLossyImp(const char *input, const int32_t nelements, char *const output);
extern int32_t tsDecompressFloatLossyImp(const char *input, int32_t compressedSize, const int32_t nelements,
char *const output);
extern int32_t tsCompressDoubleLossyImp(const char *input, const int32_t nelements, char *const output);
extern int32_t tsDecompressDoubleLossyImp(const char *input, int32_t compressedSize, const int32_t nelements,
char *const output);
#ifdef TD_TSZ #ifdef TD_TSZ
extern bool lossyFloat; extern bool lossyFloat;
extern bool lossyDouble; extern bool lossyDouble;
int32_t tsCompressInit(); int32_t tsCompressInit();
void tsCompressExit(); void tsCompressExit();
#endif
static FORCE_INLINE int32_t tsCompressTinyint(const char *const input, int32_t inputSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm,
char *const buffer, int32_t bufferSize) {
if (algorithm == ONE_STAGE_COMP) {
return tsCompressINTImp(input, nelements, output, TSDB_DATA_TYPE_TINYINT);
} else if (algorithm == TWO_STAGE_COMP) {
int32_t len = tsCompressINTImp(input, nelements, buffer, TSDB_DATA_TYPE_TINYINT);
return tsCompressStringImp(buffer, len, output, outputSize);
} else {
assert(0);
return -1;
}
}
static FORCE_INLINE int32_t tsDecompressTinyint(const char *const input, int32_t compressedSize,
const int32_t nelements, char *const output, int32_t outputSize,
char algorithm, char *const buffer, int32_t bufferSize) {
if (algorithm == ONE_STAGE_COMP) {
return tsDecompressINTImp(input, nelements, output, TSDB_DATA_TYPE_TINYINT);
} else if (algorithm == TWO_STAGE_COMP) {
if (tsDecompressStringImp(input, compressedSize, buffer, bufferSize) < 0) return -1;
return tsDecompressINTImp(buffer, nelements, output, TSDB_DATA_TYPE_TINYINT);
} else {
assert(0);
return -1;
}
}
static FORCE_INLINE int32_t tsCompressSmallint(const char *const input, int32_t inputSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm,
char *const buffer, int32_t bufferSize) {
if (algorithm == ONE_STAGE_COMP) {
return tsCompressINTImp(input, nelements, output, TSDB_DATA_TYPE_SMALLINT);
} else if (algorithm == TWO_STAGE_COMP) {
int32_t len = tsCompressINTImp(input, nelements, buffer, TSDB_DATA_TYPE_SMALLINT);
return tsCompressStringImp(buffer, len, output, outputSize);
} else {
assert(0);
return -1;
}
}
static FORCE_INLINE int32_t tsDecompressSmallint(const char *const input, int32_t compressedSize,
const int32_t nelements, char *const output, int32_t outputSize,
char algorithm, char *const buffer, int32_t bufferSize) {
if (algorithm == ONE_STAGE_COMP) {
return tsDecompressINTImp(input, nelements, output, TSDB_DATA_TYPE_SMALLINT);
} else if (algorithm == TWO_STAGE_COMP) {
if (tsDecompressStringImp(input, compressedSize, buffer, bufferSize) < 0) return -1;
return tsDecompressINTImp(buffer, nelements, output, TSDB_DATA_TYPE_SMALLINT);
} else {
assert(0);
return -1;
}
}
static FORCE_INLINE int32_t tsCompressInt(const char *const input, int32_t inputSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm, char *const buffer,
int32_t bufferSize) {
if (algorithm == ONE_STAGE_COMP) {
return tsCompressINTImp(input, nelements, output, TSDB_DATA_TYPE_INT);
} else if (algorithm == TWO_STAGE_COMP) {
int32_t len = tsCompressINTImp(input, nelements, buffer, TSDB_DATA_TYPE_INT);
return tsCompressStringImp(buffer, len, output, outputSize);
} else {
assert(0);
return -1;
}
}
static FORCE_INLINE int32_t tsDecompressInt(const char *const input, int32_t compressedSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm, char *const buffer,
int32_t bufferSize) {
if (algorithm == ONE_STAGE_COMP) {
return tsDecompressINTImp(input, nelements, output, TSDB_DATA_TYPE_INT);
} else if (algorithm == TWO_STAGE_COMP) {
if (tsDecompressStringImp(input, compressedSize, buffer, bufferSize) < 0) return -1;
return tsDecompressINTImp(buffer, nelements, output, TSDB_DATA_TYPE_INT);
} else {
assert(0);
return -1;
}
}
static FORCE_INLINE int32_t tsCompressBigint(const char *const input, int32_t inputSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm, char *const buffer,
int32_t bufferSize) {
if (algorithm == ONE_STAGE_COMP) {
return tsCompressINTImp(input, nelements, output, TSDB_DATA_TYPE_BIGINT);
} else if (algorithm == TWO_STAGE_COMP) {
int32_t len = tsCompressINTImp(input, nelements, buffer, TSDB_DATA_TYPE_BIGINT);
return tsCompressStringImp(buffer, len, output, outputSize);
} else {
assert(0);
return -1;
}
}
static FORCE_INLINE int32_t tsDecompressBigint(const char *const input, int32_t compressedSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm,
char *const buffer, int32_t bufferSize) {
if (algorithm == ONE_STAGE_COMP) {
return tsDecompressINTImp(input, nelements, output, TSDB_DATA_TYPE_BIGINT);
} else if (algorithm == TWO_STAGE_COMP) {
if (tsDecompressStringImp(input, compressedSize, buffer, bufferSize) < 0) return -1;
return tsDecompressINTImp(buffer, nelements, output, TSDB_DATA_TYPE_BIGINT);
} else {
assert(0);
return -1;
}
}
static FORCE_INLINE int32_t tsCompressBool(const char *const input, int32_t inputSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm, char *const buffer,
int32_t bufferSize) {
if (algorithm == ONE_STAGE_COMP) {
return tsCompressBoolImp(input, nelements, output);
} else if (algorithm == TWO_STAGE_COMP) {
int32_t len = tsCompressBoolImp(input, nelements, buffer);
return tsCompressStringImp(buffer, len, output, outputSize);
} else {
assert(0);
return -1;
}
}
static FORCE_INLINE int32_t tsDecompressBool(const char *const input, int32_t compressedSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm, char *const buffer,
int32_t bufferSize) {
if (algorithm == ONE_STAGE_COMP) {
return tsDecompressBoolImp(input, nelements, output);
} else if (algorithm == TWO_STAGE_COMP) {
if (tsDecompressStringImp(input, compressedSize, buffer, bufferSize) < 0) return -1;
return tsDecompressBoolImp(buffer, nelements, output);
} else {
assert(0);
return -1;
}
}
static FORCE_INLINE int32_t tsCompressString(const char *const input, int32_t inputSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm, char *const buffer,
int32_t bufferSize) {
return tsCompressStringImp(input, inputSize, output, outputSize);
}
static FORCE_INLINE int32_t tsDecompressString(const char *const input, int32_t compressedSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm,
char *const buffer, int32_t bufferSize) {
return tsDecompressStringImp(input, compressedSize, output, outputSize);
}
static FORCE_INLINE int32_t tsCompressFloat(const char *const input, int32_t inputSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm, char *const buffer,
int32_t bufferSize) {
#ifdef TD_TSZ
// lossy mode
if (lossyFloat) {
return tsCompressFloatLossyImp(input, nelements, output);
// lossless mode
} else {
#endif
if (algorithm == ONE_STAGE_COMP) {
return tsCompressFloatImp(input, nelements, output);
} else if (algorithm == TWO_STAGE_COMP) {
int32_t len = tsCompressFloatImp(input, nelements, buffer);
return tsCompressStringImp(buffer, len, output, outputSize);
} else {
assert(0);
return -1;
}
#ifdef TD_TSZ
}
#endif
}
static FORCE_INLINE int32_t tsDecompressFloat(const char *const input, int32_t compressedSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm,
char *const buffer, int32_t bufferSize) {
#ifdef TD_TSZ
if (HEAD_ALGO(input[0]) == ALGO_SZ_LOSSY) {
// decompress lossy
return tsDecompressFloatLossyImp(input, compressedSize, nelements, output);
} else {
#endif
// decompress lossless
if (algorithm == ONE_STAGE_COMP) {
return tsDecompressFloatImp(input, nelements, output);
} else if (algorithm == TWO_STAGE_COMP) {
if (tsDecompressStringImp(input, compressedSize, buffer, bufferSize) < 0) return -1;
return tsDecompressFloatImp(buffer, nelements, output);
} else {
assert(0);
return -1;
}
#ifdef TD_TSZ
}
#endif
}
static FORCE_INLINE int32_t tsCompressDouble(const char *const input, int32_t inputSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm, char *const buffer,
int32_t bufferSize) {
#ifdef TD_TSZ
if (lossyDouble) {
// lossy mode
return tsCompressDoubleLossyImp(input, nelements, output);
} else {
#endif
// lossless mode
if (algorithm == ONE_STAGE_COMP) {
return tsCompressDoubleImp(input, nelements, output);
} else if (algorithm == TWO_STAGE_COMP) {
int32_t len = tsCompressDoubleImp(input, nelements, buffer);
return tsCompressStringImp(buffer, len, output, outputSize);
} else {
assert(0);
return -1;
}
#ifdef TD_TSZ
}
#endif
}
static FORCE_INLINE int32_t tsDecompressDouble(const char *const input, int32_t compressedSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm,
char *const buffer, int32_t bufferSize) {
#ifdef TD_TSZ
if (HEAD_ALGO(input[0]) == ALGO_SZ_LOSSY) {
// decompress lossy
return tsDecompressDoubleLossyImp(input, compressedSize, nelements, output);
} else {
#endif
// decompress lossless
if (algorithm == ONE_STAGE_COMP) {
return tsDecompressDoubleImp(input, nelements, output);
} else if (algorithm == TWO_STAGE_COMP) {
if (tsDecompressStringImp(input, compressedSize, buffer, bufferSize) < 0) return -1;
return tsDecompressDoubleImp(buffer, nelements, output);
} else {
assert(0);
return -1;
}
#ifdef TD_TSZ
}
#endif
}
#ifdef TD_TSZ
//
// lossy float double
//
static FORCE_INLINE int32_t tsCompressFloatLossy(const char *const input, int32_t inputSize, const int32_t nelements, static FORCE_INLINE int32_t tsCompressFloatLossy(const char *const input, int32_t inputSize, const int32_t nelements,
char *const output, int32_t outputSize, char algorithm, char *const output, int32_t outputSize, char algorithm,
char *const buffer, int32_t bufferSize) { char *const buffer, int32_t bufferSize) {
...@@ -358,33 +83,56 @@ static FORCE_INLINE int32_t tsDecompressDoubleLossy(const char *const input, int ...@@ -358,33 +83,56 @@ static FORCE_INLINE int32_t tsDecompressDoubleLossy(const char *const input, int
#endif #endif
static FORCE_INLINE int32_t tsCompressTimestamp(const char *const input, int32_t inputSize, const int32_t nelements, /*************************************************************************
char *const output, int32_t outputSize, char algorithm, * REGULAR COMPRESSION
char *const buffer, int32_t bufferSize) { *************************************************************************/
if (algorithm == ONE_STAGE_COMP) { int32_t tsCompressTimestamp(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
return tsCompressTimestampImp(input, nelements, output); int32_t nBuf);
} else if (algorithm == TWO_STAGE_COMP) { int32_t tsDecompressTimestamp(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg,
int32_t len = tsCompressTimestampImp(input, nelements, buffer); void *pBuf, int32_t nBuf);
return tsCompressStringImp(buffer, len, output, outputSize); int32_t tsCompressFloat(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
} else { int32_t nBuf);
assert(0); int32_t tsDecompressFloat(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
return -1; int32_t nBuf);
} int32_t tsCompressDouble(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
} int32_t nBuf);
int32_t tsDecompressDouble(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
static FORCE_INLINE int32_t tsDecompressTimestamp(const char *const input, int32_t compressedSize, int32_t nBuf);
const int32_t nelements, char *const output, int32_t outputSize, int32_t tsCompressString(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
char algorithm, char *const buffer, int32_t bufferSize) { int32_t nBuf);
if (algorithm == ONE_STAGE_COMP) { int32_t tsDecompressString(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
return tsDecompressTimestampImp(input, nelements, output); int32_t nBuf);
} else if (algorithm == TWO_STAGE_COMP) { int32_t tsCompressBool(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
if (tsDecompressStringImp(input, compressedSize, buffer, bufferSize) < 0) return -1; int32_t nBuf);
return tsDecompressTimestampImp(buffer, nelements, output); int32_t tsDecompressBool(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
} else { int32_t nBuf);
assert(0); int32_t tsCompressTinyint(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
return -1; int32_t nBuf);
} int32_t tsDecompressTinyint(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
} int32_t nBuf);
int32_t tsCompressSmallint(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
int32_t nBuf);
int32_t tsDecompressSmallint(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg,
void *pBuf, int32_t nBuf);
int32_t tsCompressInt(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
int32_t nBuf);
int32_t tsDecompressInt(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
int32_t nBuf);
int32_t tsCompressBigint(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
int32_t nBuf);
int32_t tsDecompressBigint(void *pIn, int32_t nIn, int32_t nEle, void *pOut, int32_t nOut, uint8_t cmprAlg, void *pBuf,
int32_t nBuf);
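The prototypes above rename the old parameters (input/inputSize/nelements/...) to pIn/nIn/nEle/... . A hedged round-trip sketch follows, with parameter meanings inferred from the inline helpers this patch removes (input size in bytes, element count, output capacity, scratch buffer for the two-stage algorithm); these semantics are assumptions, not confirmed by the patch itself:

```c
/* Sketch only — parameter semantics are inferred from the removed inline code. */
#define N_ELEMS 1024

static int32_t roundTripInts(void *vals /* N_ELEMS int32 values */) {
  char cmpr[N_ELEMS * sizeof(int32_t) + 64];
  char back[N_ELEMS * sizeof(int32_t)];
  char scratch[N_ELEMS * sizeof(int32_t) + 64];

  /* Compress the column, then decompress it back and check the raw size. */
  int32_t nCmpr = tsCompressInt(vals, (int32_t)(N_ELEMS * sizeof(int32_t)), N_ELEMS,
                                cmpr, (int32_t)sizeof(cmpr),
                                TWO_STAGE_COMP, scratch, (int32_t)sizeof(scratch));
  if (nCmpr < 0) return -1;

  int32_t nBack = tsDecompressInt(cmpr, nCmpr, N_ELEMS, back, (int32_t)sizeof(back),
                                  TWO_STAGE_COMP, scratch, (int32_t)sizeof(scratch));
  return (nBack == (int32_t)(N_ELEMS * sizeof(int32_t))) ? 0 : -1;
}
```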
/*************************************************************************
* STREAM COMPRESSION
*************************************************************************/
typedef struct SCompressor SCompressor;
int32_t tCompressorCreate(SCompressor **ppCmprsor);
int32_t tCompressorDestroy(SCompressor *pCmprsor);
int32_t tCompressStart(SCompressor *pCmprsor, int8_t type, int8_t cmprAlg);
int32_t tCompressEnd(SCompressor *pCmprsor, const uint8_t **ppOut, int32_t *nOut, int32_t *nOrigin);
int32_t tCompress(SCompressor *pCmprsor, const void *pData, int64_t nData);
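For the stream interface, a hypothetical usage sketch; the create → start → feed → end → destroy order and the output-buffer lifetime are assumptions read off the declarations above, not documented behavior:

```c
/* Sketch only: compress one int32 column through the SCompressor stream API. */
static int32_t compressIntColumn(const int32_t *vals, int32_t nVals) {
  SCompressor   *pCmprsor = NULL;
  const uint8_t *pOut = NULL;
  int32_t        nOut = 0, nOrigin = 0;

  int32_t code = tCompressorCreate(&pCmprsor);
  if (code) return code;

  if ((code = tCompressStart(pCmprsor, TSDB_DATA_TYPE_INT, TWO_STAGE_COMP)) == 0 &&
      (code = tCompress(pCmprsor, vals, (int64_t)nVals * sizeof(int32_t))) == 0 &&
      (code = tCompressEnd(pCmprsor, &pOut, &nOut, &nOrigin)) == 0) {
    /* pOut/nOut describe the compressed block, nOrigin the raw size (assumed);
     * copy the bytes out here if they must outlive the compressor. */
  }

  tCompressorDestroy(pCmprsor);
  return code;
}
```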
#ifdef __cplusplus #ifdef __cplusplus
} }
......
...@@ -219,12 +219,12 @@ fi ...@@ -219,12 +219,12 @@ fi
if [[ "$cpuType" == "x64" ]] || [[ "$cpuType" == "aarch64" ]] || [[ "$cpuType" == "aarch32" ]] || [[ "$cpuType" == "arm64" ]] || [[ "$cpuType" == "arm32" ]] || [[ "$cpuType" == "mips64" ]]; then if [[ "$cpuType" == "x64" ]] || [[ "$cpuType" == "aarch64" ]] || [[ "$cpuType" == "aarch32" ]] || [[ "$cpuType" == "arm64" ]] || [[ "$cpuType" == "arm32" ]] || [[ "$cpuType" == "mips64" ]]; then
if [ "$verMode" != "cluster" ]; then if [ "$verMode" != "cluster" ]; then
# community-version compile # community-version compile
cmake ../ -DCPUTYPE=${cpuType} -DOSTYPE=${osType} -DSOMODE=${soMode} -DDBNAME=${dbName} -DVERTYPE=${verType} -DVERDATE="${build_time}" -DGITINFO=${gitinfo} -DGITINFOI=${gitinfoOfInternal} -DVERNUMBER=${verNumber} -DVERCOMPATIBLE=${verNumberComp} -DPAGMODE=${pagMode} -DBUILD_HTTP=${BUILD_HTTP} -DBUILD_TOOLS=${BUILD_TOOLS} ${allocator_macro} cmake ../ -DCPUTYPE=${cpuType} -DWEBSOCKET=true -DOSTYPE=${osType} -DSOMODE=${soMode} -DDBNAME=${dbName} -DVERTYPE=${verType} -DVERDATE="${build_time}" -DGITINFO=${gitinfo} -DGITINFOI=${gitinfoOfInternal} -DVERNUMBER=${verNumber} -DVERCOMPATIBLE=${verNumberComp} -DPAGMODE=${pagMode} -DBUILD_HTTP=${BUILD_HTTP} -DBUILD_TOOLS=${BUILD_TOOLS} ${allocator_macro}
else else
if [[ "$dbName" != "taos" ]]; then if [[ "$dbName" != "taos" ]]; then
replace_enterprise_$dbName replace_enterprise_$dbName
fi fi
cmake ../../ -DCPUTYPE=${cpuType} -DOSTYPE=${osType} -DSOMODE=${soMode} -DDBNAME=${dbName} -DVERTYPE=${verType} -DVERDATE="${build_time}" -DGITINFO=${gitinfo} -DGITINFOI=${gitinfoOfInternal} -DVERNUMBER=${verNumber} -DVERCOMPATIBLE=${verNumberComp} -DBUILD_HTTP=${BUILD_HTTP} -DBUILD_TOOLS=${BUILD_TOOLS} ${allocator_macro} cmake ../../ -DCPUTYPE=${cpuType} -DWEBSOCKET=true -DOSTYPE=${osType} -DSOMODE=${soMode} -DDBNAME=${dbName} -DVERTYPE=${verType} -DVERDATE="${build_time}" -DGITINFO=${gitinfo} -DGITINFOI=${gitinfoOfInternal} -DVERNUMBER=${verNumber} -DVERCOMPATIBLE=${verNumberComp} -DBUILD_HTTP=${BUILD_HTTP} -DBUILD_TOOLS=${BUILD_TOOLS} ${allocator_macro}
fi fi
else else
echo "input cpuType=${cpuType} error!!!" echo "input cpuType=${cpuType} error!!!"
......
#!/usr/bin/env bash
function showAlertMessage(){
osascript <<EOF
set buttonStr to "${3}"
set oldDelimiters to AppleScript's text item delimiters
set AppleScript's text item delimiters to ","
set buttonList to every text item of buttonStr
set AppleScript's text item delimiters to oldDelimiters
get buttonList
set btns to buttonList
display dialog "${1}" with title "${2}" buttons btns with icon ${4}
get result
EOF
}
taosd_status=`launchctl list | grep taosd | head -n 1 | awk '{print $1}'`
if [ "$taosd_status"x = "-"x ]; then
launchctl start taosd
showAlertMessage "Taosd is running!" "TDengine" "ok" "note"
else
choose_result=`showAlertMessage "Taosd is running!\nDo you want to close it?" "TDengine" "yes,cancel" "stop"`
if [ "$choose_result"x = "button returned:yes"x ]; then
launchctl stop taosd
fi
fi
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>taosd</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/bin/taosd</string>
</array>
<key>ProcessType</key>
<string>Interactive</string>
<key>Disabled</key>
<false/>
<key>RunAtLoad</key>
<false/>
<key>LaunchOnlyOnce</key>
<false/>
<key>SessionCreate</key>
<true/>
<key>ExitTimeOut</key>
<integer>600</integer>
<key>KeepAlive</key>
<dict>
<key>SuccessfulExit</key>
<false/>
<key>AfterInitialDemand</key>
<true/>
</dict>
<key>Program</key>
<string>/usr/local/bin/taosd</string>
</dict>
</plist>
\ No newline at end of file
TDengine is a highly efficient, scalable, and highly available distributed time-series database. It is heavily optimized for data ingestion and querying, making it far more efficient than general-purpose databases, so it can meet the demanding storage and query requirements of IoT and other data-intensive scenarios.
To configure TDengine: edit /etc/taos/taos.cfg
To start the service: launchctl start taosd
To access TDengine: run taos in a shell
\ No newline at end of file
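A minimal command sketch of the three steps above, assuming the default paths this package installs (the taos.cfg entries shown are illustrative):

```bash
sudo vim /etc/taos/taos.cfg    # e.g. set firstEp to this host, adjust dataDir/logDir if needed
sudo launchctl start taosd     # start the service registered by the installer
taos                           # open the TDengine CLI; a welcome banner confirms the connection
```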
(This diff has been collapsed.)
...@@ -7,27 +7,52 @@ ...@@ -7,27 +7,52 @@
iplist="" iplist=""
serverFqdn="" serverFqdn=""
# -----------------------Variables definition--------------------- osType=`uname`
script_dir=$(dirname $(readlink -f "$0"))
# Dynamic directory # Dynamic directory
data_dir="/var/lib/taos" data_dir="/var/lib/taos"
log_dir="/var/log/taos" log_dir="/var/log/taos"
data_link_dir="/usr/local/taos/data" cfg_install_dir="/etc/taos"
log_link_dir="/usr/local/taos/log"
install_main_dir="/usr/local/taos"
# static directory if [ "$osType" != "Darwin" ]; then
cfg_dir="/usr/local/taos/cfg" script_dir=$(dirname $(readlink -f "$0"))
bin_dir="/usr/local/taos/bin" verNumber=""
lib_dir="/usr/local/taos/driver" lib_file_ext="so"
init_d_dir="/usr/local/taos/init.d"
inc_dir="/usr/local/taos/include" bin_link_dir="/usr/bin"
lib_link_dir="/usr/lib"
lib64_link_dir="/usr/lib64"
inc_link_dir="/usr/include"
install_main_dir="/usr/local/taos"
else
script_dir=${source_dir}/packaging/tools
verNumber=`ls tdengine/driver | grep -E "libtaos\.[0-9]\.[0-9]" | sed "s/libtaos.//g" | sed "s/.dylib//g" | head -n 1`
lib_file_ext="dylib"
bin_link_dir="/usr/local/bin"
lib_link_dir="/usr/local/lib"
lib64_link_dir="/usr/local/lib"
inc_link_dir="/usr/local/include"
if [ -d "/usr/local/Cellar/" ];then
install_main_dir="/usr/local/Cellar/tdengine/${verNumber}"
elif [ -d "/opt/homebrew/Cellar/" ];then
install_main_dir="/opt/homebrew/Cellar/tdengine/${verNumber}"
else
install_main_dir="/usr/local/taos"
fi
fi
cfg_install_dir="/etc/taos" data_link_dir="${install_main_dir}/data"
bin_link_dir="/usr/bin" log_link_dir="${install_main_dir}/log"
lib_link_dir="/usr/lib"
lib64_link_dir="/usr/lib64" # static directory
inc_link_dir="/usr/include" cfg_dir="${install_main_dir}/cfg"
bin_dir="${install_main_dir}/bin"
lib_dir="${install_main_dir}/driver"
init_d_dir="${install_main_dir}/init.d"
inc_dir="${install_main_dir}/include"
service_config_dir="/etc/systemd/system" service_config_dir="/etc/systemd/system"
...@@ -40,8 +65,10 @@ GREEN_UNDERLINE='\033[4;32m' ...@@ -40,8 +65,10 @@ GREEN_UNDERLINE='\033[4;32m'
NC='\033[0m' NC='\033[0m'
csudo="" csudo=""
csudouser=""
if command -v sudo > /dev/null; then if command -v sudo > /dev/null; then
csudo="sudo " csudo="sudo "
csudouser="sudo -u ${USER} "
fi fi
initd_mod=0 initd_mod=0
...@@ -63,6 +90,14 @@ elif $(which service &> /dev/null); then ...@@ -63,6 +90,14 @@ elif $(which service &> /dev/null); then
else else
service_mod=2 service_mod=2
fi fi
if [ "$osType" = "Darwin" ]; then
if [ -d "${install_main_dir}" ];then
${csudo}rm -rf ${install_main_dir}
fi
${csudo}mkdir -p ${install_main_dir}
${csudo}rm -rf ${install_main_dir}
${csudo}cp -rf tdengine ${install_main_dir}
fi
function kill_taosadapter() { function kill_taosadapter() {
# ${csudo}pkill -f taosadapter || : # ${csudo}pkill -f taosadapter || :
...@@ -96,22 +131,24 @@ function install_lib() { ...@@ -96,22 +131,24 @@ function install_lib() {
${csudo}rm -f ${lib_link_dir}/libtaos* || : ${csudo}rm -f ${lib_link_dir}/libtaos* || :
${csudo}rm -f ${lib64_link_dir}/libtaos* || : ${csudo}rm -f ${lib64_link_dir}/libtaos* || :
[ -f ${lib_link_dir}/libtaosws.so ] && ${csudo}rm -f ${lib_link_dir}/libtaosws.so || : [ -f ${lib_link_dir}/libtaosws.${lib_file_ext} ] && ${csudo}rm -f ${lib_link_dir}/libtaosws.${lib_file_ext} || :
[ -f ${lib64_link_dir}/libtaosws.so ] && ${csudo}rm -f ${lib64_link_dir}/libtaosws.so || : [ -f ${lib64_link_dir}/libtaosws.${lib_file_ext} ] && ${csudo}rm -f ${lib64_link_dir}/libtaosws.${lib_file_ext} || :
${csudo}ln -s ${lib_dir}/libtaos.* ${lib_link_dir}/libtaos.so.1 ${csudo}ln -s ${lib_dir}/libtaos.* ${lib_link_dir}/libtaos.so.1
${csudo}ln -s ${lib_link_dir}/libtaos.so.1 ${lib_link_dir}/libtaos.so ${csudo}ln -s ${lib_link_dir}/libtaos.so.1 ${lib_link_dir}/libtaos.so
[ -f ${lib_dir}/libtaosws.so ] && ${csudo}ln -sf ${lib_dir}/libtaosws.so ${lib_link_dir}/libtaosws.so ||: [ -f ${lib_dir}/libtaosws.${lib_file_ext} ] && ${csudo}ln -sf ${lib_dir}/libtaosws.${lib_file_ext} ${lib_link_dir}/libtaosws.${lib_file_ext} ||:
if [[ -d ${lib64_link_dir} && ! -e ${lib64_link_dir}/libtaos.so ]]; then if [[ -d ${lib64_link_dir} && ! -e ${lib64_link_dir}/libtaos.so ]]; then
${csudo}ln -s ${lib_dir}/libtaos.* ${lib64_link_dir}/libtaos.so.1 || : ${csudo}ln -s ${lib_dir}/libtaos.* ${lib64_link_dir}/libtaos.so.1 || :
${csudo}ln -s ${lib64_link_dir}/libtaos.so.1 ${lib64_link_dir}/libtaos.so || : ${csudo}ln -s ${lib64_link_dir}/libtaos.so.1 ${lib64_link_dir}/libtaos.so || :
[ -f ${lib_dir}/libtaosws.so ] && ${csudo}ln -sf ${lib_dir}/libtaosws.so ${lib64_link_dir}/libtaosws.so || : [ -f ${lib_dir}/libtaosws.${lib_file_ext} ] && ${csudo}ln -sf ${lib_dir}/libtaosws.${lib_file_ext} ${lib64_link_dir}/libtaosws.${lib_file_ext} || :
fi fi
${csudo}ldconfig if [ "$osType" != "Darwin" ]; then
${csudo}ldconfig
fi
} }
function install_bin() { function install_bin() {
...@@ -138,6 +175,7 @@ function install_bin() { ...@@ -138,6 +175,7 @@ function install_bin() {
[ -x ${bin_dir}/TDinsight.sh ] && ${csudo}ln -sf ${bin_dir}/TDinsight.sh ${bin_link_dir}/TDinsight.sh || : [ -x ${bin_dir}/TDinsight.sh ] && ${csudo}ln -sf ${bin_dir}/TDinsight.sh ${bin_link_dir}/TDinsight.sh || :
[ -x ${bin_dir}/taosdump ] && ${csudo}ln -s ${bin_dir}/taosdump ${bin_link_dir}/taosdump || : [ -x ${bin_dir}/taosdump ] && ${csudo}ln -s ${bin_dir}/taosdump ${bin_link_dir}/taosdump || :
[ -x ${bin_dir}/set_core.sh ] && ${csudo}ln -s ${bin_dir}/set_core.sh ${bin_link_dir}/set_core || : [ -x ${bin_dir}/set_core.sh ] && ${csudo}ln -s ${bin_dir}/set_core.sh ${bin_link_dir}/set_core || :
[ -x ${bin_dir}/remove.sh ] && ${csudo}ln -s ${bin_dir}/remove.sh ${bin_link_dir}/rmtaos || :
} }
function add_newHostname_to_hosts() { function add_newHostname_to_hosts() {
...@@ -466,6 +504,14 @@ function install_service_on_systemd() { ...@@ -466,6 +504,14 @@ function install_service_on_systemd() {
${csudo}systemctl enable taosd ${csudo}systemctl enable taosd
} }
function install_service_on_launchctl() {
if [ -f ${install_main_dir}/service/com.taosdata.taosd.plist ]; then
${csudouser}launchctl unload -w /Library/LaunchDaemons/com.taosdata.taosd.plist > /dev/null 2>&1 || :
${csudo}cp ${install_main_dir}/service/com.taosdata.taosd.plist /Library/LaunchDaemons/com.taosdata.taosd.plist || :
${csudouser}launchctl load -w /Library/LaunchDaemons/com.taosdata.taosd.plist || :
fi
}
function install_taosadapter_service() { function install_taosadapter_service() {
if ((${service_mod}==0)); then if ((${service_mod}==0)); then
[ -f ${script_dir}/../cfg/taosadapter.service ] &&\ [ -f ${script_dir}/../cfg/taosadapter.service ] &&\
...@@ -476,6 +522,7 @@ function install_taosadapter_service() { ...@@ -476,6 +522,7 @@ function install_taosadapter_service() {
} }
function install_service() { function install_service() {
if [ "$osType" != "Darwin" ]; then
if ((${service_mod}==0)); then if ((${service_mod}==0)); then
install_service_on_systemd install_service_on_systemd
elif ((${service_mod}==1)); then elif ((${service_mod}==1)); then
...@@ -485,6 +532,25 @@ function install_service() { ...@@ -485,6 +532,25 @@ function install_service() {
kill_taosadapter kill_taosadapter
kill_taosd kill_taosd
fi fi
else
install_service_on_launchctl
fi
}
function install_app() {
if [ "$osType" = "Darwin" ]; then
if [ -f ${install_main_dir}/service/TDengine ]; then
${csudo}rm -rf /Applications/TDengine.app &&
${csudo}mkdir -p /Applications/TDengine.app/Contents/MacOS/ &&
${csudo}cp ${install_main_dir}/service/TDengine /Applications/TDengine.app/Contents/MacOS/ &&
echo "<plist><dict></dict></plist>" | ${csudo}tee /Applications/TDengine.app/Contents/Info.plist > /dev/null &&
${csudo}sips -i ${install_main_dir}/service/logo.png > /dev/null &&
DeRez -only icns ${install_main_dir}/service/logo.png | ${csudo}tee /Applications/TDengine.app/mac_logo.rsrc > /dev/null &&
${csudo}rez -append /Applications/TDengine.app/mac_logo.rsrc -o $'/Applications/TDengine.app/Icon\r' &&
${csudo}SetFile -a C /Applications/TDengine.app/ &&
${csudo}rm /Applications/TDengine.app/mac_logo.rsrc
fi
fi
} }
function install_TDengine() { function install_TDengine() {
...@@ -492,7 +558,7 @@ function install_TDengine() { ...@@ -492,7 +558,7 @@ function install_TDengine() {
#install log and data dir , then ln to /usr/local/taos #install log and data dir , then ln to /usr/local/taos
${csudo}mkdir -p ${log_dir} && ${csudo}chmod 777 ${log_dir} ${csudo}mkdir -p ${log_dir} && ${csudo}chmod 777 ${log_dir}
${csudo}mkdir -p ${data_dir} ${csudo}mkdir -p ${data_dir} && ${csudo}chmod 777 ${data_dir}
${csudo}rm -rf ${log_link_dir} || : ${csudo}rm -rf ${log_link_dir} || :
${csudo}rm -rf ${data_link_dir} || : ${csudo}rm -rf ${data_link_dir} || :
...@@ -508,6 +574,7 @@ function install_TDengine() { ...@@ -508,6 +574,7 @@ function install_TDengine() {
install_taosadapter_config install_taosadapter_config
install_taosadapter_service install_taosadapter_service
install_service install_service
install_app
# Ask if to start the service # Ask if to start the service
#echo #echo
......
...@@ -6,12 +6,31 @@ set -e ...@@ -6,12 +6,31 @@ set -e
#set -x #set -x
verMode=edge verMode=edge
osType=`uname`
RED='\033[0;31m' RED='\033[0;31m'
GREEN='\033[1;32m' GREEN='\033[1;32m'
NC='\033[0m' NC='\033[0m'
installDir="/usr/local/taos" if [ "$osType" != "Darwin" ]; then
installDir="/usr/local/taos"
bin_link_dir="/usr/bin"
lib_link_dir="/usr/lib"
lib64_link_dir="/usr/lib64"
inc_link_dir="/usr/include"
else
if [ -d "/usr/local/Cellar/" ];then
installDir="/usr/local/Cellar/tdengine/${verNumber}"
elif [ -d "/opt/homebrew/Cellar/" ];then
installDir="/opt/homebrew/Cellar/tdengine/${verNumber}"
else
installDir="/usr/local/taos"
fi
bin_link_dir="/usr/local/bin"
lib_link_dir="/usr/local/lib"
lib64_link_dir="/usr/local/lib"
inc_link_dir="/usr/local/include"
fi
serverName="taosd" serverName="taosd"
clientName="taos" clientName="taos"
uninstallScript="rmtaos" uninstallScript="rmtaos"
...@@ -22,11 +41,8 @@ install_main_dir=${installDir} ...@@ -22,11 +41,8 @@ install_main_dir=${installDir}
data_link_dir=${installDir}/data data_link_dir=${installDir}/data
log_link_dir=${installDir}/log log_link_dir=${installDir}/log
cfg_link_dir=${installDir}/cfg cfg_link_dir=${installDir}/cfg
bin_link_dir="/usr/bin"
local_bin_link_dir="/usr/local/bin" local_bin_link_dir="/usr/local/bin"
lib_link_dir="/usr/lib"
lib64_link_dir="/usr/lib64"
inc_link_dir="/usr/include"
service_config_dir="/etc/systemd/system" service_config_dir="/etc/systemd/system"
taos_service_name=${serverName} taos_service_name=${serverName}
...@@ -82,6 +98,7 @@ function clean_bin() { ...@@ -82,6 +98,7 @@ function clean_bin() {
# Remove link # Remove link
${csudo}rm -f ${bin_link_dir}/${clientName} || : ${csudo}rm -f ${bin_link_dir}/${clientName} || :
${csudo}rm -f ${bin_link_dir}/${serverName} || : ${csudo}rm -f ${bin_link_dir}/${serverName} || :
${csudo}rm -f ${bin_link_dir}/udfd || :
${csudo}rm -f ${bin_link_dir}/taosadapter || : ${csudo}rm -f ${bin_link_dir}/taosadapter || :
${csudo}rm -f ${bin_link_dir}/taosBenchmark || : ${csudo}rm -f ${bin_link_dir}/taosBenchmark || :
${csudo}rm -f ${bin_link_dir}/taosdemo || : ${csudo}rm -f ${bin_link_dir}/taosdemo || :
...@@ -103,7 +120,7 @@ function clean_lib() { ...@@ -103,7 +120,7 @@ function clean_lib() {
[ -f ${lib_link_dir}/libtaosws.so ] && ${csudo}rm -f ${lib_link_dir}/libtaosws.so || : [ -f ${lib_link_dir}/libtaosws.so ] && ${csudo}rm -f ${lib_link_dir}/libtaosws.so || :
${csudo}rm -f ${lib64_link_dir}/libtaos.* || : ${csudo}rm -f ${lib64_link_dir}/libtaos.* || :
[ -f ${lib64_link_dir}/libtaosws.so ] && ${csudo}rm -f ${lib64_link_dir}/libtaosws.so || : [ -f ${lib64_link_dir}/libtaosws.* ] && ${csudo}rm -f ${lib64_link_dir}/libtaosws.* || :
#${csudo}rm -rf ${v15_java_app_dir} || : #${csudo}rm -rf ${v15_java_app_dir} || :
} }
...@@ -195,12 +212,20 @@ function clean_service_on_sysvinit() { ...@@ -195,12 +212,20 @@ function clean_service_on_sysvinit() {
fi fi
} }
function clean_service_on_launchctl() {
${csudouser}launchctl unload -w /Library/LaunchDaemons/com.taosdata.taosd.plist > /dev/null 2>&1 || :
${csudo}rm /Library/LaunchDaemons/com.taosdata.taosd.plist > /dev/null 2>&1 || :
}
function clean_service() { function clean_service() {
if ((${service_mod} == 0)); then if ((${service_mod} == 0)); then
clean_service_on_systemd clean_service_on_systemd
elif ((${service_mod} == 1)); then elif ((${service_mod} == 1)); then
clean_service_on_sysvinit clean_service_on_sysvinit
else else
if [ "$osType" = "Darwin" ]; then
clean_service_on_launchctl
fi
kill_taosadapter kill_taosadapter
kill_taosd kill_taosd
kill_tarbitrator kill_tarbitrator
...@@ -241,6 +266,9 @@ elif echo $osinfo | grep -qwi "centos"; then ...@@ -241,6 +266,9 @@ elif echo $osinfo | grep -qwi "centos"; then
# echo "this is centos system" # echo "this is centos system"
${csudo}rpm -e --noscripts tdengine >/dev/null 2>&1 || : ${csudo}rpm -e --noscripts tdengine >/dev/null 2>&1 || :
fi fi
if [ "$osType" = "Darwin" ]; then
${csudo}rm -rf /Applications/TDengine.app
fi
echo -e "${GREEN}${productName} is removed successfully!${NC}" echo -e "${GREEN}${productName} is removed successfully!${NC}"
echo echo
...@@ -874,8 +874,6 @@ void schedulerExecCb(SExecResult* pResult, void* param, int32_t code) { ...@@ -874,8 +874,6 @@ void schedulerExecCb(SExecResult* pResult, void* param, int32_t code) {
STscObj* pTscObj = pRequest->pTscObj; STscObj* pTscObj = pRequest->pTscObj;
pRequest->code = code; pRequest->code = code;
pRequest->metric.resultReady = taosGetTimestampUs();
if (pResult) { if (pResult) {
destroyQueryExecRes(&pRequest->body.resInfo.execRes); destroyQueryExecRes(&pRequest->body.resInfo.execRes);
memcpy(&pRequest->body.resInfo.execRes, pResult, sizeof(*pResult)); memcpy(&pRequest->body.resInfo.execRes, pResult, sizeof(*pResult));
...@@ -1061,7 +1059,6 @@ void launchAsyncQuery(SRequestObj* pRequest, SQuery* pQuery, SMetaData* pResultM ...@@ -1061,7 +1059,6 @@ void launchAsyncQuery(SRequestObj* pRequest, SQuery* pQuery, SMetaData* pResultM
} }
pRequest->metric.planEnd = taosGetTimestampUs(); pRequest->metric.planEnd = taosGetTimestampUs();
if (TSDB_CODE_SUCCESS == code && !pRequest->validateOnly) { if (TSDB_CODE_SUCCESS == code && !pRequest->validateOnly) {
SArray* pNodeList = NULL; SArray* pNodeList = NULL;
buildAsyncExecNodeList(pRequest, &pNodeList, pMnodeList, pResultMeta); buildAsyncExecNodeList(pRequest, &pNodeList, pMnodeList, pResultMeta);
......
...@@ -817,7 +817,6 @@ void doAsyncQuery(SRequestObj *pRequest, bool updateMetaForce) { ...@@ -817,7 +817,6 @@ void doAsyncQuery(SRequestObj *pRequest, bool updateMetaForce) {
pRequest->metric.syntaxEnd = taosGetTimestampUs(); pRequest->metric.syntaxEnd = taosGetTimestampUs();
if (!updateMetaForce) { if (!updateMetaForce) {
STscObj *pTscObj = pRequest->pTscObj;
SAppClusterSummary *pActivity = &pTscObj->pAppInfo->summary; SAppClusterSummary *pActivity = &pTscObj->pAppInfo->summary;
if (NULL == pRequest->pQuery->pRoot) { if (NULL == pRequest->pQuery->pRoot) {
atomic_add_fetch_64((int64_t *)&pActivity->numOfInsertsReq, 1); atomic_add_fetch_64((int64_t *)&pActivity->numOfInsertsReq, 1);
...@@ -864,6 +863,7 @@ static void fetchCallback(void *pResult, void *param, int32_t code) { ...@@ -864,6 +863,7 @@ static void fetchCallback(void *pResult, void *param, int32_t code) {
SRequestObj *pRequest = (SRequestObj *)param; SRequestObj *pRequest = (SRequestObj *)param;
SReqResultInfo *pResultInfo = &pRequest->body.resInfo; SReqResultInfo *pResultInfo = &pRequest->body.resInfo;
pRequest->metric.resultReady = taosGetTimestampUs();
tscDebug("0x%" PRIx64 " enter scheduler fetch cb, code:%d - %s, reqId:0x%" PRIx64, pRequest->self, code, tscDebug("0x%" PRIx64 " enter scheduler fetch cb, code:%d - %s, reqId:0x%" PRIx64, pRequest->self, code,
tstrerror(code), pRequest->requestId); tstrerror(code), pRequest->requestId);
......
...@@ -515,7 +515,7 @@ int32_t tmqCommitMsgImpl(tmq_t* tmq, const TAOS_RES* msg, int8_t async, tmq_comm ...@@ -515,7 +515,7 @@ int32_t tmqCommitMsgImpl(tmq_t* tmq, const TAOS_RES* msg, int8_t async, tmq_comm
SMqMetaRspObj* pMetaRspObj = (SMqMetaRspObj*)msg; SMqMetaRspObj* pMetaRspObj = (SMqMetaRspObj*)msg;
topic = pMetaRspObj->topic; topic = pMetaRspObj->topic;
vgId = pMetaRspObj->vgId; vgId = pMetaRspObj->vgId;
} else if(TD_RES_TMQ_METADATA(msg)) { } else if (TD_RES_TMQ_METADATA(msg)) {
SMqTaosxRspObj* pRspObj = (SMqTaosxRspObj*)msg; SMqTaosxRspObj* pRspObj = (SMqTaosxRspObj*)msg;
topic = pRspObj->topic; topic = pRspObj->topic;
vgId = pRspObj->vgId; vgId = pRspObj->vgId;
...@@ -715,7 +715,7 @@ void tmqSendHbReq(void* param, void* tmrId) { ...@@ -715,7 +715,7 @@ void tmqSendHbReq(void* param, void* tmrId) {
int32_t epoch = tmq->epoch; int32_t epoch = tmq->epoch;
SMqHbReq* pReq = taosMemoryMalloc(sizeof(SMqHbReq)); SMqHbReq* pReq = taosMemoryMalloc(sizeof(SMqHbReq));
if (pReq == NULL) goto OVER; if (pReq == NULL) goto OVER;
pReq->consumerId = consumerId; pReq->consumerId = htobe64(consumerId);
pReq->epoch = epoch; pReq->epoch = epoch;
SMsgSendInfo* sendInfo = taosMemoryCalloc(1, sizeof(SMsgSendInfo)); SMsgSendInfo* sendInfo = taosMemoryCalloc(1, sizeof(SMsgSendInfo));
...@@ -1603,6 +1603,7 @@ void* tmqHandleAllRsp(tmq_t* tmq, int64_t timeout, bool pollIfReset) { ...@@ -1603,6 +1603,7 @@ void* tmqHandleAllRsp(tmq_t* tmq, int64_t timeout, bool pollIfReset) {
return NULL; return NULL;
} else if (rspWrapper->tmqRspType == TMQ_MSG_TYPE__POLL_RSP) { } else if (rspWrapper->tmqRspType == TMQ_MSG_TYPE__POLL_RSP) {
SMqPollRspWrapper* pollRspWrapper = (SMqPollRspWrapper*)rspWrapper; SMqPollRspWrapper* pollRspWrapper = (SMqPollRspWrapper*)rspWrapper;
tscDebug("consumer %ld actual process poll rsp", tmq->consumerId);
/*atomic_sub_fetch_32(&tmq->readyRequest, 1);*/ /*atomic_sub_fetch_32(&tmq->readyRequest, 1);*/
int32_t consumerEpoch = atomic_load_32(&tmq->epoch); int32_t consumerEpoch = atomic_load_32(&tmq->epoch);
if (pollRspWrapper->dataRsp.head.epoch == consumerEpoch) { if (pollRspWrapper->dataRsp.head.epoch == consumerEpoch) {
...@@ -1661,9 +1662,9 @@ void* tmqHandleAllRsp(tmq_t* tmq, int64_t timeout, bool pollIfReset) { ...@@ -1661,9 +1662,9 @@ void* tmqHandleAllRsp(tmq_t* tmq, int64_t timeout, bool pollIfReset) {
// build rsp // build rsp
void* pRsp = NULL; void* pRsp = NULL;
if(pollRspWrapper->taosxRsp.createTableNum == 0){ if (pollRspWrapper->taosxRsp.createTableNum == 0) {
pRsp = tmqBuildRspFromWrapper(pollRspWrapper); pRsp = tmqBuildRspFromWrapper(pollRspWrapper);
}else{ } else {
pRsp = tmqBuildTaosxRspFromWrapper(pollRspWrapper); pRsp = tmqBuildTaosxRspFromWrapper(pollRspWrapper);
} }
taosFreeQitem(pollRspWrapper); taosFreeQitem(pollRspWrapper);
...@@ -1718,7 +1719,10 @@ TAOS_RES* tmq_consumer_poll(tmq_t* tmq, int64_t timeout) { ...@@ -1718,7 +1719,10 @@ TAOS_RES* tmq_consumer_poll(tmq_t* tmq, int64_t timeout) {
while (1) { while (1) {
tmqHandleAllDelayedTask(tmq); tmqHandleAllDelayedTask(tmq);
if (tmqPollImpl(tmq, timeout) < 0) return NULL; if (tmqPollImpl(tmq, timeout) < 0) {
tscDebug("return since poll err");
/*return NULL;*/
}
rspObj = tmqHandleAllRsp(tmq, timeout, false); rspObj = tmqHandleAllRsp(tmq, timeout, false);
if (rspObj) { if (rspObj) {
...@@ -1850,12 +1854,12 @@ const char* tmq_get_table_name(TAOS_RES* res) { ...@@ -1850,12 +1854,12 @@ const char* tmq_get_table_name(TAOS_RES* res) {
return (const char*)taosArrayGetP(pRspObj->rsp.blockTbName, pRspObj->resIter); return (const char*)taosArrayGetP(pRspObj->rsp.blockTbName, pRspObj->resIter);
} else if (TD_RES_TMQ_METADATA(res)) { } else if (TD_RES_TMQ_METADATA(res)) {
SMqTaosxRspObj* pRspObj = (SMqTaosxRspObj*)res; SMqTaosxRspObj* pRspObj = (SMqTaosxRspObj*)res;
if (!pRspObj->rsp.withTbName || pRspObj->rsp.blockTbName == NULL || pRspObj->resIter < 0 || if (!pRspObj->rsp.withTbName || pRspObj->rsp.blockTbName == NULL || pRspObj->resIter < 0 ||
pRspObj->resIter >= pRspObj->rsp.blockNum) { pRspObj->resIter >= pRspObj->rsp.blockNum) {
return NULL; return NULL;
}
return (const char*)taosArrayGetP(pRspObj->rsp.blockTbName, pRspObj->resIter);
} }
return (const char*)taosArrayGetP(pRspObj->rsp.blockTbName, pRspObj->resIter);
}
return NULL; return NULL;
} }
......
(This diff has been collapsed.)
...@@ -374,8 +374,8 @@ static int32_t taosAddServerCfg(SConfig *pCfg) { ...@@ -374,8 +374,8 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
tsNumOfVnodeStreamThreads = TMAX(tsNumOfVnodeStreamThreads, 4); tsNumOfVnodeStreamThreads = TMAX(tsNumOfVnodeStreamThreads, 4);
if (cfgAddInt32(pCfg, "numOfVnodeStreamThreads", tsNumOfVnodeStreamThreads, 4, 1024, 0) != 0) return -1; if (cfgAddInt32(pCfg, "numOfVnodeStreamThreads", tsNumOfVnodeStreamThreads, 4, 1024, 0) != 0) return -1;
tsNumOfVnodeFetchThreads = 1; // tsNumOfVnodeFetchThreads = 1;
if (cfgAddInt32(pCfg, "numOfVnodeFetchThreads", tsNumOfVnodeFetchThreads, 1, 1024, 0) != 0) return -1; // if (cfgAddInt32(pCfg, "numOfVnodeFetchThreads", tsNumOfVnodeFetchThreads, 1, 1, 0) != 0) return -1;
tsNumOfVnodeWriteThreads = tsNumOfCores; tsNumOfVnodeWriteThreads = tsNumOfCores;
tsNumOfVnodeWriteThreads = TMAX(tsNumOfVnodeWriteThreads, 1); tsNumOfVnodeWriteThreads = TMAX(tsNumOfVnodeWriteThreads, 1);
...@@ -497,6 +497,7 @@ static int32_t taosUpdateServerCfg(SConfig *pCfg) { ...@@ -497,6 +497,7 @@ static int32_t taosUpdateServerCfg(SConfig *pCfg) {
pItem->stype = stype; pItem->stype = stype;
} }
/*
pItem = cfgGetItem(tsCfg, "numOfVnodeFetchThreads"); pItem = cfgGetItem(tsCfg, "numOfVnodeFetchThreads");
if (pItem != NULL && pItem->stype == CFG_STYPE_DEFAULT) { if (pItem != NULL && pItem->stype == CFG_STYPE_DEFAULT) {
tsNumOfVnodeFetchThreads = numOfCores / 4; tsNumOfVnodeFetchThreads = numOfCores / 4;
...@@ -504,6 +505,7 @@ static int32_t taosUpdateServerCfg(SConfig *pCfg) { ...@@ -504,6 +505,7 @@ static int32_t taosUpdateServerCfg(SConfig *pCfg) {
pItem->i32 = tsNumOfVnodeFetchThreads; pItem->i32 = tsNumOfVnodeFetchThreads;
pItem->stype = stype; pItem->stype = stype;
} }
*/
pItem = cfgGetItem(tsCfg, "numOfVnodeWriteThreads"); pItem = cfgGetItem(tsCfg, "numOfVnodeWriteThreads");
if (pItem != NULL && pItem->stype == CFG_STYPE_DEFAULT) { if (pItem != NULL && pItem->stype == CFG_STYPE_DEFAULT) {
...@@ -703,7 +705,7 @@ static int32_t taosSetServerCfg(SConfig *pCfg) { ...@@ -703,7 +705,7 @@ static int32_t taosSetServerCfg(SConfig *pCfg) {
tsNumOfMnodeReadThreads = cfgGetItem(pCfg, "numOfMnodeReadThreads")->i32; tsNumOfMnodeReadThreads = cfgGetItem(pCfg, "numOfMnodeReadThreads")->i32;
tsNumOfVnodeQueryThreads = cfgGetItem(pCfg, "numOfVnodeQueryThreads")->i32; tsNumOfVnodeQueryThreads = cfgGetItem(pCfg, "numOfVnodeQueryThreads")->i32;
tsNumOfVnodeStreamThreads = cfgGetItem(pCfg, "numOfVnodeStreamThreads")->i32; tsNumOfVnodeStreamThreads = cfgGetItem(pCfg, "numOfVnodeStreamThreads")->i32;
tsNumOfVnodeFetchThreads = cfgGetItem(pCfg, "numOfVnodeFetchThreads")->i32; // tsNumOfVnodeFetchThreads = cfgGetItem(pCfg, "numOfVnodeFetchThreads")->i32;
tsNumOfVnodeWriteThreads = cfgGetItem(pCfg, "numOfVnodeWriteThreads")->i32; tsNumOfVnodeWriteThreads = cfgGetItem(pCfg, "numOfVnodeWriteThreads")->i32;
tsNumOfVnodeSyncThreads = cfgGetItem(pCfg, "numOfVnodeSyncThreads")->i32; tsNumOfVnodeSyncThreads = cfgGetItem(pCfg, "numOfVnodeSyncThreads")->i32;
tsNumOfVnodeRsmaThreads = cfgGetItem(pCfg, "numOfVnodeRsmaThreads")->i32; tsNumOfVnodeRsmaThreads = cfgGetItem(pCfg, "numOfVnodeRsmaThreads")->i32;
...@@ -953,8 +955,10 @@ int32_t taosSetCfg(SConfig *pCfg, char *name) { ...@@ -953,8 +955,10 @@ int32_t taosSetCfg(SConfig *pCfg, char *name) {
tsNumOfMnodeReadThreads = cfgGetItem(pCfg, "numOfMnodeReadThreads")->i32; tsNumOfMnodeReadThreads = cfgGetItem(pCfg, "numOfMnodeReadThreads")->i32;
} else if (strcasecmp("numOfVnodeQueryThreads", name) == 0) { } else if (strcasecmp("numOfVnodeQueryThreads", name) == 0) {
tsNumOfVnodeQueryThreads = cfgGetItem(pCfg, "numOfVnodeQueryThreads")->i32; tsNumOfVnodeQueryThreads = cfgGetItem(pCfg, "numOfVnodeQueryThreads")->i32;
/*
} else if (strcasecmp("numOfVnodeFetchThreads", name) == 0) { } else if (strcasecmp("numOfVnodeFetchThreads", name) == 0) {
tsNumOfVnodeFetchThreads = cfgGetItem(pCfg, "numOfVnodeFetchThreads")->i32; tsNumOfVnodeFetchThreads = cfgGetItem(pCfg, "numOfVnodeFetchThreads")->i32;
*/
} else if (strcasecmp("numOfVnodeWriteThreads", name) == 0) { } else if (strcasecmp("numOfVnodeWriteThreads", name) == 0) {
tsNumOfVnodeWriteThreads = cfgGetItem(pCfg, "numOfVnodeWriteThreads")->i32; tsNumOfVnodeWriteThreads = cfgGetItem(pCfg, "numOfVnodeWriteThreads")->i32;
} else if (strcasecmp("numOfVnodeSyncThreads", name) == 0) { } else if (strcasecmp("numOfVnodeSyncThreads", name) == 0) {
......
...@@ -689,7 +689,7 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) { ...@@ -689,7 +689,7 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
memcpy(varDataVal(varBuf), pColVal->value.pData, pColVal->value.nData); memcpy(varDataVal(varBuf), pColVal->value.pData, pColVal->value.nData);
val = varBuf; val = varBuf;
} else { } else {
val = (const void *)&pColVal->value.i64; val = (const void *)&pColVal->value.val;
} }
} else { } else {
pColVal = NULL; pColVal = NULL;
......
...@@ -13,6 +13,7 @@ ...@@ -13,6 +13,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>. * along with this program. If not, see <http://www.gnu.org/licenses/>.
*/ */
#if 0
#include <gtest/gtest.h> #include <gtest/gtest.h>
#include <taoserror.h> #include <taoserror.h>
...@@ -476,4 +477,5 @@ TEST(testCase, NoneTest) { ...@@ -476,4 +477,5 @@ TEST(testCase, NoneTest) {
taosArrayDestroy(pArray); taosArrayDestroy(pArray);
taosMemoryFree(pTSchema); taosMemoryFree(pTSchema);
} }
#endif
#endif #endif
\ No newline at end of file
...@@ -135,7 +135,7 @@ _OVER: ...@@ -135,7 +135,7 @@ _OVER:
if (content != NULL) taosMemoryFree(content); if (content != NULL) taosMemoryFree(content);
if (root != NULL) cJSON_Delete(root); if (root != NULL) cJSON_Delete(root);
if (pFile != NULL) taosCloseFile(&pFile); if (pFile != NULL) taosCloseFile(&pFile);
if (*ppCfgs == NULL && pCfgs != NULL) taosMemoryFree(pCfgs); if (code != 0) taosMemoryFree(pCfgs);
terrno = code; terrno = code;
return code; return code;
...@@ -157,6 +157,11 @@ int32_t vmWriteVnodeListToFile(SVnodeMgmt *pMgmt) { ...@@ -157,6 +157,11 @@ int32_t vmWriteVnodeListToFile(SVnodeMgmt *pMgmt) {
int32_t numOfVnodes = 0; int32_t numOfVnodes = 0;
SVnodeObj **pVnodes = vmGetVnodeListFromHash(pMgmt, &numOfVnodes); SVnodeObj **pVnodes = vmGetVnodeListFromHash(pMgmt, &numOfVnodes);
if (pVnodes == NULL) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
ret = -1;
goto _OVER;
}
int32_t len = 0; int32_t len = 0;
int32_t maxLen = MAX_CONTENT_LEN; int32_t maxLen = MAX_CONTENT_LEN;
......
...@@ -384,7 +384,7 @@ static int32_t vmStartVnodes(SVnodeMgmt *pMgmt) { ...@@ -384,7 +384,7 @@ static int32_t vmStartVnodes(SVnodeMgmt *pMgmt) {
for (int32_t v = 0; v < numOfVnodes; ++v) { for (int32_t v = 0; v < numOfVnodes; ++v) {
int32_t t = v % threadNum; int32_t t = v % threadNum;
SVnodeThread *pThread = &threads[t]; SVnodeThread *pThread = &threads[t];
if (pThread->ppVnodes != NULL) { if (pThread->ppVnodes != NULL && ppVnodes != NULL) {
pThread->ppVnodes[pThread->vnodeNum++] = ppVnodes[v]; pThread->ppVnodes[pThread->vnodeNum++] = ppVnodes[v];
} }
} }
......
...@@ -193,6 +193,8 @@ int32_t dmInitDnode(SDnode *pDnode, EDndNodeType rtype) { ...@@ -193,6 +193,8 @@ int32_t dmInitDnode(SDnode *pDnode, EDndNodeType rtype) {
goto _OVER; goto _OVER;
} }
indexInit(tsNumOfCommitThreads);
dmReportStartup("dnode-transport", "initialized"); dmReportStartup("dnode-transport", "initialized");
dDebug("dnode is created, ptr:%p", pDnode); dDebug("dnode is created, ptr:%p", pDnode);
code = 0; code = 0;
......
...@@ -74,11 +74,14 @@ static SProcQueue *dmInitProcQueue(SProc *proc, char *ptr, int32_t size) { ...@@ -74,11 +74,14 @@ static SProcQueue *dmInitProcQueue(SProc *proc, char *ptr, int32_t size) {
} }
tstrncpy(queue->name, proc->name, sizeof(queue->name)); tstrncpy(queue->name, proc->name, sizeof(queue->name));
taosThreadMutexLock(&queue->mutex);
queue->head = 0; queue->head = 0;
queue->tail = 0; queue->tail = 0;
queue->total = bufSize; queue->total = bufSize;
queue->avail = bufSize; queue->avail = bufSize;
queue->items = 0; queue->items = 0;
taosThreadMutexUnlock(&queue->mutex);
} }
return queue; return queue;
......
...@@ -301,7 +301,7 @@ int32_t dmInitServer(SDnode *pDnode) { ...@@ -301,7 +301,7 @@ int32_t dmInitServer(SDnode *pDnode) {
SDnodeTrans *pTrans = &pDnode->trans; SDnodeTrans *pTrans = &pDnode->trans;
SRpcInit rpcInit = {0}; SRpcInit rpcInit = {0};
strncpy(rpcInit.localFqdn, tsLocalFqdn, TSDB_FQDN_LEN); tstrncpy(rpcInit.localFqdn, tsLocalFqdn, TSDB_FQDN_LEN);
rpcInit.localPort = tsServerPort; rpcInit.localPort = tsServerPort;
rpcInit.label = "DND-S"; rpcInit.label = "DND-S";
rpcInit.numOfThreads = tsNumOfRpcThreads; rpcInit.numOfThreads = tsNumOfRpcThreads;
......
...@@ -77,12 +77,13 @@ static int32_t mndCreateDefaultAcct(SMnode *pMnode) { ...@@ -77,12 +77,13 @@ static int32_t mndCreateDefaultAcct(SMnode *pMnode) {
SSdbRaw *pRaw = mndAcctActionEncode(&acctObj); SSdbRaw *pRaw = mndAcctActionEncode(&acctObj);
if (pRaw == NULL) return -1; if (pRaw == NULL) return -1;
sdbSetRawStatus(pRaw, SDB_STATUS_READY); (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY);
mInfo("acct:%s, will be created when deploying, raw:%p", acctObj.acct, pRaw); mInfo("acct:%s, will be created when deploying, raw:%p", acctObj.acct, pRaw);
STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, NULL, "create-acct"); STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, NULL, "create-acct");
if (pTrans == NULL) { if (pTrans == NULL) {
sdbFreeRaw(pRaw);
mError("acct:%s, failed to create since %s", acctObj.acct, terrstr()); mError("acct:%s, failed to create since %s", acctObj.acct, terrstr());
return -1; return -1;
} }
......
...@@ -231,12 +231,13 @@ static int32_t mndCreateDefaultCluster(SMnode *pMnode) { ...@@ -231,12 +231,13 @@ static int32_t mndCreateDefaultCluster(SMnode *pMnode) {
SSdbRaw *pRaw = mndClusterActionEncode(&clusterObj); SSdbRaw *pRaw = mndClusterActionEncode(&clusterObj);
if (pRaw == NULL) return -1; if (pRaw == NULL) return -1;
sdbSetRawStatus(pRaw, SDB_STATUS_READY); (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY);
mInfo("cluster:%" PRId64 ", will be created when deploying, raw:%p", clusterObj.id, pRaw); mInfo("cluster:%" PRId64 ", will be created when deploying, raw:%p", clusterObj.id, pRaw);
STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, NULL, "create-cluster"); STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, NULL, "create-cluster");
if (pTrans == NULL) { if (pTrans == NULL) {
sdbFreeRaw(pRaw);
mError("cluster:%" PRId64 ", failed to create since %s", clusterObj.id, terrstr()); mError("cluster:%" PRId64 ", failed to create since %s", clusterObj.id, terrstr());
return -1; return -1;
} }
...@@ -247,7 +248,7 @@ static int32_t mndCreateDefaultCluster(SMnode *pMnode) { ...@@ -247,7 +248,7 @@ static int32_t mndCreateDefaultCluster(SMnode *pMnode) {
mndTransDrop(pTrans); mndTransDrop(pTrans);
return -1; return -1;
} }
sdbSetRawStatus(pRaw, SDB_STATUS_READY); (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY);
if (mndTransPrepare(pMnode, pTrans) != 0) { if (mndTransPrepare(pMnode, pTrans) != 0) {
mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr()); mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
...@@ -315,7 +316,7 @@ static int32_t mndProcessUptimeTimer(SRpcMsg *pReq) { ...@@ -315,7 +316,7 @@ static int32_t mndProcessUptimeTimer(SRpcMsg *pReq) {
return 0; return 0;
} }
mInfo("update cluster uptime to %" PRId64, clusterObj.upTime); mInfo("update cluster uptime to %d", clusterObj.upTime);
STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq, "update-uptime"); STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_NOTHING, pReq, "update-uptime");
if (pTrans == NULL) return -1; if (pTrans == NULL) return -1;
...@@ -325,7 +326,7 @@ static int32_t mndProcessUptimeTimer(SRpcMsg *pReq) { ...@@ -325,7 +326,7 @@ static int32_t mndProcessUptimeTimer(SRpcMsg *pReq) {
mndTransDrop(pTrans); mndTransDrop(pTrans);
return -1; return -1;
} }
sdbSetRawStatus(pCommitRaw, SDB_STATUS_READY); (void)sdbSetRawStatus(pCommitRaw, SDB_STATUS_READY);
if (mndTransPrepare(pMnode, pTrans) != 0) { if (mndTransPrepare(pMnode, pTrans) != 0) {
mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr()); mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
......
...@@ -272,6 +272,7 @@ static int32_t mndProcessMqHbReq(SRpcMsg *pMsg) { ...@@ -272,6 +272,7 @@ static int32_t mndProcessMqHbReq(SRpcMsg *pMsg) {
SMqConsumerObj *pConsumer = mndAcquireConsumer(pMnode, consumerId); SMqConsumerObj *pConsumer = mndAcquireConsumer(pMnode, consumerId);
if (pConsumer == NULL) { if (pConsumer == NULL) {
mError("consumer %ld not exist", consumerId);
terrno = TSDB_CODE_MND_CONSUMER_NOT_EXIST; terrno = TSDB_CODE_MND_CONSUMER_NOT_EXIST;
return -1; return -1;
} }
......
...@@ -730,7 +730,7 @@ static int32_t mndSetAlterDbRedoLogs(SMnode *pMnode, STrans *pTrans, SDbObj *pOl ...@@ -730,7 +730,7 @@ static int32_t mndSetAlterDbRedoLogs(SMnode *pMnode, STrans *pTrans, SDbObj *pOl
return -1; return -1;
} }
sdbSetRawStatus(pRedoRaw, SDB_STATUS_READY); (void)sdbSetRawStatus(pRedoRaw, SDB_STATUS_READY);
return 0; return 0;
} }
...@@ -742,7 +742,7 @@ static int32_t mndSetAlterDbCommitLogs(SMnode *pMnode, STrans *pTrans, SDbObj *p ...@@ -742,7 +742,7 @@ static int32_t mndSetAlterDbCommitLogs(SMnode *pMnode, STrans *pTrans, SDbObj *p
return -1; return -1;
} }
sdbSetRawStatus(pCommitRaw, SDB_STATUS_READY); (void)sdbSetRawStatus(pCommitRaw, SDB_STATUS_READY);
return 0; return 0;
} }
...@@ -938,7 +938,7 @@ static int32_t mndSetDropDbCommitLogs(SMnode *pMnode, STrans *pTrans, SDbObj *pD ...@@ -938,7 +938,7 @@ static int32_t mndSetDropDbCommitLogs(SMnode *pMnode, STrans *pTrans, SDbObj *pD
sdbRelease(pSdb, pVgroup); sdbRelease(pSdb, pVgroup);
return -1; return -1;
} }
sdbSetRawStatus(pVgRaw, SDB_STATUS_DROPPED); (void)sdbSetRawStatus(pVgRaw, SDB_STATUS_DROPPED);
} }
sdbRelease(pSdb, pVgroup); sdbRelease(pSdb, pVgroup);
...@@ -956,7 +956,7 @@ static int32_t mndSetDropDbCommitLogs(SMnode *pMnode, STrans *pTrans, SDbObj *pD ...@@ -956,7 +956,7 @@ static int32_t mndSetDropDbCommitLogs(SMnode *pMnode, STrans *pTrans, SDbObj *pD
sdbRelease(pSdb, pStbRaw); sdbRelease(pSdb, pStbRaw);
return -1; return -1;
} }
sdbSetRawStatus(pStbRaw, SDB_STATUS_DROPPED); (void)sdbSetRawStatus(pStbRaw, SDB_STATUS_DROPPED);
} }
sdbRelease(pSdb, pStb); sdbRelease(pSdb, pStb);
...@@ -1052,7 +1052,7 @@ static int32_t mndDropDb(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb) { ...@@ -1052,7 +1052,7 @@ static int32_t mndDropDb(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb) {
mError("trans:%d, failed to append redo log since %s", pTrans->id, terrstr()); mError("trans:%d, failed to append redo log since %s", pTrans->id, terrstr());
goto _OVER; goto _OVER;
} }
sdbSetRawStatus(pCommitRaw, SDB_STATUS_READY); (void)sdbSetRawStatus(pCommitRaw, SDB_STATUS_READY);
} }
int32_t rspLen = 0; int32_t rspLen = 0;
...@@ -1594,7 +1594,7 @@ static void mndDumpDbInfoData(SMnode *pMnode, SSDataBlock *pBlock, SDbObj *pDb, ...@@ -1594,7 +1594,7 @@ static void mndDumpDbInfoData(SMnode *pMnode, SSDataBlock *pBlock, SDbObj *pDb,
break; break;
} }
char precVstr[10] = {0}; char precVstr[10] = {0};
STR_WITH_SIZE_TO_VARSTR(precVstr, precStr, 2); STR_WITH_MAXSIZE_TO_VARSTR(precVstr, precStr, 10);
char *statusStr = "ready"; char *statusStr = "ready";
if (objStatus == SDB_STATUS_CREATING) { if (objStatus == SDB_STATUS_CREATING) {
...@@ -1607,7 +1607,7 @@ static void mndDumpDbInfoData(SMnode *pMnode, SSDataBlock *pBlock, SDbObj *pDb, ...@@ -1607,7 +1607,7 @@ static void mndDumpDbInfoData(SMnode *pMnode, SSDataBlock *pBlock, SDbObj *pDb,
} }
} }
char statusVstr[24] = {0}; char statusVstr[24] = {0};
STR_WITH_SIZE_TO_VARSTR(statusVstr, statusStr, strlen(statusStr)); STR_WITH_MAXSIZE_TO_VARSTR(statusVstr, statusStr, 24);
if (sysDb || !sysinfo) { if (sysDb || !sysinfo) {
for (int32_t i = 0; i < pShow->numOfColumns; ++i) { for (int32_t i = 0; i < pShow->numOfColumns; ++i) {
...@@ -1644,7 +1644,7 @@ static void mndDumpDbInfoData(SMnode *pMnode, SSDataBlock *pBlock, SDbObj *pDb, ...@@ -1644,7 +1644,7 @@ static void mndDumpDbInfoData(SMnode *pMnode, SSDataBlock *pBlock, SDbObj *pDb,
const char *strictStr = pDb->cfg.strict ? "on" : "off"; const char *strictStr = pDb->cfg.strict ? "on" : "off";
char strictVstr[24] = {0}; char strictVstr[24] = {0};
STR_WITH_SIZE_TO_VARSTR(strictVstr, strictStr, strlen(strictStr)); STR_WITH_MAXSIZE_TO_VARSTR(strictVstr, strictStr, 24);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++); pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, rows, (const char *)strictVstr, false); colDataAppend(pColInfo, rows, (const char *)strictVstr, false);
...@@ -1704,7 +1704,7 @@ static void mndDumpDbInfoData(SMnode *pMnode, SSDataBlock *pBlock, SDbObj *pDb, ...@@ -1704,7 +1704,7 @@ static void mndDumpDbInfoData(SMnode *pMnode, SSDataBlock *pBlock, SDbObj *pDb,
const char *cacheModelStr = getCacheModelStr(pDb->cfg.cacheLast); const char *cacheModelStr = getCacheModelStr(pDb->cfg.cacheLast);
char cacheModelVstr[24] = {0}; char cacheModelVstr[24] = {0};
STR_WITH_SIZE_TO_VARSTR(cacheModelVstr, cacheModelStr, strlen(cacheModelStr)); STR_WITH_MAXSIZE_TO_VARSTR(cacheModelVstr, cacheModelStr, 24);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++); pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, rows, (const char *)cacheModelVstr, false); colDataAppend(pColInfo, rows, (const char *)cacheModelVstr, false);
......
@@ -42,9 +42,23 @@ int32_t tEncodeSStreamObj(SEncoder *pEncoder, const SStreamObj *pObj) {
   if (tEncodeI64(pEncoder, pObj->targetStbUid) < 0) return -1;
   if (tEncodeI32(pEncoder, pObj->fixedSinkVgId) < 0) return -1;
-  if (tEncodeCStr(pEncoder, pObj->sql) < 0) return -1;
-  if (tEncodeCStr(pEncoder, pObj->ast) < 0) return -1;
-  if (tEncodeCStr(pEncoder, pObj->physicalPlan) < 0) return -1;
+  if (pObj->sql != NULL) {
+    if (tEncodeCStr(pEncoder, pObj->sql) < 0) return -1;
+  } else {
+    if (tEncodeCStr(pEncoder, "") < 0) return -1;
+  }
+
+  if (pObj->ast != NULL) {
+    if (tEncodeCStr(pEncoder, pObj->ast) < 0) return -1;
+  } else {
+    if (tEncodeCStr(pEncoder, "") < 0) return -1;
+  }
+
+  if (pObj->physicalPlan != NULL) {
+    if (tEncodeCStr(pEncoder, pObj->physicalPlan) < 0) return -1;
+  } else {
+    if (tEncodeCStr(pEncoder, "") < 0) return -1;
+  }
   int32_t sz = taosArrayGetSize(pObj->tasks);
   if (tEncodeI32(pEncoder, sz) < 0) return -1;
...
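The tEncodeSStreamObj hunk guards each optional string (sql, ast, physicalPlan) so a NULL pointer is serialized as an empty string instead of being passed straight to the encoder. A rough sketch of the same pattern with a stand-in encoder callback; `encode_cstr_fn` below is a hypothetical hook, not the real SEncoder API:

```c
#include <stdio.h>

/* Hypothetical encoder hook: returns 0 on success, -1 on failure. */
typedef int (*encode_cstr_fn)(void *ctx, const char *s);

/* Encode an optional string: a NULL pointer is written as "" instead of
 * being dereferenced by the encoder. */
static int encode_optional_cstr(void *ctx, encode_cstr_fn encode_cstr, const char *s) {
  return encode_cstr(ctx, s != NULL ? s : "");
}

/* Toy encoder that just prints what it would serialize. */
static int print_encoder(void *ctx, const char *s) {
  (void)ctx;
  printf("encode \"%s\"\n", s);
  return 0;
}

int main(void) {
  const char *sql = NULL;  /* field never populated */
  if (encode_optional_cstr(NULL, print_encoder, sql) < 0) return 1;
  if (encode_optional_cstr(NULL, print_encoder, "select avg(v) from t") < 0) return 1;
  return 0;
}
```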
@@ -101,8 +101,9 @@ static int32_t mndCreateDefaultDnode(SMnode *pMnode) {
   dnodeObj.createdTime = taosGetTimestampMs();
   dnodeObj.updateTime = dnodeObj.createdTime;
   dnodeObj.port = tsServerPort;
-  memcpy(&dnodeObj.fqdn, tsLocalFqdn, TSDB_FQDN_LEN);
-  snprintf(dnodeObj.ep, TSDB_EP_LEN, "%s:%u", dnodeObj.fqdn, dnodeObj.port);
+  tstrncpy(dnodeObj.fqdn, tsLocalFqdn, TSDB_FQDN_LEN);
+  dnodeObj.fqdn[TSDB_FQDN_LEN - 1] = 0;
+  snprintf(dnodeObj.ep, TSDB_EP_LEN - 1, "%s:%u", dnodeObj.fqdn, dnodeObj.port);
 
   pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_GLOBAL, NULL, "create-dnode");
   if (pTrans == NULL) goto _OVER;
@@ -110,7 +111,7 @@ static int32_t mndCreateDefaultDnode(SMnode *pMnode) {
   pRaw = mndDnodeActionEncode(&dnodeObj);
   if (pRaw == NULL || mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER;
-  sdbSetRawStatus(pRaw, SDB_STATUS_READY);
+  (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY);
   pRaw = NULL;
 
   if (mndTransPrepare(pMnode, pTrans) != 0) goto _OVER;
@@ -190,7 +191,10 @@ _OVER:
 static int32_t mndDnodeActionInsert(SSdb *pSdb, SDnodeObj *pDnode) {
   mTrace("dnode:%d, perform insert action, row:%p", pDnode->id, pDnode);
   pDnode->offlineReason = DND_REASON_STATUS_NOT_RECEIVED;
-  snprintf(pDnode->ep, TSDB_EP_LEN, "%s:%u", pDnode->fqdn, pDnode->port);
+
+  char ep[TSDB_EP_LEN] = {0};
+  snprintf(ep, TSDB_EP_LEN - 1, "%s:%u", pDnode->fqdn, pDnode->port);
+  tstrncpy(pDnode->ep, ep, TSDB_EP_LEN);
   return 0;
 }
@@ -253,7 +257,7 @@ int32_t mndGetDnodeSize(SMnode *pMnode) {
 bool mndIsDnodeOnline(SDnodeObj *pDnode, int64_t curMs) {
   int64_t interval = TABS(pDnode->lastAccessTime - curMs);
-  if (interval > 5000 * tsStatusInterval) {
+  if (interval > 5000 * (int64_t)tsStatusInterval) {
     if (pDnode->rebootTime > 0) {
       pDnode->offlineReason = DND_REASON_STATUS_MSG_TIMEOUT;
     }
@@ -275,7 +279,7 @@ void mndGetDnodeData(SMnode *pMnode, SArray *pDnodeEps) {
     SDnodeEp dnodeEp = {0};
     dnodeEp.id = pDnode->id;
     dnodeEp.ep.port = pDnode->port;
-    memcpy(dnodeEp.ep.fqdn, pDnode->fqdn, TSDB_FQDN_LEN);
+    tstrncpy(dnodeEp.ep.fqdn, pDnode->fqdn, TSDB_FQDN_LEN);
     sdbRelease(pSdb, pDnode);
 
     dnodeEp.isMnode = 0;
@@ -485,8 +489,8 @@ static int32_t mndCreateDnode(SMnode *pMnode, SRpcMsg *pReq, SCreateDnodeReq *pC
   dnodeObj.createdTime = taosGetTimestampMs();
   dnodeObj.updateTime = dnodeObj.createdTime;
   dnodeObj.port = pCreate->port;
-  memcpy(dnodeObj.fqdn, pCreate->fqdn, TSDB_FQDN_LEN);
-  snprintf(dnodeObj.ep, TSDB_EP_LEN, "%s:%u", dnodeObj.fqdn, dnodeObj.port);
+  tstrncpy(dnodeObj.fqdn, pCreate->fqdn, TSDB_FQDN_LEN);
+  snprintf(dnodeObj.ep, TSDB_EP_LEN - 1, "%s:%u", dnodeObj.fqdn, dnodeObj.port);
 
   pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_CONFLICT_GLOBAL, pReq, "create-dnode");
   if (pTrans == NULL) goto _OVER;
@@ -494,7 +498,7 @@ static int32_t mndCreateDnode(SMnode *pMnode, SRpcMsg *pReq, SCreateDnodeReq *pC
   pRaw = mndDnodeActionEncode(&dnodeObj);
   if (pRaw == NULL || mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER;
-  sdbSetRawStatus(pRaw, SDB_STATUS_READY);
+  (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY);
   pRaw = NULL;
 
   if (mndTransPrepare(pMnode, pTrans) != 0) goto _OVER;
@@ -673,13 +677,15 @@ static int32_t mndDropDnode(SMnode *pMnode, SRpcMsg *pReq, SDnodeObj *pDnode, SM
   mInfo("trans:%d, used to drop dnode:%d", pTrans->id, pDnode->id);
 
   pRaw = mndDnodeActionEncode(pDnode);
-  if (pRaw == NULL || mndTransAppendRedolog(pTrans, pRaw) != 0) goto _OVER;
-  sdbSetRawStatus(pRaw, SDB_STATUS_DROPPING);
+  if (pRaw == NULL) goto _OVER;
+  if (mndTransAppendRedolog(pTrans, pRaw) != 0) goto _OVER;
+  (void)sdbSetRawStatus(pRaw, SDB_STATUS_DROPPING);
   pRaw = NULL;
 
   pRaw = mndDnodeActionEncode(pDnode);
-  if (pRaw == NULL || mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER;
-  sdbSetRawStatus(pRaw, SDB_STATUS_DROPPED);
+  if (pRaw == NULL) goto _OVER;
+  if (mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER;
+  (void)sdbSetRawStatus(pRaw, SDB_STATUS_DROPPED);
   pRaw = NULL;
 
   if (pMObj != NULL) {
...
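The mndDnode.c hunks replace fixed-length memcpy of FQDN strings with tstrncpy and leave room for the terminator when formatting the "fqdn:port" endpoint. A standalone approximation using only the standard library; the buffer sizes below are placeholders for the TSDB_* constants, and `safe_strncpy` merely stands in for tstrncpy:

```c
#include <stdio.h>
#include <string.h>

#define FQDN_LEN 128 /* placeholder for TSDB_FQDN_LEN */
#define EP_LEN   160 /* placeholder for TSDB_EP_LEN */

/* Bounded copy that always NUL-terminates, similar in spirit to tstrncpy;
 * memcpy with the full buffer length would read past a shorter source string. */
static void safe_strncpy(char *dst, const char *src, size_t size) {
  if (size == 0) return;
  strncpy(dst, src, size - 1);
  dst[size - 1] = '\0';
}

int main(void) {
  char     fqdn[FQDN_LEN];
  char     ep[EP_LEN] = {0};
  unsigned port = 6030;

  safe_strncpy(fqdn, "node1.example.com", sizeof(fqdn));
  /* Leave room for the terminator, mirroring the TSDB_EP_LEN - 1 bound above. */
  snprintf(ep, sizeof(ep) - 1, "%s:%u", fqdn, port);
  printf("%s\n", ep);
  return 0;
}
```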
@@ -257,16 +257,19 @@ static int32_t mndDropFunc(SMnode *pMnode, SRpcMsg *pReq, SFuncObj *pFunc) {
   mInfo("trans:%d, used to drop user:%s", pTrans->id, pFunc->name);
 
   SSdbRaw *pRedoRaw = mndFuncActionEncode(pFunc);
-  if (pRedoRaw == NULL || mndTransAppendRedolog(pTrans, pRedoRaw) != 0) goto _OVER;
-  sdbSetRawStatus(pRedoRaw, SDB_STATUS_DROPPING);
+  if (pRedoRaw == NULL) goto _OVER;
+  if (mndTransAppendRedolog(pTrans, pRedoRaw) != 0) goto _OVER;
+  (void)sdbSetRawStatus(pRedoRaw, SDB_STATUS_DROPPING);
 
   SSdbRaw *pUndoRaw = mndFuncActionEncode(pFunc);
-  if (pUndoRaw == NULL || mndTransAppendUndolog(pTrans, pUndoRaw) != 0) goto _OVER;
-  sdbSetRawStatus(pUndoRaw, SDB_STATUS_READY);
+  if (pUndoRaw == NULL) goto _OVER;
+  if (mndTransAppendUndolog(pTrans, pUndoRaw) != 0) goto _OVER;
+  (void)sdbSetRawStatus(pUndoRaw, SDB_STATUS_READY);
 
   SSdbRaw *pCommitRaw = mndFuncActionEncode(pFunc);
-  if (pCommitRaw == NULL || mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) goto _OVER;
-  sdbSetRawStatus(pCommitRaw, SDB_STATUS_DROPPED);
+  if (pCommitRaw == NULL) goto _OVER;
+  if (mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) goto _OVER;
+  (void)sdbSetRawStatus(pCommitRaw, SDB_STATUS_DROPPED);
 
   if (mndTransPrepare(pMnode, pTrans) != 0) goto _OVER;
...
@@ -30,85 +30,85 @@ static int32_t mndRetrieveGrant(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBl
     cols = 0;
     SColumnInfoData *pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     const char *src = "community";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "unlimited";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "false";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "unlimited";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "unlimited";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "unlimited";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "unlimited";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "unlimited";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "unlimited";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "unlimited";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "unlimited";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "unlimited";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "unlimited";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     cols++;
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
     src = "unlimited";
-    STR_WITH_SIZE_TO_VARSTR(tmp, src, strlen(src));
+    STR_WITH_MAXSIZE_TO_VARSTR(tmp, src, 32);
     colDataAppend(pColInfo, numOfRows, tmp, false);
     numOfRows++;
...
@@ -649,7 +649,7 @@ int32_t mndProcessRpcMsg(SRpcMsg *pMsg) {
 void mndSetMsgHandle(SMnode *pMnode, tmsg_t msgType, MndMsgFp fp) {
   tmsg_t type = TMSG_INDEX(msgType);
-  if (type >= 0 && type < TDMT_MAX) {
+  if (type < TDMT_MAX) {
     pMnode->msgFp[type] = fp;
   }
 }
...
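The mndSetMsgHandle change drops the `type >= 0` half of the range check; if TMSG_INDEX() yields an unsigned value (as the change suggests), that comparison is always true and typically draws a compiler warning. A tiny illustration with an assumed unsigned index type and a placeholder upper bound:

```c
#include <stdint.h>
#include <stdio.h>

typedef uint16_t msg_index_t; /* assumption: the index type is unsigned */
#define MSG_MAX 512           /* placeholder upper bound */

static int handler_registered[MSG_MAX];

static void set_handler(msg_index_t type) {
  /* "type >= 0" would be tautological for an unsigned type, so only the
   * upper bound needs checking before indexing the table. */
  if (type < MSG_MAX) {
    handler_registered[type] = 1;
  }
}

int main(void) {
  set_handler(42);
  printf("%d\n", handler_registered[42]);
  return 0;
}
```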
@@ -93,6 +93,7 @@ static int32_t mndCreateDefaultMnode(SMnode *pMnode) {
   STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_GLOBAL, NULL, "create-mnode");
   if (pTrans == NULL) {
+    sdbFreeRaw(pRaw);
     mError("mnode:%d, failed to create since %s", mnodeObj.id, terrstr());
     return -1;
   }
@@ -220,8 +221,12 @@ bool mndIsMnode(SMnode *pMnode, int32_t dnodeId) {
 void mndGetMnodeEpSet(SMnode *pMnode, SEpSet *pEpSet) {
   SSdb *pSdb = pMnode->pSdb;
   int32_t totalMnodes = sdbGetSize(pSdb, SDB_MNODE);
-  void *pIter = NULL;
+  if (totalMnodes == 0) {
+    syncGetRetryEpSet(pMnode->syncMgmt.sync, pEpSet);
+    return;
+  }
+  void *pIter = NULL;
   while (1) {
     SMnodeObj *pObj = NULL;
     pIter = sdbFetch(pSdb, SDB_MNODE, pIter, (void **)&pObj);
@@ -658,7 +663,7 @@ static int32_t mndRetrieveMnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
     colDataAppend(pColInfo, numOfRows, (const char *)&pObj->id, false);
 
     char b1[TSDB_EP_LEN + VARSTR_HEADER_SIZE] = {0};
-    STR_WITH_MAXSIZE_TO_VARSTR(b1, pObj->pDnode->ep, pShow->pMeta->pSchemas[cols].bytes);
+    STR_WITH_MAXSIZE_TO_VARSTR(b1, pObj->pDnode->ep, TSDB_EP_LEN + VARSTR_HEADER_SIZE);
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
     colDataAppend(pColInfo, numOfRows, b1, false);
@@ -667,7 +672,7 @@ static int32_t mndRetrieveMnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
     if (pObj->id == pMnode->selfDnodeId) {
       roles = syncStr(TAOS_SYNC_STATE_LEADER);
     }
-    if (pObj->pDnode && mndIsDnodeOnline(pObj->pDnode, curMs)) {
+    if (mndIsDnodeOnline(pObj->pDnode, curMs)) {
       roles = syncStr(pObj->state);
       if (pObj->state == TAOS_SYNC_STATE_LEADER && pObj->id != pMnode->selfDnodeId) {
         roles = syncStr(TAOS_SYNC_STATE_ERROR);
...
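Two things change in mndMnode.c above: the encoded raw record is now released when mndTransCreate fails (plugging a leak on that early-return path), and mndGetMnodeEpSet falls back to the sync layer's retry endpoint set when the mnode table is still empty. A minimal sketch of the leak-fix pattern, with `encode_record`, `free_record`, and `create_transaction` standing in for the sdb and transaction calls:

```c
#include <stdlib.h>
#include <string.h>

/* Stand-ins for mndMnodeActionEncode / sdbFreeRaw / mndTransCreate. */
static char *encode_record(void) {
  char *raw = malloc(32);
  if (raw != NULL) strcpy(raw, "encoded-mnode-row");
  return raw;
}
static void  free_record(char *raw) { free(raw); }
static void *create_transaction(void) { return NULL; /* simulate failure */ }

static int create_default_record(void) {
  char *raw = encode_record();
  if (raw == NULL) return -1;

  void *trans = create_transaction();
  if (trans == NULL) {
    free_record(raw); /* the fix: release the encoded row on this exit path too */
    return -1;
  }
  /* ... append raw to the transaction, which then owns it ... */
  return 0;
}

int main(void) { return create_default_record() == -1 ? 0 : 1; }
```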
The diffs for the remaining 39 files in this commit are collapsed and not shown.