Commit a5750014 authored by sangshuduo

Merge branch '3.0' into feat/sangshuduo/TD-14141-update-taostools-for3.0

@@ -6,101 +6,100 @@ In order to explain the basic concepts and provide some sample code, the TDengin
<div className="center-table">
<table>
<thead>
<tr>
<th rowSpan="2">Device ID</th>
<th rowSpan="2">Timestamp</th>
<th colSpan="3">Collected Metrics</th>
<th colSpan="2">Tags</th>
</tr>
<tr>
<th>current</th>
<th>voltage</th>
<th>phase</th>
<th>location</th>
<th>groupid</th>
</tr>
</thead>
<tbody>
<tr>
<td>d1001</td>
<td>1538548685000</td>
<td>10.3</td>
<td>219</td>
<td>0.31</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
<tr>
<td>d1002</td>
<td>1538548684000</td>
<td>10.2</td>
<td>220</td>
<td>0.23</td>
<td>California.SanFrancisco</td>
<td>3</td>
</tr>
<tr>
<td>d1003</td>
<td>1538548686500</td>
<td>11.5</td>
<td>221</td>
<td>0.35</td>
<td>California.LosAngeles</td>
<td>3</td>
</tr>
<tr>
<td>d1004</td>
<td>1538548685500</td>
<td>13.4</td>
<td>223</td>
<td>0.29</td>
<td>California.LosAngeles</td>
<td>2</td>
</tr>
<tr>
<td>d1001</td>
<td>1538548695000</td>
<td>12.6</td>
<td>218</td>
<td>0.33</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
<tr>
<td>d1004</td>
<td>1538548696600</td>
<td>11.8</td>
<td>221</td>
<td>0.28</td>
<td>California.LosAngeles</td>
<td>2</td>
</tr>
<tr>
<td>d1002</td>
<td>1538548696650</td>
<td>10.3</td>
<td>218</td>
<td>0.25</td>
<td>California.SanFrancisco</td>
<td>3</td>
</tr>
<tr>
<td>d1001</td>
<td>1538548696800</td>
<td>12.3</td>
<td>221</td>
<td>0.31</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
</tbody>
</table>
<a href="#model_table1">Table 1: Smart meter example data</a>
</div>
Each row contains the device ID, the timestamp, the collected metrics (`current`, `voltage`, and `phase` as above), and the static tags (`location` and `groupid` in Table 1) associated with the device. Each smart meter generates a row (measurement) at a pre-defined time interval or when triggered by an external event. The device produces a sequence of measurements with associated timestamps.
## Metric
@@ -112,22 +111,22 @@ Label/Tag refers to the static properties of sensors, equipment or other types o
## Data Collection Point
Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same timestamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, one to collect engine status metrics, and one to collect the environment metrics inside the car. So in this example the car would have three data collection points. In the smart meters example, d1001, d1002, d1003, and d1004 are the data collection points.
## Table
Since time-series data is most likely to be structured data, TDengine adopts the traditional relational database model to process them with a short learning curve. You need to create a database, create tables, then insert data points and execute queries to explore the data.
To make full use of time-series data characteristics, TDengine adopts a strategy of "**One Table for One Data Collection Point**". TDengine requires the user to create a table for each data collection point (DCP) to store collected time-series data. For example, if there are over 10 million smart meters, it means 10 million tables should be created. For the table above, 4 tables should be created for devices d1001, d1002, d1003, and d1004 to store the data collected. This design has several benefits:
1. Since the metric data from different DCPs is fully independent, the data source of each DCP is unique, and a table has only one writer. In this way, data points can be written in a lock-free manner, and the writing speed can be greatly improved.
2. For a DCP, the metric data it generates is ordered by timestamp, so the write operation can be implemented as a simple append, which further greatly improves the data writing speed.
3. The metric data from a DCP is stored contiguously, block by block. Reading data for a period of time therefore requires far fewer random reads, improving read and query performance by orders of magnitude.
4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Within a column, values of the same metric usually change only slightly over time, which allows for a higher compression rate.
If the metric data of multiple DCPs were written into a single table in the traditional way, then because network delays are uncontrollable, the order in which data from different DCPs arrives at the server could not be guaranteed, write operations would have to be protected by locks, and the metric data from one DCP could not be guaranteed to be stored contiguously. **One table for one data collection point ensures, to the greatest possible extent, the best insert and query performance for each single data collection point.**
TDengine suggests using the DCP ID as the table name (like d1001 in the above table). Each DCP may collect one or multiple metrics (like the `current`, `voltage`, and `phase` above). Each metric has a corresponding column in the table, and the data type for a column can be int, float, string, and so on. In addition, the first column in the table must be a timestamp. TDengine uses the timestamp as the index and does not build an index on any metric column. Data is stored column-wise.
Complex devices, such as connected cars, may have multiple DCPs. In this case, multiple tables are created for a single device, one table per DCP.
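As a minimal sketch of this design (the column types here are illustrative assumptions based on the smart meter example above), one table per DCP could be created like this:
```sql
CREATE TABLE d1001 (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT);
```
The first column holds the timestamp, and each remaining column holds one collected metric.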
@@ -156,9 +155,16 @@ The relationship between a STable and the subtables created based on this STable
Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine treats the data in all its subtables as a whole data set for processing. TDengine first finds the subtables that meet the tag filter conditions, then scans the time-series data of only these subtables to perform the aggregation operation. This reduces the number of data sets to be scanned, which in turn greatly improves the performance of data aggregation across multiple DCPs. In essence, querying a supertable is a very efficient aggregate query on multiple DCPs of the same type.
In TDengine, it is recommended to use a subtable instead of a regular table for a DCP. In the smart meters example, we can create subtables like d1001, d1002, d1003, and d1004 under the supertable `meters`.
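The following sketch shows this pattern, assuming the example schema above (the tag types shown are illustrative): the supertable is defined once, and each DCP becomes a subtable that supplies its own tag values.
```sql
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupId INT);
CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);
```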
To better understand the data model of metrics, tags, supertables, and subtables, please refer to the diagram below, which demonstrates the data model of the smart meters example.
<figure>
![Meters Data Model Diagram](./supertable.webp)
<center><figcaption>Figure 1. Meters Data Model Diagram</figcaption></center>
</figure>
## Database
@@ -172,4 +178,4 @@ FQDN (Fully Qualified Domain Name) is the full domain name of a specific compute
Each node of a TDengine cluster is uniquely identified by an End Point, which consists of an FQDN and a Port, such as h1.tdengine.com:6030. In this way, when the IP changes, we can still use the FQDN to dynamically find the node without changing any configuration of the cluster. In addition, FQDN is used to facilitate unified access to the same cluster from the Intranet and the Internet.
TDengine does not recommend using an IP address to access the cluster. FQDN is recommended for cluster management.
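For example, a client could connect to a node by its endpoint rather than its IP address; the host name below is hypothetical:
```shell
taos -h h1.tdengine.com -P 6030
```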
@@ -13,7 +13,7 @@ If Docker is already installed on your computer, run the following command:
```shell
docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
```
Note that TDengine Server 3.0 uses TCP port 6030. Port 6041 is used by taosAdapter for the REST API service. Ports 6043 through 6049 are used by taosAdapter for other connectors. You can open these ports as needed.
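As a quick check, you can exercise the REST API on port 6041 once the container is up; the sketch below assumes the default `root`/`taosdata` credentials:
```shell
curl -u root:taosdata -d "show databases" http://localhost:6041/rest/sql
```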
Run the following command to ensure that your container is running:
@@ -21,7 +21,7 @@ Run the following command to ensure that your container is running:
```shell
docker ps
```
Enter the container and open the `bash` shell:
```shell
docker exec -it <container name> bash
```
@@ -31,68 +31,68 @@ You can now access TDengine or run other Linux commands.
Note: For information about installing Docker, see the [official documentation](https://docs.docker.com/get-docker/).
## Open the TDengine CLI
On the container, run the following command to open the TDengine CLI:
```
$ taos
taos>
```
## Test data insert performance
Once your TDengine Server is running normally, you can run the `taosBenchmark` utility (formerly named `taosdemo`) in a Linux or Windows terminal to test its insert performance:
```bash
taosBenchmark
```
This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `Campbell`, `Cupertino`, `Los Angeles`, `Mountain View`, `Palo Alto`, `San Diego`, `San Francisco`, `San Jose`, `Santa Clara` or `Sunnyvale`.
The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required to create the deployment depends on your hardware. On most modern servers, the deployment is created in ten to twenty seconds.
You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](../../reference/taosbenchmark).
## Test data query performance
After using `taosBenchmark` to create your test deployment, you can run queries in the TDengine CLI to test its performance:
From the TDengine CLI (`taos`), query the number of rows in the `meters` supertable:
```sql
SELECT COUNT(*) FROM test.meters;
```
Query the average, maximum, and minimum values of all 100 million rows of data:
```sql
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
```
Query the number of rows whose `location` tag is `San Francisco`:
```sql
SELECT COUNT(*) FROM test.meters WHERE location = "San Francisco";
```
Query the average, maximum, and minimum values of all rows whose `groupId` tag is `10`:
```sql
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters WHERE groupId = 10;
```
Query the average, maximum, and minimum values for table `d10` in 10-second intervals:
```sql
SELECT FIRST(ts), AVG(current), MAX(voltage), MIN(phase) FROM test.d10 INTERVAL(10s);
```
In the query above, you are selecting the first timestamp (`ts`) in the interval; another way to select it would be `_wstart`, which gives the start of the time window. For more information about windowed queries, see [Time-Series Extensions](../../taos-sql/distinguished/).
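For instance, the interval query above could be rewritten with `_wstart` as follows (a sketch against the same taosBenchmark test data):
```sql
SELECT _wstart, AVG(current), MAX(voltage), MIN(phase) FROM test.d10 INTERVAL(10s);
```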
## Additional Information
@@ -9,23 +9,24 @@ import PkgListV3 from "/components/PkgListV3";
For information about installing TDengine on Docker, see [Quick Install on Docker](../../get-started/docker). If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface (CLI, taos), and some tools. Note that taosAdapter supports Linux only. In addition to connectors for multiple languages, TDengine also provides a [REST API](../../reference/rest-api) through [taosAdapter](../../reference/taosadapter).
The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download the Lite package that includes only `taosd` and the C/C++ connector.
The TDengine Community Edition is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on 64-bit Windows.
## Installation
<Tabs>
<TabItem label=".deb" value="debinst">
1. Download the Deb installation package.
<PkgListV3 type={6}/>
2. In the directory where the package is located, use `dpkg` to install the package:
```bash
# Enter the name of the package that you downloaded.
sudo dpkg -i TDengine-server-<version>-Linux-x64.deb
```
@@ -34,11 +35,12 @@ sudo dpkg -i TDengine-server-<version>-Linux-x64.deb
<TabItem label=".rpm" value="rpminst">
1. Download the .rpm installation package.
<PkgListV3 type={5}/>
2. In the directory where the package is located, use rpm to install the package:
```bash
# Enter the name of the package that you downloaded.
sudo rpm -ivh TDengine-server-<version>-Linux-x64.rpm
```
@@ -47,11 +49,12 @@ sudo rpm -ivh TDengine-server-<version>-Linux-x64.rpm
<TabItem label=".tar.gz" value="tarinst">
1. Download the .tar.gz installation package.
<PkgListV3 type={0}/>
2. In the directory where the package is located, use `tar` to decompress the package:
```bash
# Enter the name of the package that you downloaded.
tar -zxvf TDengine-server-<version>-Linux-x64.tar.gz
```
@@ -96,23 +99,23 @@ sudo apt-get install tdengine
This installation method is supported only for Debian and Ubuntu.
:::
</TabItem>
<TabItem label="Windows" value="windows">
Note: TDengine only supports Windows Server 2016/2019 and Windows 10/11 on the Windows platform.
1. Download the Windows installation package.
<PkgListV3 type={3}/>
2. Run the downloaded package to install TDengine.
</TabItem>
</Tabs>
:::info
For information about TDengine releases, see [Release History](../../releases).
:::
:::note
On the first node in your TDengine cluster, leave the `Enter FQDN:` prompt blank and press **Enter**. On subsequent nodes, you can enter the endpoint of the first dnode in the cluster. You can also configure this setting after you have finished installing TDengine.
:::
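As a sketch of configuring the endpoint after installation (the host name is hypothetical), you can point a new node at the first dnode in the cluster through `taos.cfg`:
```shell
# /etc/taos/taos.cfg (excerpt); h1.tdengine.com is a hypothetical first dnode
firstEp h1.tdengine.com:6030
```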
@@ -147,7 +150,7 @@ Active: inactive (dead)
After confirming that TDengine is running, run the `taos` command to access the TDengine CLI.
The following `systemctl` commands can help you manage the TDengine service:
- Start TDengine Server: `systemctl start taosd`
@@ -159,7 +162,7 @@ The following `systemctl` commands can help you manage TDengine:
:::info
- The `systemctl` command requires _root_ privileges. If you are not logged in as the _root_ user, use the `sudo` command.
- The `systemctl stop taosd` command does not instantly stop TDengine Server. The server is stopped only after all data in memory is flushed to disk. The time required depends on the cache size.
- If your system does not include `systemd`, you can run `/usr/local/taos/bin/taosd` to start TDengine manually.
@@ -174,23 +177,9 @@ After the installation is complete, run `C:\TDengine\taosd.exe` to start TDengin
</TabItem>
</Tabs>
## Command Line Interface (CLI)
You can use the TDengine CLI to monitor your TDengine deployment and execute ad hoc queries. To open the CLI, run `taos` in a Linux terminal where TDengine is installed, or run `taos.exe` from the `C:\TDengine` directory in a Windows terminal where TDengine is installed.
```bash
taos
```
@@ -205,52 +194,71 @@ taos>
For example, you can create and delete databases and tables and run all types of queries. Each SQL command must end with a semicolon (;). For example:
```sql
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES ('2019-07-15 00:00:00', 10);
INSERT INTO t VALUES ('2019-07-15 01:00:00', 20);
SELECT * FROM t;
ts | speed |
========================================
2019-07-15 00:00:00.000 | 10 |
2019-07-15 01:00:00.000 | 20 |
Query OK, 2 row(s) in set (0.003128s)
```
You can also monitor the deployment status, add and remove user accounts, and manage running instances. You can run the TDengine CLI on either Linux or Windows machines. For more information, see [TDengine CLI](../../reference/taos-shell/).
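For example, managing user accounts from the CLI might look like the following sketch; the user name and password are hypothetical:
```sql
CREATE USER ops PASS 'Ops123456';
SHOW USERS;
DROP USER ops;
```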
## Test data insert performance
Once your TDengine Server is running normally, you can run the `taosBenchmark` utility (formerly named `taosdemo`) in a Linux or Windows terminal to test its insert performance:
```bash
taosBenchmark
```
This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `Campbell`, `Cupertino`, `Los Angeles`, `Mountain View`, `Palo Alto`, `San Diego`, `San Francisco`, `San Jose`, `Santa Clara` or `Sunnyvale`.
The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required to create the deployment depends on your hardware. On most modern servers, the deployment is created in ten to twenty seconds.
You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](../../reference/taosbenchmark).
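For instance, a smaller test run might look like the sketch below, where `-t` sets the number of tables and `-n` the number of rows per table (the flag values are illustrative):
```bash
taosBenchmark -t 100 -n 1000
```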
## Test data query performance
After using `taosBenchmark` to create your test deployment, you can run queries in the TDengine CLI to test its performance:
From the TDengine CLI (`taos`), query the number of rows in the `meters` supertable:
```sql
SELECT COUNT(*) FROM test.meters;
```
Query the average, maximum, and minimum values of all 100 million rows of data:
```sql
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
```
Query the number of rows whose `location` tag is `San Francisco`:
```sql
SELECT COUNT(*) FROM test.meters WHERE location = "San Francisco";
```
Query the average, maximum, and minimum values of all rows whose `groupId` tag is `10`:
```sql
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters WHERE groupId = 10;
```
Query the average, maximum, and minimum values for table `d10` in 10-second intervals:
```sql
SELECT FIRST(ts), AVG(current), MAX(voltage), MIN(phase) FROM test.d10 INTERVAL(10s);
```
In the query above, you are selecting the first timestamp (`ts`) in the interval; another way to select it would be `_wstart`, which gives the start of the time window. For more information about windowed queries, see [Time-Series Extensions](../../taos-sql/distinguished/).
@@ -238,6 +238,7 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
- TOPICS
- TRIGGER
- TSERIES
- TTL
### U
@@ -253,6 +254,7 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
### V
- VALUE
- VALUES
- VARIABLE
- VARIABLES
@@ -39,14 +39,14 @@ Comparing the connector support for TDengine functional features as follows.
### Using the native interface (taosc)
| **Functional Features** | **Java** | **Python** | **Go** | **C#** | **Node.js** | **Rust** |
| ----------------------------- | ------------- | ---------- | ------------- | ------------- | ------------- | ------------- |
| **Connection Management** | Support | Support | Support | Support | Support | Support |
| **Regular Query** | Support | Support | Support | Support | Support | Support |
| **Parameter Binding** | Support | Support | Support | Support | Support | Support |
| **Subscription (TMQ)** | Support | Support | Support | Support | Support | Support |
| **Schemaless** | Support | Support | Support | Support | Support | Support |
| **DataFrame** | Not Supported | Support | Not Supported | Not Supported | Not Supported | Not Supported |
:::info
Because database framework specifications differ across programming languages, it does not follow that every C/C++ interface needs a corresponding wrapper.
@@ -54,16 +54,15 @@ The different database framework specifications for various programming language
### Use HTTP Interfaces (REST or WebSocket)
| **Functional Features**              | **Java**      | **Python**    | **Go**        | **C#**        | **Node.js**   | **Rust**      |
| ------------------------------------ | ------------- | ------------- | ------------- | ------------- | ------------- | ------------- |
| **Connection Management**            | Support       | Support       | Support       | Support       | Support       | Support       |
| **Regular Query**                    | Support       | Support       | Support       | Support       | Support       | Support       |
| **Parameter Binding**                | Not supported | Not supported | Not supported | Support       | Not supported | Support       |
| **Subscription (TMQ)**               | Not supported | Not supported | Not supported | Not supported | Not supported | Support       |
| **Schemaless**                       | Not supported | Not supported | Not supported | Not supported | Not supported | Not supported |
| **Bulk Pulling (based on WebSocket)**| Support       | Support       | Not supported | Support       | Not supported | Support       |
| **DataFrame**                        | Not supported | Support       | Not supported | Not supported | Not supported | Not supported |
@@ -23,6 +23,7 @@ namespace TDengineExample
CheckRes(conn, res, "failed to insert data");
int affectedRows = TDengine.AffectRows(res);
Console.WriteLine("affectedRows " + affectedRows);
TDengine.FreeResult(res);
ExitProgram(conn, 0);
}
@@ -48,7 +48,7 @@ The main features of TDengine are as follows:
- Multiple [data export](../operation/export) methods
9. Tools
   - An [interactive command-line interface (CLI)](../reference/taos-shell) for managing clusters, checking system status, and running ad hoc queries
   - A stress-testing tool, [taosBenchmark](../reference/taosbenchmark), for testing the performance of TDengine
10. Programming
    - [Connectors](../connector) for multiple languages, such as [C/C++](../connector/cpp), [Java](../connector/java), [Go](../connector/go), [Node.js](../connector/node), [Rust](../connector/rust), [Python](../connector/python), and [C#](../connector/csharp)
    - Support for the [REST API](../connector/rest-api/)
@@ -4,119 +4,118 @@ title: Data Model and Basic Concepts
description: The data model and basic concepts of TDengine
---
To explain the basic concepts and make the sample code easy to follow, the entire TDengine documentation uses smart meters as its typical time-series data scenario. Assume that each smart meter collects three quantities (current, voltage, and phase), that there are multiple smart meters, and that each meter carries the static attributes Location and Group ID. The collected data resembles the following table:
<div className="center-table">
<table>
<thead>
<tr>
<th rowSpan="2">Device ID</th>
<th rowSpan="2">Timestamp</th>
<th colSpan="3">Collected Metrics</th>
<th colSpan="2">Tags</th>
</tr>
<tr>
<th>current</th>
<th>voltage</th>
<th>phase</th>
<th>location</th>
<th>groupid</th>
</tr>
</thead>
<tbody>
<tr>
<td>d1001</td>
<td>1538548685000</td>
<td>10.3</td>
<td>219</td>
<td>0.31</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
<tr>
<td>d1002</td>
<td>1538548684000</td>
<td>10.2</td>
<td>220</td>
<td>0.23</td>
<td>California.SanFrancisco</td>
<td>3</td>
</tr>
<tr>
<td>d1003</td>
<td>1538548686500</td>
<td>11.5</td>
<td>221</td>
<td>0.35</td>
<td>California.LosAngeles</td>
<td>3</td>
</tr>
<tr>
<td>d1004</td>
<td>1538548685500</td>
<td>13.4</td>
<td>223</td>
<td>0.29</td>
<td>California.LosAngeles</td>
<td>2</td>
</tr>
<tr>
<td>d1001</td>
<td>1538548695000</td>
<td>12.6</td>
<td>218</td>
<td>0.33</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
<tr>
<td>d1004</td>
<td>1538548696600</td>
<td>11.8</td>
<td>221</td>
<td>0.28</td>
<td>California.LosAngeles</td>
<td>2</td>
</tr>
<tr>
<td>d1002</td>
<td>1538548696650</td>
<td>10.3</td>
<td>218</td>
<td>0.25</td>
<td>California.SanFrancisco</td>
<td>3</td>
</tr>
<tr>
<td>d1001</td>
<td>1538548696800</td>
<td>12.3</td>
<td>221</td>
<td>0.31</td>
<td>California.SanFrancisco</td>
<td>2</td>
</tr>
</tbody>
</table>
<a name="model_table1">Table 1. Smart meter example data</a>
</div>
Each row contains the device ID, the timestamp, the collected metrics (`current`, `voltage`, and `phase` in the table above), and the static tags associated with each device (`location` and `groupid`). Each device collects data either when triggered by an external event or at a preset period. The collected data points are time-ordered and form a data stream.
## Metric
A metric is a physical quantity collected by a sensor, device, or other type of data collection point, such as current, voltage, temperature, pressure, or GPS position. It changes over time, and its data type can be integer, floating point, Boolean, or string. As time goes by, the amount of stored metric data grows ever larger. In the smart meters example, current, voltage, and phase are the metrics.
## Label/Tag
A label/tag is a static attribute of a sensor, device, or other type of data collection point. It does not change over time; examples include device model, color, and device location, and the data type can be anything. Although tags are static, TDengine allows users to modify, delete, or add tag values. Unlike metrics, the amount of stored tag data stays roughly constant over time. In the smart meters example, `location` and `groupid` are the tags.
## Data Collection Point
A data collection point (DCP) is hardware or software that collects metrics at a preset period or when triggered by an event. A DCP can collect one or more metrics, **but all of these metrics are collected at the same moment and share the same timestamp**. Complex equipment often has multiple DCPs, and each DCP may sample at its own period, fully independently and unsynchronized. For example, a car may have one DCP dedicated to GPS position, one to engine status, and one to the cabin environment, so one car has three DCPs. In the smart meters example, d1001, d1002, d1003, d1004, and so on are the DCPs.
## Table
Because collected metrics are generally structured data, and to flatten the learning curve, TDengine uses the traditional relational database model to manage data. You first create a database, then create tables, and only then can you insert or query data.
@@ -129,50 +128,56 @@ description: The data model and basic concepts of TDengine
With the traditional approach of writing data from multiple DCPs into a single table, network delays are uncontrollable, so the order in which data from different DCPs reaches the server cannot be guaranteed, write operations must be protected by locks, and the data of one DCP can hardly be stored contiguously. **Using one table per data collection point guarantees, to the greatest possible extent, optimal insert and query performance for each individual DCP.**
TDengine recommends using the DCP ID as the table name (for example, d1001 in the table above). Each DCP may collect several metrics at once (such as `current`, `voltage`, and `phase` above); each metric corresponds to a column in the table, and its data type can be integer, floating point, string, and so on. In addition, the first column of the table must be a timestamp, that is, of type Timestamp. TDengine automatically indexes the metrics by timestamp but builds no index on the metrics themselves. Data is stored column-wise.
For complex equipment such as a car with multiple DCPs, multiple tables must be created for that one car.
## Supertable (STable)
With one table per DCP, the number of tables grows enormously and becomes hard to manage. Moreover, applications often need to aggregate across data collection points, and such aggregation logic becomes complicated. To solve this problem, TDengine introduces the supertable (Super Table, STable for short) concept.
A supertable represents one specific type of data collection point. DCPs of the same type have identical table structures, but each table (DCP) has its own static attributes (tags). Describing a supertable (a set of DCPs of one type) therefore requires defining, besides the metric schema, the schema of its tags. Tag data types can be integer, floating point, string, or JSON; there can be multiple tags, and they can be added, deleted, or modified later. If the whole system has N different types of DCPs, N supertables must be created.
In TDengine's design, **a table represents one specific DCP, and a supertable represents a set of DCPs of the same type**. In the smart meters example, we can create a supertable `meters`.
## Subtable
When creating the table for a specific DCP, the user can take a supertable definition as a template and supply that DCP's tag values. **A table created through a supertable is called a subtable.** A subtable differs from a regular table as follows:
1. A subtable is a table, so all SQL operations on regular tables also work on subtables.
2. A subtable extends a regular table with static tags, and these tags can be added, deleted, or modified later, whereas a regular table has none.
3. A subtable always belongs to exactly one supertable; a regular table belongs to none.
4. A regular table cannot be converted into a subtable, and vice versa.
The relationship between a supertable and the subtables created from it is as follows:
1. A supertable contains multiple subtables that share the same metric schema but carry different tag values.
2. The schema of data or tags cannot be changed through a subtable; schema changes to a supertable take effect immediately for all of its subtables.
3. A supertable defines only a template and itself stores no data and no tag values. Data can therefore be written only to subtables, never to the supertable.
Queries can run on both tables and supertables. For a query on a supertable, TDengine treats the data of all its subtables as one data set: it first finds the subtables that satisfy the tag filter conditions, then scans only the time-series data of those subtables and performs the aggregation. This sharply reduces the data sets to be scanned and significantly improves query performance. In essence, supertable queries let TDengine aggregate many DCPs of the same type very efficiently.
TDengine recommends creating the table for a DCP through a supertable rather than as a regular table. In the smart meters example, we can create the subtables d1001, d1002, d1003, d1004, and so on through the supertable meters.
To better understand the relationship between metrics, tags, supertables, and subtables, refer to the following diagram of the smart meters data model.
<figure>
![Smart meters data model diagram](./supertable.webp)
<center><figcaption>Figure 1. Smart meters data model diagram</figcaption></center>
</figure>
## Database
A database is a collection of tables. TDengine allows a running instance to have multiple databases, and each database can be configured with its own storage policy. Different types of DCPs often have different data characteristics, including collection frequency, retention period, number of replicas, data block size, and whether updates are allowed. So that TDengine works at maximum efficiency in every scenario, TDengine recommends placing supertables with different data characteristics in different databases.
A database can hold one or more supertables, but a supertable belongs to exactly one database, and all subtables of a supertable are stored in that same database.
## FQDN & Endpoint
FQDN (Fully Qualified Domain Name) is the complete domain name of a specific computer or host on the Internet. An FQDN consists of two parts: the host name and the domain name. For example, the FQDN of a mail server might be mail.tdengine.com, where the host name is mail and the host lives in the domain tdengine.com. DNS (Domain Name System) translates FQDNs into IPs and is how Internet applications are addressed. For systems without DNS, this can be handled by configuring the hosts file.
Each node of a TDengine cluster is uniquely identified by an Endpoint, which consists of an FQDN plus a port, such as h1.tdengine.com:6030. When an IP changes, we can still use the FQDN to locate the node dynamically without changing any cluster configuration. FQDNs also make it easy to access the same cluster uniformly from both the intranet and the Internet.
TDengine does not recommend accessing the cluster directly by IP address, as this complicates management. If the FQDN concept is unfamiliar, see the blog post ["One Article to Clarify FQDN in TDengine"](https://www.taosdata.com/blog/2020/09/11/1824.html).
@@ -4,11 +4,11 @@ title: Quick Experience with TDengine on Docker
description: Use Docker to quickly experience TDengine's efficient insert and query
---
This section first describes how to quickly try out TDengine with Docker, then how to experience TDengine's insert and query capabilities in a Docker environment. If you are not familiar with Docker, use the [installation package method](../../get-started/package/) instead. If you want to contribute code to TDengine or are interested in its internals, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine) to download the source, build, and install.
## Start TDengine
If Docker is already installed, simply run the following command:
```shell
docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
```
@@ -16,84 +16,84 @@ docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043
Note: The TDengine 3.0 server uses only TCP port 6030. Port 6041 is used by taosAdapter to provide the REST service. Ports 6043 through 6049 are used by taosAdapter for third-party application access and can be opened as needed.
Confirm that the container is up and running:
```shell
docker ps
```
Enter the container and run `bash`:
```shell
docker exec -it <container name> bash
```
You can then run the relevant Linux commands and access TDengine.
Note: For downloading and using the Docker tool itself, see the [official Docker documentation](https://docs.docker.com/get-docker/).
## Run the TDengine CLI
Enter the container and run `taos`:
```
$ taos
taos>
```
## Experience insert speed with taosBenchmark
You can use taosBenchmark, a tool that ships with TDengine, to quickly experience TDengine's insert speed.
Start the TDengine service, then run `taosBenchmark` (formerly named `taosdemo`) in a Linux or Windows terminal:
```bash
$ taosBenchmark
```
This command automatically creates a supertable `meters` under the database `test`, containing 10,000 subtables named `d0` to `d9999`. Each table has 10,000 records, and each record has the four fields `ts`, `current`, `voltage`, and `phase`. Timestamps range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table carries the tags `location` and `groupId`: groupId is set to 1 through 10, and location is set to `Campbell`, `Cupertino`, `Los Angeles`, `Mountain View`, `Palo Alto`, `San Diego`, `San Francisco`, `San Jose`, `Santa Clara`, or `Sunnyvale`.
This command quickly inserts 100 million records. The exact time depends on hardware performance; even an ordinary PC server often needs only ten to twenty seconds.
The taosBenchmark command has many options for configuring the number of tables, the number of records, and more. You can set different parameters to experiment; run `taosBenchmark --help` for a full list. For detailed usage, see [How to Use taosBenchmark to Test the Performance of TDengine](https://www.taosdata.com/2021/10/09/3111.html) and the [taosBenchmark reference](../../reference/taosbenchmark).
## Experience query speed with the TDengine CLI
After inserting data with `taosBenchmark` as above, you can enter queries in the TDengine CLI (`taos`) to experience the query speed.
Query the total number of records under the supertable `meters`:
```sql
SELECT COUNT(*) FROM test.meters;
```
Query the average, maximum, and minimum values of the 100 million records:
```sql
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
```
Query the total number of records where location = "San Francisco":
```sql
SELECT COUNT(*) FROM test.meters WHERE location = "San Francisco";
```
Query the average, maximum, and minimum values of all records where groupId = 10:
```sql
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters WHERE groupId = 10;
```
Compute the average, maximum, and minimum values for table `d10` in 10-second windows:
```sql
SELECT FIRST(ts), AVG(current), MAX(voltage), MIN(phase) FROM test.d10 INTERVAL(10s);
```
In the query above, you select the first timestamp (ts) in each window; an alternative is `_wstart`, which gives the start of the time window. For more on windowed queries, see [Time-Series Extensions](../../taos-sql/distinguished/).
## Other
For more details on using TDengine in a Docker environment, see [Using TDengine in Docker](../../reference/docker).
@@ -10,23 +10,24 @@ import PkgListV3 from "/components/PkgListV3";
You can [experience TDengine immediately with Docker](../../get-started/docker/). If you want to contribute code to TDengine or are interested in its internals, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine) to download the source, build, and install.
The full TDengine package includes the server (taosd), the application driver (taosc), taosAdapter for integrating with third-party systems and providing a RESTful interface, the command-line program (CLI, taos), and some tools. Currently taosAdapter can only be installed and run on Linux; support for Windows, macOS, and other systems will follow. Besides connectors for multiple languages, TDengine also provides a [RESTful interface](../../connector/rest-api/) through [taosAdapter](../../reference/taosadapter/).
For convenience, the standard server package includes taosd, taosAdapter, taosc, taos, taosdump, taosBenchmark, the TDinsight installation script, and sample code. If you only need the server program and C/C++ support for client connections, you can download the Lite package instead.
On Linux, the TDengine Community Edition provides installation packages in Deb and RPM formats; choose the one that suits your environment. Deb supports Debian/Ubuntu and their derivatives, and RPM supports CentOS/RHEL/SUSE and their derivatives. We also provide a tar.gz package for enterprise users, and installation from the online repository via `apt-get` is supported. Note that the RPM and Deb packages do not include `taosdump` or the TDinsight installation script; these tools are obtained by installing the taosTools package. TDengine also provides a package for the Windows x64 platform.
## Installation
<Tabs>
<TabItem label="Deb" value="debinst">
1. Download the Deb installation package from the list;
   <PkgListV3 type={6}/>
2. Go to the directory containing the package and run the following install command:
```bash
# Replace <version> with the version of the downloaded package
sudo dpkg -i TDengine-server-<version>-Linux-x64.deb
```
@@ -34,12 +35,13 @@ sudo dpkg -i TDengine-server-<version>-Linux-x64.deb
<TabItem label="RPM" value="rpminst">
1. Download the RPM installation package from the list;
   <PkgListV3 type={5}/>
2. Go to the directory containing the package and run the following install command:
```bash
# Replace <version> with the version of the downloaded package
sudo rpm -ivh TDengine-server-<version>-Linux-x64.rpm
```
@@ -48,44 +50,46 @@ sudo rpm -ivh TDengine-server-<version>-Linux-x64.rpm
<TabItem label="tar.gz" value="tarinst">
1. Download the tar.gz installation package from the list;
   <PkgListV3 type={0}/>
2. Go to the directory containing the package and decompress it with `tar`;
3. After decompressing, enter the subdirectory and run the `install.sh` installation script.
```bash
# Replace <version> with the version of the downloaded package
tar -zxvf TDengine-server-<version>-Linux-x64.tar.gz
```
After decompressing, enter the corresponding subdirectory and run the `install.sh` installation script:
```bash
sudo ./install.sh
```
:::info
While it runs, the install.sh script asks for some configuration information through an interactive command-line interface. For a non-interactive installation, run `./install.sh -e no`. Run `./install.sh -h` for detailed descriptions of all parameters.
:::
</TabItem>
<TabItem value="apt-get" label="apt-get">
You can install from the official repository with the `apt-get` tool.
**Configure the package repository**
```bash
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
```
To install a Beta version, configure the package repository as follows:
```bash
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
```
**Install with the `apt-get` command**
```bash
sudo apt-get update
@@ -94,26 +98,26 @@ sudo apt-get install tdengine
```
:::tip
The apt-get method applies only to Debian and Ubuntu systems.
:::
</TabItem>
<TabItem label="Windows" value="windows">
Note: Currently TDengine supports only Windows Server 2016/2019 and Windows 10/11 on the Windows platform.
1. Download the exe installer from the list;
   <PkgListV3 type={3}/>
2. Run the executable to install TDengine.
</TabItem>
</Tabs>
:::info
To download other components, the latest Beta version, or earlier releases, see the [release history page](../../releases/tdengine).
:::
:::note
When installing the first node and the `Enter FQDN:` prompt appears, you do not need to enter anything. Only when installing the second or later nodes do you need to enter the FQDN of any available node in the existing cluster, so the new node can join it. You can also leave it blank and configure it in the new node's configuration file before the node starts.
:::
@@ -148,7 +152,7 @@ Active: inactive (dead)
If the TDengine service is working normally, you can access and experience TDengine through the TDengine command-line program `taos`.
The following `systemctl` commands can help you manage the TDengine service:
- Start the service process: `systemctl start taosd`
@@ -160,7 +164,7 @@ Summary of systemctl commands:
:::info
- The `systemctl` command requires _root_ privileges. If you are not the _root_ user, add `sudo` before the command.
- The `systemctl stop taosd` command does not stop the TDengine service immediately; it waits for the necessary flush-to-disk work to complete. With large data volumes, this may take a long time.
- If the system does not support `systemd`, you can also start the TDengine service manually by running `/usr/local/taos/bin/taosd`.
@@ -170,87 +174,93 @@ Summary of systemctl commands:
<TabItem label="Windows" value="windows">
After installation, run `taosd.exe` in the `C:\TDengine` directory to start the TDengine service process.
</TabItem>
</Tabs>
## TDengine Command Line (CLI)
To check TDengine's status and run ad hoc queries against databases, TDengine provides the command-line application `taos` (the TDengine CLI). To enter the TDengine command line, simply run `taos` in a Linux terminal where TDengine is installed, or run taos.exe in the C:\TDengine directory of a Windows terminal where TDengine is installed.
```bash
taos
```
If the connection succeeds, a welcome message and version information are printed; if it fails, an error message is printed (see the [FAQ](/train-faq/faq) for help with connection failures). The TDengine CLI prompt is as follows:
```cmd
taos>
```
In the TDengine CLI, users can run SQL commands to create and delete databases and tables, and to insert into and query a database. SQL statements run in the terminal must end with a semicolon (;). For example:
```sql
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES ('2019-07-15 00:00:00', 10);
INSERT INTO t VALUES ('2019-07-15 01:00:00', 20);
SELECT * FROM t;
ts | speed |
========================================
2019-07-15 00:00:00.000 | 10 |
2019-07-15 01:00:00.000 | 20 |
Query OK, 2 row(s) in set (0.003128s)
```
Besides executing SQL statements, system administrators can use the TDengine CLI to check system status and add or remove user accounts. The TDengine CLI together with the application driver can also be installed and run standalone on Linux or Windows machines; for more details, see the [TDengine command line](../../reference/taos-shell/).
## Experience insert speed with taosBenchmark
You can use taosBenchmark, a tool that ships with TDengine, to quickly experience TDengine's insert speed.
Start the TDengine service, then run `taosBenchmark` (formerly named `taosdemo`) in a Linux or Windows terminal:
```bash
$ taosBenchmark
```
This command automatically creates a supertable `meters` under the database `test`, containing 10,000 subtables named `d0` to `d9999`. Each table has 10,000 records, and each record has the four fields `ts`, `current`, `voltage`, and `phase`. Timestamps range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table carries the tags `location` and `groupId`: groupId is set to 1 through 10, and location is set to `Campbell`, `Cupertino`, `Los Angeles`, `Mountain View`, `Palo Alto`, `San Diego`, `San Francisco`, `San Jose`, `Santa Clara`, or `Sunnyvale`.
This command finishes inserting the 100 million records quickly; the exact time depends on the hardware, but even an ordinary PC server usually needs only a dozen or so seconds.
The taosBenchmark command itself offers many options for configuring the number of tables, the number of records, and more; run `taosBenchmark --help` for the full list and experiment with different parameters. For detailed usage, see [How to use taosBenchmark to test the performance of TDengine](https://www.taosdata.com/2021/10/09/3111.html) and the [taosBenchmark reference](../../reference/taosbenchmark).
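For instance, a smaller run can be requested from the command line (a sketch; the `-t` option for the number of tables and `-n` for records per table are assumed to behave as listed in `taosBenchmark --help`):

```bash
# Hypothetical smaller run: 100 tables with 1,000 rows each
taosBenchmark -t 100 -n 1000
```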
## Experience Query Speed with the TDengine CLI
After inserting data with `taosBenchmark` as described above, you can enter queries in the TDengine CLI (`taos`) to experience the query speed.
Query the total number of records under the supertable `meters`:
```sql
SELECT COUNT(*) FROM test.meters;
```
Query the average, maximum, and minimum values over the 100 million records:
```sql
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
```
Query the number of records where location = "San Francisco":
```sql
SELECT COUNT(*) FROM test.meters WHERE location = "San Francisco";
```
Query the average, maximum, and minimum values of all records where groupId = 10:
```sql
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters WHERE groupId = 10;
```
Compute the average, maximum, and minimum values for table `d10`, aggregated in 10-second windows:
```sql
SELECT FIRST(ts), AVG(current), MAX(voltage), MIN(phase) FROM test.d10 INTERVAL(10s);
```
In the query above, the first timestamp (`ts`) within each interval is selected; an alternative is `_wstart`, which returns the start of the time window. For more on window queries, see [Distinguished Queries](../../taos-sql/distinguished/).
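A sketch of the `_wstart` variant of the same query (identical window semantics; only the reported timestamp column changes):

```sql
SELECT _wstart, AVG(current), MAX(voltage), MIN(phase) FROM test.d10 INTERVAL(10s);
```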
......@@ -41,14 +41,14 @@ TDengine 版本更新往往会增加新的功能特性,列表中的连接器
### Using the Native Interface (taosc)
| **Feature**                 | **Java**      | **Python** | **Go**        | **C#**        | **Node.js**   | **Rust**      |
| --------------------------- | ------------- | ---------- | ------------- | ------------- | ------------- | ------------- |
| **Connection management**   | Supported     | Supported  | Supported     | Supported     | Supported     | Supported     |
| **Regular query**           | Supported     | Supported  | Supported     | Supported     | Supported     | Supported     |
| **Parameter binding**       | Supported     | Supported  | Supported     | Supported     | Supported     | Supported     |
| **Data subscription (TMQ)** | Supported     | Supported  | Supported     | Supported     | Supported     | Supported     |
| **Schemaless**              | Supported     | Supported  | Supported     | Supported     | Supported     | Supported     |
| **DataFrame**               | Not supported | Supported  | Not supported | Not supported | Not supported | Not supported |
:::info
Because database framework conventions differ across programming languages, it does not follow that every C/C++ interface needs a corresponding language wrapper.
......@@ -56,16 +56,15 @@ TDengine 版本更新往往会增加新的功能特性,列表中的连接器
### Using the HTTP Interface (REST or WebSocket)
| **Feature**                        | **Java**          | **Python**        | **Go**            | **C#**            | **Node.js**       | **Rust**          |
| ---------------------------------- | ----------------- | ----------------- | ----------------- | ----------------- | ----------------- | ----------------- |
| **Connection management**          | Supported         | Supported         | Supported         | Supported         | Supported         | Supported         |
| **Regular query**                  | Supported         | Supported         | Supported         | Supported         | Supported         | Supported         |
| **Parameter binding**              | Not yet supported | Not yet supported | Not yet supported | Supported         | Not yet supported | Supported         |
| **Data subscription (TMQ)**        | Not yet supported | Not yet supported | Not yet supported | Not yet supported | Not yet supported | Supported         |
| **Schemaless**                     | Not yet supported | Not yet supported | Not yet supported | Not yet supported | Not yet supported | Not yet supported |
| **Bulk pulling (WebSocket-based)** | Supported         | Supported         | Not yet supported | Supported         | Not yet supported | Supported         |
| **DataFrame**                      | Not supported     | Supported         | Not supported     | Not supported     | Not supported     | Not supported     |
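As a rough illustration of the REST interface these connectors wrap, a SQL statement can also be submitted directly over HTTP (a sketch, assuming taosAdapter is listening on the default port 6041 with the default root/taosdata credentials):

```bash
curl -u root:taosdata -d "SELECT COUNT(*) FROM test.meters" \
  http://localhost:6041/rest/sql
```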
:::warning
......
......@@ -256,6 +256,7 @@ description: TDengine 保留关键字的详细列表
### V
- VALUE
- VALUES
- VARIABLE
- VARIABLES
......
......@@ -787,6 +787,7 @@ typedef struct {
int32_t sstTrigger;
int16_t hashPrefix;
int16_t hashSuffix;
int32_t tsdbPageSize;
} SCreateDbReq;
int32_t tSerializeSCreateDbReq(void* buf, int32_t bufLen, SCreateDbReq* pReq);
......
......@@ -89,240 +89,241 @@
#define TK_KEEP 71
#define TK_PAGES 72
#define TK_PAGESIZE 73
#define TK_PRECISION 74
#define TK_REPLICA 75
#define TK_STRICT 76
#define TK_VGROUPS 77
#define TK_SINGLE_STABLE 78
#define TK_RETENTIONS 79
#define TK_SCHEMALESS 80
#define TK_WAL_LEVEL 81
#define TK_WAL_FSYNC_PERIOD 82
#define TK_WAL_RETENTION_PERIOD 83
#define TK_WAL_RETENTION_SIZE 84
#define TK_WAL_ROLL_PERIOD 85
#define TK_WAL_SEGMENT_SIZE 86
#define TK_SST_TRIGGER 87
#define TK_TABLE_PREFIX 88
#define TK_TABLE_SUFFIX 89
#define TK_NK_COLON 90
#define TK_TABLE 91
#define TK_NK_LP 92
#define TK_NK_RP 93
#define TK_STABLE 94
#define TK_ADD 95
#define TK_COLUMN 96
#define TK_MODIFY 97
#define TK_RENAME 98
#define TK_TAG 99
#define TK_SET 100
#define TK_NK_EQ 101
#define TK_USING 102
#define TK_TAGS 103
#define TK_COMMENT 104
#define TK_BOOL 105
#define TK_TINYINT 106
#define TK_SMALLINT 107
#define TK_INT 108
#define TK_INTEGER 109
#define TK_BIGINT 110
#define TK_FLOAT 111
#define TK_DOUBLE 112
#define TK_BINARY 113
#define TK_TIMESTAMP 114
#define TK_NCHAR 115
#define TK_UNSIGNED 116
#define TK_JSON 117
#define TK_VARCHAR 118
#define TK_MEDIUMBLOB 119
#define TK_BLOB 120
#define TK_VARBINARY 121
#define TK_DECIMAL 122
#define TK_MAX_DELAY 123
#define TK_WATERMARK 124
#define TK_ROLLUP 125
#define TK_TTL 126
#define TK_SMA 127
#define TK_FIRST 128
#define TK_LAST 129
#define TK_SHOW 130
#define TK_DATABASES 131
#define TK_TABLES 132
#define TK_STABLES 133
#define TK_MNODES 134
#define TK_MODULES 135
#define TK_QNODES 136
#define TK_FUNCTIONS 137
#define TK_INDEXES 138
#define TK_ACCOUNTS 139
#define TK_APPS 140
#define TK_CONNECTIONS 141
#define TK_LICENCES 142
#define TK_GRANTS 143
#define TK_QUERIES 144
#define TK_SCORES 145
#define TK_TOPICS 146
#define TK_VARIABLES 147
#define TK_BNODES 148
#define TK_SNODES 149
#define TK_CLUSTER 150
#define TK_TRANSACTIONS 151
#define TK_DISTRIBUTED 152
#define TK_CONSUMERS 153
#define TK_SUBSCRIPTIONS 154
#define TK_VNODES 155
#define TK_LIKE 156
#define TK_INDEX 157
#define TK_FUNCTION 158
#define TK_INTERVAL 159
#define TK_TOPIC 160
#define TK_AS 161
#define TK_WITH 162
#define TK_META 163
#define TK_CONSUMER 164
#define TK_GROUP 165
#define TK_DESC 166
#define TK_DESCRIBE 167
#define TK_RESET 168
#define TK_QUERY 169
#define TK_CACHE 170
#define TK_EXPLAIN 171
#define TK_ANALYZE 172
#define TK_VERBOSE 173
#define TK_NK_BOOL 174
#define TK_RATIO 175
#define TK_NK_FLOAT 176
#define TK_OUTPUTTYPE 177
#define TK_AGGREGATE 178
#define TK_BUFSIZE 179
#define TK_STREAM 180
#define TK_INTO 181
#define TK_TRIGGER 182
#define TK_AT_ONCE 183
#define TK_WINDOW_CLOSE 184
#define TK_IGNORE 185
#define TK_EXPIRED 186
#define TK_KILL 187
#define TK_CONNECTION 188
#define TK_TRANSACTION 189
#define TK_BALANCE 190
#define TK_VGROUP 191
#define TK_MERGE 192
#define TK_REDISTRIBUTE 193
#define TK_SPLIT 194
#define TK_DELETE 195
#define TK_INSERT 196
#define TK_NULL 197
#define TK_NK_QUESTION 198
#define TK_NK_ARROW 199
#define TK_ROWTS 200
#define TK_TBNAME 201
#define TK_QSTART 202
#define TK_QEND 203
#define TK_QDURATION 204
#define TK_WSTART 205
#define TK_WEND 206
#define TK_WDURATION 207
#define TK_CAST 208
#define TK_NOW 209
#define TK_TODAY 210
#define TK_TIMEZONE 211
#define TK_CLIENT_VERSION 212
#define TK_SERVER_VERSION 213
#define TK_SERVER_STATUS 214
#define TK_CURRENT_USER 215
#define TK_COUNT 216
#define TK_LAST_ROW 217
#define TK_BETWEEN 218
#define TK_IS 219
#define TK_NK_LT 220
#define TK_NK_GT 221
#define TK_NK_LE 222
#define TK_NK_GE 223
#define TK_NK_NE 224
#define TK_MATCH 225
#define TK_NMATCH 226
#define TK_CONTAINS 227
#define TK_IN 228
#define TK_JOIN 229
#define TK_INNER 230
#define TK_SELECT 231
#define TK_DISTINCT 232
#define TK_WHERE 233
#define TK_PARTITION 234
#define TK_BY 235
#define TK_SESSION 236
#define TK_STATE_WINDOW 237
#define TK_SLIDING 238
#define TK_FILL 239
#define TK_VALUE 240
#define TK_NONE 241
#define TK_PREV 242
#define TK_LINEAR 243
#define TK_NEXT 244
#define TK_HAVING 245
#define TK_RANGE 246
#define TK_EVERY 247
#define TK_ORDER 248
#define TK_SLIMIT 249
#define TK_SOFFSET 250
#define TK_LIMIT 251
#define TK_OFFSET 252
#define TK_ASC 253
#define TK_NULLS 254
#define TK_ABORT 255
#define TK_AFTER 256
#define TK_ATTACH 257
#define TK_BEFORE 258
#define TK_BEGIN 259
#define TK_BITAND 260
#define TK_BITNOT 261
#define TK_BITOR 262
#define TK_BLOCKS 263
#define TK_CHANGE 264
#define TK_COMMA 265
#define TK_COMPACT 266
#define TK_CONCAT 267
#define TK_CONFLICT 268
#define TK_COPY 269
#define TK_DEFERRED 270
#define TK_DELIMITERS 271
#define TK_DETACH 272
#define TK_DIVIDE 273
#define TK_DOT 274
#define TK_EACH 275
#define TK_END 276
#define TK_FAIL 277
#define TK_FILE 278
#define TK_FOR 279
#define TK_GLOB 280
#define TK_ID 281
#define TK_IMMEDIATE 282
#define TK_IMPORT 283
#define TK_INITIALLY 284
#define TK_INSTEAD 285
#define TK_ISNULL 286
#define TK_KEY 287
#define TK_NK_BITNOT 288
#define TK_NK_SEMI 289
#define TK_NOTNULL 290
#define TK_OF 291
#define TK_PLUS 292
#define TK_PRIVILEGE 293
#define TK_RAISE 294
#define TK_REPLACE 295
#define TK_RESTRICT 296
#define TK_ROW 297
#define TK_SEMI 298
#define TK_STAR 299
#define TK_STATEMENT 300
#define TK_STRING 301
#define TK_TIMES 302
#define TK_UPDATE 303
#define TK_VALUES 304
#define TK_VARIABLE 305
#define TK_VIEW 306
#define TK_WAL 307
#define TK_TSDB_PAGESIZE 74
#define TK_PRECISION 75
#define TK_REPLICA 76
#define TK_STRICT 77
#define TK_VGROUPS 78
#define TK_SINGLE_STABLE 79
#define TK_RETENTIONS 80
#define TK_SCHEMALESS 81
#define TK_WAL_LEVEL 82
#define TK_WAL_FSYNC_PERIOD 83
#define TK_WAL_RETENTION_PERIOD 84
#define TK_WAL_RETENTION_SIZE 85
#define TK_WAL_ROLL_PERIOD 86
#define TK_WAL_SEGMENT_SIZE 87
#define TK_STT_TRIGGER 88
#define TK_TABLE_PREFIX 89
#define TK_TABLE_SUFFIX 90
#define TK_NK_COLON 91
#define TK_TABLE 92
#define TK_NK_LP 93
#define TK_NK_RP 94
#define TK_STABLE 95
#define TK_ADD 96
#define TK_COLUMN 97
#define TK_MODIFY 98
#define TK_RENAME 99
#define TK_TAG 100
#define TK_SET 101
#define TK_NK_EQ 102
#define TK_USING 103
#define TK_TAGS 104
#define TK_COMMENT 105
#define TK_BOOL 106
#define TK_TINYINT 107
#define TK_SMALLINT 108
#define TK_INT 109
#define TK_INTEGER 110
#define TK_BIGINT 111
#define TK_FLOAT 112
#define TK_DOUBLE 113
#define TK_BINARY 114
#define TK_TIMESTAMP 115
#define TK_NCHAR 116
#define TK_UNSIGNED 117
#define TK_JSON 118
#define TK_VARCHAR 119
#define TK_MEDIUMBLOB 120
#define TK_BLOB 121
#define TK_VARBINARY 122
#define TK_DECIMAL 123
#define TK_MAX_DELAY 124
#define TK_WATERMARK 125
#define TK_ROLLUP 126
#define TK_TTL 127
#define TK_SMA 128
#define TK_FIRST 129
#define TK_LAST 130
#define TK_SHOW 131
#define TK_DATABASES 132
#define TK_TABLES 133
#define TK_STABLES 134
#define TK_MNODES 135
#define TK_MODULES 136
#define TK_QNODES 137
#define TK_FUNCTIONS 138
#define TK_INDEXES 139
#define TK_ACCOUNTS 140
#define TK_APPS 141
#define TK_CONNECTIONS 142
#define TK_LICENCES 143
#define TK_GRANTS 144
#define TK_QUERIES 145
#define TK_SCORES 146
#define TK_TOPICS 147
#define TK_VARIABLES 148
#define TK_BNODES 149
#define TK_SNODES 150
#define TK_CLUSTER 151
#define TK_TRANSACTIONS 152
#define TK_DISTRIBUTED 153
#define TK_CONSUMERS 154
#define TK_SUBSCRIPTIONS 155
#define TK_VNODES 156
#define TK_LIKE 157
#define TK_INDEX 158
#define TK_FUNCTION 159
#define TK_INTERVAL 160
#define TK_TOPIC 161
#define TK_AS 162
#define TK_WITH 163
#define TK_META 164
#define TK_CONSUMER 165
#define TK_GROUP 166
#define TK_DESC 167
#define TK_DESCRIBE 168
#define TK_RESET 169
#define TK_QUERY 170
#define TK_CACHE 171
#define TK_EXPLAIN 172
#define TK_ANALYZE 173
#define TK_VERBOSE 174
#define TK_NK_BOOL 175
#define TK_RATIO 176
#define TK_NK_FLOAT 177
#define TK_OUTPUTTYPE 178
#define TK_AGGREGATE 179
#define TK_BUFSIZE 180
#define TK_STREAM 181
#define TK_INTO 182
#define TK_TRIGGER 183
#define TK_AT_ONCE 184
#define TK_WINDOW_CLOSE 185
#define TK_IGNORE 186
#define TK_EXPIRED 187
#define TK_KILL 188
#define TK_CONNECTION 189
#define TK_TRANSACTION 190
#define TK_BALANCE 191
#define TK_VGROUP 192
#define TK_MERGE 193
#define TK_REDISTRIBUTE 194
#define TK_SPLIT 195
#define TK_DELETE 196
#define TK_INSERT 197
#define TK_NULL 198
#define TK_NK_QUESTION 199
#define TK_NK_ARROW 200
#define TK_ROWTS 201
#define TK_TBNAME 202
#define TK_QSTART 203
#define TK_QEND 204
#define TK_QDURATION 205
#define TK_WSTART 206
#define TK_WEND 207
#define TK_WDURATION 208
#define TK_CAST 209
#define TK_NOW 210
#define TK_TODAY 211
#define TK_TIMEZONE 212
#define TK_CLIENT_VERSION 213
#define TK_SERVER_VERSION 214
#define TK_SERVER_STATUS 215
#define TK_CURRENT_USER 216
#define TK_COUNT 217
#define TK_LAST_ROW 218
#define TK_BETWEEN 219
#define TK_IS 220
#define TK_NK_LT 221
#define TK_NK_GT 222
#define TK_NK_LE 223
#define TK_NK_GE 224
#define TK_NK_NE 225
#define TK_MATCH 226
#define TK_NMATCH 227
#define TK_CONTAINS 228
#define TK_IN 229
#define TK_JOIN 230
#define TK_INNER 231
#define TK_SELECT 232
#define TK_DISTINCT 233
#define TK_WHERE 234
#define TK_PARTITION 235
#define TK_BY 236
#define TK_SESSION 237
#define TK_STATE_WINDOW 238
#define TK_SLIDING 239
#define TK_FILL 240
#define TK_VALUE 241
#define TK_NONE 242
#define TK_PREV 243
#define TK_LINEAR 244
#define TK_NEXT 245
#define TK_HAVING 246
#define TK_RANGE 247
#define TK_EVERY 248
#define TK_ORDER 249
#define TK_SLIMIT 250
#define TK_SOFFSET 251
#define TK_LIMIT 252
#define TK_OFFSET 253
#define TK_ASC 254
#define TK_NULLS 255
#define TK_ABORT 256
#define TK_AFTER 257
#define TK_ATTACH 258
#define TK_BEFORE 259
#define TK_BEGIN 260
#define TK_BITAND 261
#define TK_BITNOT 262
#define TK_BITOR 263
#define TK_BLOCKS 264
#define TK_CHANGE 265
#define TK_COMMA 266
#define TK_COMPACT 267
#define TK_CONCAT 268
#define TK_CONFLICT 269
#define TK_COPY 270
#define TK_DEFERRED 271
#define TK_DELIMITERS 272
#define TK_DETACH 273
#define TK_DIVIDE 274
#define TK_DOT 275
#define TK_EACH 276
#define TK_END 277
#define TK_FAIL 278
#define TK_FILE 279
#define TK_FOR 280
#define TK_GLOB 281
#define TK_ID 282
#define TK_IMMEDIATE 283
#define TK_IMPORT 284
#define TK_INITIALLY 285
#define TK_INSTEAD 286
#define TK_ISNULL 287
#define TK_KEY 288
#define TK_NK_BITNOT 289
#define TK_NK_SEMI 290
#define TK_NOTNULL 291
#define TK_OF 292
#define TK_PLUS 293
#define TK_PRIVILEGE 294
#define TK_RAISE 295
#define TK_REPLACE 296
#define TK_RESTRICT 297
#define TK_ROW 298
#define TK_SEMI 299
#define TK_STAR 300
#define TK_STATEMENT 301
#define TK_STRING 302
#define TK_TIMES 303
#define TK_UPDATE 304
#define TK_VALUES 305
#define TK_VARIABLE 306
#define TK_VIEW 307
#define TK_WAL 308
#define TK_NK_SPACE 300
#define TK_NK_COMMENT 301
......
......@@ -64,6 +64,7 @@ typedef struct SDatabaseOptions {
int64_t keep[3];
int32_t pages;
int32_t pagesize;
int32_t tsdbPageSize;
char precisionStr[3];
int8_t precision;
int8_t replica;
......
......@@ -300,6 +300,9 @@ typedef enum ELogicConditionType {
#define TSDB_DEFAULT_PAGES_PER_VNODE 256
#define TSDB_MIN_PAGESIZE_PER_VNODE 1 // unit KB
#define TSDB_MAX_PAGESIZE_PER_VNODE 16384
#define TSDB_DEFAULT_TSDB_PAGESIZE 4
#define TSDB_MIN_TSDB_PAGESIZE 1 // unit KB
#define TSDB_MAX_TSDB_PAGESIZE 16384
#define TSDB_DEFAULT_PAGESIZE_PER_VNODE 4
#define TSDB_MIN_DAYS_PER_FILE 60 // unit minute
#define TSDB_MAX_DAYS_PER_FILE (3650 * 1440)
......
......@@ -2038,6 +2038,7 @@ int32_t tSerializeSCreateDbReq(void *buf, int32_t bufLen, SCreateDbReq *pReq) {
if (tEncodeI8(&encoder, pRetension->freqUnit) < 0) return -1;
if (tEncodeI8(&encoder, pRetension->keepUnit) < 0) return -1;
}
if (tEncodeI32(&encoder, pReq->tsdbPageSize) < 0) return -1;
tEndEncode(&encoder);
int32_t tlen = encoder.pos;
......@@ -2098,6 +2099,8 @@ int32_t tDeserializeSCreateDbReq(void *buf, int32_t bufLen, SCreateDbReq *pReq)
}
}
if (tDecodeI32(&decoder, &pReq->tsdbPageSize) < 0) return -1;
tEndDecode(&decoder);
tDecoderClear(&decoder);
......
......@@ -113,8 +113,9 @@ struct SRSmaStat {
volatile int64_t nBufItems; // number of items in queue buffer
SRWLatch lock; // r/w lock for rsma fs(e.g. qtaskinfo)
volatile int32_t nFetchAll; // active number of fetch all
int8_t triggerStat; // shared by fetch tasks
int8_t commitStat; // 0 not in committing, 1 in committing
volatile int8_t triggerStat; // shared by fetch tasks
volatile int8_t commitStat; // 0 not in committing, 1 in committing
volatile int8_t delFlag; // 0 no deleted SRSmaInfo, 1 has deleted SRSmaInfo
SRSmaFS fs; // for recovery/snapshot r/w
SHashObj *infoHash; // key: suid, value: SRSmaInfo
tsem_t notEmpty; // has items in queue buffer
......@@ -142,7 +143,7 @@ struct SRSmaInfoItem {
int8_t level : 4;
int8_t fetchLevel : 4;
int8_t triggerStat;
uint16_t nSkipped;
uint16_t nScanned;
int32_t maxDelay; // ms
tmr_h tmrId;
};
......@@ -196,16 +197,21 @@ typedef enum {
int32_t tdCheckAndInitSmaEnv(SSma *pSma, int8_t smaType);
void tdDestroySmaEnv(SSmaEnv *pSmaEnv);
void *tdFreeSmaEnv(SSmaEnv *pSmaEnv);
int32_t tdRefSmaStat(SSma *pSma, SSmaStat *pStat);
int32_t tdUnRefSmaStat(SSma *pSma, SSmaStat *pStat);
int32_t tdLockSma(SSma *pSma);
int32_t tdUnLockSma(SSma *pSma);
void *tdAcquireSmaRef(int32_t rsetId, int64_t refId);
int32_t tdReleaseSmaRef(int32_t rsetId, int64_t refId);
static FORCE_INLINE void tdRefSmaStat(SSma *pSma, SSmaStat *pStat) {
int32_t ref = T_REF_INC(pStat);
smaDebug("vgId:%d, ref sma stat:%p, val:%d", SMA_VID(pSma), pStat, ref);
}
static FORCE_INLINE void tdUnRefSmaStat(SSma *pSma, SSmaStat *pStat) {
int32_t ref = T_REF_DEC(pStat);
smaDebug("vgId:%d, unref sma stat:%p, val:%d", SMA_VID(pSma), pStat, ref);
}
// rsma
int32_t tdRefRSmaInfo(SSma *pSma, SRSmaInfo *pRSmaInfo);
int32_t tdUnRefRSmaInfo(SSma *pSma, SRSmaInfo *pRSmaInfo);
void *tdFreeRSmaInfo(SSma *pSma, SRSmaInfo *pInfo, bool isDeepFree);
int32_t tdRSmaFSOpen(SSma *pSma, int64_t version);
void tdRSmaFSClose(SRSmaFS *fs);
......@@ -218,9 +224,17 @@ int32_t tdRSmaProcessCreateImpl(SSma *pSma, SRSmaParam *param, int64_t suid, con
int32_t tdRSmaProcessExecImpl(SSma *pSma, ERsmaExecType type);
int32_t tdRSmaPersistExecImpl(SRSmaStat *pRSmaStat, SHashObj *pInfoHash);
int32_t tdRSmaProcessRestoreImpl(SSma *pSma, int8_t type, int64_t qtaskFileVer);
void tdRSmaQTaskInfoGetFileName(int32_t vid, int64_t version, char *outputName);
void tdRSmaQTaskInfoGetFullName(int32_t vid, int64_t version, const char *path, char *outputName);
void tdRSmaQTaskInfoGetFileName(int32_t vid, int64_t version, char *outputName);
void tdRSmaQTaskInfoGetFullName(int32_t vid, int64_t version, const char *path, char *outputName);
static FORCE_INLINE void tdRefRSmaInfo(SSma *pSma, SRSmaInfo *pRSmaInfo) {
int32_t ref = T_REF_INC(pRSmaInfo);
smaDebug("vgId:%d, ref rsma info:%p, val:%d", SMA_VID(pSma), pRSmaInfo, ref);
}
static FORCE_INLINE void tdUnRefRSmaInfo(SSma *pSma, SRSmaInfo *pRSmaInfo) {
int32_t ref = T_REF_DEC(pRSmaInfo);
smaDebug("vgId:%d, unref rsma info:%p, val:%d", SMA_VID(pSma), pRSmaInfo, ref);
}
// smaFileUtil ================
......
......@@ -244,6 +244,7 @@ int metaDropSTable(SMeta *pMeta, int64_t verison, SVDropStbReq *pReq, SArray *tb
// check if super table exists
rc = tdbTbGet(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, &pData, &nData);
if (rc < 0 || *(tb_uid_t *)pData != pReq->suid) {
tdbFree(pData);
terrno = TSDB_CODE_TDB_STB_NOT_EXIST;
return -1;
}
......@@ -309,7 +310,7 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
int64_t oversion;
SDecoder dc = {0};
int32_t ret;
int32_t c;
int32_t c = -2;
tdbTbcOpen(pMeta->pUidIdx, &pUidIdxc, &pMeta->txn);
ret = tdbTbcMoveTo(pUidIdxc, &pReq->suid, sizeof(tb_uid_t), &c);
......@@ -416,20 +417,22 @@ int metaCreateTable(SMeta *pMeta, int64_t version, SVCreateTbReq *pReq, STableMe
me.ctbEntry.pTags = pReq->ctb.pTag;
#ifdef TAG_FILTER_DEBUG
SArray* pTagVals = NULL;
int32_t code = tTagToValArray((STag*)pReq->ctb.pTag, &pTagVals);
SArray *pTagVals = NULL;
int32_t code = tTagToValArray((STag *)pReq->ctb.pTag, &pTagVals);
for (int i = 0; i < taosArrayGetSize(pTagVals); i++) {
STagVal* pTagVal = (STagVal*)taosArrayGet(pTagVals, i);
STagVal *pTagVal = (STagVal *)taosArrayGet(pTagVals, i);
if (IS_VAR_DATA_TYPE(pTagVal->type)) {
char* buf = taosMemoryCalloc(pTagVal->nData + 1, 1);
char *buf = taosMemoryCalloc(pTagVal->nData + 1, 1);
memcpy(buf, pTagVal->pData, pTagVal->nData);
metaDebug("metaTag table:%s varchar index:%d cid:%d type:%d value:%s", pReq->name, i, pTagVal->cid, pTagVal->type, buf);
metaDebug("metaTag table:%s varchar index:%d cid:%d type:%d value:%s", pReq->name, i, pTagVal->cid,
pTagVal->type, buf);
taosMemoryFree(buf);
} else {
double val = 0;
GET_TYPED_DATA(val, double, pTagVal->type, &pTagVal->i64);
metaDebug("metaTag table:%s number index:%d cid:%d type:%d value:%f", pReq->name, i, pTagVal->cid, pTagVal->type, val);
metaDebug("metaTag table:%s number index:%d cid:%d type:%d value:%f", pReq->name, i, pTagVal->cid,
pTagVal->type, val);
}
}
#endif
......
......@@ -210,15 +210,18 @@ static int32_t tdUpdateQTaskInfoFiles(SSma *pSma, SRSmaStat *pStat) {
}
if (fsMaxVer < committed) {
SQTaskFile qFile = {.nRef = 1, .padding = 0, .version = committed, .size = 0};
if (taosArrayPush(pFS->aQTaskInf, &qFile) < 0) {
taosWUnLockLatch(RSMA_FS_LOCK(pStat));
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
tdRSmaQTaskInfoGetFullName(TD_VID(pVnode), committed, tfsGetPrimaryPath(pVnode->pTfs), qTaskInfoFullName);
if (taosCheckExistFile(qTaskInfoFullName)) {
SQTaskFile qFile = {.nRef = 1, .padding = 0, .version = committed, .size = 0};
if (taosArrayPush(pFS->aQTaskInf, &qFile) < 0) {
taosWUnLockLatch(RSMA_FS_LOCK(pStat));
terrno = TSDB_CODE_OUT_OF_MEMORY;
return TSDB_CODE_FAILED;
}
}
} else {
smaDebug("vgId:%d, update qinf, no need as committed %" PRIi64 " not larger than fsMaxVer %" PRIi64, TD_VID(pVnode),
committed, fsMaxVer);
committed, fsMaxVer);
}
taosWUnLockLatch(RSMA_FS_LOCK(pStat));
......@@ -314,12 +317,12 @@ static int32_t tdProcessRSmaAsyncPreCommitImpl(SSma *pSma) {
if (tdRSmaPersistExecImpl(pRSmaStat, RSMA_INFO_HASH(pRSmaStat)) < 0) {
return TSDB_CODE_FAILED;
}
smaInfo("vgId:%d, rsma commit, operator state commited, TID:%p", SMA_VID(pSma), (void *)taosGetSelfPthreadId());
smaInfo("vgId:%d, rsma commit, operator state committed, TID:%p", SMA_VID(pSma), (void *)taosGetSelfPthreadId());
#if 0 // consuming task of qTaskInfo clone
// step 4: swap queue/qall and iQueue/iQall
// lock
// taosWLockLatch(SMA_ENV_LOCK(pEnv));
taosWLockLatch(SMA_ENV_LOCK(pEnv));
ASSERT(RSMA_INFO_HASH(pRSmaStat));
......@@ -335,7 +338,7 @@ static int32_t tdProcessRSmaAsyncPreCommitImpl(SSma *pSma) {
}
// unlock
// taosWUnLockLatch(SMA_ENV_LOCK(pEnv));
taosWUnLockLatch(SMA_ENV_LOCK(pEnv));
#endif
return TSDB_CODE_SUCCESS;
......@@ -380,25 +383,26 @@ static int32_t tdProcessRSmaAsyncPostCommitImpl(SSma *pSma) {
// step 1: merge qTaskInfo and iQTaskInfo
// lock
// taosWLockLatch(SMA_ENV_LOCK(pEnv));
void *pIter = NULL;
while ((pIter = taosHashIterate(RSMA_INFO_HASH(pRSmaStat), pIter))) {
tb_uid_t *pSuid = (tb_uid_t *)taosHashGetKey(pIter, NULL);
SRSmaInfo *pRSmaInfo = *(SRSmaInfo **)pIter;
if (RSMA_INFO_IS_DEL(pRSmaInfo)) {
int32_t refVal = T_REF_VAL_GET(pRSmaInfo);
if (refVal == 0) {
taosHashRemove(RSMA_INFO_HASH(pRSmaStat), pSuid, sizeof(*pSuid));
} else {
smaDebug(
"vgId:%d, rsma async post commit, not free rsma info since ref is %d although already deleted for "
"table:%" PRIi64,
SMA_VID(pSma), refVal, *pSuid);
if (1 == atomic_val_compare_exchange_8(&pRSmaStat->delFlag, 1, 0)) {
taosWLockLatch(SMA_ENV_LOCK(pEnv));
void *pIter = NULL;
while ((pIter = taosHashIterate(RSMA_INFO_HASH(pRSmaStat), pIter))) {
tb_uid_t *pSuid = (tb_uid_t *)taosHashGetKey(pIter, NULL);
SRSmaInfo *pRSmaInfo = *(SRSmaInfo **)pIter;
if (RSMA_INFO_IS_DEL(pRSmaInfo)) {
int32_t refVal = T_REF_VAL_GET(pRSmaInfo);
if (refVal == 0) {
taosHashRemove(RSMA_INFO_HASH(pRSmaStat), pSuid, sizeof(*pSuid));
} else {
smaDebug(
"vgId:%d, rsma async post commit, not free rsma info since ref is %d although already deleted for "
"table:%" PRIi64,
SMA_VID(pSma), refVal, *pSuid);
}
continue;
}
continue;
}
#if 0
if (pRSmaInfo->taskInfo[0]) {
if (pRSmaInfo->iTaskInfo[0]) {
......@@ -413,10 +417,11 @@ static int32_t tdProcessRSmaAsyncPostCommitImpl(SSma *pSma) {
taosHashPut(RSMA_INFO_HASH(pRSmaStat), pSuid, sizeof(tb_uid_t), pIter, sizeof(pIter));
smaDebug("vgId:%d, rsma async post commit, migrated from iRsmaInfoHash for table:%" PRIi64, SMA_VID(pSma), *pSuid);
#endif
}
}
// unlock
// taosWUnLockLatch(SMA_ENV_LOCK(pEnv));
// unlock
taosWUnLockLatch(SMA_ENV_LOCK(pEnv));
}
tdUpdateQTaskInfoFiles(pSma, pRSmaStat);
......
......@@ -177,39 +177,6 @@ void *tdFreeSmaEnv(SSmaEnv *pSmaEnv) {
return NULL;
}
int32_t tdRefSmaStat(SSma *pSma, SSmaStat *pStat) {
if (!pStat) return 0;
int ref = T_REF_INC(pStat);
smaDebug("vgId:%d, ref sma stat:%p, val:%d", SMA_VID(pSma), pStat, ref);
return 0;
}
int32_t tdUnRefSmaStat(SSma *pSma, SSmaStat *pStat) {
if (!pStat) return 0;
int ref = T_REF_DEC(pStat);
smaDebug("vgId:%d, unref sma stat:%p, val:%d", SMA_VID(pSma), pStat, ref);
return 0;
}
int32_t tdRefRSmaInfo(SSma *pSma, SRSmaInfo *pRSmaInfo) {
if (!pRSmaInfo) return 0;
int ref = T_REF_INC(pRSmaInfo);
smaDebug("vgId:%d, ref rsma info:%p, val:%d", SMA_VID(pSma), pRSmaInfo, ref);
return 0;
}
int32_t tdUnRefRSmaInfo(SSma *pSma, SRSmaInfo *pRSmaInfo) {
if (!pRSmaInfo) return 0;
int ref = T_REF_DEC(pRSmaInfo);
smaDebug("vgId:%d, unref rsma info:%p, val:%d", SMA_VID(pSma), pRSmaInfo, ref);
return 0;
}
static void tRSmaInfoHashFreeNode(void *data) {
SRSmaInfo *pRSmaInfo = NULL;
SRSmaInfoItem *pItem = NULL;
......@@ -492,6 +459,8 @@ static int32_t tdRsmaStopExecutor(const SSma *pSma) {
taosThreadJoin(pthread[i], NULL);
}
}
smaInfo("vgId:%d, rsma executor stopped, number:%d", SMA_VID(pSma), tsNumOfVnodeRsmaThreads);
}
return 0;
}
\ No newline at end of file
......@@ -19,7 +19,7 @@
#define RSMA_QTASKINFO_HEAD_LEN (sizeof(int32_t) + sizeof(int8_t) + sizeof(int64_t)) // len + type + suid
#define RSMA_QTASKEXEC_SMOOTH_SIZE (100) // cnt
#define RSMA_SUBMIT_BATCH_SIZE (1024) // cnt
#define RSMA_FETCH_DELAY_MAX (900000) // ms
#define RSMA_FETCH_DELAY_MAX (120000) // ms
#define RSMA_FETCH_ACTIVE_MAX (1000) // ms
#define RSMA_FETCH_INTERVAL (5000) // ms
......@@ -302,7 +302,7 @@ static int32_t tdSetRSmaInfoItemParams(SSma *pSma, SRSmaParam *param, SRSmaStat
return TSDB_CODE_FAILED;
}
SRSmaInfoItem *pItem = &(pRSmaInfo->items[idx]);
pItem->triggerStat = TASK_TRIGGER_STAT_INACTIVE;
pItem->triggerStat = TASK_TRIGGER_STAT_ACTIVE; // fetch the data when reboot
if (param->maxdelay[idx] < TSDB_MIN_ROLLUP_MAX_DELAY) {
int64_t msInterval =
convertTimeFromPrecisionToUnit(pRetention[idx + 1].freq, pTsdbCfg->precision, TIME_UNIT_MILLISECOND);
......@@ -320,7 +320,9 @@ static int32_t tdSetRSmaInfoItemParams(SSma *pSma, SRSmaParam *param, SRSmaStat
SRSmaRef rsmaRef = {.refId = pStat->refId, .suid = pRSmaInfo->suid};
taosHashPut(smaMgmt.refHash, &pItem, POINTER_BYTES, &rsmaRef, sizeof(rsmaRef));
taosTmrReset(tdRSmaFetchTrigger, pItem->maxDelay, pItem, smaMgmt.tmrHandle, &pItem->tmrId);
pItem->fetchLevel = pItem->level;
taosTmrReset(tdRSmaFetchTrigger, RSMA_FETCH_INTERVAL, pItem, smaMgmt.tmrHandle, &pItem->tmrId);
smaInfo("vgId:%d, item:%p table:%" PRIi64 " level:%" PRIi8 " maxdelay:%" PRIi64 " watermark:%" PRIi64
", finally maxdelay:%" PRIi32,
......@@ -470,6 +472,7 @@ int32_t tdProcessRSmaDrop(SSma *pSma, SVDropStbReq *pReq) {
}
// set del flag for data in mem
atomic_store_8(&pRSmaStat->delFlag, 1);
RSMA_INFO_SET_DEL(pRSmaInfo);
tdUnRefRSmaInfo(pSma, pRSmaInfo);
......@@ -939,25 +942,25 @@ static SRSmaInfo *tdAcquireRSmaInfoBySuid(SSma *pSma, int64_t suid) {
return NULL;
}
// taosRLockLatch(SMA_ENV_LOCK(pEnv));
taosRLockLatch(SMA_ENV_LOCK(pEnv));
pRSmaInfo = taosHashGet(RSMA_INFO_HASH(pStat), &suid, sizeof(tb_uid_t));
if (pRSmaInfo && (pRSmaInfo = *(SRSmaInfo **)pRSmaInfo)) {
if (RSMA_INFO_IS_DEL(pRSmaInfo)) {
// taosRUnLockLatch(SMA_ENV_LOCK(pEnv));
taosRUnLockLatch(SMA_ENV_LOCK(pEnv));
return NULL;
}
if (!pRSmaInfo->taskInfo[0]) {
if (tdRSmaInfoClone(pSma, pRSmaInfo) < 0) {
// taosRUnLockLatch(SMA_ENV_LOCK(pEnv));
taosRUnLockLatch(SMA_ENV_LOCK(pEnv));
return NULL;
}
}
tdRefRSmaInfo(pSma, pRSmaInfo);
// taosRUnLockLatch(SMA_ENV_LOCK(pEnv));
taosRUnLockLatch(SMA_ENV_LOCK(pEnv));
ASSERT(pRSmaInfo->suid == suid);
return pRSmaInfo;
}
// taosRUnLockLatch(SMA_ENV_LOCK(pEnv));
taosRUnLockLatch(SMA_ENV_LOCK(pEnv));
return NULL;
}
......@@ -1709,21 +1712,23 @@ static int32_t tdRSmaFetchAllResult(SSma *pSma, SRSmaInfo *pInfo) {
continue;
}
int64_t curMs = taosGetTimestampMs();
if ((pItem->nSkipped * pItem->maxDelay) > RSMA_FETCH_DELAY_MAX) {
smaDebug("vgId:%d, suid:%" PRIi64 " level:%" PRIi8 " nSkipped:%" PRIi8 " maxDelay:%d, fetch executed",
SMA_VID(pSma), pInfo->suid, i, pItem->nSkipped, pItem->maxDelay);
} else if (((curMs - pInfo->lastRecv) < RSMA_FETCH_ACTIVE_MAX)) {
++pItem->nSkipped;
smaTrace("vgId:%d, suid:%" PRIi64 " level:%" PRIi8 " curMs:%" PRIi64 " lastRecv:%" PRIi64 ", fetch skipped ",
SMA_VID(pSma), pInfo->suid, i, curMs, pInfo->lastRecv);
continue;
if ((++pItem->nScanned * pItem->maxDelay) > RSMA_FETCH_DELAY_MAX) {
smaDebug("vgId:%d, suid:%" PRIi64 " level:%" PRIi8 " nScanned:%" PRIi8 " maxDelay:%d, fetch executed",
SMA_VID(pSma), pInfo->suid, i, pItem->nScanned, pItem->maxDelay);
} else {
smaDebug("vgId:%d, suid:%" PRIi64 " level:%" PRIi8 " curMs:%" PRIi64 " lastRecv:%" PRIi64 ", fetch executed ",
SMA_VID(pSma), pInfo->suid, i, curMs, pInfo->lastRecv);
int64_t curMs = taosGetTimestampMs();
if ((curMs - pInfo->lastRecv) < RSMA_FETCH_ACTIVE_MAX) {
smaTrace("vgId:%d, suid:%" PRIi64 " level:%" PRIi8 " curMs:%" PRIi64 " lastRecv:%" PRIi64 ", fetch skipped ",
SMA_VID(pSma), pInfo->suid, i, curMs, pInfo->lastRecv);
atomic_store_8(&pItem->triggerStat, TASK_TRIGGER_STAT_ACTIVE); // restore the active stat
continue;
} else {
smaDebug("vgId:%d, suid:%" PRIi64 " level:%" PRIi8 " curMs:%" PRIi64 " lastRecv:%" PRIi64 ", fetch executed ",
SMA_VID(pSma), pInfo->suid, i, curMs, pInfo->lastRecv);
}
}
pItem->nSkipped = 0;
pItem->nScanned = 0;
if ((terrno = qSetMultiStreamInput(taskInfo, &dataBlock, 1, STREAM_INPUT__DATA_BLOCK)) < 0) {
goto _err;
......@@ -1734,12 +1739,12 @@ static int32_t tdRSmaFetchAllResult(SSma *pSma, SRSmaInfo *pInfo) {
}
tdCleanupStreamInputDataBlock(taskInfo);
smaInfo("vgId:%d, suid:%" PRIi64 " level:%" PRIi8 " nSkipped:%" PRIi8 " maxDelay:%d, fetch finished",
SMA_VID(pSma), pInfo->suid, i, pItem->nSkipped, pItem->maxDelay);
smaDebug("vgId:%d, suid:%" PRIi64 " level:%" PRIi8 " nScanned:%" PRIi8 " maxDelay:%d, fetch finished",
SMA_VID(pSma), pInfo->suid, i, pItem->nScanned, pItem->maxDelay);
} else {
smaDebug("vgId:%d, suid:%" PRIi64 " level:%" PRIi8 " nSkipped:%" PRIi8
smaDebug("vgId:%d, suid:%" PRIi64 " level:%" PRIi8 " nScanned:%" PRIi8
" maxDelay:%d, fetch not executed as fetch level is %" PRIi8,
SMA_VID(pSma), pInfo->suid, i, pItem->nSkipped, pItem->maxDelay, pItem->fetchLevel);
SMA_VID(pSma), pInfo->suid, i, pItem->nScanned, pItem->maxDelay, pItem->fetchLevel);
}
}
......@@ -1829,6 +1834,7 @@ int32_t tdRSmaProcessExecImpl(SSma *pSma, ERsmaExecType type) {
bool occupied = (batchMax <= 1);
if (batchMax > 1) {
batchMax = 100 / batchMax;
batchMax = TMAX(batchMax, 4);
}
while (occupied || (++batchCnt < batchMax)) { // greedy mode
taosReadAllQitems(pInfo->queue, pInfo->qall); // queue has mutex lock
......@@ -1838,13 +1844,15 @@ int32_t tdRSmaProcessExecImpl(SSma *pSma, ERsmaExecType type) {
smaDebug("vgId:%d, batchSize:%d, execType:%" PRIi8, SMA_VID(pSma), qallItemSize, type);
}
int8_t oldStat = atomic_val_compare_exchange_8(RSMA_COMMIT_STAT(pRSmaStat), 0, 2);
if (oldStat == 0 ||
((oldStat == 2) && atomic_load_8(RSMA_TRIGGER_STAT(pRSmaStat)) < TASK_TRIGGER_STAT_PAUSED)) {
atomic_fetch_add_32(&pRSmaStat->nFetchAll, 1);
tdRSmaFetchAllResult(pSma, pInfo);
if (0 == atomic_sub_fetch_32(&pRSmaStat->nFetchAll, 1)) {
atomic_store_8(RSMA_COMMIT_STAT(pRSmaStat), 0);
if (RSMA_INFO_ITEM(pInfo, 0)->fetchLevel || RSMA_INFO_ITEM(pInfo, 1)->fetchLevel) {
int8_t oldStat = atomic_val_compare_exchange_8(RSMA_COMMIT_STAT(pRSmaStat), 0, 2);
if (oldStat == 0 ||
((oldStat == 2) && atomic_load_8(RSMA_TRIGGER_STAT(pRSmaStat)) < TASK_TRIGGER_STAT_PAUSED)) {
atomic_fetch_add_32(&pRSmaStat->nFetchAll, 1);
tdRSmaFetchAllResult(pSma, pInfo);
if (0 == atomic_sub_fetch_32(&pRSmaStat->nFetchAll, 1)) {
atomic_store_8(RSMA_COMMIT_STAT(pRSmaStat), 0);
}
}
}
......@@ -1917,7 +1925,7 @@ int32_t tdRSmaProcessExecImpl(SSma *pSma, ERsmaExecType type) {
tsem_wait(&pRSmaStat->notEmpty);
if ((pEnv->flag & SMA_ENV_FLG_CLOSE) && (atomic_load_64(&pRSmaStat->nBufItems) <= 0)) {
smaInfo("vgId:%d, exec task end, flag:%" PRIi8 ", nBufItems:%" PRIi64, SMA_VID(pSma), pEnv->flag,
smaDebug("vgId:%d, exec task end, flag:%" PRIi8 ", nBufItems:%" PRIi64, SMA_VID(pSma), pEnv->flag,
atomic_load_64(&pRSmaStat->nBufItems));
break;
}
......
......@@ -178,7 +178,6 @@ static int32_t tdProcessTSmaInsertImpl(SSma *pSma, int64_t indexUid, const char
return TSDB_CODE_FAILED;
}
tdRefSmaStat(pSma, pStat);
pTsmaStat = SMA_STAT_TSMA(pStat);
if (!pTsmaStat->pTSma) {
......@@ -230,9 +229,7 @@ static int32_t tdProcessTSmaInsertImpl(SSma *pSma, int64_t indexUid, const char
goto _err;
}
tdUnRefSmaStat(pSma, pStat);
return TSDB_CODE_SUCCESS;
_err:
tdUnRefSmaStat(pSma, pStat);
return TSDB_CODE_FAILED;
}
......@@ -279,7 +279,7 @@ static int32_t tsdbScanAndTryFixFS(STsdb *pTsdb) {
goto _err;
}
if (size != pTsdb->fs.pDelFile->size) {
if (size != tsdbLogicToFileSize(pTsdb->fs.pDelFile->size, pTsdb->pVnode->config.tsdbPageSize)) {
code = TSDB_CODE_FILE_CORRUPTED;
goto _err;
}
......
......@@ -2815,6 +2815,7 @@ static STsdb* getTsdbByRetentions(SVnode* pVnode, TSKEY winSKey, SRetention* ret
if (VND_IS_RSMA(pVnode)) {
int8_t level = 0;
int64_t now = taosGetTimestamp(pVnode->config.tsdbCfg.precision);
int64_t offset = TSDB_TICK_PER_SECOND(pVnode->config.tsdbCfg.precision);
for (int8_t i = 0; i < TSDB_RETENTION_MAX; ++i) {
SRetention* pRetention = retentions + level;
......@@ -2824,7 +2825,7 @@ static STsdb* getTsdbByRetentions(SVnode* pVnode, TSKEY winSKey, SRetention* ret
}
break;
}
if ((now - pRetention->keep) <= winSKey) {
if ((now - pRetention->keep) <= (winSKey + offset)) {
break;
}
++level;
......
......@@ -3445,8 +3445,7 @@ static bool isWstartColumnExist(SFillOperatorInfo* pInfo) {
}
for (int32_t i = 0; i < pInfo->numOfNotFillExpr; ++i) {
SExprInfo* exprInfo = pInfo->pNotFillExprInfo + i;
if (exprInfo->pExpr->nodeType == QUERY_NODE_COLUMN &&
exprInfo->base.numOfParams == 1 &&
if (exprInfo->pExpr->nodeType == QUERY_NODE_COLUMN && exprInfo->base.numOfParams == 1 &&
exprInfo->base.pParam[0].pCol->colType == COLUMN_TYPE_WINDOW_START) {
return true;
}
......@@ -3462,7 +3461,8 @@ static int32_t createWStartTsAsNotFillExpr(SFillOperatorInfo* pInfo, SFillPhysiN
return TSDB_CODE_QRY_SYS_ERROR;
}
SExprInfo* notFillExprs = taosMemoryRealloc(pInfo->pNotFillExprInfo, (pInfo->numOfNotFillExpr + 1) * sizeof(SExprInfo));
SExprInfo* notFillExprs =
taosMemoryRealloc(pInfo->pNotFillExprInfo, (pInfo->numOfNotFillExpr + 1) * sizeof(SExprInfo));
if (notFillExprs == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
}
......@@ -3473,7 +3473,7 @@ static int32_t createWStartTsAsNotFillExpr(SFillOperatorInfo* pInfo, SFillPhysiN
pInfo->pNotFillExprInfo = notFillExprs;
return TSDB_CODE_SUCCESS;
}
return TSDB_CODE_SUCCESS;
}
......@@ -3513,8 +3513,8 @@ SOperatorInfo* createFillOperatorInfo(SOperatorInfo* downstream, SFillPhysiNode*
&numOfOutputCols, COL_MATCH_FROM_SLOT_ID);
code = initFillInfo(pInfo, pExprInfo, pInfo->numOfExpr, pInfo->pNotFillExprInfo, pInfo->numOfNotFillExpr,
(SNodeListNode*)pPhyFillNode->pValues, pPhyFillNode->timeRange, pResultInfo->capacity,
pTaskInfo->id.str, pInterval, type, order);
(SNodeListNode*)pPhyFillNode->pValues, pPhyFillNode->timeRange, pResultInfo->capacity,
pTaskInfo->id.str, pInterval, type, order);
if (code != TSDB_CODE_SUCCESS) {
goto _error;
}
......@@ -4461,6 +4461,9 @@ int32_t setOutputBuf(STimeWindow* win, SResultRow** pResult, int64_t tableGroupI
};
char* value = NULL;
int32_t size = pAggSup->resultRowSize;
/*if (streamStateGet(pTaskInfo->streamInfo.pState, &key, (void**)&value, &size) < 0) {*/
/*value = taosMemoryCalloc(1, size);*/
/*}*/
if (streamStateAddIfNotExist(pTaskInfo->streamInfo.pState, &key, (void**)&value, &size) < 0) {
return TSDB_CODE_QRY_OUT_OF_MEMORY;
}
......@@ -4474,6 +4477,7 @@ int32_t setOutputBuf(STimeWindow* win, SResultRow** pResult, int64_t tableGroupI
int32_t releaseOutputBuf(SExecTaskInfo* pTaskInfo, SWinKey* pKey, SResultRow* pResult) {
streamStateReleaseBuf(pTaskInfo->streamInfo.pState, pKey, pResult);
/*taosMemoryFree((*(void**)pResult));*/
return TSDB_CODE_SUCCESS;
}
......
......@@ -5704,7 +5704,7 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) {
maxTs = TMAX(maxTs, pBlock->info.window.ekey);
doStreamIntervalAggImpl(pOperator, &pInfo->binfo.resultRowInfo, pBlock, MAIN_SCAN, pUpdatedMap);
// new disc buf
// doStreamIntervalAggImpl2(pOperator, pBlock, pBlock->info.groupId, pUpdatedMap);
/*doStreamIntervalAggImpl2(pOperator, pBlock, pBlock->info.groupId, pUpdatedMap);*/
}
pInfo->twAggSup.maxTs = TMAX(pInfo->twAggSup.maxTs, maxTs);
......
......@@ -303,7 +303,7 @@ SIndexTerm* indexTermCreate(int64_t suid, SIndexOperOnColumn oper, uint8_t colTy
buf = strndup(INDEX_DATA_NULL_STR, (int32_t)strlen(INDEX_DATA_NULL_STR));
len = (int32_t)strlen(INDEX_DATA_NULL_STR);
} else {
const char* emptyStr = " ";
static const char* emptyStr = " ";
buf = strndup(emptyStr, (int32_t)strlen(emptyStr));
len = (int32_t)strlen(emptyStr);
}
......@@ -585,6 +585,12 @@ int idxFlushCacheToTFile(SIndex* sIdx, void* cache, bool quit) {
idxTRsltDestroy(tr);
int ret = idxGenTFile(sIdx, pCache, result);
if (ret != 0) {
indexError("failed to merge");
} else {
int64_t cost = taosGetTimestampUs() - st;
indexInfo("success to merge , time cost: %" PRId64 "ms", cost / 1000);
}
idxDestroyFinalRslt(result);
idxCacheDestroyImm(pCache);
......@@ -595,12 +601,6 @@ int idxFlushCacheToTFile(SIndex* sIdx, void* cache, bool quit) {
tfileReaderUnRef(pReader);
idxCacheUnRef(pCache);
int64_t cost = taosGetTimestampUs() - st;
if (ret != 0) {
indexError("failed to merge, time cost: %" PRId64 "ms", cost / 1000);
} else {
indexInfo("success to merge , time cost: %" PRId64 "ms", cost / 1000);
}
atomic_store_32(&pCache->merging, 0);
if (quit) {
idxPost(sIdx);
......
......@@ -19,11 +19,12 @@
#include "tchecksum.h"
#include "tcoding.h"
static void fstPackDeltaIn(IdxFstFile* wrt, CompiledAddr nodeAddr, CompiledAddr transAddr, uint8_t nBytes) {
static FORCE_INLINE void fstPackDeltaIn(IdxFstFile* wrt, CompiledAddr nodeAddr, CompiledAddr transAddr,
uint8_t nBytes) {
CompiledAddr deltaAddr = (transAddr == EMPTY_ADDRESS) ? EMPTY_ADDRESS : nodeAddr - transAddr;
idxFilePackUintIn(wrt, deltaAddr, nBytes);
}
static uint8_t fstPackDetla(IdxFstFile* wrt, CompiledAddr nodeAddr, CompiledAddr transAddr) {
static FORCE_INLINE uint8_t fstPackDetla(IdxFstFile* wrt, CompiledAddr nodeAddr, CompiledAddr transAddr) {
uint8_t nBytes = packDeltaSize(nodeAddr, transAddr);
fstPackDeltaIn(wrt, nodeAddr, transAddr, nBytes);
return nBytes;
......@@ -39,7 +40,7 @@ FstUnFinishedNodes* fstUnFinishedNodesCreate() {
fstUnFinishedNodesPushEmpty(nodes, false);
return nodes;
}
static void unFinishedNodeDestroyElem(void* elem) {
static void FORCE_INLINE unFinishedNodeDestroyElem(void* elem) {
FstBuilderNodeUnfinished* b = (FstBuilderNodeUnfinished*)elem;
fstBuilderNodeDestroy(b->node);
taosMemoryFree(b->last);
......
......@@ -30,14 +30,14 @@ typedef struct {
static void deleteDataBlockFromLRU(const void* key, size_t keyLen, void* value) { taosMemoryFree(value); }
static void idxGenLRUKey(char* buf, const char* path, int32_t blockId) {
static FORCE_INLINE void idxGenLRUKey(char* buf, const char* path, int32_t blockId) {
char* p = buf;
SERIALIZE_STR_VAR_TO_BUF(p, path, strlen(path));
SERIALIZE_VAR_TO_BUF(p, '_', char);
idxInt2str(blockId, p, 0);
return;
}
static int idxFileCtxDoWrite(IFileCtx* ctx, uint8_t* buf, int len) {
static FORCE_INLINE int idxFileCtxDoWrite(IFileCtx* ctx, uint8_t* buf, int len) {
if (ctx->type == TFILE) {
int nwr = taosWriteFile(ctx->file.pFile, buf, len);
assert(nwr == len);
......@@ -47,7 +47,7 @@ static int idxFileCtxDoWrite(IFileCtx* ctx, uint8_t* buf, int len) {
ctx->offset += len;
return len;
}
static int idxFileCtxDoRead(IFileCtx* ctx, uint8_t* buf, int len) {
static FORCE_INLINE int idxFileCtxDoRead(IFileCtx* ctx, uint8_t* buf, int len) {
int nRead = 0;
if (ctx->type == TFILE) {
#ifdef USE_MMAP
......@@ -111,7 +111,7 @@ static int idxFileCtxDoReadFrom(IFileCtx* ctx, uint8_t* buf, int len, int32_t of
} while (len > 0);
return total;
}
static int idxFileCtxGetSize(IFileCtx* ctx) {
static FORCE_INLINE int idxFileCtxGetSize(IFileCtx* ctx) {
if (ctx->type == TFILE) {
int64_t file_size = 0;
taosStatFile(ctx->file.buf, &file_size, NULL);
......@@ -119,7 +119,7 @@ static int idxFileCtxGetSize(IFileCtx* ctx) {
}
return 0;
}
static int idxFileCtxDoFlush(IFileCtx* ctx) {
static FORCE_INLINE int idxFileCtxDoFlush(IFileCtx* ctx) {
if (ctx->type == TFILE) {
taosFsyncFile(ctx->file.pFile);
} else {
......
......@@ -16,7 +16,7 @@
#include "indexFstRegistry.h"
#include "os.h"
uint64_t fstRegistryHash(FstRegistry* registry, FstBuilderNode* bNode) {
static FORCE_INLINE uint64_t fstRegistryHash(FstRegistry* registry, FstBuilderNode* bNode) {
// TODO(yihaoDeng): refactor later
const uint64_t FNV_PRIME = 1099511628211;
uint64_t h = 14695981039346656037u;
......
......@@ -15,7 +15,7 @@
#include "indexFstSparse.h"
static void sparSetUtil(int32_t *buf, int32_t cap) {
static FORCE_INLINE void sparSetInitBuf(int32_t *buf, int32_t cap) {
for (int32_t i = 0; i < cap; i++) {
buf[i] = -1;
}
......@@ -28,8 +28,8 @@ FstSparseSet *sparSetCreate(int32_t sz) {
ss->dense = (int32_t *)taosMemoryMalloc(sz * sizeof(int32_t));
ss->sparse = (int32_t *)taosMemoryMalloc(sz * sizeof(int32_t));
sparSetUtil(ss->dense, sz);
sparSetUtil(ss->sparse, sz);
sparSetInitBuf(ss->dense, sz);
sparSetInitBuf(ss->sparse, sz);
ss->cap = sz;
......@@ -90,7 +90,7 @@ void sparSetClear(FstSparseSet *ss) {
if (ss == NULL) {
return;
}
sparSetUtil(ss->dense, ss->cap);
sparSetUtil(ss->sparse, ss->cap);
sparSetInitBuf(ss->dense, ss->cap);
sparSetInitBuf(ss->sparse, ss->cap);
ss->size = 0;
}
......@@ -1034,7 +1034,8 @@ static void tfileGenFileName(char* filename, uint64_t suid, const char* col, int
sprintf(filename, "%" PRIu64 "-%s-%" PRId64 ".tindex", suid, col, version);
return;
}
static void tfileGenFileFullName(char* fullname, const char* path, uint64_t suid, const char* col, int64_t version) {
static void FORCE_INLINE tfileGenFileFullName(char* fullname, const char* path, uint64_t suid, const char* col,
int64_t version) {
char filename[128] = {0};
tfileGenFileName(filename, suid, col, version);
sprintf(fullname, "%s/%s", path, filename);
......
......@@ -21,7 +21,7 @@ typedef struct MergeIndex {
int len;
} MergeIndex;
static int iBinarySearch(SArray *arr, int s, int e, uint64_t k) {
static FORCE_INLINE int iBinarySearch(SArray *arr, int s, int e, uint64_t k) {
uint64_t v;
int32_t m;
while (s <= e) {
......
......@@ -80,6 +80,11 @@ IF(NOT TD_DARWIN)
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_include_directories (idxJsonUT
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_link_libraries (idxTest
os
......@@ -102,11 +107,7 @@ IF(NOT TD_DARWIN)
gtest_main
index
)
target_include_directories (idxJsonUT
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_link_libraries (idxTest
os
util
......
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <gtest/gtest.h>
#include <algorithm>
#include <iostream>
#include <string>
#include <thread>
#include "index.h"
#include "indexCache.h"
#include "indexFst.h"
#include "indexFstUtil.h"
#include "indexInt.h"
#include "indexTfile.h"
#include "indexUtil.h"
#include "tskiplist.h"
#include "tutil.h"
using namespace std;
static std::string logDir = TD_TMP_DIR_PATH "log";
static void initLog() {
const char *defaultLogFileNamePrefix = "taoslog";
const int32_t maxLogFileNum = 10;
tsAsyncLog = 0;
idxDebugFlag = 143;
strcpy(tsLogDir, logDir.c_str());
taosRemoveDir(tsLogDir);
taosMkDir(tsLogDir);
if (taosInitLog(defaultLogFileNamePrefix, maxLogFileNum) < 0) {
printf("failed to open log file in directory:%s\n", tsLogDir);
}
}
struct WriteBatch {
SIndexMultiTerm *terms;
};
class Idx {
public:
Idx(int _cacheSize = 1024 * 1024 * 4, const char *_path = "tindex") {
opts.cacheSize = _cacheSize;
path += TD_TMP_DIR_PATH;
path += _path;
}
int SetUp(bool remove) {
initLog();
if (remove) taosRemoveDir(path.c_str());
int ret = indexJsonOpen(&opts, path.c_str(), &index);
return ret;
}
int Write(WriteBatch *batch, uint64_t uid) {
// write batch
indexJsonPut(index, batch->terms, uid);
return 0;
}
int Read(const char *json, void *key, int64_t *id) {
// read batch
return 0;
}
void TearDown() { indexJsonClose(index); }
std::string path;
SIndexOpts opts;
SIndex *index;
};
SIndexTerm *indexTermCreateT(int64_t suid, SIndexOperOnColumn oper, uint8_t colType, const char *colName,
int32_t nColName, const char *colVal, int32_t nColVal) {
char buf[256] = {0};
int16_t sz = nColVal;
memcpy(buf, (uint16_t *)&sz, 2);
memcpy(buf + 2, colVal, nColVal);
if (colType == TSDB_DATA_TYPE_BINARY) {
return indexTermCreate(suid, oper, colType, colName, nColName, buf, sizeof(buf));
} else {
return indexTermCreate(suid, oper, colType, colName, nColName, colVal, nColVal);
}
return NULL;
}
int initWriteBatch(WriteBatch *wb, int batchSize) {
SIndexMultiTerm *terms = indexMultiTermCreate();
std::string colName;
std::string colVal;
for (int i = 0; i < 64; i++) {
colName += '0' + i;
colVal += '0' + i;
}
for (int i = 0; i < batchSize; i++) {
colVal[i % colVal.size()] = '0' + i % 128;
colName[i % colName.size()] = '0' + i % 128;
SIndexTerm *term = indexTermCreateT(0, ADD_VALUE, TSDB_DATA_TYPE_BINARY, colName.c_str(), colName.size(),
colVal.c_str(), colVal.size());
indexMultiTermAdd(terms, term);
}
wb->terms = terms;
return 0;
}
int BenchWrite(Idx *idx, int batchSize, int limit) {
for (int i = 0; i < limit; i += batchSize) {
WriteBatch wb;
idx->Write(&wb, i);
}
return 0;
}
int BenchRead(Idx *idx) { return 0; }
int main() {
// Idx *idx = new Idx;
// if (idx->SetUp(true) != 0) {
// std::cout << "failed to setup index" << std::endl;
// return 0;
// } else {
// std::cout << "succ to setup index" << std::endl;
// }
// BenchWrite(idx, 100, 10000);
return 1;
}
......@@ -271,20 +271,20 @@ void validateFst() {
}
delete m;
}
static std::string logDir = TD_TMP_DIR_PATH "log";
static void initLog() {
const char* defaultLogFileNamePrefix = "taoslog";
const int32_t maxLogFileNum = 10;
tsAsyncLog = 0;
idxDebugFlag = 143;
strcpy(tsLogDir, logDir.c_str());
taosRemoveDir(tsLogDir);
taosMkDir(tsLogDir);
if (taosInitLog(defaultLogFileNamePrefix, maxLogFileNum) < 0) {
printf("failed to open log file in directory:%s\n", tsLogDir);
static std::string logDir = TD_TMP_DIR_PATH "log";
static void initLog() {
const char* defaultLogFileNamePrefix = "taoslog";
const int32_t maxLogFileNum = 10;
tsAsyncLog = 0;
idxDebugFlag = 143;
strcpy(tsLogDir, logDir.c_str());
taosRemoveDir(tsLogDir);
taosMkDir(tsLogDir);
if (taosInitLog(defaultLogFileNamePrefix, maxLogFileNum) < 0) {
printf("failed to open log file in directory:%s\n", tsLogDir);
}
}
class IndexEnv : public ::testing::Test {
......
......@@ -48,6 +48,7 @@ typedef enum EDatabaseOptionType {
DB_OPTION_KEEP,
DB_OPTION_PAGES,
DB_OPTION_PAGESIZE,
DB_OPTION_TSDB_PAGESIZE,
DB_OPTION_PRECISION,
DB_OPTION_REPLICA,
DB_OPTION_STRICT,
......@@ -60,7 +61,7 @@ typedef enum EDatabaseOptionType {
DB_OPTION_WAL_RETENTION_SIZE,
DB_OPTION_WAL_ROLL_PERIOD,
DB_OPTION_WAL_SEGMENT_SIZE,
DB_OPTION_SST_TRIGGER,
DB_OPTION_STT_TRIGGER,
DB_OPTION_TABLE_PREFIX,
DB_OPTION_TABLE_SUFFIX
} EDatabaseOptionType;
......
......@@ -184,6 +184,7 @@ db_options(A) ::= db_options(B) KEEP integer_list(C).
db_options(A) ::= db_options(B) KEEP variable_list(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_KEEP, C); }
db_options(A) ::= db_options(B) PAGES NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_PAGES, &C); }
db_options(A) ::= db_options(B) PAGESIZE NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_PAGESIZE, &C); }
db_options(A) ::= db_options(B) TSDB_PAGESIZE NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_TSDB_PAGESIZE, &C); }
db_options(A) ::= db_options(B) PRECISION NK_STRING(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_PRECISION, &C); }
db_options(A) ::= db_options(B) REPLICA NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_REPLICA, &C); }
db_options(A) ::= db_options(B) STRICT NK_STRING(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_STRICT, &C); }
......@@ -207,7 +208,7 @@ db_options(A) ::= db_options(B) WAL_RETENTION_SIZE NK_MINUS(D) NK_INTEGER(C).
}
db_options(A) ::= db_options(B) WAL_ROLL_PERIOD NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_ROLL_PERIOD, &C); }
db_options(A) ::= db_options(B) WAL_SEGMENT_SIZE NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL_SEGMENT_SIZE, &C); }
db_options(A) ::= db_options(B) SST_TRIGGER NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_SST_TRIGGER, &C); }
db_options(A) ::= db_options(B) STT_TRIGGER NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_STT_TRIGGER, &C); }
db_options(A) ::= db_options(B) TABLE_PREFIX NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_TABLE_PREFIX, &C); }
db_options(A) ::= db_options(B) TABLE_SUFFIX NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_TABLE_SUFFIX, &C); }
......@@ -226,7 +227,7 @@ alter_db_option(A) ::= KEEP variable_list(B).
//alter_db_option(A) ::= REPLICA NK_INTEGER(B). { A.type = DB_OPTION_REPLICA; A.val = B; }
//alter_db_option(A) ::= STRICT NK_STRING(B). { A.type = DB_OPTION_STRICT; A.val = B; }
alter_db_option(A) ::= WAL_LEVEL NK_INTEGER(B). { A.type = DB_OPTION_WAL; A.val = B; }
alter_db_option(A) ::= SST_TRIGGER NK_INTEGER(B). { A.type = DB_OPTION_SST_TRIGGER; A.val = B; }
alter_db_option(A) ::= STT_TRIGGER NK_INTEGER(B). { A.type = DB_OPTION_STT_TRIGGER; A.val = B; }
%type integer_list { SNodeList* }
%destructor integer_list { nodesDestroyList($$); }
......
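Taken together, the grammar changes above rename the SST_TRIGGER option to STT_TRIGGER and introduce TSDB_PAGESIZE as a database option. A hedged SQL sketch of how they surface in user-facing DDL (database name and option values are illustrative only):

```sql
CREATE DATABASE db1 TSDB_PAGESIZE 4 STT_TRIGGER 8;
ALTER DATABASE db1 STT_TRIGGER 16;
```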
......@@ -826,6 +826,7 @@ SNode* createDefaultDatabaseOptions(SAstCreateContext* pCxt) {
pOptions->keep[2] = TSDB_DEFAULT_KEEP;
pOptions->pages = TSDB_DEFAULT_PAGES_PER_VNODE;
pOptions->pagesize = TSDB_DEFAULT_PAGESIZE_PER_VNODE;
pOptions->tsdbPageSize = TSDB_DEFAULT_TSDB_PAGESIZE;
pOptions->precision = TSDB_DEFAULT_PRECISION;
pOptions->replica = TSDB_DEFAULT_DB_REPLICA;
pOptions->strict = TSDB_DEFAULT_DB_STRICT;
......@@ -858,6 +859,7 @@ SNode* createAlterDatabaseOptions(SAstCreateContext* pCxt) {
pOptions->keep[2] = -1;
pOptions->pages = -1;
pOptions->pagesize = -1;
pOptions->tsdbPageSize = -1;
pOptions->precision = -1;
pOptions->replica = -1;
pOptions->strict = -1;
......@@ -918,6 +920,9 @@ SNode* setDatabaseOption(SAstCreateContext* pCxt, SNode* pOptions, EDatabaseOpti
case DB_OPTION_PAGESIZE:
pDbOptions->pagesize = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
break;
case DB_OPTION_TSDB_PAGESIZE:
pDbOptions->tsdbPageSize = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
break;
case DB_OPTION_PRECISION:
COPY_STRING_FORM_STR_TOKEN(pDbOptions->precisionStr, (SToken*)pVal);
break;
......@@ -955,7 +960,7 @@ SNode* setDatabaseOption(SAstCreateContext* pCxt, SNode* pOptions, EDatabaseOpti
case DB_OPTION_WAL_SEGMENT_SIZE:
pDbOptions->walSegmentSize = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
break;
case DB_OPTION_SST_TRIGGER:
case DB_OPTION_STT_TRIGGER:
pDbOptions->sstTrigger = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
break;
case DB_OPTION_TABLE_PREFIX:
......
......@@ -1134,6 +1134,9 @@ static int32_t parseTableOptions(SInsertParseContext* pCxt) {
return buildSyntaxErrMsg(&pCxt->msg, "Invalid option ttl", sToken.z);
}
pCxt->createTblReq.ttl = taosStr2Int32(sToken.z, NULL, 10);
if (pCxt->createTblReq.ttl < 0) {
return buildSyntaxErrMsg(&pCxt->msg, "Invalid option ttl", sToken.z);
}
} else if (TK_COMMENT == sToken.type) {
pCxt->pSql += index;
NEXT_TOKEN(pCxt->pSql, sToken);
......
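The added check above turns a negative TTL into a syntax error during table auto-creation in INSERT. A hedged sketch of the statement shape it guards (table, supertable, and tag values are hypothetical):

```sql
-- Rejected after this change with "Invalid option ttl"
INSERT INTO t1 USING st1 TAGS (1) TTL -1 VALUES (NOW, 10);
```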
......@@ -248,9 +248,12 @@ int32_t streamSearchAndAddBlock(SStreamTask* pTask, SStreamDispatchReq* pReqs, S
char* ctbName = buildCtbNameByGroupId(pTask->shuffleDispatcher.stbFullName, groupId);
SArray* vgInfo = pTask->shuffleDispatcher.dbInfo.pVgroupInfos;
// TODO: get hash function by hashMethod
uint32_t hashValue = MurmurHash3_32(ctbName, strlen(ctbName));
/*uint32_t hashValue = MurmurHash3_32(ctbName, strlen(ctbName));*/
SUseDbRsp* pDbInfo = &pTask->shuffleDispatcher.dbInfo;
uint32_t hashValue =
taosGetTbHashVal(ctbName, strlen(ctbName), pDbInfo->hashMethod, pDbInfo->hashPrefix, pDbInfo->hashSuffix);
taosMemoryFree(ctbName);
bool found = false;
// TODO: optimize search
int32_t j;
......