Unverified · Commit c753ec81 authored by wade zhang, committed by GitHub

Merge pull request #16240 from danielclow/docs/daniel-3.0

doc: english version of 3.0 documentation
......@@ -116,7 +116,7 @@ Data Collection Point (DCP) refers to hardware or software that collects metrics
## Table
Since time-series data is most likely to be structured data, TDengine adopts the traditional relational database model to process them with a short learning curve. You need to create a database, create tables, then insert data points and execute queries to explore the data.
To make full use of time-series data characteristics, TDengine adopts a strategy of "**One Table for One Data Collection Point**". TDengine requires the user to create a table for each data collection point (DCP) to store collected time-series data. For example, if there are over 10 million smart meters, it means 10 million tables should be created. For the table above, 4 tables should be created for devices D1001, D1002, D1003, and D1004 to store the data collected. This design has several benefits:
......@@ -125,25 +125,28 @@ To make full use of time-series data characteristics, TDengine adopts a strategy
3. The metric data from a DCP is continuously stored, block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude.
4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. The values of a single metric usually change only gradually over a time range, in contrast to the variation across different metrics, which allows for a higher compression rate.
If the metric data of multiple DCPs are traditionally written into a single table, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest possible extent.**
TDengine suggests using DCP ID as the table name (like D1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage, and phase above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string, and others. In addition, the first column in the table must be a timestamp. TDengine uses the timestamp as the index and does not build an index on any metric column. Column-wise storage is used.
Complex devices, such as connected cars, may have multiple DCPs. In this case, multiple tables are created for a single device, one table per DCP.
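For example, a per-DCP table for meter D1001 might be created as follows. This is a minimal sketch that assumes the smart-meter schema used throughout this documentation (a timestamp plus current, voltage, and phase):

```sql
-- One table per data collection point; the first column must be the timestamp.
CREATE TABLE d1001 (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT);
```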
## Super Table (STable)
The design of one table for one data collection point requires a huge number of tables, which is difficult to manage. Furthermore, applications often need to perform aggregation operations across DCPs, and such aggregation operations become complicated. To support aggregation over multiple tables efficiently, TDengine introduces the STable (Super Table) concept.
STable is a template for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**.
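For example, a single STable can serve as the template for all smart meters. The following sketch matches the `meters` STable used later in this documentation; `location` and `groupId` are its static tags:

```sql
-- The metric schema comes first; the tag schema is declared separately.
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupId INT);
```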
## Subtable
When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. **The table created by using a STable as the template is called a subtable** in TDengine. The differences between a regular table and a subtable are:
1. A subtable is a table; all SQL commands that can be applied to a regular table can also be applied to a subtable.
2. A subtable is a table with extensions: it has static tags (labels), and these tags can be added, deleted, and updated after it is created. A regular table does not have tags.
3. A subtable belongs to only one STable, but a STable may have many subtables. Regular tables do not belong to a STable.
4. A regular table cannot be converted into a subtable, and vice versa.
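For example, the subtable for meter d1001 can be created from the `meters` STable by supplying only its tag values (a sketch; the tag values shown are illustrative):

```sql
-- The metric schema is inherited from the STable; only tag values are specified.
CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
```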
The relationship between a STable and the subtables created based on this STable is as follows:
......@@ -151,13 +154,13 @@ The relationship between a STable and the subtables created based on this STable
2. The schema of metrics or labels cannot be adjusted through subtables; it can only be changed via the STable. Changes to the schema of a STable take effect immediately for all associated subtables.
3. STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine treats the data in all its subtables as a whole data set. TDengine first finds the subtables that meet the tag filter conditions, then scans the time-series data of these subtables to perform the aggregation operation. This reduces the number of data sets to be scanned and in turn greatly improves the performance of data aggregation across multiple DCPs. In essence, querying a supertable is a very efficient aggregate query on multiple DCPs of the same type.
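For example, the following sketch aggregates over all subtables of `meters` whose `location` tag matches the filter, without naming any individual subtable:

```sql
-- TDengine first selects the matching subtables, then scans only their data.
SELECT AVG(voltage), MAX(current) FROM meters WHERE location = "California.SanFrancisco";
```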
In TDengine, it is recommended to use a subtable instead of a regular table for a DCP.
## Database
A database is a collection of tables. TDengine allows a running instance to have multiple databases, and each database can be configured with different storage policies. The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may be different. Characteristics include collection frequency, retention policy, and others that determine how you create and configure the database. For example, the days to keep data, the number of replicas, the data block size, and whether data updates are allowed are configurable parameters determined by the characteristics of your data and your business requirements. In order for TDengine to work with maximum efficiency in various scenarios, TDengine recommends that STables with different data characteristics be created in different databases.
In a database, there can be one or more STables, but a STable belongs to only one database. All tables owned by a STable are stored in only one database.
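For example, high-frequency metering data and short-lived diagnostic data might be kept in separate databases with different retention policies. This is a sketch; the database names and parameter values are illustrative:

```sql
CREATE DATABASE power KEEP 365 DURATION 10;  -- keep meter data for one year
CREATE DATABASE diagnostics KEEP 30;         -- keep diagnostic data for 30 days
```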
......@@ -167,4 +170,4 @@ FQDN (Fully Qualified Domain Name) is the full domain name of a specific compute
Each node of a TDengine cluster is uniquely identified by an End Point, which consists of an FQDN and a Port, such as h1.tdengine.com:6030. In this way, when the IP changes, we can still use the FQDN to dynamically find the node without changing any configuration of the cluster. In addition, FQDN is used to facilitate unified access to the same cluster from the Intranet and the Internet.
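For example, the FQDN and port that make up a node's End Point might be set in `taos.cfg` as follows (a sketch; `fqdn` and `serverPort` are TDengine configuration parameters, and the host name is illustrative):

```
fqdn h1.tdengine.com
serverPort 6030
```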
TDengine does not recommend using an IP address to access the cluster. FQDN is recommended for cluster management.
---
sidebar_label: Docker
title: Quick Install on Docker
---
This document describes how to install TDengine in a Docker container and perform queries and inserts. To get started with TDengine in a non-containerized environment, see [Quick Install](../../get-started/package). If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
## Run TDengine
If Docker is already installed on your computer, run the following command:
```shell
docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
```
Note that TDengine Server 3.0 uses only TCP port 6030. Port 6041 is used by taosAdapter for the REST API service. Ports 6043 through 6049 are used by taosAdapter for other connectors. You can open these ports as needed.
Run the following command to ensure that your container is running:
```shell
docker ps
```
Enter the container and open the bash shell:
```shell
docker exec -it <container name> bash
```
You can now access TDengine or run other Linux commands.
Note: For information about installing docker, see the [official documentation](https://docs.docker.com/get-docker/).
## Open the TDengine CLI
On the container, run the following command to open the TDengine CLI:
```
$ taos
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
Server is Community Edition.
taos>
```
## Insert Data into TDengine
You can use the `taosBenchmark` tool included with TDengine to write test data into your deployment.
To do so, run the following command:
```bash
$ taosBenchmark
```
This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `California.SanFrancisco` or `California.LosAngeles`.
The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required depends on the hardware specifications of the local system.
You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](/reference/taosbenchmark).
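For example, a smaller test run might look like the following sketch, assuming `-t` sets the number of tables and `-n` the number of rows per table; verify the exact options with `taosBenchmark --help`:

```bash
# Create 1,000 tables with 10,000 rows each instead of the defaults.
taosBenchmark -t 1000 -n 10000
```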
## Query Data in TDengine
After using taosBenchmark to create your test deployment, you can run queries in the TDengine CLI to test its performance. For example:
Query the number of rows in the `meters` supertable:
```sql
taos> select count(*) from test.meters;
```
Query the average, maximum, and minimum values of all 100 million rows of data:
```sql
taos> select avg(current), max(voltage), min(phase) from test.meters;
```
Query the number of rows whose `location` tag is `California.SanFrancisco`:
```sql
taos> select count(*) from test.meters where location="California.SanFrancisco";
```
Query the average, maximum, and minimum values of all rows whose `groupId` tag is `10`:
```sql
taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
```
Query the average, maximum, and minimum values for table `d10` in 10 second intervals:
```sql
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
```
## Additional Information
For more information about deploying TDengine in a Docker environment, see [Using TDengine in Docker](../../reference/docker).
---
sidebar_label: Package
title: Quick Install from Package
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import PkgListV3 from "/components/PkgListV3";
:::info
For information about installing TDengine on Docker, see [Quick Install on Docker](../../get-started/docker). If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
:::
The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface, and some tools. Note that taosAdapter supports Linux only. In addition to connectors for multiple languages, TDengine also provides a [REST API](../../reference/rest-api) through [taosAdapter](../../reference/taosadapter).
The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download a lite package that includes only `taosd` and the C/C++ connector.
The TDengine Community Edition is released as .deb and .rpm packages. The .deb package can be installed on Debian, Ubuntu, and derivative systems. The .rpm package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the .deb or .rpm package, download and install taosTools separately. TDengine can also be installed on 64-bit Windows servers.
## Installation
<Tabs>
<TabItem value="apt-get" label="apt-get">
可以使用 apt-get 工具从官方仓库安装。
<TabItem label=".deb" value="debinst">
1. Download the .deb installation package.
<PkgListV3 type={6}/>
2. In the directory where the package is located, use `dpkg` to install the package:
```bash
# Enter the name of the package that you downloaded.
sudo dpkg -i TDengine-server-<version>-Linux-x64.deb
```
</TabItem>
<TabItem label=".rpm" value="rpminst">
1. Download the .rpm installation package.
<PkgListV3 type={5}/>
2. In the directory where the package is located, use `rpm` to install the package:
```bash
# Enter the name of the package that you downloaded.
sudo rpm -ivh TDengine-server-<version>-Linux-x64.rpm
```
</TabItem>
<TabItem label="Deb 安装" value="debinst">
1、从官网下载获得 deb 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.deb;
2、进入到 TDengine-server-3.0.0.0-Linux-x64.deb 安装包所在目录,执行如下的安装命令:
<TabItem label=".tar.gz" value="tarinst">
1. Download the .tar.gz installation package.
<PkgListV3 type={0}/>
2. In the directory where the package is located, use `tar` to decompress the package:
```bash
# Enter the name of the package that you downloaded.
tar -zxvf TDengine-server-<version>-Linux-x64.tar.gz
```
In the directory to which the package was decompressed, run `install.sh`:
```bash
sudo ./install.sh
```
:::info
You are prompted to enter configuration information during the installation process. To disable interactive mode, run `./install.sh -e no`. Run `./install.sh -h` to display detailed explanations of all parameters.
:::
</TabItem>
<TabItem label="tar.gz 安装" value="tarinst">
<TabItem label="Windows" value="windows">
1. Download the Windows installation package.
<PkgListV3 type={3}/>
2. Run the downloaded package to install TDengine.
</TabItem>
<TabItem value="apt-get" label="apt-get">
You can use `apt-get` to install TDengine from the official package repository.
**Configure the package repository**
```bash
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
```
You can install beta versions by configuring the following repository:
```bash
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
```
**Install TDengine with `apt-get`**
```bash
sudo apt-get update
apt-cache policy tdengine
sudo apt-get install tdengine
```
:::tip
This installation method is supported only for Debian and Ubuntu.
:::
</TabItem>
</Tabs>
:::info
For information about TDengine releases, see [Release History](../../releases).
:::
:::note
On the first node in your TDengine cluster, leave the `Enter FQDN:` prompt blank and press **Enter**. On subsequent nodes, you can enter the end point of the first dnode in the cluster. You can also configure this setting after you have finished installing TDengine.
:::
## Quick Launch
<Tabs>
<TabItem label="Linux" value="linux">
After the installation is complete, run the following command to start the TDengine service:
```bash
systemctl start taosd
```
Run the following command to confirm that TDengine is running normally:
```bash
systemctl status taosd
```
Output similar to the following indicates that TDengine is running normally:
```
Active: active (running)
```
Output similar to the following indicates that TDengine has not started successfully:
```
Active: inactive (dead)
```
After confirming that TDengine is running, run the `taos` command to access the TDengine CLI.
The following `systemctl` commands can help you manage TDengine:
- Start TDengine Server: `systemctl start taosd`
- Stop TDengine Server: `systemctl stop taosd`
- Restart TDengine Server: `systemctl restart taosd`
- Check TDengine Server status: `systemctl status taosd`
:::info
- The `systemctl` command requires _root_ privileges. If you are not logged in as the `root` user, use the `sudo` command.
- The `systemctl stop taosd` command does not instantly stop TDengine Server. The server is stopped only after all data in memory is flushed to disk. The time required depends on the cache size.
- If your system does not include `systemd`, you can run `/usr/local/taos/bin/taosd` to start TDengine manually.
:::
</TabItem>
<TabItem label="Windows" value="windows">
After the installation is complete, run `C:\TDengine\taosd.exe` to start TDengine Server.
</TabItem>
</Tabs>
## Command Line Interface
You can use the TDengine CLI to monitor your TDengine deployment and execute ad hoc queries. To open the CLI, run the following command:
```bash
taos
```
The TDengine CLI displays a welcome message and version information to indicate that its connection to the TDengine service was successful. If an error message is displayed, see the [FAQ](/train-faq/faq) for troubleshooting information. At the following prompt, you can execute SQL commands.
```cmd
taos>
```
You can create and delete databases and tables and run all types of queries. Each SQL command must end with a semicolon (;). For example:
```sql
create database demo;
......@@ -170,51 +202,51 @@ select * from t;
Query OK, 2 row(s) in set (0.003128s)
```
You can also monitor the deployment status, add and remove user accounts, and manage running instances. You can run the TDengine CLI on either Linux or Windows machines. For more information, see [TDengine CLI](../../reference/taos-shell/).
## Test Data Insert Performance
After your TDengine Server is running normally, you can run the taosBenchmark utility to test its performance:
```bash
taosBenchmark
```
This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `California.SanFrancisco` or `California.LosAngeles`.
The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required to create the deployment depends on your hardware. On most modern servers, the deployment is created in less than a minute.
You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](../../reference/taosbenchmark).
## Test Data Query Performance
After using taosBenchmark to create your test deployment, you can run queries in the TDengine CLI to test its performance:
Query the number of rows in the `meters` supertable:
```sql
taos> select count(*) from test.meters;
```
Query the average, maximum, and minimum values of all 100 million rows of data:
```sql
taos> select avg(current), max(voltage), min(phase) from test.meters;
```
Query the number of rows whose `location` tag is `California.SanFrancisco`:
```sql
taos> select count(*) from test.meters where location="California.SanFrancisco";
```
Query the average, maximum, and minimum values of all rows whose `groupId` tag is `10`:
```sql
taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
```
Query the average, maximum, and minimum values for table `d10` in 10 second intervals:
```sql
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
......
You can use `apt-get` to install TDengine from the official package repository.
**Configure the package repository**
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
```
You can install beta versions by configuring the following package repository:
```
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
```
**Install TDengine with `apt-get`**
```
sudo apt-get update
......@@ -22,5 +22,5 @@ sudo apt-get install tdengine
```
:::tip
This installation method is supported only for Debian and Ubuntu.
:::
import PkgList from "/components/PkgList";
TDengine is easy to download and install.
The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download a lite package that includes only `taosd` and the C/C++ connector.
You can download the TDengine installation package in .rpm, .deb, or .tar.gz format. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the .deb or .rpm package, download and install taosTools separately.
Between official releases, beta versions may be released that contain new features. Do not use beta versions for production or testing environments. Select the installation package appropriate for your system.
<PkgList type={0}/>
For information about installing TDengine, see [Install and Uninstall](/operation/pkg-install).
For information about TDengine releases, see [All Downloads](https://tdengine.com/all-downloads)
and [Release Notes](https://github.com/taosdata/TDengine/releases).
---
title: Get Started
description: This article describes how to install TDengine and test its performance.
---
The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface, and some tools. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter).
You can install and run TDengine on Linux and Windows machines as well as Docker containers.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
......
---
sidebar_label: Connect
title: Connect
description: "This document explains how to establish connections to TDengine, and briefly introduces how to install and use TDengine connectors."
description: "This document explains how to establish connections to TDengine and how to install and use TDengine connectors."
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import ConnJava from "./\_connect_java.mdx";
import ConnGo from "./\_connect_go.mdx";
import ConnRust from "./\_connect_rust.mdx";
import ConnNode from "./\_connect_node.mdx";
import ConnPythonNative from "./\_connect_python.mdx";
import ConnCSNative from "./\_connect_cs.mdx";
import ConnC from "./\_connect_c.mdx";
import ConnR from "./\_connect_r.mdx";
import InstallOnWindows from "../../14-reference/03-connector/\_linux_install.mdx";
import InstallOnLinux from "../../14-reference/03-connector/\_windows_install.mdx";
import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx";
import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx";
Any application programs running on any kind of platform can access TDengine through the REST API provided by TDengine. For details, please refer to [REST API](/reference/rest-api/). Additionally, application programs can use the connectors of multiple programming languages including C/C++, Java, Python, Go, Node.js, C#, Rust to access TDengine. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. TDengine community also provides connectors in LUA and PHP languages. For details about the connectors, please refer to [Connectors](/reference/connector/).
import ConnJava from "./_connect_java.mdx";
import ConnGo from "./_connect_go.mdx";
import ConnRust from "./_connect_rust.mdx";
import ConnNode from "./_connect_node.mdx";
import ConnPythonNative from "./_connect_python.mdx";
import ConnCSNative from "./_connect_cs.mdx";
import ConnC from "./_connect_c.mdx";
import ConnR from "./_connect_r.mdx";
import ConnPHP from "./_connect_php.mdx";
import InstallOnWindows from "../../14-reference/03-connector/_linux_install.mdx";
import InstallOnLinux from "../../14-reference/03-connector/_windows_install.mdx";
import VerifyLinux from "../../14-reference/03-connector/_verify_linux.mdx";
import VerifyWindows from "../../14-reference/03-connector/_verify_windows.mdx";
Any application running on any platform can access TDengine through the REST API provided by TDengine. For information, see [REST API](/reference/rest-api/). Applications can also use the connectors for various programming languages, including C/C++, Java, Python, Go, Node.js, C#, and Rust, to access TDengine. These connectors support connecting to TDengine clusters using the native interface (taosc). Some connectors also support connecting over a REST interface. Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
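For example, you can verify REST access with a single request. This sketch assumes taosAdapter is running locally on port 6041 with the default `root`/`taosdata` credentials:

```bash
# Execute a SQL statement over the REST API and return the result as JSON.
curl -L -u root:taosdata -d "show databases;" http://localhost:6041/rest/sql
```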
## Establish Connection
There are two ways for a connector to establish connections to TDengine:
1. REST connection through the REST API provided by the taosAdapter component.
2. Native connection through the TDengine client driver (taosc).
For REST and native connections, connectors provide similar APIs for performing operations and running SQL statements on your databases. The main difference is the method of establishing the connection, which is not visible to users.
Key differences:
1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newer versions.
3. The REST connection is more accessible with cross-platform support, however it results in a 30% performance downgrade.
## Install Client Driver taosc
......@@ -136,19 +137,19 @@ Node.js connector provides different ways of establishing connections by providi
1. Install Node.js Native Connector
```
npm install @tdengine/client
```
:::note
It's recommended to use a Node.js version between `node-v12.8.0` and `node-v13.0.0`.
:::
2. Install Node.js REST Connector
```
npm install @tdengine/rest
```
</TabItem>
<TabItem label="C#" value="csharp">
......@@ -236,14 +237,14 @@ phpize && ./configure --enable-swoole && make -j && make install
**Enable The Extension:**
Option One: Add `extension=tdengine` in `php.ini`
Option Two: Specify the extension on CLI `php -d extension=tdengine test.php`
</TabItem>
</Tabs>
## Establish a connection
Prior to establishing a connection, please make sure TDengine is already running and accessible. The following sample code assumes TDengine is running on the same host as the client program, with the FQDN configured to "localhost" and serverPort configured to "6030".
......@@ -272,6 +273,9 @@ Prior to establishing connection, please make sure TDengine is already running a
<TabItem label="C" value="c">
<ConnC />
</TabItem>
<TabItem label="PHP" value="php">
<ConnPHP />
</TabItem>
</Tabs>
:::tip
......
......@@ -6,29 +6,32 @@ The data model employed by TDengine is similar to that of a relational database.
Note: before you read this chapter, please make sure you have already read through [Key Concepts](/concept/), since TDengine introduces new concepts like "one table for one [data collection point](/concept/#data-collection-point)" and "[super table](/concept/#super-table-stable)".
## Create Database
The characteristics of time-series data from different data collection points may be different. Characteristics include collection frequency, retention policy, and others that determine how you create and configure the database. For example, the days to keep data, the number of replicas, the data block size, and whether data updates are allowed are configurable parameters determined by the characteristics of your data and your business requirements. For TDengine to operate with the best performance, we strongly recommend that you create and configure different databases for data with different characteristics. This allows you, for example, to set up different storage and retention policies. When creating a database, many parameters can be configured, such as the days to keep data, the number of replicas, the size of the cache, the time precision, the minimum and maximum number of rows in each data block, whether compression is enabled, the time range of the data in a single data file, and so on. An example is shown below:
```sql
CREATE DATABASE power KEEP 365 DURATION 10 BUFFER 16 VGROUPS 100 WAL_LEVEL 1;
```
In the above SQL statement:
- a database named "power" will be created
- the data in it will be kept for 365 days, which means that data older than 365 days will be deleted automatically
- a database named "power" is created
- the data in it is retained for 365 days, which means that data older than 365 days will be deleted automatically
- a new data file will be created every 10 days
- the size of memory cache for writing is 16 MB
- data will be firstly written to WAL without FSYNC
- the size of the write cache pool on each vnode is 16 MB
- the number of vgroups is 100
- WAL is enabled but fsync is disabled

For more details please refer to [Database](/taos-sql/database).
After creating a database, the current database in use can be switched using SQL command `USE`. For example the SQL statement below switches the current database to `power`.
```sql
USE power;
```
Without the current database specified, table name must be preceded with the corresponding database name.
:::note
- Any table or STable must belong to a database. To create a table or STable, the database it belongs to must be ready.
......@@ -41,14 +44,9 @@ USE power;
In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [table 1](/concept/#model_table1), the SQL statement below can be used to create the super table.
```sql
CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
```
Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must always be a timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the data collected. The remaining columns can [contain data of type](/taos-sql/data-type/) integer, float, double, string etc. In addition, the schema for tags, like location and groupId in the example, must be provided. The tag type can be integer, float, string, etc. Tags are essentially the static properties of a data collection point. For example, properties like the location, device type, device group ID, manager ID are tags. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For electrical power system, we need to create a STable respectively for meters, transformers, busbars, switches. There may be multiple kinds of data collection points on a single device, for example there may be one data collection point for electrical data like current and voltage and another data collection point for environmental data like temperature, humidity and wind direction. Multiple STables are required for these kinds of devices.
......@@ -65,13 +63,8 @@ CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "California.SanFrancisco" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
In the TDengine system, it's recommended to create a table for a data collection point via STable. A table created via STable is called subtable in some parts of the TDengine documentation. All SQL commands applied on regular tables can be applied on subtables.
:::tip
It's suggested to use the globally unique ID of a data collection point as the table name. For example the device serial number could be used as a unique ID. If a unique ID doesn't exist, multiple IDs that are not globally unique can be combined to form a globally unique ID. It's not recommended to use a globally unique ID as tag value.
:::
## Create Table Automatically
In some circumstances, it's unknown whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exists.
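A typical form (a sketch based on the `meters` example) combines the insert with a `USING` clause:

```sql
-- Creates subtable d1001 from the meters STable if it does not exist, then inserts the row.
INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) VALUES (NOW, 10.2, 219, 0.32);
```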
......@@ -86,8 +79,6 @@ For more details please refer to [Create Table Automatically](/taos-sql/insert#a
## Single Column vs Multiple Column
A multiple columns data model is supported in TDengine. As long as multiple metrics are collected by the same data collection point at the same time, i.e. the timestamps are identical, these metrics can be put in a single STable as columns. However, there is another kind of design, i.e. single column data model in which a table is created for each metric. This means that a STable is required for each kind of metric. For example in a single column model, 3 STables would be required for current, voltage and phase.
It's recommended to use a multiple column data model as much as possible because insert and query performance is higher. In some cases, however, the collected metrics may vary frequently and so the corresponding STable schema needs to be changed frequently too. In such cases, it's more convenient to use single column data model.
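As an illustration, the two designs for the same meter data might look like the following sketch; the single-column STable names `current`, `voltage`, and `phase` are hypothetical:

```sql
-- Multiple-column model: one STable holds all metrics collected together.
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupId INT);

-- Single-column model: one STable per metric.
CREATE STABLE current (ts TIMESTAMP, value FLOAT) TAGS (location BINARY(64), groupId INT);
CREATE STABLE voltage (ts TIMESTAMP, value INT)   TAGS (location BINARY(64), groupId INT);
CREATE STABLE phase (ts TIMESTAMP, value FLOAT)   TAGS (location BINARY(64), groupId INT);
```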
---
sidebar_label: Insert Using SQL
title: Insert Using SQL
---
......@@ -19,13 +18,14 @@ import CsSQL from "./_cs_sql.mdx";
import CsStmt from "./_cs_stmt.mdx";
import CSQL from "./_c_sql.mdx";
import CStmt from "./_c_stmt.mdx";
import PhpSQL from "./_php_sql.mdx";
import PhpStmt from "./_php_stmt.mdx";
## Introduction
Application programs can execute the `INSERT` statement through connectors to insert rows. The TDengine CLI can also be used to manually insert data.
### Insert Single Row
The below SQL statement is used to insert one row into table "d1001".
```sql
......@@ -42,7 +42,7 @@ INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3,
### Insert into Multiple Tables
Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d1001" and 1 row into table "d1002".
```sql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
......@@ -52,19 +52,19 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
:::info
- Inserting in batches can improve performance. The higher the batch size, the better the performance. Please note that a single row can't exceed 48K bytes and each SQL statement can't exceed 1MB.
- Inserting with multiple threads can also improve performance. However, at a certain point, increasing the number of threads no longer offers any benefit and can even decrease performance due to the overhead involved in frequent thread switching. The optimal number of threads for a system depends on the processing capabilities and configuration of the server, the configuration of the database, the data schema, and the batch size for writing data. In general, more powerful clients and servers can support higher numbers of concurrently writing threads. Given a sufficiently powerful server, a higher number of vgroups for a database also increases the number of concurrent writes. Finally, a simpler data schema enables more concurrent writes as well.
:::
:::warning
- If the timestamp of a new record already exists in a table, columns with new data for that timestamp replace old data with new data, while columns without new data are not affected.
- The timestamp of a record to be inserted cannot be earlier than the current time minus the value of the `KEEP` parameter. If `KEEP` is set to 3650 days, data older than 3650 days cannot be inserted. The timestamp also cannot be later than the current time plus the value of the `DURATION` parameter. If `DURATION` is set to 2, data with timestamps more than 2 days in the future cannot be inserted. (See the sketch after this note.)
:::
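The following sketch illustrates the constraint; it assumes a database created with `KEEP 3650` and `DURATION 2` and a current date in 2022:

```sql
INSERT INTO d1001 VALUES ('2001-01-01 00:00:00.000', 10.2, 219, 0.32); -- fails: older than now minus KEEP
INSERT INTO d1001 VALUES ('2101-01-01 00:00:00.000', 10.2, 219, 0.32); -- fails: later than now plus DURATION
```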
## Sample Program
### Insert Using SQL
......@@ -90,6 +90,9 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
<TabItem label="C" value="c">
<CSQL />
</TabItem>
<TabItem label="PHP" value="php">
<PhpSQL />
</TabItem>
</Tabs>
:::note
......@@ -101,7 +104,7 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
### Insert with Parameter Binding
TDengine also provides API support for parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. This avoids the resource consumption of SQL syntax parsing when writing data through the parameter binding interface, thus significantly improving write performance in most cases.
Parameter binding is available only with native connection.
......@@ -127,4 +130,8 @@ Parameter binding is available only with native connection.
<TabItem label="C" value="c">
<CStmt />
</TabItem>
<TabItem label="PHP" value="php">
<PhpStmt />
</TabItem>
</Tabs>
......@@ -21,15 +21,15 @@ In the InfluxDB Line protocol format, a single line of text is used to represent
measurement,tag_set field_set timestamp
```
- `measurement` will be used as the name of the STable. Enter a comma (,) between `measurement` and `tag_set`.
- `tag_set` will be used as tags, with format like `<tag_key>=<tag_value>,<tag_key>=<tag_value>`. Enter a space between `tag_set` and `field_set`.
- `field_set` will be used as data columns, with format like `<field_key>=<field_value>,<field_key>=<field_value>`. Enter a space between `field_set` and `timestamp`.
- `timestamp` is the primary key timestamp corresponding to this row of data
For example:
```
meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500
```
:::note
......@@ -42,7 +42,6 @@ meters,location=California.LoSangeles,groupid=2 current=13.4,voltage=223,phase=0
For more details please refer to [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) and [TDengine Schemaless](/reference/schemaless/#Schemaless-Line-Protocol)
## Examples
<Tabs defaultValue="java" groupId="lang">
......
......@@ -17,19 +17,19 @@ import CTelnet from "./_c_opts_telnet.mdx";
A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs a single column data model, so each line can only contain a single data column. There can be multiple tags. Each line contains 4 parts as below:
```txt
<metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
```
- `metric` will be used as the STable name.
- `timestamp` is the timestamp of current row of data. The time precision will be determined automatically based on the length of the timestamp. Second and millisecond time precision are supported.
- `value` is a metric which must be a numeric value; the corresponding column name is "_value".
- The last part is the tag set separated by spaces, all tags will be converted to nchar type automatically.
For example:
```txt
meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
```
Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_telnet/put.html) for more details.
......@@ -63,7 +63,7 @@ Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_te
taos> use test;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
meters.current | 2022-03-30 17:04:10.877 | 2 | 2 | 2 |
......@@ -73,8 +73,8 @@ Query OK, 2 row(s) in set (0.002544s)
taos> select tbname, * from `meters.current`;
tbname | _ts | _value | groupid | location |
==================================================================================================================================
t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.249 | 10.800000000 | 3 | California.LosAngeles |
t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.250 | 11.300000000 | 3 | California.LosAngeles |
t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.249 | 10.300000000 | 2 | California.SanFrancisco |
t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.250 | 12.600000000 | 2 | California.SanFrancisco |
Query OK, 4 row(s) in set (0.005399s)
......
......@@ -15,37 +15,37 @@ import CJson from "./_c_opts_json.mdx";
## Introduction
A JSON string is used in OpenTSDB JSON to represent one or more rows of data, for example:
```json
[
  {
    "metric": "sys.cpu.nice",
    "timestamp": 1346846400,
    "value": 18,
    "tags": {
      "host": "web01",
      "dc": "lga"
    }
  },
  {
    "metric": "sys.cpu.nice",
    "timestamp": 1346846400,
    "value": 9,
    "tags": {
      "host": "web02",
      "dc": "lga"
    }
  }
]
```
Similar to OpenTSDB line protocol, `metric` will be used as the STable name, `timestamp` is the timestamp to be used, `value` represents the metric collected, `tags` are the tag sets.
Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http/put.html) for more details.
:::note
- In JSON protocol, strings will be converted to nchar type and numeric values will be converted to double type.
- Only data in array format is accepted and so an array must be used even if there is only one row.
:::
</TabItem>
</Tabs>
The sample code above automatically creates two STables, each containing two rows of data.
```cmd
taos> use test;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
meters.current | 2022-03-29 16:05:25.193 | 2 | 2 | 1 |
```
:::tip
`github.com/taosdata/driver-go/v3/wrapper` module in driver-go is the wrapper for C API, it can be used to insert data with parameter binding.
:::
---
sidebar_label: Query Data
title: Query Data
description: "This chapter introduces major query functionalities and how to perform sync and async query using connectors."
---
import RustQuery from "./_rust.mdx";
import NodeQuery from "./_js.mdx";
import CsQuery from "./_cs.mdx";
import CQuery from "./_c.mdx";
import PhpQuery from "./_php.mdx";
import PyAsync from "./_py_async.mdx";
import NodeAsync from "./_js_async.mdx";
import CsAsync from "./_cs_async.mdx";
SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through REST API or connectors. The major query features supported by TDengine are:
- Query on single column or multiple columns
- Filter on tags or data columns: >, <, =, <\>, like
- Grouping of results: `Group By`
- Sorting of results: `Order By`
- Limit the number of results: `Limit/Offset`
- Windowed aggregate queries for time windows (interval), session windows (session), and state windows (state_window)
- Arithmetic on columns of numeric types or aggregate results
- Join query with timestamp alignment
- Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
For example, the SQL statement below can be executed in TDengine CLI `taos` to select records with voltage greater than 215 and limit the output to only 2 rows.
```sql
select * from d1001 where voltage > 215 order by ts desc limit 2;
```
```title=Output
taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
ts | current | voltage | phase |
======================================================================================
Query OK, 2 row(s) in set (0.001100s)
```
To meet the requirements of varied use cases, some special functions have been added in TDengine. Some examples are `twa` (Time Weighted Average), `spread` (The difference between the maximum and the minimum), and `last_row` (the last row).
For detailed query syntax, see [Select](../../taos-sql/select).
## Aggregation among Tables
In most use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviation for super table), is used in TDengine to represent one type of data collection point, and a subtable is used to represent a specific data collection point of that type. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same type of data collection points. Aggregate functions applicable for tables can be used directly on STables; the syntax is exactly the same.
In summary, records across subtables can be aggregated by a simple query on their STable. It is like a join operation. However, tables belonging to different STables can not be aggregated.
### Example 1
In TDengine CLI `taos`, use the SQL below to get the average voltage of all the meters in California grouped by location.
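A sketch of such a statement follows; the `avg_voltage` alias and the `LIKE 'California%'` tag filter are illustrative assumptions, not the exact elided statement:

```sql
SELECT AVG(voltage) AS avg_voltage FROM meters WHERE location LIKE 'California%' GROUP BY location;
```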
### Example 2

In TDengine CLI `taos`, use the SQL below to get the number of rows and the maximum current of all the meters in group 2:

```
taos> SELECT count(*), max(current) FROM meters where groupId = 2;
Query OK, 1 row(s) in set (0.002136s)
```
In [Select](../../taos-sql/select), all query operations are marked as to whether they support STables or not.
## Down Sampling and Interpolation
In many use cases, it's hard to align the timestamps of the data collected by each data collection point.
Interpolation can be performed in TDengine if there is no data in a time range.
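Down sampling and interpolation are expressed with the `INTERVAL` and `FILL` clauses. Here is a sketch, assuming the `meters` supertable from earlier; the 5-second window and the `FILL(PREV)` choice are illustrative:

```sql
SELECT _wstart, AVG(current) FROM meters
  WHERE ts >= '2018-10-03 14:38:05.000' AND ts <= '2018-10-03 14:38:20.000'
  INTERVAL(5s) FILL(PREV);
```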
For more information, see [Aggregate by Window](../../taos-sql/interval).
## Examples
The examples below query the database created in the section describing [Insert](/develop/insert-data/sql-writing).
<TabItem label="C" value="c">
<CQuery />
</TabItem>
<TabItem label="PHP" value="php">
<PhpQuery />
</TabItem>
</Tabs>
---
sidebar_label: Stream Processing
description: "The TDengine stream processing engine combines data inserts, preprocessing, analytics, real-time computation, and alerting into a single component."
title: Stream Processing
---
Raw time-series data is often cleaned and preprocessed before being permanently stored in a database. In a traditional time-series solution, this generally requires the deployment of stream processing systems such as Kafka or Flink. However, the complexity of such systems increases the cost of development and maintenance.
With the stream processing engine built into TDengine, you can process incoming data streams in real time and define stream transformations in SQL. Incoming data is automatically processed, and the results are pushed to specified tables based on triggering rules that you define. This is a lightweight alternative to complex processing engines that returns computation results in milliseconds even in high throughput scenarios.
The stream processing engine includes data filtering, scalar function computation (including user-defined functions), and window aggregation, with support for sliding windows, session windows, and event windows. Stream processing can write data to supertables from other supertables, standard tables, or subtables. When you create a stream, the target supertable is automatically created. New data is then processed and written to that supertable according to the rules defined for the stream. You can use PARTITION BY statements to partition the data by table name or tag. Separate partitions are then written to different subtables within the target supertable.
TDengine stream processing supports the aggregation of supertables that are deployed across multiple vnodes. It can also handle out-of-order writes and includes a watermark mechanism that determines the extent to which out-of-order data is accepted by the system. You can configure whether to drop or reprocess out-of-order data through the **ignore expired** parameter.
For more information, see [Stream Processing](../../taos-sql/stream).
## Create a Stream
```sql
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery
stream_options: {
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
WATERMARK time
IGNORE EXPIRED [0 | 1]
}
```
For more information, see [Stream Processing](../../taos-sql/stream).
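As a sketch of how these options combine (the stream name, target table, and window size are illustrative), the following creates a stream that triggers on window close, tolerates 10 seconds of out-of-order data, and discards data older than the watermark:

```sql
CREATE STREAM IF NOT EXISTS avg_voltage_stream
  TRIGGER WINDOW_CLOSE WATERMARK 10s IGNORE EXPIRED 1
  INTO avg_voltage_stb AS
  SELECT _wstart AS start, AVG(voltage) AS avg_voltage FROM meters INTERVAL(1m);
```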
## Usage Scenario 1
It is common that smart electrical meter systems for businesses generate millions of data points that are widely dispersed and not ordered. The time required to clean and convert this data makes efficient, real-time processing impossible for traditional solutions. This scenario shows how you can configure TDengine stream processing to drop data points over 220 V, find the maximum voltage for 5 second windows, and output this data to a table.
### Create a Database for Raw Data
A database including one supertable and four subtables is created as follows:
```sql
DROP DATABASE IF EXISTS power;
CREATE DATABASE power;
USE power;
CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
CREATE TABLE d1002 USING meters TAGS ("California.SanFrancisco", 3);
CREATE TABLE d1003 USING meters TAGS ("California.LosAngeles", 2);
CREATE TABLE d1004 USING meters TAGS ("California.LosAngeles", 3);
```
### Create a Stream
```sql
create stream current_stream into current_stream_output_stb as select _wstart as start, _wend as end, max(current) as max_current from meters where voltage <= 220 interval (5s);
```
### Write Data
```sql
insert into d1001 values("2018-10-03 14:38:05.000", 10.30000, 219, 0.31000);
insert into d1001 values("2018-10-03 14:38:15.000", 12.60000, 218, 0.33000);
insert into d1001 values("2018-10-03 14:38:16.800", 12.30000, 221, 0.31000);
insert into d1002 values("2018-10-03 14:38:16.650", 10.30000, 218, 0.25000);
insert into d1003 values("2018-10-03 14:38:05.500", 11.80000, 221, 0.28000);
insert into d1003 values("2018-10-03 14:38:16.600", 13.40000, 223, 0.29000);
insert into d1004 values("2018-10-03 14:38:05.000", 10.80000, 223, 0.29000);
insert into d1004 values("2018-10-03 14:38:06.500", 11.50000, 221, 0.35000);
```
### Query the Results
```sql
taos> select start, end, max_current from current_stream_output_stb;
start | end | max_current |
===========================================================================
2018-10-03 14:38:05.000 | 2018-10-03 14:38:10.000 | 10.30000 |
2018-10-03 14:38:15.000 | 2018-10-03 14:38:20.000 | 12.60000 |
Query OK, 2 rows in database (0.018762s)
```
## Usage Scenario 2
In this scenario, the active power and reactive power are determined from the data gathered in the previous scenario. The location and name of each meter are concatenated with a period (.) between them, and the data set is partitioned by meter name and written to a new database.
### Create a Database for Raw Data
The procedure from the previous scenario is used to create the database.
### Create a Stream
```sql
create stream power_stream into power_stream_output_stb as select ts, concat_ws(".", location, tbname) as meter_location, current*voltage*cos(phase) as active_power, current*voltage*sin(phase) as reactive_power from meters partition by tbname;
```
### Write Data
The procedure from the previous scenario is used to write the data.
### Query the Results
```sql
taos> select ts, meter_location, active_power, reactive_power from power_stream_output_stb;
ts | meter_location | active_power | reactive_power |
===================================================================================================================
2018-10-03 14:38:05.000 | California.LosAngeles.d1004 | 2307.834596289 | 688.687331847 |
2018-10-03 14:38:06.500 | California.LosAngeles.d1004 | 2387.415754896 | 871.474763418 |
2018-10-03 14:38:05.500 | California.LosAngeles.d1003 | 2506.240411679 | 720.680274962 |
2018-10-03 14:38:16.600 | California.LosAngeles.d1003 | 2863.424274422 | 854.482390839 |
2018-10-03 14:38:05.000 | California.SanFrancisco.d1001 | 2148.178871730 | 688.120784090 |
2018-10-03 14:38:15.000 | California.SanFrancisco.d1001 | 2598.589176205 | 890.081451418 |
2018-10-03 14:38:16.800 | California.SanFrancisco.d1001 | 2588.728381186 | 829.240910475 |
2018-10-03 14:38:16.650 | California.SanFrancisco.d1002 | 2175.595991997 | 555.520860397 |
Query OK, 8 rows in database (0.014753s)
```
---
sidebar_label: Caching
title: Caching
description: "This document describes the caching component of TDengine."
---
TDengine uses various kinds of caching techniques to efficiently write and query data. This document describes the caching component of TDengine.
## Write Cache
TDengine uses an insert-driven cache management policy, known as first in, first out (FIFO). This policy differs from read-driven "least recently used (LRU)" cache management. A FIFO policy stores the latest data in cache and flushes the oldest data from cache to disk when the cache usage reaches a threshold. In IoT use cases, the most recent data or the current state is most important. The cache policy in TDengine, like much of the design and architecture of TDengine, is based on the nature of IoT data.
When you create a database, you can configure the size of the write cache on each vnode. The **vgroups** parameter determines the number of vgroups that process data in the database, and the **buffer** parameter determines the size of the write cache for each vnode. The total write cache for the database is therefore `vgroups * buffer`.
```sql
create database db0 vgroups 100 buffer 16MB
```
The above statement creates a database with 100 vnodes, each with a 16 MB write cache.
In theory, larger cache sizes are always better. However, at a certain point, it becomes impossible to improve performance by increasing cache size. In most scenarios, you can retain the default cache settings.
## Read Cache
When you create a database, you can configure whether the latest data from every subtable is cached. To do so, set the *cachelast* parameter as follows:
- 0: Caching is disabled.
- 1: The latest row of data in each subtable is cached. This option significantly improves the performance of the `LAST_ROW` function.
- 2: The latest non-null value in each column of each subtable is cached. This option significantly improves the performance of the `LAST` function in normal situations, such as WHERE, ORDER BY, GROUP BY, and INTERVAL statements.
- 3: Rows and columns are both cached. This option is equivalent to simultaneously enabling options 1 and 2.
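For example, to cache the latest row of each subtable when creating a database (a sketch reusing the `db0` example above with cachelast value 1):

```sql
create database db0 vgroups 100 buffer 16MB cachelast 1
```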
## Metadata Cache
To improve query and write performance, each vnode caches the metadata that it receives. When you create a database, you can configure the size of the metadata cache through the *pages* and *pagesize* parameters.
```sql
create database db0 pages 128 pagesize 16kb
```
The preceding SQL statement creates 128 pages on each vnode in the `db0` database. Each page has a 16 KB metadata cache.
## File System Cache
TDengine implements data reliability by means of a write-ahead log (WAL). Writing data to the WAL is essentially writing data to the disk in an ordered, append-only manner. For this reason, the file system cache plays an important role in write performance. When you create a database, you can use the *wal* parameter to choose higher performance or higher reliability.
- 1: This option writes to the WAL but does not enable fsync. New data written to the WAL is stored in the file system cache but not written to disk. This provides better performance.
- 2: This option writes to the WAL and enables fsync. New data written to the WAL is immediately written to disk. This provides better data reliability.
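For example, to favor reliability over write performance when creating a database (a sketch following the *wal* parameter values described above; the database name is illustrative):

```sql
create database db0 wal 2
```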
## Client Cache
In addition to the server-side caching discussed previously, the core client library `libtaos.so` also makes use of caching. TDengine Client caches the metadata of all databases, supertables, and subtables that it has accessed, as well as the cluster topology.
If a client modifies certain metadata while multiple clients are simultaneously accessing a TDengine cluster, the metadata caches on each client may fail or become out of sync. If this occurs, run the `reset query cache` command on the affected clients to force them to obtain fresh metadata and reset their caches.
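In the TDengine CLI, the command is issued as follows:

```sql
reset query cache;
```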
```c
{{#include docs/examples/c/tmq_example.c}}
```
```py
{{#include docs/examples/python/tmq_example.py}}
```
---
title: Developer Guide
---
Before creating an application to process time-series data with TDengine, consider the following:
1. Choose the method to connect to TDengine. TDengine offers a REST API that can be used with any programming language. It also has connectors for a variety of languages.
2. Design the data model based on your own use cases. Consider the main [concepts](/concept/) of TDengine, including "one table per data collection point" and the supertable. Learn about static labels, collected metrics, and subtables. Depending on the characteristics of your data and your requirements, you may decide to create one or more databases and design a supertable schema that fits your data.
3. Decide how you will insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
4. Based on business requirements, find out what SQL query statements need to be written. You may be able to repurpose any existing SQL.
5. If you want to run real-time analysis based on time series data, including various dashboards, use the TDengine stream processing component instead of deploying complex systems such as Spark or Flink.
6. If your application has modules that need to consume inserted data, and they need to be notified when new data is inserted, it is recommended that you use the data subscription function provided by TDengine without the need to deploy Kafka.
7. In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem.
---
sidebar_label: Manual Deployment
title: Manual Deployment and Management
---
## Prerequisites
### Step 0
The FQDN of all hosts must be set up properly. For example, FQDNs may have to be configured in the /etc/hosts file on each host. You must confirm that each FQDN can be accessed from any other host; you can verify this with the `ping` command. If you have a DNS server on your network, contact your network administrator for assistance.
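For example, `/etc/hosts` entries for a three-node cluster might look like this (the IP addresses and host names are illustrative):

```
192.168.0.1 h1.taosdata.com
192.168.0.2 h2.taosdata.com
192.168.0.3 h3.taosdata.com
```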
### Step 1
If any previous version of TDengine has been installed and configured on any host, the installation needs to be removed and the data cleaned up. For details about uninstalling, please refer to [Install and Uninstall](/operation/pkg-install). To clean up the data, use `rm -rf /var/lib/taos/\*`, assuming `dataDir` is configured as `/var/lib/taos`.
:::note
FQDN information is written to file. If you have started TDengine without configuring or changing the FQDN, ensure that data is backed up or no longer needed before running the `rm -rf /var/lib/taos/\*` command.
:::
:::note
- The host where the client program runs also needs its FQDN configured properly, to ensure that all hosts, whether running clients or servers, can reach one another. In other words, hosts where clients run are also considered part of the cluster.
:::
### Step 2
- Please ensure that your firewall rules do not block TCP/UDP on ports 6030-6042 on all hosts in the cluster.
### Step 3
Now it's time to install TDengine on all hosts, but without starting `taosd`. Note that the versions on all hosts should be the same. If you are prompted to input the endpoint of an existing TDengine cluster, simply press Enter to ignore the prompt.
### Step 4
Now each physical node (referred to, hereinafter, as `dnode` which is an abbreviation for "data node") of TDengine needs to be configured properly.
To get the hostname on any host, the command `hostname -f` can be executed.
The `ping <FQDN>` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, check and revise the network configuration, such as /etc/hosts or DNS settings, so that any two hosts can reach each other. Hosts that cannot reach each other cannot form a cluster.
On the physical machine running the application, ping the dnode that is running taosd. If the dnode is not accessible, the application cannot connect to taosd. In this case, verify the DNS and hosts settings on the physical node running the application.
The end point of each dnode is the output hostname and port, such as h1.taosdata.com:6030.
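For example (the host names and output are illustrative):

```
$ hostname -f
h1.taosdata.com
$ ping h2.taosdata.com
```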
### Step 5
Modify the TDengine configuration file `/etc/taos/taos.cfg` on each node. Assuming the first dnode of TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as following.
```c
// firstEp is the end point to connect to when any dnode starts
firstEp h1.taosdata.com:6030
// must be configured to the FQDN of the host where the dnode is launched
fqdn h1.taosdata.com
// the port used by the dnode, default is 6030
serverPort 6030
```
`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in the TDengine cluster, `firstEp` must point to the same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. Retain the default values for other parameters.
For all the dnodes in a TDengine cluster, the parameters below must be configured identically; any node whose configuration differs from the dnodes already in the cluster can't join the cluster. A sketch of these settings in `taos.cfg` follows the table.
| **#** | **Parameter** | **Definition** |
| ----- | ------------------ | ------------------------------------------- |
| 1 | statusInterval | The interval by which dnode reports its status to mnode |
| 2 | timezone | Timezone |
| 3 | locale | System region and encoding |
| 4 | charset | Character set |
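As an illustration (the values below are hypothetical; only their consistency across dnodes matters), these parameters appear in `taos.cfg` as follows:

```c
// must be identical on every dnode in the cluster
statusInterval 1
timezone UTC-8
locale en_US.UTF-8
charset UTF-8
```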
## Start Cluster
The first dnode can be started following the instructions in [Get Started](/get-started/). Then launch TDengine CLI `taos` and execute the command `show dnodes`; example output is as follows:
```
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | h1.taosdata.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | |
Query OK, 1 rows affected (0.007984s)
```
The output shows that the end point of the started dnode is "h1.taosdata.com:6030", which is the `firstEp` of the cluster.
## Add DNODE
There are a few steps necessary to add other dnodes in the cluster.
First, start `taosd` on the new dnode as instructed in [Get Started](/get-started/).
Then, on the first dnode i.e. h1.taosdata.com in our example, use TDengine CLI `taos` to execute the following command:
```sql
CREATE DNODE "h2.taos.com:6030";
```
This adds the end point of the new dnode (from Step 4) into the end point list of the cluster. In the command, "fqdn:port" should be enclosed in double quotes. Change `"h2.taos.com:6030"` to the end point of your new dnode.
Then on the first dnode h1.taosdata.com, execute `show dnodes` in `taos`
```sql
SHOW DNODES;
```
to show whether the second dnode has been added in the cluster successfully or not. If the status of the newly added dnode is offline, please check:
- Whether the `taosd` process is running properly or not
- Check the log file `taosdlog.0` to see whether the fqdn and port are correct; if not, add the correct end point and retry.
The above process can be repeated to add more dnodes in the cluster.
:::tip
Any node that is in the cluster and online can be the firstEp of new nodes.
Nodes use the firstEp parameter only when joining a cluster for the first time. After a node has joined the cluster, it stores the latest mnode in its end point list and no longer makes use of firstEp.
However, firstEp is used by clients that connect to the cluster. For example, if you run the TDengine CLI `taos` without arguments, it connects to the firstEp by default.
Two dnodes that are launched without a firstEp value operate independently of each other. It is not possible to add one dnode to the other dnode and form a cluster. It is also not possible to form two independent clusters into a new cluster.
:::
## Show DNODEs
The below command can be executed in TDengine CLI `taos`
```sql
SHOW DNODES;
```
to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes and so on. We recommend executing this command after adding or removing a dnode.
Below is the example output of this command.
```
taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | trd01:6030 | 100 | 1024 | ready | 2022-07-15 16:47:47.726 | |
Query OK, 1 rows affected (0.006684s)
```
## Show VGROUPs
To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located on different dnodes. One way of scaling out is to add more vnodes on dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnode is scheduled automatically by mnode based on system resources of the dnodes.
Launch TDengine CLI `taos` and execute below command:
```sql
USE SOME_DATABASE;
SHOW VGROUPS;
```
The example output is below:
```
taos> use db;
Database changed.
taos> show vgroups;
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | status | nfiles | file_size | tsma |
================================================================================================================================================================================================
2 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
3 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
4 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
Query OK, 8 row(s) in set (0.001154s)
```
## Drop DNODE
Before running the TDengine CLI, ensure that the taosd process has been stopped on the dnode that you want to delete. Then run one of the following commands:
```sql
DROP DNODE "fqdn:port";
```
or
```sql
DROP DNODE dnodeId;
```
Either command removes the dnode from the cluster. You can get `dnodeId` from the output of `show dnodes`.
:::warning
- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up the data directory. Before dropping a dnode, the data belonging to the dnode MUST be migrated or backed up according to your data retention, data security, or other SOPs.
- Please note that `drop dnode` is different from stopping the `taosd` process. `drop dnode` just removes the dnode from the TDengine cluster. Only after a dnode is dropped can the corresponding `taosd` process be stopped.
- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept the request from the dropped dnode.
- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.
:::
---
sidebar_label: Kubernetes
title: Deploying a TDengine Cluster in Kubernetes
---
TDengine is a cloud-native time-series database that can be deployed on Kubernetes. This document gives a step-by-step description of how you can use YAML files to create a TDengine cluster and introduces common operations for TDengine in a Kubernetes environment.
## Prerequisites
Before deploying TDengine on Kubernetes, perform the following:
* Install and configure minikube, kubectl, and helm.
* Install and deploy Kubernetes and ensure that it can be accessed and used normally. Update any container registries or other services as necessary.
You can download the configuration files in this document from [GitHub](https://github.com/taosdata/TDengine-Operator/tree/3.0/src/tdengine).
## Configure the service
Create a service configuration file named `taosd-service.yaml`. Record the value of `metadata.name` (in this example, `taosd`) for use in the next step. Add the ports required by TDengine:
```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: "taosd"
  labels:
    app: "tdengine"
spec:
  ports:
    - name: tcp6030
      protocol: "TCP"
      port: 6030
    - name: tcp6041
      protocol: "TCP"
      port: 6041
  selector:
    app: "tdengine"
```
## Configure the service as StatefulSet
Configure the TDengine service as a StatefulSet.
Create the `tdengine.yaml` file and set `replicas` to 3. In this example, the region is set to Asia/Shanghai and 10 GB of standard storage are allocated per node. You can change the configuration based on your environment and business requirements.
```yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "tdengine"
  labels:
    app: "tdengine"
spec:
  serviceName: "taosd"
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: "tdengine"
  template:
    metadata:
      name: "tdengine"
      labels:
        app: "tdengine"
    spec:
      containers:
        - name: "tdengine"
          image: "tdengine/tdengine:3.0.0.0"
          imagePullPolicy: "IfNotPresent"
          ports:
            - name: tcp6030
              protocol: "TCP"
              containerPort: 6030
            - name: tcp6041
              protocol: "TCP"
              containerPort: 6041
          env:
            # POD_NAME for FQDN config
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # SERVICE_NAME and NAMESPACE for FQDN resolution
            - name: SERVICE_NAME
              value: "taosd"
            - name: STS_NAME
              value: "tdengine"
            - name: STS_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # TZ for timezone settings; we recommend always setting it.
            - name: TZ
              value: "Asia/Shanghai"
            # Variables with the TAOS_ prefix are mapped to taos.cfg parameters:
            # strip the prefix and convert to camelCase.
            - name: TAOS_SERVER_PORT
              value: "6030"
            # Must be set if you want a cluster.
            - name: TAOS_FIRST_EP
              value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
            # TAOS_FQDN should always be set in a Kubernetes environment.
            - name: TAOS_FQDN
              value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
          volumeMounts:
            - name: taosdata
              mountPath: /var/lib/taos
          readinessProbe:
            exec:
              command:
                - taos-check
            initialDelaySeconds: 5
            timeoutSeconds: 5000
          livenessProbe:
            exec:
              command:
                - taos-check
            initialDelaySeconds: 15
            periodSeconds: 20
  volumeClaimTemplates:
    - metadata:
        name: taosdata
      spec:
        accessModes:
          - "ReadWriteOnce"
        storageClassName: "standard"
        resources:
          requests:
            storage: "10Gi"
```
## Use kubectl to deploy TDengine
Run the following commands:
```bash
kubectl apply -f taosd-service.yaml
kubectl apply -f tdengine.yaml
```
The preceding configuration generates a TDengine cluster with three nodes in which dnodes are automatically configured. You can run the `show dnodes` command to query the nodes in the cluster:
```bash
kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
kubectl exec -i -t tdengine-1 -- taos -s "show dnodes"
kubectl exec -i -t tdengine-2 -- taos -s "show dnodes"
```
The output is as follows:
```
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
Query OK, 3 rows in database (0.003655s)
```
## Enable port forwarding
The kubectl port forwarding feature allows applications to access the TDengine cluster running on Kubernetes.
```
kubectl port-forward tdengine-0 6041:6041 &
```
Use curl to verify that the TDengine REST API is working on port 6041:
```
$ curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
Handling connection for 6041
{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["comp","TINYINT",1],["precision","VARCHAR",2],["status","VARCHAR",10],["retention","VARCHAR",60],["single_stable","BOOL",1],["cachemodel","VARCHAR",11],["cachesize","INT",4],["wal_level","TINYINT",1],["wal_fsync_period","INT",4],["wal_retention_period","INT",4],["wal_retention_size","BIGINT",8],["wal_roll_period","INT",4],["wal_segment_size","BIGINT",8]],"data":[["information_schema",null,null,16,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null],["performance_schema",null,null,10,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null]],"rows":2}
```
## Enable the dashboard for visualization
The minikube dashboard command enables visualized cluster management.
```
$ minikube dashboard
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
```
In some public clouds, minikube cannot be remotely accessed if it is bound to 127.0.0.1. In this case, use the kubectl proxy command to map the port to 0.0.0.0. Then, you can access the dashboard by using a web browser to open the dashboard URL above on the public IP address and port of the virtual machine.
```
$ kubectl proxy --accept-hosts='^.*$' --address='0.0.0.0'
```
## Scaling Out Your Cluster
You can scale out a TDengine cluster by increasing the number of replicas:
```bash
kubectl scale statefulsets tdengine --replicas=4
```
The preceding command increases the number of replicas to 4. After running this command, query the pod status:
```bash
kubectl get pods -l app=tdengine
```
The output is as follows:
```
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 161m
tdengine-1 1/1 Running 0 161m
tdengine-2 1/1 Running 0 32m
tdengine-3 1/1 Running 0 32m
```
The status of all pods is Running. Once the pod status changes to Ready, you can check the dnode status:
```bash
kubectl exec -i -t tdengine-3 -- taos -s "show dnodes"
```
The following output shows that the TDengine cluster has been expanded to 4 replicas:
```
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
4 | tdengine-3.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:33:16.039 | |
Query OK, 4 rows in database (0.008377s)
```
## Scaling In Your Cluster
When you scale in a TDengine cluster, your data is migrated to different nodes. You must run the `drop dnode` command in TDengine to remove dnodes before scaling in your Kubernetes environment.
Note: In a Kubernetes StatefulSet service, the newest pods are always removed first. For this reason, when you scale in your TDengine cluster, ensure that you drop the newest dnodes.
```
$ kubectl exec -i -t tdengine-0 -- taos -s "drop dnode 4"
```
```bash
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
Query OK, 3 rows in database (0.004861s)
```
Verify that the dnode has been successfully removed by running the `kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"` command. Then run the following command to remove the pod:
```
kubectl scale statefulsets tdengine --replicas=3
```
The newest pod in the deployment is removed. Run the `kubectl get pods -l app=tdengine` command to query the pod status:
```
$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 4m7s
tdengine-1 1/1 Running 0 3m55s
tdengine-2 1/1 Running 0 2m28s
```
After the pod has been removed, manually delete the PersistentVolumeClaim (PVC). Otherwise, future scale-outs will attempt to use existing data.
```bash
$ kubectl delete pvc taosdata-tdengine-3
```
Your cluster has now been safely scaled in, and you can scale it out again as necessary.
```bash
$ kubectl scale statefulsets tdengine --replicas=4
statefulset.apps/tdengine scaled
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 35m
tdengine-1 1/1 Running 0 34m
tdengine-2 1/1 Running 0 12m
tdengine-3 0/1 ContainerCreating 0 4s
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 35m
tdengine-1 1/1 Running 0 34m
tdengine-2 1/1 Running 0 12m
tdengine-3 0/1 Running 0 7s
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
5 | tdengine-2.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:01:36.479 | |
6 | tdengine-3.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:13:54.411 | |
Query OK, 4 row(s) in set (0.001348s)
```
## Remove a TDengine Cluster
To fully remove a TDengine cluster, you must delete its statefulset, svc, configmap, and pvc entries:
```bash
kubectl delete statefulset -l app=tdengine
kubectl delete svc -l app=tdengine
kubectl delete pvc -l app=tdengine
kubectl delete configmap taoscfg
```
## Troubleshooting
### Error 1
If you remove a pod without first running `drop dnode`, some TDengine nodes will go offline.
```
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
5 | tdengine-2.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:01:36.479 | status msg timeout |
6 | tdengine-3.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:13:54.411 | status msg timeout |
Query OK, 4 row(s) in set (0.001323s)
```
### Error 2
If the number of nodes after a scale-in is less than the value of the replica parameter, the cluster will go down:
Create a database with replica set to 2 and add data.
```bash
kubectl exec -i -t tdengine-0 -- \
taos -s \
"create database if not exists test replica 2;
use test;
create table if not exists t1(ts timestamp, n int);
insert into t1 values(now, 1)(now+1s, 2);"
```
Scale in to one node:
```bash
kubectl scale statefulsets tdengine --replicas=1
```
In the TDengine CLI, you can see that no database operations succeed:
```
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000845s)
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000837s)
taos> use test;
Database changed.
taos> insert into t1 values(now, 3);
DB error: Unable to resolve FQDN (0.013874s)
```
---
sidebar_label: Helm
title: Use Helm to deploy TDengine
---
Helm is a package manager for Kubernetes that provides more capabilities for deploying TDengine on Kubernetes.
## Install Helm
```bash
curl -fsSL -o get_helm.sh \
https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
```
Helm uses the kubectl and kubeconfig configurations to perform Kubernetes operations. For more information, see the Rancher configuration for Kubernetes installation.
## Install TDengine Chart
To use TDengine Chart, download it from GitHub:
```bash
wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-3.0.0.tgz
```
Query the storageclass of your Kubernetes deployment:
```bash
kubectl get storageclass
```
With minikube, the default value is standard.
Use Helm commands to install TDengine:
```bash
helm install tdengine tdengine-3.0.0.tgz \
--set storage.className=<your storage class name>
```
You can configure a small storage size in minikube to ensure that your deployment does not exceed your available disk space.
```bash
helm install tdengine tdengine-3.0.0.tgz \
--set storage.className=standard \
--set storage.dataSize=2Gi \
--set storage.logSize=10Mi
```
After TDengine is deployed, TDengine Chart outputs information about how to use TDengine:
```bash
export POD_NAME=$(kubectl get pods --namespace default \
-l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=tdengine" \
-o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
kubectl --namespace default exec -it $POD_NAME -- taos
```
You can test the deployment by creating a table:
```bash
kubectl --namespace default exec $POD_NAME -- \
taos -s "create database test;
use test;
create table t1 (ts timestamp, n int);
insert into t1 values(now, 1)(now + 1s, 2);
select * from t1;"
```
## Configuring Values
You can configure custom parameters in TDengine with the `values.yaml` file.
Run the `helm show values` command to see all parameters supported by TDengine Chart.
```bash
helm show values tdengine-3.0.0.tgz
```
Save the output of this command as `values.yaml`. Then you can modify this file with your desired values and use it to deploy a TDengine cluster:
```bash
helm install tdengine tdengine-3.0.0.tgz -f values.yaml
```
The parameters are described as follows:
```yaml
# Default values for tdengine.
# This is a YAML-formatted file.
# Declare variables to be passed into helm templates.

replicaCount: 1

image:
  prefix: tdengine/tdengine
  #pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  # tag: "3.0.0.0"

service:
  # ClusterIP is the default service type; use NodePort only if you know what you are doing.
  type: ClusterIP
  ports:
    # TCP range required
    tcp: [6030, 6041, 6042, 6043, 6044, 6046, 6047, 6048, 6049, 6060]
    # UDP range
    udp: [6044, 6045]

# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"

resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

storage:
  # Set storageClassName for pvc. K8s uses the default storage class if not set.
  className: ""
  dataSize: "100Gi"
  logSize: "10Gi"

nodeSelectors:
  taosd:
    # node selectors

clusterDomainSuffix: ""

# Config settings in taos.cfg file.
#
# The helm/k8s support will use environment variables for taos.cfg,
# converting an upper-snake-cased variable like `TAOS_DEBUG_FLAG`,
# to a camelCase taos config variable `debugFlag`.
#
# See the variable list at https://www.taosdata.com/cn/documentation/administrator .
#
# Note:
# 1. firstEp/secondEp: should not be set here; they are auto generated at scale-up.
# 2. serverPort: should not be set; the default 6030 is used in many places.
# 3. fqdn: will be auto generated in Kubernetes; users need not care about it.
# 4. role: currently role is not supported - every node is able to be mnode and vnode.
#
# Keep quotes "" around each value as below, even if the value is a number.
taoscfg:
  # Starts as cluster or not, must be 0 or 1.
  #   0: all pods will start as separate TDengine servers
  #   1: pods will start as a TDengine server cluster. [default]
  CLUSTER: "1"

  # number of replications, for cluster only
  TAOS_REPLICA: "1"

  # number of days per DB file
  #TAOS_DAYS: "10"

  # number of days to keep DB file, default is 10 years.
  #TAOS_KEEP: "3650"

  # cache block size (Mbyte)
  #TAOS_CACHE: "16"

  # number of cache blocks per vnode
  #TAOS_BLOCKS: "6"

  # minimum rows of records in file block
  #TAOS_MIN_ROWS: "100"

  # maximum rows of records in file block
  #TAOS_MAX_ROWS: "4096"

  # TAOS_NUM_OF_THREADS_PER_CORE: number of threads per CPU core
  #TAOS_NUM_OF_THREADS_PER_CORE: "1.0"

  # TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
  #TAOS_NUM_OF_COMMIT_THREADS: "4"

  # TAOS_RATIO_OF_QUERY_CORES:
  #   the proportion of total CPU cores available for query processing
  #   2.0: the query threads will be set to double of the CPU cores.
  #   1.0: all CPU cores are available for query processing [default].
  #   0.5: only half of the CPU cores are available for query.
  #   0.0: only one core available.
  #TAOS_RATIO_OF_QUERY_CORES: "1.0"

  # TAOS_KEEP_COLUMN_NAME:
  #   the last_row/first/last aggregator will not change the original column name in the result fields
  #TAOS_KEEP_COLUMN_NAME: "0"

  # enable/disable backing up the vnode directory when removing a vnode
  #TAOS_VNODE_BAK: "1"

  # enable/disable installation / usage report
  #TAOS_TELEMETRY_REPORTING: "1"

  # enable/disable load balancing
  #TAOS_BALANCE: "1"

  # max timer control blocks
  #TAOS_MAX_TMR_CTRL: "512"

  # time interval of system monitor, seconds
  #TAOS_MONITOR_INTERVAL: "30"

  # number of seconds allowed for a dnode to be offline, for cluster only
  #TAOS_OFFLINE_THRESHOLD: "8640000"

  # RPC re-try timer, millisecond
  #TAOS_RPC_TIMER: "1000"

  # RPC maximum time for ack, seconds.
  #TAOS_RPC_MAX_TIME: "600"

  # time interval of dnode status reporting to mnode, seconds, for cluster only
  #TAOS_STATUS_INTERVAL: "1"

  # time interval of heart beat from shell to dnode, seconds
  #TAOS_SHELL_ACTIVITY_TIMER: "3"

  # minimum sliding window time, millisecond
  #TAOS_MIN_SLIDING_TIME: "10"

  # minimum time window, millisecond
  #TAOS_MIN_INTERVAL_TIME: "10"

  # maximum delay before launching a stream computation, millisecond
  #TAOS_MAX_STREAM_COMP_DELAY: "20000"

  # maximum delay before launching a stream computation for the first time, millisecond
  #TAOS_MAX_FIRST_STREAM_COMP_DELAY: "10000"

  # retry delay when a stream computation fails, millisecond
  #TAOS_RETRY_STREAM_COMP_DELAY: "10"

  # the delayed time for launching a stream computation, from 0.1 (default, 10% of whole computing time window) to 0.9
  #TAOS_STREAM_COMP_DELAY_RATIO: "0.1"

  # max number of vgroups per db, 0 means configured automatically
  #TAOS_MAX_VGROUPS_PER_DB: "0"

  # max number of tables per vnode
  #TAOS_MAX_TABLES_PER_VNODE: "1000000"

  # the number of acknowledgments required for successful data writing
  #TAOS_QUORUM: "1"

  # enable/disable compression
  #TAOS_COMP: "2"

  # write ahead log (WAL) level, 0: no wal; 1: write wal, but no fsync; 2: write wal, and call fsync
  #TAOS_WAL_LEVEL: "1"

  # if walLevel is set to 2, the cycle of fsync being executed; if set to 0, fsync is called right away
  #TAOS_FSYNC: "3000"

  # the compressed rpc message, option:
  #  -1 (no compression)
  #   0 (all messages compressed)
  #   > 0 (rpc message bodies larger than this value will be compressed)
  #TAOS_COMPRESS_MSG_SIZE: "-1"

  # max length of an SQL statement
  #TAOS_MAX_SQL_LENGTH: "1048576"

  # the maximum number of records allowed for super table time sorting
  #TAOS_MAX_NUM_OF_ORDERED_RES: "100000"

  # max number of connections allowed in dnode
  #TAOS_MAX_SHELL_CONNS: "5000"

  # max number of connections allowed in client
  #TAOS_MAX_CONNECTIONS: "5000"

  # stop writing logs when the disk size of the log folder is less than this value
  #TAOS_MINIMAL_LOG_DIR_G_B: "0.1"

  # stop writing temporary files when the disk size of the tmp folder is less than this value
  #TAOS_MINIMAL_TMP_DIR_G_B: "0.1"

  # if disk free space is less than this value, the taosd service exits directly within the startup process
  #TAOS_MINIMAL_DATA_DIR_G_B: "0.1"

  # one mnode is equal to the number of vnodes consumed
  #TAOS_MNODE_EQUAL_VNODE_NUM: "4"

  # enable/disable http service
  #TAOS_HTTP: "1"

  # enable/disable system monitor
  #TAOS_MONITOR: "1"

  # enable/disable recording the SQL statements via restful interface
  #TAOS_HTTP_ENABLE_RECORD_SQL: "0"

  # number of threads used to process http requests
  #TAOS_HTTP_MAX_THREADS: "2"

  # maximum number of rows returned by the restful interface
  #TAOS_RESTFUL_ROW_LIMIT: "10240"

  # The following parameter is used to limit the maximum number of lines in log files.
  # max number of lines per log filter
  # numOfLogLines 10000000

  # enable/disable async log
  #TAOS_ASYNC_LOG: "0"

  # time of keeping log files, days
  #TAOS_LOG_KEEP_DAYS: "0"

  # The following parameters are used for debug purposes only.
  # debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR
  #   131: output warning and error
  #   135: output debug, warning and error
  #   143: output trace, debug, warning and error to log
  #   199: output debug, warning and error to both screen and file
  #   207: output trace, debug, warning and error to both screen and file
  #
  # debug flag for all log types, takes effect when non-zero
  #TAOS_DEBUG_FLAG: "143"

  # enable/disable recording the SQL in taos client
  #TAOS_ENABLE_RECORD_SQL: "0"

  # generate a core file when the service crashes
  #TAOS_ENABLE_CORE_FILE: "1"

  # maximum display width of binary and nchar fields in the shell; parts exceeding this limit will be hidden
  #TAOS_MAX_BINARY_DISPLAY_WIDTH: "30"

  # enable/disable stream (continuous query)
  #TAOS_STREAM: "1"

  # in retrieve blocking model, only 50% of query threads will be used in query processing in dnode
  #TAOS_RETRIEVE_BLOCKING_MODEL: "0"

  # the maximum allowed query buffer size in MB during query processing for each data node
  #  -1: no limit (default)
  #   0: no query allowed, queries are disabled
  #TAOS_QUERY_BUFFER_SIZE: "-1"
```
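If you keep these settings in a local `values.yaml` file, you can pass it to Helm when installing the chart. This is a sketch; the chart archive name is an assumption and depends on the version you downloaded:

```bash
helm install tdengine tdengine-3.0.0.tgz -f values.yaml
```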
## Scaling Out
For information about scaling out your deployment, see Kubernetes. Additional Helm-specific information is described as follows.
First, obtain the name of the StatefulSet service for your deployment.
```bash
export STS_NAME=$(kubectl get statefulset \
-l "app.kubernetes.io/name=tdengine" \
-o jsonpath="{.items[0].metadata.name}")
```
You can scale out your deployment by adding replicas. The following command scales a deployment to three nodes:
```bash
kubectl scale --replicas 3 statefulset/$STS_NAME
```
Run the `show dnodes` and `show mnodes` commands to check whether the scale-out was successful.
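For example, assuming `$POD_NAME` is set to one of the cluster's pods as shown in the scale-in section below, the checks can be run from inside a container:

```bash
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
```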
## Scaling In
:::warning
Exercise caution when scaling in a cluster.
:::
Determine which dnodes you want to remove and drop them manually.
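The commands below reference `$POD_NAME`. If it is not already set in your session, one way to obtain it, assuming the chart's default labels, is:

```bash
export POD_NAME=$(kubectl get pods \
  -l "app.kubernetes.io/name=tdengine" \
  -o jsonpath="{.items[0].metadata.name}")
```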
```bash
kubectl --namespace default exec $POD_NAME -- \
cat /var/lib/taos/dnode/dnodeEps.json \
| jq '.dnodeInfos[1:] |map(.dnodeFqdn + ":" + (.dnodePort|tostring)) | .[]' -r
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes"
kubectl --namespace default exec $POD_NAME -- taos -s 'drop dnode "<your dnode in list>"'
```
## Remove a TDengine Cluster
You can use Helm to remove your cluster:
```bash
helm uninstall tdengine
```
However, Helm does not remove PVCs automatically. After you remove your cluster, manually remove all PVCs.
---
title: Deployment
---
TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
This document describes how to manually deploy a cluster on a host as well as how to deploy on Kubernetes and by using Helm.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
---
sidebar_label: Data Types
title: Data Types
description: "TDengine supports a variety of data types including timestamp, float, JSON and many others."
---
## Timestamp
When using TDengine to store and query data, the most important part of the data is the timestamp. A timestamp must be specified when inserting data rows, and it must follow the rules below:
Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the following example, the precision is set to nanoseconds.
```sql
CREATE DATABASE db_name PRECISION 'ns';
```
## Data Types
In TDengine, the data types below can be used when specifying a column or tag.
| #   | **type**          | **Bytes**    | **Description**           |
| --- | :---------------: | ------------ | ------------------------- |
| 1   | TIMESTAMP         | 8            | Default precision is millisecond, microsecond and nanosecond are also supported |
| 2   | INT               | 4            | Integer, the value range is [-2^31, 2^31-1] |
| 3   | INT UNSIGNED      | 4            | Unsigned integer, the value range is [0, 2^32-1] |
| 4   | BIGINT            | 8            | Long integer, the value range is [-2^63, 2^63-1] |
| 5   | BIGINT UNSIGNED   | 8            | Unsigned long integer, the value range is [0, 2^64-1] |
| 6   | FLOAT             | 4            | Floating point number, the effective number of digits is 6-7, the value range is [-3.4E38, 3.4E38] |
| 7   | DOUBLE            | 8            | Double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308] |
| 8   | BINARY            | User Defined | Single-byte string for ASCII visible characters. Length must be specified when defining a column or tag of binary type. |
| 9   | SMALLINT          | 2            | Short integer, the value range is [-32768, 32767] |
| 10  | SMALLINT UNSIGNED | 2            | Unsigned short integer, the value range is [0, 65535] |
| 11  | TINYINT           | 1            | Single-byte integer, the value range is [-128, 127] |
| 12  | TINYINT UNSIGNED  | 1            | Unsigned single-byte integer, the value range is [0, 255] |
| 13  | BOOL              | 1            | Bool, the value range is {true, false} |
| 14  | NCHAR             | User Defined | Multi-byte string that can include multi-byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes of storage. The string value should be quoted with single quotes. A literal single quote inside the string must be preceded with a backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type; for example, nchar(10) can store at most 10 NCHAR characters and consumes a fixed 40 bytes of storage. An error will be reported if the string value exceeds the declared length. |
| 15  | JSON              |              | JSON type can only be used on tags. A table with a JSON tag cannot have any other tags. |
| 16  | VARCHAR           | User-defined | Alias of BINARY |
:::note
- TDengine is case insensitive and treats any characters in the SQL command as lower case by default. Case sensitive strings must be quoted with single quotes.
- Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type.
- The length of BINARY can be up to 16374 bytes. The string value must be quoted with single quotes. You must specify a length in bytes for a BINARY value, for example binary(20) for up to twenty single-byte characters. If the data exceeds the specified length, an error will occur. The literal single quote inside the string must be preceded with a backslash like `\'`.
- Numeric values in SQL statements will be determined as integer or float type according to whether there is decimal point or whether scientific notation is used, so attention must be paid to avoid overflow. For example, 9999999999999999999 will be considered as overflow because it exceeds the upper limit of long integer, but 9999999999999999999.0 will be considered as a legal float number.
:::
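As an illustration only, the following sketch defines a table that uses several of these types; the table and column names are hypothetical:

```sql
CREATE TABLE sensor_data (
  ts   TIMESTAMP,   -- the first column must be TIMESTAMP; it is the primary key
  val  DOUBLE,      -- 8-byte floating-point metric
  ok   BOOL,        -- boolean flag
  note NCHAR(16),   -- up to 16 multi-byte characters (fixed 64 bytes of storage)
  raw  BINARY(32)   -- up to 32 single-byte characters
);
```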
## Constants
TDengine supports a variety of constants:
| # | **Syntax** | **Type** | **Description** |
| --- | :-------: | --------- | -------------------------------------- |
| 1 | [{+ \| -}]123 | BIGINT | Integer literals are of type BIGINT. Data that exceeds the range of the BIGINT type is truncated. |
| 2 | 123.45 | DOUBLE | Floating-point literals are of type DOUBLE. Numeric values are determined to be integer or floating type according to whether there is a decimal point or whether scientific notation is used. |
| 3 | 1.2E3 | DOUBLE | Literals in scientific notation are of type DOUBLE. |
| 4 | 'abc' | BINARY | Content enclosed in single quotation marks is of type BINARY. The size of a BINARY is the size of the string in bytes. A literal single quote inside the string must be escaped with a backslash, i.e. `\'`. |
| 5 | "abc" | BINARY | Content enclosed in double quotation marks is of type BINARY. The size of a BINARY is the size of the string in bytes. A literal double quote inside the string must be escaped with a backslash, i.e. `\"`. |
| 6 | TIMESTAMP {'literal' \| "literal"} | TIMESTAMP | The TIMESTAMP keyword indicates that the following string literal is interpreted as a timestamp. The string must be in YYYY-MM-DD HH:mm:ss.MS format. The precision is inherited from the database configuration. |
| 7 | {TRUE \| FALSE} | BOOL | Boolean literals are of type BOOL. |
| 8 | {'' \| "" \| '\t' \| "\t" \| ' ' \| " " \| NULL } | -- | The preceding characters indicate null literals. These can be used with any data type. |
:::note
- Numeric values are determined to be integer or floating type according to whether there is a decimal point or whether scientific notation is used, so attention must be paid to avoid overflow. For example, 9999999999999999999 is considered an overflow because it exceeds the upper limit of the BIGINT type, but 9999999999999999999.0 is considered a valid floating-point number because it is within the range of the DOUBLE type.
:::
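For example, a `TIMESTAMP` constant can be used in a filter condition; this is a sketch only, and the table name `d1001` follows the examples used elsewhere in these documents:

```sql
SELECT * FROM d1001 WHERE ts >= TIMESTAMP '2021-07-13 16:16:48.000';
```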
---
title: Database
description: "create and drop database, show or change database parameters"
---
## Create a Database
```sql
CREATE DATABASE [IF NOT EXISTS] db_name [database_options]
database_options:
database_option ...
database_option: {
BUFFER value
| CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
| CACHESIZE value
| COMP {0 | 1 | 2}
| DURATION value
| WAL_FSYNC_PERIOD value
| MAXROWS value
| MINROWS value
| KEEP value
| PAGES value
| PAGESIZE value
| PRECISION {'ms' | 'us' | 'ns'}
| REPLICA value
| RETENTIONS ingestion_duration:keep_duration ...
| STRICT {'off' | 'on'}
| WAL_LEVEL {1 | 2}
| VGROUPS value
| SINGLE_STABLE {0 | 1}
| WAL_RETENTION_PERIOD value
| WAL_ROLL_PERIOD value
| WAL_RETENTION_SIZE value
| WAL_SEGMENT_SIZE value
}
```
## Parameters
- BUFFER: specifies the size (in MB) of the write buffer for each vnode. Enter a value between 3 and 16384. The default value is 96.
- CACHEMODEL: specifies how the latest data in subtables is stored in the cache. The default value is none.
- none: The latest data is not cached.
- last_row: The last row of each subtable is cached. This option significantly improves the performance of the LAST_ROW function.
- last_value: The last non-null value of each column in each subtable is cached. This option significantly improves the performance of the LAST function under normal circumstances, such as statements including the WHERE, ORDER BY, GROUP BY, and INTERVAL keywords.
- both: The last row of each subtable and the last non-null value of each column in each subtable are cached.
- CACHESIZE: specifies the amount (in MB) of memory used for subtable caching on each vnode. Enter a value between 1 and 65536. The default value is 1.
- COMP: specifies how databases are compressed. The default value is 2.
- 0: Compression is disabled.
- 1: One-pass compression is enabled.
- 2: Two-pass compression is enabled.
- DURATION: specifies the time period contained in each data file. After the time specified by this parameter has elapsed, TDengine creates a new data file to store incoming data. You can use m (minutes), h (hours), and d (days) as the unit, for example DURATION 100h or DURATION 10d. If you do not include a unit, d is used by default.
- WAL_FSYNC_PERIOD: specifies the interval (in milliseconds) at which data is written from the WAL to disk. This parameter takes effect only when the WAL_LEVEL parameter is set to 2. The default value is 3000. Enter a value between 0 and 180000. The value 0 indicates that incoming data is immediately written to disk.
- MAXROWS: specifies the maximum number of rows recorded in a block. The default value is 4096.
- MINROWS: specifies the minimum number of rows recorded in a block. The default value is 100.
- KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by default.
- PAGES: specifies the number of pages in the metadata storage engine cache on each vnode. Enter a value greater than or equal to 64. The default value is 256. The space occupied by metadata storage on each vnode is equal to the product of the values of the PAGESIZE and PAGES parameters. The space occupied by default is 1 MB.
- PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. The default value is 4. Enter a value between 1 and 16384.
- PRECISION: specifies the precision at which a database records timestamps. Enter ms for milliseconds, us for microseconds, or ns for nanoseconds. The default value is ms.
- REPLICA: specifies the number of replicas that are made of the database. Enter 1 or 3. The default value is 1. The value of the REPLICA parameter cannot exceed the number of dnodes in the cluster.
- RETENTIONS: specifies the retention period for data aggregated at various intervals. For example, RETENTIONS 15s:7d,1m:21d,15m:50d indicates that data aggregated every 15 seconds is retained for 7 days, data aggregated every 1 minute is retained for 21 days, and data aggregated every 15 minutes is retained for 50 days. You must enter three aggregation intervals and corresponding retention periods.
- STRICT: specifies whether strong data consistency is enabled. The default value is off.
- on: Strong consistency is enabled and implemented through the Raft consensus algorithm. In this mode, an operation is considered successful once it is confirmed by half of the nodes in the cluster.
- off: Strong consistency is disabled. In this mode, an operation is considered successful when it is initiated by the local node.
- WAL_LEVEL: specifies whether fsync is enabled. The default value is 1.
- 1: WAL is enabled but fsync is disabled.
- 2: WAL and fsync are both enabled.
- VGROUPS: specifies the initial number of vgroups when a database is created.
- SINGLE_STABLE: specifies whether the database can contain more than one supertable.
- 0: The database can contain multiple supertables.
- 1: The database can contain only one supertable.
- WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. Enter a time in seconds. The default value is 0. A value of 0 indicates that each WAL file is deleted immediately after its contents are written to disk. -1: WAL files are never deleted.
- WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that each WAL file is deleted immediately after its contents are written to disk. -1: WAL files are never deleted.
- WAL_ROLL_PERIOD: specifies the time after which WAL files are rotated. After this period elapses, a new WAL file is created. The default value is 0. A value of 0 indicates that a new WAL file is created only after the previous WAL file was written to disk.
- WAL_SEGMENT_SIZE: specifies the maximum size of a WAL file. After the current WAL file reaches this size, a new WAL file is created. The default value is 0. A value of 0 indicates that a new WAL file is created only after the previous WAL file was written to disk.
### Example Statement
```sql
create database if not exists db vgroups 10 buffer 10
```
The preceding SQL statement creates a database named db that has 10 vgroups and whose vnodes have a 10 MB write buffer.
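As a further sketch, the RETENTIONS parameter described above can be specified at creation time; the database name here is a placeholder:

```sql
create database if not exists db_rollup retentions 15s:7d,1m:21d,15m:50d
```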
### Specify the Database in Use
```
USE db_name;
```
The preceding SQL statement switches to the specified database. (If you connect to TDengine over the REST API, this statement does not take effect.)
## Drop a Database

```sql
DROP DATABASE [IF EXISTS] db_name
```

The preceding SQL statement deletes the specified database. This statement will delete all tables in the database and destroy all vgroups associated with it. Exercise caution when using this statement.

## Change Database Configuration

```sql
ALTER DATABASE db_name [alter_database_options]

alter_database_options:
alter_database_option ...

alter_database_option: {
CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
| CACHESIZE value
| WAL_LEVEL value
| WAL_FSYNC_PERIOD value
| KEEP value
}
```

:::note
Other parameters cannot be modified after the database has been created.
:::
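For example, because KEEP can be modified, the following statement changes the retention period of a database to 365 days:

```sql
ALTER DATABASE db_name KEEP 365;
```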
## View Databases

### View All Databases

```sql
SHOW DATABASES;
```

### View the CREATE Statement for a Database

```sql
SHOW CREATE DATABASE db_name;
```

The preceding SQL statement can be used in migration scenarios. This command can be used to get the CREATE statement, which can be used in another TDengine instance to create the exact same database.

### View Database Configuration

```sql
SHOW DATABASES \G;
```

The preceding SQL statement shows the value of each parameter for the specified database. One value is displayed per line.

## Delete Expired Data

```sql
TRIM DATABASE db_name;
```

The preceding SQL statement deletes data that has expired and orders the remaining data in accordance with the storage configuration.
---
sidebar_label: Table
title: Table
description: create super table, normal table and sub table, drop tables and change tables
---
## Create Table
You create standard tables and subtables with the `CREATE TABLE` statement.
```sql
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...) [table_options]
CREATE TABLE create_subtable_clause
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...)
[TAGS (create_definition [, create_definition] ...)]
[table_options]
create_subtable_clause: {
create_subtable_clause [create_subtable_clause] ...
| [IF NOT EXISTS] [db_name.]tb_name USING [db_name.]stb_name [(tag_name [, tag_name] ...)] TAGS (tag_value [, tag_value] ...)
}
create_definition:
col_name column_definition
column_definition:
type_name [comment 'string_value']
table_options:
table_option ...
table_option: {
COMMENT 'string_value'
| WATERMARK duration[,duration]
| MAX_DELAY duration[,duration]
| ROLLUP(func_name [, func_name] ...)
| SMA(col_name [, col_name] ...)
| TTL value
}
```
**More explanations**
1. The first column of a table MUST be of type TIMESTAMP. It is automatically set as the primary key.
2. The maximum length of the table name is 192 bytes.
3. The maximum length of each row is 48 KB; note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
4. The name of the subtable can only consist of characters from the English alphabet, digits and underscore. Table names can't start with a digit. Table names are case insensitive.
5. The maximum length in bytes must be specified when using BINARY or NCHAR types.
6. Escape character "\`" can be used to avoid the conflict between table names and reserved keywords, above rules will be bypassed when using escape character on table names, but the upper limit for the name length is still valid. The table names specified using escape character are case sensitive.
For example, \`aBc\` and \`abc\` are different table names, but `abc` and `aBc` are the same table name because both are converted to `abc` internally.
Only ASCII visible characters can be used with escape character.
**Parameter description**
1. COMMENT: specifies comments for the table. This parameter can be used with supertables, standard tables, and subtables.
2. WATERMARK: specifies the time after which the window is closed. The default value is 5 seconds. Enter a value between 0 and 15 minutes in milliseconds, seconds, or minutes. You can enter multiple values separated by commas (,). This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database.
3. MAX_DELAY: specifies the maximum latency for pushing computation results. The default value is 15 minutes or the value of the INTERVAL parameter, whichever is smaller. Enter a value between 0 and 15 minutes in milliseconds, seconds, or minutes. You can enter multiple values separated by commas (,). Note: Retain the default value if possible. Configuring a small MAX_DELAY may cause results to be frequently pushed, affecting storage and query performance. This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database.
4. ROLLUP: specifies aggregate functions to roll up. Rolling up a function provides downsampled results based on multiple axes. This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database. You can specify only one function to roll up. The rollup takes effect on all columns except TS. Enter one of the following values: avg, sum, min, max, last, or first.
5. SMA: specifies functions on which to enable small materialized aggregates (SMA). SMA is user-defined precomputation of aggregates based on data blocks. Enter one of the following values: max, min, or sum. This parameter can be used with supertables and standard tables.
6. TTL: specifies the time to live (TTL) for the table. If the period specified by the TTL parameter elapses without any data being written to the table, TDengine will automatically delete the table. Note: The system may not delete the table at the exact moment that the TTL expires. Enter a value in days. The default value is 0. Note: The TTL parameter has a higher priority than the KEEP parameter. If a table is marked for deletion because the TTL has expired, it will be deleted even if the time specified by the KEEP parameter has not elapsed. This parameter can be used with standard tables and subtables.
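As a sketch of how these options combine, the following hypothetical statement creates a standard table with a comment and a 90-day TTL; the table and column names are examples only:

```sql
CREATE TABLE IF NOT EXISTS device_log (ts TIMESTAMP, msg BINARY(64)) TTL 90 COMMENT 'raw device messages';
```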
## Create Subtables

### Create a Subtable

```sql
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name TAGS (tag_value1, ...);
```

The preceding SQL statement creates a subtable using the specified supertable as a template and sets the specified tag values.
### Create a Subtable with Specified Tags
```sql
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...);
```
The preceding SQL statement creates a subtable based on a supertable but specifies a subset of tags to use. Tags that are not included in this subset are assigned a null value.
### Create Multiple Subtables

```sql
CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
```
You can create multiple subtables in a single SQL statement provided that all subtables use the same supertable. For performance reasons, do not create more than 3000 tables per statement.
## Modify a Table
```sql
ALTER TABLE [db_name.]tb_name alter_table_clause
alter_table_clause: {
alter_table_options
| ADD COLUMN col_name column_type
| DROP COLUMN col_name
| MODIFY COLUMN col_name column_type
| RENAME COLUMN old_col_name new_col_name
}
alter_table_options:
alter_table_option ...
alter_table_option: {
TTL value
| COMMENT 'string_value'
}
```
**More explanations**
You can perform the following modifications on existing tables:
1. ADD COLUMN: adds a column to the table.
2. DROP COLUMN: deletes a column from the table.
3. MODIFY COLUMN: changes the length of the data type specified for the column. Note that you can only specify a length greater than the current length.
4. RENAME COLUMN: renames a specified column in the table.
### Add a Column

```sql
ALTER TABLE tb_name ADD COLUMN field_name data_type;
```

### Delete a Column

```sql
ALTER TABLE tb_name DROP COLUMN field_name;
```

### Modify the Data Length

```sql
ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);
```

### Rename a Column

```sql
ALTER TABLE tb_name RENAME COLUMN old_col_name new_col_name
```
## Modify a Subtable
```sql
ALTER TABLE [db_name.]tb_name alter_table_clause
alter_table_clause: {
alter_table_options
| SET TAG tag_name = new_tag_value
}
alter_table_options:
alter_table_option ...
alter_table_option: {
TTL value
| COMMENT 'string_value'
}
```
**More explanations**
1. Only the value of a tag can be modified directly. For all other modifications, you must modify the supertable from which the subtable was created.
### Change Tag Value Of Sub Table

```sql
ALTER TABLE tb_name SET TAG tag_name=new_tag_value;
```
## Delete a Table

The following SQL statement deletes one or more tables.
```sql
DROP TABLE [IF EXISTS] [db_name.]tb_name [, [IF EXISTS] [db_name.]tb_name] ...
```
## View Tables

### View All Tables

The following SQL statement shows all tables in the current database.

```sql
SHOW TABLES [LIKE tb_name_wildchar];
```
### View the CREATE Statement for a Table
```sql
SHOW CREATE TABLE tb_name;
```

This command is useful in migrating data from one TDengine cluster to another because it can be used to create the exact same tables in the target database.

## View the Table Schema

```sql
DESCRIBE [db_name.]tb_name;
```
---
sidebar_label: Supertable
title: Supertable
---
## Create a Supertable
```sql
CREATE STABLE [IF NOT EXISTS] stb_name (create_definition [, create_definition] ...) TAGS (create_definition [, create_definition] ...) [table_options]
create_definition:
col_name column_definition
column_definition:
type_name [COMMENT 'string_value']
```
**More explanations**
- Each supertable can have a maximum of 4096 columns, including tags. The minimum number of columns is 3: a timestamp column used as the key, one tag column, and one data column.
- When you create a supertable, you can add comments to columns and tags.
- The TAGS keyword defines the tag columns for the supertable. The following restrictions apply to tag columns:
- A tag column can use the TIMESTAMP data type, but the values in the column must be fixed numbers. Timestamps including formulae, such as "now + 10s", cannot be stored in a tag column.
- The name of a tag column cannot be the same as the name of any other column.
- The name of a tag column cannot be a reserved keyword.
- Each supertable must contain between 1 and 128 tags. The total length of all tags cannot exceed 16 KB.
- For more information about table parameters, see Create a Table.
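For example, the `meters` supertable used throughout these documents can be created along the following lines; the BINARY length for the location tag is illustrative:

```sql
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
```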
## View a Supertable
### View All Supertables
```
SHOW STABLES [LIKE tb_name_wildcard];
```
The preceding SQL statement shows all supertables in the current TDengine database, including the name, creation time, number of columns, number of tags, and number of subtables for each supertable.
### View the CREATE Statement for a Supertable
```
SHOW CREATE STABLE stb_name;
```
The preceding SQL statement can be used in migration scenarios. It returns the CREATE statement that was used to create the specified supertable. You can then use the returned statement to create an identical supertable on another TDengine database.
## View the Supertable Schema
```
DESCRIBE [db_name.]stb_name;
```
## Drop STable
```
DROP STABLE [IF EXISTS] [db_name.]stb_name
```
Note: Deleting a supertable will delete all subtables created from the supertable, including all data within those subtables.
## Modify a Supertable
```sql
ALTER STABLE [db_name.]tb_name alter_table_clause
alter_table_clause: {
alter_table_options
| ADD COLUMN col_name column_type
| DROP COLUMN col_name
| MODIFY COLUMN col_name column_type
| ADD TAG tag_name tag_type
| DROP TAG tag_name
| MODIFY TAG tag_name tag_type
| RENAME TAG old_tag_name new_tag_name
}
alter_table_options:
alter_table_option ...
alter_table_option: {
COMMENT 'string_value'
}
```
**More explanations**
Modifications to the table schema of a supertable take effect on all subtables within the supertable. You cannot modify the table schema of subtables individually. When you modify the tag schema of a supertable, the modifications automatically take effect on all of its subtables.
- ADD COLUMN: adds a column to the supertable.
- DROP COLUMN: deletes a column from the supertable.
- MODIFY COLUMN: changes the length of a BINARY or NCHAR column. Note that you can only specify a length greater than the current length.
- ADD TAG: adds a tag to the supertable.
- DROP TAG: deletes a tag from the supertable. When you delete a tag from a supertable, it is automatically deleted from all subtables within the supertable.
- MODIFY TAG: modifies the definition of a tag in the supertable. You can use this keyword to change the length of a BINARY or NCHAR tag column. Note that you can only specify a length greater than the current length.
- RENAME TAG: renames a specified tag in the supertable. When you rename a tag in a supertable, it is automatically renamed in all subtables within the supertable.
### Add a Column
```
ALTER STABLE stb_name ADD COLUMN col_name column_type;
```
### Delete a Column
```
ALTER STABLE stb_name DROP COLUMN col_name;
```
### Modify the Data Length
```
ALTER STABLE stb_name MODIFY COLUMN col_name data_type(length);
```
The preceding SQL statement changes the length of a BINARY or NCHAR data column. Note that you can only specify a length greater than the current length.
### Add a Tag
```
ALTER STABLE stb_name ADD TAG tag_name tag_type;
```
The preceding SQL statement adds a tag of the specified type to the supertable. A supertable cannot contain more than 128 tags. The total length of all tags in a supertable cannot exceed 16 KB.
### Remove a Tag
```
ALTER STABLE stb_name DROP TAG tag_name;
```
The preceding SQL statement deletes a tag from the supertable. When you delete a tag from a supertable, it is automatically deleted from all subtables within the supertable.
### Rename a Tag
```
ALTER STABLE stb_name RENAME TAG old_tag_name new_tag_name;
```
The preceding SQL statement renames a tag in the supertable. When you rename a tag in a supertable, it is automatically renamed in all subtables within the supertable.
### Change Tag Length
```
ALTER STABLE stb_name MODIFY TAG tag_name data_type(length);
```
The preceding SQL statement changes the length of a BINARY or NCHAR tag column. Note that you can only specify a length greater than the current length.
### View a Supertable
You can run projection and aggregate SELECT queries on supertables, and you can filter by tag or column by using the WHERE keyword.
If you do not include an ORDER BY clause, results are returned by subtable. These results are not ordered. You can include an ORDER BY clause in your query to strictly order the results.
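As a sketch, assuming the `meters` supertable, the following query filters by a tag and strictly orders the merged results:

```sql
SELECT ts, current, location FROM meters WHERE groupId = 2 ORDER BY ts DESC;
```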
:::note
All tag operations except for updating the value of a tag must be performed on the supertable and not on individual subtables. If you add a tag to an existing supertable, the tag is automatically added with a null value to all subtables within the supertable.
:::
---
sidebar_label: Insert
title: Insert
---
## Syntax

```sql
INSERT INTO
    tb_name
        [USING stb_name [(tag1_name, ...)] TAGS (tag1_value, ...)]
        [(field1_name, ...)]
        VALUES (field1_value, ...) [(field1_value2, ...) ...] | FILE csv_file_path
    [tb2_name
        [USING stb_name [(tag1_name, ...)] TAGS (tag1_value, ...)]
        [(field1_name, ...)]
        VALUES (field1_value, ...) [(field1_value2, ...) ...] | FILE csv_file_path
    ...];
```
**Timestamps**

1. All data writes must include a timestamp. With regard to timestamps, note the following:
2. The precision of a timestamp depends on its format. The precision configured for the database affects only timestamps that are inserted as long integers (UNIX time). Timestamps inserted as date and time strings are not affected. As an example, the timestamp 2021-07-13 16:16:48 is equivalent to 1626164208 in UNIX time. This UNIX time is modified to 1626164208000 for databases with millisecond precision, 1626164208000000 for databases with microsecond precision, and 1626164208000000000 for databases with nanosecond precision.
3. If you want to insert multiple rows simultaneously, do not use the NOW function in the timestamp. Using the NOW function in this situation will cause multiple rows to have the same timestamp and prevent them from being stored correctly. This is because the NOW function obtains the current time on the client, and multiple instances of NOW in a single statement will return the same time.
4. The earliest timestamp that you can use when inserting data is equal to the current time on the server minus the value of the KEEP parameter. The latest timestamp that you can use when inserting data is equal to the current time on the server plus the value of the DURATION parameter. You can configure the KEEP and DURATION parameters when you create a database. The default values are 3650 days for the KEEP parameter and 10 days for the DURATION parameter.
**Syntax**
1. The USING clause automatically creates the specified subtable if it does not exist. To use this functionality, a supertable must be used as a template and tag values must be provided. Any tags that you do not specify will be assigned a null value.
2. You can insert data into specified columns. Any columns in which you do not insert data will be assigned a null value.
3. The VALUES clause inserts one or more rows of data into a table.
4. The FILE clause inserts tags or data from a comma-separated values (CSV) file. Do not include headers in your CSV files.
5. A single INSERT statement can write data to multiple tables.
6. The INSERT statement is fully parsed before being executed, so that if any element of the statement fails, the entire statement will fail. For example, the following statement will not create a table because the latter part of the statement is invalid:
```sql
INSERT INTO d1001 USING meters TAGS('Beijing.Chaoyang', 2) VALUES('a');
```
7. However, an INSERT statement that writes data to multiple subtables can succeed for some tables and fail for others. This situation is caused because vnodes perform write operations independently of each other. One vnode failing to write data does not affect the ability of other vnodes to write successfully.
## Insert a Record
Single row or multiple rows specified with VALUES can be inserted into a specific table. A single row is inserted using the below statement.
```sql
INSERT INTO d1001 VALUES (NOW, 10.2, 219, 0.32);
```
## Insert Multiple Records
Two rows are inserted using the below statement.
```sql
INSERT INTO d1001 VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32) (1626164208000, 10.15, 217, 0.33);
```
## Write to a Specified Column

Data can be inserted into specific columns, either single row or multiple row, while other columns will be inserted as NULL value. The key (timestamp) cannot be null. For example:

```sql
INSERT INTO d1001 (ts, current, phase) VALUES ('2021-07-13 14:06:33.196', 10.27, 0.31);
```
## Insert Into Multiple Tables
One or multiple rows can be inserted into multiple tables in a single SQL statement, with or without specifying specific columns. For example:
```sql
INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
            d1002 (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
```
## Automatically Create Table When Inserting
If it's unknown whether the table already exists, the table can be created automatically while inserting using the SQL statement below. To use this functionality, a STable must be used as template and tag values must be provided. For example:
```sql
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32);
```
It's not necessary to provide values for all tags when creating tables automatically, the tags without values provided will be set to NULL. For example:
```sql
INSERT INTO d21001 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:33.196', 10.15, 217, 0.33);
```
Multiple rows can also be inserted into the same table in a single SQL statement. For example:
```sql
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
            d21002 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:34.255', 10.15, 217, 0.33)
d21003 USING meters (groupId) TAGS (2) (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
```
## Insert Rows From A File
Besides using `VALUES` to insert one or multiple rows, the data to be inserted can also be prepared in a CSV file with comma as separator and each field value quoted by single quotes. Table definition is not required in the CSV file. For example, if file "/tmp/csvfile.csv" contains the below data:
```
'2021-07-13 14:07:34.630', 10.2, 219, 0.32
'2021-07-13 14:07:35.779', 10.15, 217, 0.33
```

Then the data in this file can be inserted by the SQL statement below:

```sql
INSERT INTO d1001 FILE '/tmp/csvfile.csv';
```
## Create Tables Automatically and Insert Rows From File
When writing data from a file, you can automatically create the specified subtable if it does not exist. For example:
```sql
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile_21001.csv'
d21002 USING meters (groupId) TAGS (2) FILE '/tmp/csvfile_21002.csv';
```
---
description: "Delete data from table or Stable"
title: Delete Data
---
TDengine provides the functionality of deleting data from a table or STable according to a specified time range; it can be used to clean up abnormal data generated due to device failure.
**Syntax:**
```sql
DELETE FROM [ db_name. ] tb_name [WHERE condition];
```
**Description:** Delete data from a table or STable
**Parameters:**
- `db_name`: Optional parameter, specifies the database in which the table exists; if not specified, the current database will be used.
- `tb_name`: Mandatory parameter, specifies the table name from which data will be deleted, it can be normal table, subtable or STable.
- `condition`: Optional parameter, specifies the data filter condition. If no condition is specified all data will be deleted, so please be cautious when deleting data without any condition. The condition used here is only applicable to the first column, i.e. the timestamp column.
**More Explanations:**
The data can't be recovered once deleted, so please be cautious when using this functionality. It is recommended to verify the data to be deleted with `select` before executing `delete`.
**Example:**
`meters` is a STable, in which `groupid` is a tag column of int type. Now we want to delete the data older than 2021-10-01 10:40:00.100. You can perform this action by running the following SQL statement:
```sql
delete from meters where ts < '2021-10-01 10:40:00.100' ;
```
The output is:
......
---
sidebar_label: Time-Series Extensions
title: Time-Series Extensions
---
As a purpose-built database for storing and processing time-series data, TDengine provides time-series-specific extensions to standard SQL.
These extensions include tag-partitioned queries and windowed queries.
## Tag-Partitioned Queries
When you query a supertable, you may need to partition the supertable by tag and perform additional operations on a specific partition. In this case, you can use the following SQL clause:
```sql
PARTITION BY part_list
```
part_list can be any scalar expression, such as a column, constant, scalar function, or a combination of the preceding items.
A PARTITION BY clause with a tag is processed as follows:
- The PARTITION BY clause must occur after the WHERE clause and cannot be used with a JOIN clause.
- The PARTITION BY clause partitions the super table by the specified tag group, and the specified calculation is performed on each partition. The calculation performed is determined by the rest of the statement - a window clause, GROUP BY clause, or SELECT clause.
- You can use PARTITION BY together with a window clause or GROUP BY clause. In this case, the window or GROUP BY clause takes effect on every partition. For example, the following statement partitions the table by the location tag, performs downsampling over a 10 minute window, and returns the maximum value:
```sql
select max(current) from meters partition by location interval(10m)
```
## Windowed Queries
Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every second, the average temperature for every 10 minutes can be retrieved by performing a query with a time window. Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are three kinds of windows: time window, status window, and session window. There are two kinds of time windows: sliding window and flip time/tumbling window. The query syntax is as follows:
```sql
SELECT function_list FROM tb_name
......@@ -63,27 +38,36 @@ SELECT function_list FROM tb_name
[STATE_WINDOW(col)]
[INTERVAL(interval [, offset]) [SLIDING sliding]]
[FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
```
The following restrictions apply:
### Restricted Functions
- Aggregate functions and select functions can be used in `function_list`, with each function having only one output. For example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple outputs, such as DIFF or arithmetic operations, can't be used.
- `LAST_ROW` can't be used together with window aggregate.
- Scalar functions, like CEIL/FLOOR, can't be used with window aggregate.
### Other Rules
- The window clause must occur after the PARTITION BY clause and before the GROUP BY clause. It cannot be used with a GROUP BY clause.
- SELECT clauses on windows can contain only the following expressions:
  - Constants
  - Aggregate functions
  - Expressions that include the preceding expressions.
- `WHERE` clause can be used to specify the starting and ending time and other filter conditions
### FILL Clause
`FILL` clause is used to specify how to fill when there is data missing in any window, including:

1. NONE: No fill (the default fill mode)
2. VALUE: Fill with a fixed value, which should be specified together, for example `FILL(VALUE, 1.23)`. Note: The value filled depends on the data type. For example, if you run FILL(VALUE, 1.23) on an integer column, the value 1 is filled.
3. PREV: Fill with the previous non-NULL value, `FILL(PREV)`
4. NULL: Fill with NULL, `FILL(NULL)`
5. LINEAR: Fill by linear interpolation based on the nearest non-NULL values, `FILL(LINEAR)`
6. NEXT: Fill with the next non-NULL value, `FILL(NEXT)`
:::info
......@@ -93,17 +77,66 @@ SELECT function_list FROM stb_name
:::
### Time Window
There are two kinds of time windows: sliding window and flip time/tumbling window.
The `INTERVAL` clause is used to generate time windows of the same time interval. The `SLIDING` parameter is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time/tumbling window.
![TDengine Database Time Window](./timewindow-1.webp)
`INTERVAL` and `SLIDING` should be used with aggregate functions and select functions. The SQL statement below is illegal because no aggregate or selection function is used with `INTERVAL`.
```
SELECT * FROM temp_tb_1 INTERVAL(1m);
```
The time step specified by `SLIDING` cannot exceed the time interval specified by `INTERVAL`. The SQL statement below is illegal because the time length specified by `SLIDING` exceeds that specified by `INTERVAL`.
```
SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
```
When using time windows, note the following:

- The window length for aggregation depends on the value of INTERVAL. The minimum interval is 10 ms. You can configure a window as an offset from UTC 0:00. The offset cannot be smaller than the interval. You can use SLIDING to specify the length of time that the window moves forward.
- The result set is in ascending order of timestamp when you aggregate by time window.

Please note that the `timezone` parameter should be configured to be the same value in the `taos.cfg` configuration file on the client side and server side.
### Status Window
In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12]. Status window is not applicable to STable for now.
![TDengine Database Status Window](./timewindow-3.webp)
`STATE_WINDOW` is used to specify the column on which the status window will be based. For example:
```
SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
```
### Session Window
The primary key, i.e. timestamp, is used to determine which session window a row belongs to. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
![TDengine Database Session Window](./timewindow-2.webp)
If the time interval between two continuous rows is within the time interval specified by `tol_val`, they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now.
```
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
```
### Examples
A table of intelligent meters can be created by the SQL statement below:
```sql
CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
```
The average current, maximum current and median of current in every 10 minutes for the past 24 hours can be calculated using the SQL statement below, with missing values filled with the previous non-NULL values. The query statement is as follows:
```
SELECT AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters
......
---
sidebar_label: Data Subscription
title: Data Subscription
---
The information in this document is related to the TDengine data subscription feature.
## Create a Topic
```sql
CREATE TOPIC [IF NOT EXISTS] topic_name AS subquery;
```
You can use filtering, scalar functions, and user-defined scalar functions with a topic. JOIN, GROUP BY, windows, aggregate functions, and user-defined aggregate functions are not supported. The following rules apply to subscribing to a column:
1. The returned field is determined when the topic is created.
2. Columns to which a consumer is subscribed or that are involved in calculations cannot be deleted or modified.
3. If you add a column, the new column will not appear in the results for the subscription.
4. If you run `SELECT \*`, all columns in the subscription at the time of its creation are displayed. This includes columns in supertables, standard tables, and subtables. Supertables are shown as data columns plus tag columns.
## Delete a Topic
```sql
DROP TOPIC [IF EXISTS] topic_name;
```
If a consumer is subscribed to the topic that you delete, the consumer will receive an error.
## View Topics
......@@ -43,24 +36,24 @@ DROP TOPIC [IF EXISTS] topic_name;
SHOW TOPICS;
```
The preceding command displays all topics in the current database.
## Create Consumer Group
You can create consumer groups only through the TDengine Client driver or the API provided by a connector.
## Delete Consumer Group
```sql
DROP CONSUMER GROUP [IF EXISTS] cgroup_name ON topic_name;
```
This statement deletes the consumer group `cgroup_name` for the topic `topic_name`.
## View Consumer Groups
```sql
SHOW CONSUMERS;
```
The preceding command displays all consumer groups in the current database.
......@@ -5,62 +5,62 @@ title: Operators
## Arithmetic Operators
| # | **Operator** | **Supported Data Types** | **Description** |
| --- | :--------: | -------------- | -------------------------- |
| 1 | +, - | Numeric | Expresses sign. Unary operators. |
| 2 | +, - | Numeric | Expresses addition and subtraction. Binary operators. |
| 3 | \*, / | Numeric | Expresses multiplication and division. Binary operators. |
| 4 | % | Numeric | Expresses modulo. Binary operator. |
## Bitwise Operators
| # | **Operator** | **Supported Data Types** | **Description** |
| --- | :--------: | -------------- | ------------------ |
| 1 | & | Numeric | Bitwise AND. Binary operator. |
| 2 | \| | Numeric | Bitwise OR. Binary operator. |
## JSON Operators
The `->` operator returns the value for a key in a JSON column. Specify the column indicator on the left of the operator and the key name, as a string constant, on the right of the operator. For example, `col->'name'` returns the value of the name key.
## Set Operators
Set operators combine the results of two queries. Queries that include set operators are known as compound queries. The expressions corresponding to each query in the select list in a compound query must match in number. The results returned take the data type of the first query, and the data type returned by subsequent queries must be convertible into the data type of the first query. The conditions of the `CAST` function apply to this conversion.
TDengine supports the `UNION` and `UNION ALL` operations. UNION ALL collects all query results and returns them as a composite result without deduplication. UNION collects all query results and returns them as a deduplicated composite result. In a single SQL statement, at most 100 set operators can be supported.
## Comparison Operators
| # | **Operator** | **Supported Data Types** | **Description** |
| --- | :---------------: | -------------------------------------------------------------------- | -------------------- |
| 1 | = | All types except BLOB, MEDIUMBLOB, and JSON | Equal to |
| 2 | <\>, != | All types except BLOB, MEDIUMBLOB, and JSON; the primary key (timestamp) is also not supported | Not equal to |
| 3 | \>, < | All types except BLOB, MEDIUMBLOB, and JSON | Greater than and less than |
| 4 | \>=, <= | All types except BLOB, MEDIUMBLOB, and JSON | Greater than or equal to and less than or equal to |
| 5 | IS [NOT] NULL | All types | Indicates whether the value is null |
| 6 | [NOT] BETWEEN AND | All types except BLOB, MEDIUMBLOB, and JSON | Closed interval comparison |
| 7 | IN | All types except BLOB, MEDIUMBLOB, and JSON; the primary key (timestamp) is also not supported | Equal to any value in the list |
| 8 | LIKE | BINARY, NCHAR, and VARCHAR | Wildcard match |
| 9 | MATCH, NMATCH | BINARY, NCHAR, and VARCHAR | Regular expression match |
| 10 | CONTAINS | JSON | Indicates whether the key exists |
LIKE is used together with wildcards to match strings. Its usage is described as follows:
- '%' matches 0 or any number of characters; '\_' matches any single ASCII character.
- `\_` is used to match the \_ in the string.
- The maximum length of wildcard string is 100 bytes. A very long wildcard string may slow down the execution performance of the `LIKE` operator.
MATCH and NMATCH are used together with regular expressions to match strings. Their usage is described as follows:
- Use POSIX regular expression syntax. For more information, see Regular Expressions.
- Regular expression can be used against only table names, i.e. `tbname`, and tags of binary/nchar types, but can't be used against data columns.
- The maximum length of regular expression string is 128 bytes. Configuration parameter `maxRegexStringLen` can be used to set the maximum allowed regular expression. It's a configuration parameter on the client side, and will take effect after restarting the client.
## Logical Operators
| # | **Operator** | **Supported Data Types** | **Description** |
| --- | :--------: | -------------- | --------------------------------------------------------------------------- |
| 1   | AND          | BOOL                     | Logical AND; if both conditions are true, TRUE is returned; if either condition is false, FALSE is returned. |
| 2   | OR           | BOOL                     | Logical OR; if either condition is true, TRUE is returned; if both conditions are false, FALSE is returned. |
TDengine performs short-circuit optimization when evaluating logical conditions. If the first condition for AND is false, FALSE is returned without evaluating the second condition. If the first condition for OR is true, TRUE is returned without evaluating the second condition.
......@@ -2,7 +2,7 @@
title: Escape Characters
---
## Escape Characters
| Escape Character | **Actual Meaning** |
| :--------------: | ------------------------ |
......
......@@ -2,10 +2,6 @@ Execute TDengine CLI program `taos` directly from the Linux shell to connect to
```text
$ taos
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
Server is Community Edition.
taos> show databases;
name | create_time | vgroups | ntables | replica | strict | duration | keep | buffer | pagesize | pages | minrows | maxrows | comp | precision | status | retention | single_stable | cachemodel | cachesize | wal_level | wal_fsync_period | wal_retention_period | wal_retention_size | wal_roll_period | wal_seg_size |
......
Go to the `C:\TDengine` directory from `cmd` and execute TDengine CLI program `taos.exe` directly to connect to the TDengine service and enter the TDengine CLI interface, for example, as follows:
```text
Welcome to the TDengine shell from Windows, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
Server is Community Edition.
taos> show databases;
name | create_time | vgroups | ntables | replica | strict | duration | keep | buffer | pagesize | pages | minrows | maxrows | comp | precision | status | retention | single_stable | cachemodel | cachesize | wal_level | wal_fsync_period | wal_retention_period | wal_retention_size | wal_roll_period | wal_seg_size |
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================
......
......@@ -31,8 +31,10 @@ All executable files of TDengine are in the _/usr/local/taos/bin_ directory by d
:::note
taosdump after version 2.4.0.0 requires taosTools as a standalone installation. A new version of taosBenchmark is included in taosTools too.
:::
:::tip
You can configure different data directories and log directories by modifying the system configuration file `taos.cfg`.
:::
label: Reference
label: Reference
\ No newline at end of file