From fa7c66df043f12061c026384e55e0e3974ec4a3c Mon Sep 17 00:00:00 2001 From: gccgdb1234 Date: Sun, 15 May 2022 11:45:02 +0800 Subject: [PATCH] docs: correct some terms --- docs-en/04-develop/01-connect/index.md | 2 +- docs-en/04-develop/02-model/index.mdx | 30 ++-- .../03-insert-data/02-influxdb-line.mdx | 2 +- .../03-insert-data/03-opentsdb-telnet.mdx | 8 +- .../03-insert-data/04-opentsdb-json.mdx | 6 +- docs-en/04-develop/04-query-data/index.mdx | 10 +- docs-en/04-develop/05-continuous-query.mdx | 4 +- docs-en/04-develop/06-subscribe.mdx | 6 +- docs-en/04-develop/07-cache.md | 2 +- docs-en/04-develop/08-udf.md | 8 +- docs-en/12-taos-sql/04-stable.md | 26 +-- docs-en/12-taos-sql/05-insert.md | 6 +- docs-en/12-taos-sql/06-select.md | 16 +- docs-en/12-taos-sql/07-function.md | 166 +++++++++--------- docs-en/12-taos-sql/08-interval.md | 6 +- docs-en/12-taos-sql/09-limit.md | 8 +- docs-en/12-taos-sql/10-json.md | 6 +- docs-en/12-taos-sql/12-keywords.md | 6 +- docs-en/13-operation/08-export.md | 6 +- docs-en/13-operation/11-optimize.md | 2 +- 20 files changed, 163 insertions(+), 163 deletions(-) diff --git a/docs-en/04-develop/01-connect/index.md b/docs-en/04-develop/01-connect/index.md index 6a33e3e04d..6926170a7c 100644 --- a/docs-en/04-develop/01-connect/index.md +++ b/docs-en/04-develop/01-connect/index.md @@ -19,7 +19,7 @@ import InstallOnLinux from "../../14-reference/03-connector/\_windows_install.md import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx"; import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx"; -Any application programs running on any kind of platforms can access TDengine through the REST API provided by TDengine. For the details please refer to [REST API](/reference/rest-api/). Besides, application programs can use the connectors of multiple languages to access TDengine, including C/C++, Java, Python, Go, Node.js, C#, and Rust. This chapter describes how to establish connection to TDengine and briefly introduce how to install and use connectors. For details about the connectors please refer to [Connectors](https://docs.taosdata.com/reference/connector/) +Any application programs running on any kind of platforms can access TDengine through the REST API provided by TDengine. For the details please refer to [REST API](/reference/rest-api/). Besides, application programs can use the connectors of multiple languages to access TDengine, including C/C++, Java, Python, Go, Node.js, C#, and Rust. This chapter describes how to establish connection to TDengine and briefly introduce how to install and use connectors. For details about the connectors please refer to [Connectors](/reference/connector/) ## Establish Connection diff --git a/docs-en/04-develop/02-model/index.mdx b/docs-en/04-develop/02-model/index.mdx index b0b58b0a25..64a7d47adf 100644 --- a/docs-en/04-develop/02-model/index.mdx +++ b/docs-en/04-develop/02-model/index.mdx @@ -4,13 +4,13 @@ slug: /model title: Data Model --- -The data model employed by TDengine is similar to relational, users need to create database and tables. For a specific use case, the design of databases, stables (abbreviated for super table), and tables need to be considered. This chapter will explain the concept without syntax details. +The data model employed by TDengine is similar to relational, users need to create database and tables. For a specific use case, the design of databases, STables (abbreviated for super table), and tables need to be considered. 
This chapter will explain the concept without syntax details.

There is an [video training course](https://www.taosdata.com/blog/2020/11/11/1945.html) that can be referred to for learning purpose.

## Create Database

-The characteristics of data from different data collecting points may be different, such as collection frequency, days to keep, number of replicas, data block size, whether it's allowed to update data, etc. For TDengine to operate with best performance, it's strongly suggested to put the data with different characteristics into different databases because different storage policy can be set for each database. When creating a database, there are a lot of parameters that can be configured, such as the days to keep data, the number of replicas, the number of memory blocks, time precision, the minimum and maximum number of rows in each data block, compress or not, the time range of the data in single data file, etc. Below is an example of the SQL statement for creating a database.
+The characteristics of data from different data collection points may be different, such as collection frequency, days to keep, number of replicas, data block size, whether it's allowed to update data, etc. For TDengine to operate with best performance, it's strongly suggested to put the data with different characteristics into different databases because different storage policy can be set for each database. When creating a database, there are a lot of parameters that can be configured, such as the days to keep data, the number of replicas, the number of memory blocks, time precision, the minimum and maximum number of rows in each data block, compress or not, the time range of the data in single data file, etc. Below is an example of the SQL statement for creating a database.

```sql
CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
@@ -26,7 +26,7 @@ USE power;

:::note

-- Any table or stable must belong to a database. To create a table or stable, the database it belongs to must be ready.
+- Any table or STable must belong to a database. To create a table or STable, the database it belongs to must be ready.
- JOIN operation can't be performed tables from two different databases.
- Timestamp needs to be specified when inserting rows or querying historical rows.

@@ -37,35 +37,35 @@ USE power;
In a typical IoT system, there may be a lot of kinds of devices. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy aggregate of multiple tables, one STable needs to be created for each kind of devices. For example, for the meters in [table 1](/tdinternal/arch#model_table1), below SQL statement can be used to create the super table.

```sql
-CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
+CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
```

:::note
-If you are using versions prior to 2.0.15, the `STABLE` keyword needs to be replaced with `TABLE`.
+If you are using versions prior to 2.0.15, the `STABLE` keyword needs to be replaced with `TABLE`.

:::

-Similar to creating a normal table, when creating a stable, name and schema need to be provided too. In the stable schema, the first column must be timestamp (like ts in the example), and other columns (like current, voltage and phase in the example) are the data collected. The type of a column can be integer, floating point number, string ,etc.
Besides, the schema for tags need t obe provided, like location and groupId in the example. The type of a tag can be integer, floating point number, string, etc. The static properties of a data collection point can be defined as tags, like the location, device type, device group ID, manager ID, etc. Tags in the schema can be added, removed or altered. Please refer to [STable](/taos-sql/stable) for more details.
+Similar to creating a normal table, when creating a STable, name and schema need to be provided too. In the STable schema, the first column must be timestamp (like ts in the example), and other columns (like current, voltage and phase in the example) are the data collected. The type of a column can be integer, floating point number, string, etc. Besides, the schema for tags needs to be provided, like location and groupId in the example. The type of a tag can be integer, floating point number, string, etc. The static properties of a data collection point can be defined as tags, like the location, device type, device group ID, manager ID, etc. Tags in the schema can be added, removed or altered. Please refer to [STable](/taos-sql/stable) for more details.

-Each kind of data collecting points needs a corresponding stable to be created, so there may be many stables in an IoT system. For electrical power system, we need to create a stable respectively for meters, transformers, bug bars, switches. There may be multiple kinds of data collecting points on a single device, for example there may be one data collecting point for electrical data like current and voltage and another point for environmental data like temperature, humidity and wind direction, multiple stables are required for such kind of device.
+Each kind of data collection point needs a corresponding STable to be created, so there may be many STables in an IoT system. For electrical power system, we need to create a STable respectively for meters, transformers, bus bars, switches. There may be multiple kinds of data collection points on a single device, for example there may be one data collection point for electrical data like current and voltage and another point for environmental data like temperature, humidity and wind direction, multiple STables are required for such kind of device.

-At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a stable. If there are more than 4096 of physical variables to bo collected for a single collecting point, multiple stables are required for such kind of data collecting point. There can be multiple databases in system, while one or more stables can exist in a database.
+At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a STable. If there are more than 4096 metrics to be collected for a single collection point, multiple STables are required for such kind of data collection point. There can be multiple databases in system, while one or more STables can exist in a database.

## Create Table

-A specific table needs to be created for each data collecting point. Similar to RDBMS, table name and schema are required to create a table. Beside, one or more tags can be created for each table. To create a table, a stable needs to be used as template and the values need to be specified for the tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using below SQL statement.
+A specific table needs to be created for each data collection point. Similar to RDBMS, table name and schema are required to create a table.
Beside, one or more tags can be created for each table. To create a table, a STable needs to be used as template and the values need to be specified for the tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using below SQL statement. ```sql CREATE TABLE d1001 USING meters TAGS ("Beijing.Chaoyang", 2); ``` -In the above SQL statement, "d1001" is the table name, "meters" is the stable name, followed by the value of tag "Location" and the value of tag "groupId", which are "Beijing.Chaoyang" and "2" respectively in the example. The tag values can be altered after the table is created. Please refer to [Tables](/taos-sql/table) for details. +In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "Beijing.Chaoyang" and "2" respectively in the example. The tag values can be altered after the table is created. Please refer to [Tables](/taos-sql/table) for details. :::warning -It's not recommended to create a table in a database while using a stable from another database as template. +It's not recommended to create a table in a database while using a STable from another database as template. :::tip -It's suggested to use the global unique ID of a data collecting point as the table name, for example the device serial number. If there isn't such a unique ID, multiple IDs that are not global unique can be combined to form a global unique ID. It's not recommended to use a global unique ID as tag value. +It's suggested to use the global unique ID of a data collection point as the table name, for example the device serial number. If there isn't such a unique ID, multiple IDs that are not global unique can be combined to form a global unique ID. It's not recommended to use a global unique ID as tag value. ### Create Table Automatically @@ -75,12 +75,12 @@ In some circumstances, it's not sure whether the table already exists when inser INSERT INTO d1001 USING meters TAGS ("Beijng.Chaoyang", 2) VALUES (now, 10.2, 219, 0.32); ``` -In the above SQL statement, a row with value `(now, 10.2, 219, 0.32)` will be inserted into table "d1001". If table "d1001" doesn't exist, it will be created automatically using stable "meters" as template with tag value `"Beijing.Chaoyang", 2`. +In the above SQL statement, a row with value `(now, 10.2, 219, 0.32)` will be inserted into table "d1001". If table "d1001" doesn't exist, it will be created automatically using STable "meters" as template with tag value `"Beijing.Chaoyang", 2`. For more details please refer to [Create Table Automatically](/taos-sql/insert#automatically-create-table-when-inserting). ## Single Column vs Multiple Column -Multiple columns data model is supported in TDengine. As long as multiple physical variables are collected by same data collecting point at same time, i.e. the timestamp are identical, these variables can be put in single stable as columns. However, there is another kind of design, i.e. single column data model, a table is created for each physical variable, which means a stable is required for each kind of physical variables. For example, 3 stables are required for current, voltage and phase. +Multiple columns data model is supported in TDengine. As long as multiple metrics are collected by same data collection point at same time, i.e. the timestamp are identical, these variables can be put in single STable as columns. However, there is another kind of design, i.e. 
single column data model, a table is created for each metric, which means a STable is required for each kind of metrics. For example, 3 STables are required for current, voltage and phase. -It's recommended to use multiple column data model as possible because it's better in the speed of inserting or querying rows. In some cases, however, the physical variables to be collected vary frequently and correspondingly the stable schema needs to be changed frequently too. In such case, it's more convenient to use single column data model. +It's recommended to use multiple column data model as possible because it's better in the speed of inserting or querying rows. In some cases, however, the metrics to be collected vary frequently and correspondingly the STable schema needs to be changed frequently too. In such case, it's more convenient to use single column data model. diff --git a/docs-en/04-develop/03-insert-data/02-influxdb-line.mdx b/docs-en/04-develop/03-insert-data/02-influxdb-line.mdx index b5ea308803..172003d203 100644 --- a/docs-en/04-develop/03-insert-data/02-influxdb-line.mdx +++ b/docs-en/04-develop/03-insert-data/02-influxdb-line.mdx @@ -21,7 +21,7 @@ A single line of text is used in InfluxDB Line protocol format represents one ro measurement,tag_set field_set timestamp ``` -- `measurement` will be used as the stable name +- `measurement` will be used as the STable name - `tag_set` will be used as tags, with format like `=,=` - `field_set`will be used as data columns, with format like `=,=` - `timestamp` is the primary key timestamp corresponding to this row of data diff --git a/docs-en/04-develop/03-insert-data/03-opentsdb-telnet.mdx b/docs-en/04-develop/03-insert-data/03-opentsdb-telnet.mdx index ca06a906c2..66bb67c256 100644 --- a/docs-en/04-develop/03-insert-data/03-opentsdb-telnet.mdx +++ b/docs-en/04-develop/03-insert-data/03-opentsdb-telnet.mdx @@ -21,9 +21,9 @@ A single line of text is used in OpenTSDB line protocol to represent one row of =[ =] ``` -- `metric` will be used as stable name. +- `metric` will be used as STable name. - `timestamp` is the timestamp of current row of data. The time precision will be determined automatically based on the length of the timestamp. second and millisecond time precision are supported.\ -- `value` is a physical variable which must be a numeric value, the corresponding column name is "value". +- `value` is a metric which must be a numeric value, the corresponding column name is "value". - The last part is tag sets separated by space, all tags will be converted to nchar type automatically. For example: @@ -60,13 +60,13 @@ Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_te -2 stables will be crated automatically while each stable has 4 rows of data in the above sample code. +2 STables will be crated automatically while each STable has 4 rows of data in the above sample code. ```cmd taos> use test; Database changed. 
-taos> show stables;
+taos> show STables;
name | created_time | columns | tags | tables |
============================================================================================
meters.current | 2022-03-30 17:04:10.877 | 2 | 2 | 2 |
diff --git a/docs-en/04-develop/03-insert-data/04-opentsdb-json.mdx b/docs-en/04-develop/03-insert-data/04-opentsdb-json.mdx
index a4d084bff6..8d9f150df3 100644
--- a/docs-en/04-develop/03-insert-data/04-opentsdb-json.mdx
+++ b/docs-en/04-develop/03-insert-data/04-opentsdb-json.mdx
@@ -40,7 +40,7 @@ A JSON string is sued in OpenTSDB JSON to represent one or more rows of data, fo
]
```

-Similar to OpenTSDB line protocol, `metric` will be used as the stable name, `timestamp` is the timestamp to be used, `value` represents the physical variable collected, `tags` are the tag sets.
+Similar to OpenTSDB line protocol, `metric` will be used as the STable name, `timestamp` is the timestamp to be used, `value` represents the metric collected, `tags` are the tag sets.

Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http/put.html) for more details.

@@ -77,13 +77,13 @@ Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http

-The above sample code will created 2 stables automatically while each stable has 2 rows of data.
+The above sample code will create 2 STables automatically while each STable has 2 rows of data.

```cmd
taos> use test;
Database changed.

-taos> show stables;
+taos> show STables;
name | created_time | columns | tags | tables |
============================================================================================
meters.current | 2022-03-29 16:05:25.193 | 2 | 2 | 1 |
diff --git a/docs-en/04-develop/04-query-data/index.mdx b/docs-en/04-develop/04-query-data/index.mdx
index 641dc23cb8..eb31888e10 100644
--- a/docs-en/04-develop/04-query-data/index.mdx
+++ b/docs-en/04-develop/04-query-data/index.mdx
@@ -53,7 +53,7 @@ For detailed query syntax please refer to [Select](/taos-sql/select).

## Join Query

-In IoT use cases, there are always multiple data collecting points of same kind. A new concept, called STable (abbreviated for super table), is used in TDengine to represent a kind of data collecting points, and a table is used to represent a specific data collecting point. Tags are used by TDengine to represent the static properties of data collecting points. A specific data collecting point has its own values for static properties. By specifying filter conditions on tags, join query can be performed efficiently between all the tables belonging to same stable, i.e. same kind of data collecting points, can be. Aggregate functions applicable for tables can be used directly on stables, syntax is exactly same.
+In IoT use cases, there are always multiple data collection points of same kind. A new concept, called STable (abbreviated for super table), is used in TDengine to represent a kind of data collection points, and a table is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, join query can be performed efficiently between all the tables belonging to the same STable, i.e. the same kind of data collection points. Aggregate functions applicable for tables can be used directly on STables, and the syntax is exactly the same.
### Example 1 @@ -80,7 +80,7 @@ taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now - Query OK, 1 row(s) in set (0.002136s) ``` -Join query is allowed between only the tables of same stable. In [Select](/taos-sql/select), all query operations are marked as whether it supports stable or not. +Join query is allowed between only the tables of same STable. In [Select](/taos-sql/select), all query operations are marked as whether it supports STable or not. ## Down Sampling and Interpolation @@ -95,7 +95,7 @@ taos> SELECT sum(current) FROM d1001 INTERVAL(10s); Query OK, 2 row(s) in set (0.000883s) ``` -Down sampling can also be used for stable. For example, below SQL statement can be used to get the sum of current from all meters in BeiJing. +Down sampling can also be used for STable. For example, below SQL statement can be used to get the sum of current from all meters in BeiJing. ``` taos> SELECT SUM(current) FROM meters where location like "Beijing%" INTERVAL(1s); @@ -123,7 +123,7 @@ taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a); Query OK, 5 row(s) in set (0.001521s) ``` -In IoT use cases, it's hard to align the timestamp of the data collected by each collecting point. However, a lot of algorithms like FFT require the data to be aligned with same time interval and application programs have to handle by themselves in many systems. In TDengine, it's easy to achieve the alignment using down sampling. +In IoT use cases, it's hard to align the timestamp of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned with same time interval and application programs have to handle by themselves in many systems. In TDengine, it's easy to achieve the alignment using down sampling. Interpolation can be performed in TDengine if there is no data in a time range. @@ -133,7 +133,7 @@ For more details please refer to [Aggregate by Window](/taos-sql/interval). ### Query -In the section describing [Insert](/develop/insert-data/sql-writing), a database named `power` is created and some data are inserted into stable `meters`. Below sample code demonstrates how to query the data in this stable. +In the section describing [Insert](/develop/insert-data/sql-writing), a database named `power` is created and some data are inserted into STable `meters`. Below sample code demonstrates how to query the data in this STable. diff --git a/docs-en/04-develop/05-continuous-query.mdx b/docs-en/04-develop/05-continuous-query.mdx index f0250d9cf2..97e32a17ff 100644 --- a/docs-en/04-develop/05-continuous-query.mdx +++ b/docs-en/04-develop/05-continuous-query.mdx @@ -4,7 +4,7 @@ description: "Continuous query is a query that's executed automatically accordin title: "Continuous Query" --- -Continuous query is a query that's executed automatically according to predefined frequency to provide aggregate query capability by time window, it's actually a simplified time driven stream computing. Continuous query can be performed on a table or stable in TDengine. The result of continuous query can be pushed to client or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively. +Continuous query is a query that's executed automatically according to predefined frequency to provide aggregate query capability by time window, it's actually a simplified time driven stream computing. 
Continuous query can be performed on a table or STable in TDengine. The result of continuous query can be pushed to client or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively. Continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to time window to achieve down sampling of original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to client or written to TDengine. @@ -30,7 +30,7 @@ SLIDING: The time step for which the time window moves forward each time ## How to Use -In this section the use case of meters will be used to introduce how to use continuous query. Assume the stable and sub tables have been created using below SQL statement. +In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and sub tables have been created using below SQL statement. ```sql create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int); diff --git a/docs-en/04-develop/06-subscribe.mdx b/docs-en/04-develop/06-subscribe.mdx index f80667032b..45b13d94c4 100644 --- a/docs-en/04-develop/06-subscribe.mdx +++ b/docs-en/04-develop/06-subscribe.mdx @@ -28,7 +28,7 @@ taos_consume taos_unsubscribe ``` -For more details about these API please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of stable and sub tables please refer to the previous section "continuous query". Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c). +For more details about these API please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of STable and sub tables please refer to the previous section "continuous query". Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c). If we want to get notification and take some actions if the current exceeds a threshold, like 10A, from some meters, there are two ways: @@ -42,7 +42,7 @@ select * from D1002 where ts > {last_timestamp2} and current > 10; The above way works, but the problem is that the number of `select` statements increases with the number of meters grows. Finally the performance of both client side and server side will be unacceptable once the number of meters grows to a big enough number. 
-A better way is to query on the stable, only one `select` is enough regardless of the number of meters, like below: +A better way is to query on the STable, only one `select` is enough regardless of the number of meters, like below: ```sql select * from meters where ts > {last_timestamp} and current > 10; @@ -155,7 +155,7 @@ Now let's see the effect of the above sample code, assuming below prerequisites - The sample code has been downloaded to local system 示 - TDengine has been installed and launched properly on same system -- The database, stable, sub tables required in the sample code have been ready +- The database, STable, sub tables required in the sample code have been ready It's ready to launch below command in the directory where the sample code resides to compile and start the program. diff --git a/docs-en/04-develop/07-cache.md b/docs-en/04-develop/07-cache.md index 3148d84abe..13db6c3638 100644 --- a/docs-en/04-develop/07-cache.md +++ b/docs-en/04-develop/07-cache.md @@ -12,7 +12,7 @@ The memory space used by TDengine cache is fixed in size, according to the confi Memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache`, the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. It's better to set the size of each block to hold at least tends of rows. -`last_row` function can be used to retrieve the last row of a table or a stable to quickly show the current state of devices on monitoring screen. For example below SQL statement retrieves the latest voltage of all meters in Chaoyang district of Beijing. +`last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on monitoring screen. For example below SQL statement retrieves the latest voltage of all meters in Chaoyang district of Beijing. ```sql select last_row(voltage) from meters where location='Beijing.Chaoyang'; diff --git a/docs-en/04-develop/08-udf.md b/docs-en/04-develop/08-udf.md index 893eba80bb..e344e4024c 100644 --- a/docs-en/04-develop/08-udf.md +++ b/docs-en/04-develop/08-udf.md @@ -43,7 +43,7 @@ Below function template can be used to define your own aggregate function. `void abs_max_merge(char* data, int32_t numOfRows, char* dataOutput, int32_t* numOfOutput, SUdfInit* buf)` -`udfMergeFunc` is the place holder of function name, the function implemented with the above template is used to aggregate the intermediate result, only can be used in the aggregate query for stable. +`udfMergeFunc` is the place holder of function name, the function implemented with the above template is used to aggregate the intermediate result, only can be used in the aggregate query for STable. Definitions of the parameters: @@ -55,7 +55,7 @@ Definitions of the parameters: [abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c) is an user defined aggregate function to get the maximum from the absolute value of a column. -The internal processing is that the data affected by the select statement will be divided into multiple row blocks and `udfNormalFunc`, i.e. `abs_max` in this case, is performed on each row block to generate the intermediate of each sub table, then `udfMergeFunc`, i.e. `abs_max_merge` in this case, is performed on the intermediate result of sub tables to aggregate to generate the final or intermediate result of stable. 
The intermediate result of stable is finally processed by `udfFinalizeFunc` to generate the final result, which contain either 0 or 1 row. +The internal processing is that the data affected by the select statement will be divided into multiple row blocks and `udfNormalFunc`, i.e. `abs_max` in this case, is performed on each row block to generate the intermediate of each sub table, then `udfMergeFunc`, i.e. `abs_max_merge` in this case, is performed on the intermediate result of sub tables to aggregate to generate the final or intermediate result of STable. The intermediate result of STable is finally processed by `udfFinalizeFunc` to generate the final result, which contain either 0 or 1 row. Other typical scenarios, like covariance, can also be achieved by aggregate UDF. @@ -79,7 +79,7 @@ The naming of 3 kinds of UDF, i.e. udfNormalFunc, udfMergeFunc, and udfFinalizeF According to the kind of UDF to implement, the functions that need to be implemented are different. - Scalar function:udfNormalFunc is required -- Aggregate function:udfNormalFunc, udfMergeFunc (if query on stable) and udfFinalizeFunc are required +- Aggregate function:udfNormalFunc, udfMergeFunc (if query on STable) and udfFinalizeFunc are required To be more accurate, assuming we want to implement a UDF named "foo". If the function is a scalar function, what we really need to implement is `foo`; if the function is aggregate function, we need to implement `foo`, `foo_merge`, and `foo_finalize`. For aggregate UDF, even though one of the three functions is not necessary, there must be an empty implementation. @@ -164,7 +164,7 @@ SHOW FUNCTIONS; The function name specified when creating UDF can be used directly in SQL statements, just like builtin functions. ```sql -SELECT X(c) FROM table/stable; +SELECT X(c) FROM table/STable; ``` The above SQL statement invokes function X for column c. diff --git a/docs-en/12-taos-sql/04-stable.md b/docs-en/12-taos-sql/04-stable.md index 25375dbe3e..8d763ac22f 100644 --- a/docs-en/12-taos-sql/04-stable.md +++ b/docs-en/12-taos-sql/04-stable.md @@ -5,14 +5,14 @@ title: Super Table :::note -Keyword `STABLE`, abbreviated for super table, is supported since version 2.0.15. +Keyword `STable`, abbreviated for super table, is supported since version 2.0.15. ::: ## Crate STable ``` -CREATE STABLE [IF NOT EXISTS] stb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]) TAGS (tag1_name tag_type1, tag2_name tag_type2 [, tag3_name tag_type3]); +CREATE STable [IF NOT EXISTS] stb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]) TAGS (tag1_name tag_type1, tag2_name tag_type2 [, tag3_name tag_type3]); ``` The SQL statement of creating STable is similar to that of creating table, but a special column named as `TAGS` must be specified with the names and types of the tags. @@ -29,15 +29,15 @@ The SQL statement of creating STable is similar to that of creating table, but a ## Drop STable ``` -DROP STABLE [IF EXISTS] stb_name; +DROP STable [IF EXISTS] stb_name; ``` -All the sub-tables created using the deleted stable will be deleted automatically. +All the sub-tables created using the deleted STable will be deleted automatically. 
## Show All STables

```
-SHOW STABLES [LIKE tb_name_wildcard];
+SHOW STABLES [LIKE tb_name_wildcard];
```

This command can be used to display the information of all STables in the current database, including name, creation time, number of columns, number of tags, number of tables created using this STable.

## Show The Create Statement of A STable

```
-SHOW CREATE STABLE stb_name;
+SHOW CREATE STABLE stb_name;
```

This command is useful in migrating data from one TDengine cluster to another one because it can be used to create an exactly same STable in the target database.

## Get Columns Of A Table

```
DESCRIBE stb_name;
```

## Change Columns Of A Table

### Add A Column

```
-ALTER STABLE stb_name ADD COLUMN field_name data_type;
+ALTER STABLE stb_name ADD COLUMN field_name data_type;
```

### Remove A Column

```
-ALTER STABLE stb_name DROP COLUMN field_name;
+ALTER STABLE stb_name DROP COLUMN field_name;
```

### Change Column Length

```
-ALTER STABLE stb_name MODIFY COLUMN field_name data_type(length);
+ALTER STABLE stb_name MODIFY COLUMN field_name data_type(length);
```

This command can be used to change (or incerase, more specifically) the length of a column of variable length types, like BINARY or NCHAR.

## Change Tags of A STable

### Add A Tag

```
-ALTER STABLE stb_name ADD TAG new_tag_name tag_type;
+ALTER STABLE stb_name ADD TAG new_tag_name tag_type;
```

This command is used to add a new tag for a STable and specify the tag type.

### Remove A Tag

```
-ALTER STABLE stb_name DROP TAG tag_name;
+ALTER STABLE stb_name DROP TAG tag_name;
```

The tag will be removed automatically from all the sub tables crated using the super table as template once a tag is removed from a super table.

### Change A Tag

```
-ALTER STABLE stb_name CHANGE TAG old_tag_name new_tag_name;
+ALTER STABLE stb_name CHANGE TAG old_tag_name new_tag_name;
```

The tag name will be changed automatically from all the sub tables crated using the super table as template once a tag name is changed for a super table.

### Change Tag Length

```
-ALTER STABLE stb_name MODIFY TAG tag_name data_type(length);
+ALTER STABLE stb_name MODIFY TAG tag_name data_type(length);
```

This command can be used to change (or incerase, more specifically) the length of a tag of variable length types, like BINARY or NCHAR.
diff --git a/docs-en/12-taos-sql/05-insert.md b/docs-en/12-taos-sql/05-insert.md
index b8d80f85c4..36af96160b 100644
--- a/docs-en/12-taos-sql/05-insert.md
+++ b/docs-en/12-taos-sql/05-insert.md
@@ -131,10 +131,10 @@ Firstly, a super table is created.
```
CREATE TABLE meters(ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS(location BINARY(30), groupId INT);
```

-It can be proved that the super table has been created by `SHOW STABLES`, but no table exists by `SHOW TABLES`.
+It can be proved that the super table has been created by `SHOW STABLES`, but no table exists by `SHOW TABLES`.
```
taos> SHOW STABLES;
name | created_time | columns | tags | tables |
============================================================================================
meters | 2020-08-06 17:50:27.831 | 4 | 2 | 0 |
Query OK, 1 row(s) in set (0.001029s)

taos> SHOW TABLES;
Query OK, 0 row(s) in set (0.000946s)
```

The output shows the value to be inserted is invalid. But `SHOW TABLES` proves t
DB error: invalid SQL: 'a' (invalid timestamp) (0.039494s)

taos> SHOW TABLES;
table_name | created_time | columns | stable_name |
======================================================================================================
d1001 | 2020-08-06 17:52:02.097 | 4 | meters |
Query OK, 1 row(s) in set (0.001091s)
diff --git a/docs-en/12-taos-sql/06-select.md b/docs-en/12-taos-sql/06-select.md
index 5179c212f6..f28c8dc2e6 100644
--- a/docs-en/12-taos-sql/06-select.md
+++ b/docs-en/12-taos-sql/06-select.md
@@ -216,7 +216,7 @@ Query OK, 1 row(s) in set (0.000081s)

## \_block_dist

-**Description**: Get the data block distribution of a table or stable.
+**Description**: Get the data block distribution of a table or STable.

```SQL title="Syntax"
SELECT _block_dist() FROM { tb_name | stb_name }
@@ -226,7 +226,7 @@ SELECT _block_dist() FROM { tb_name | stb_name }

**Sub Query**:Sub query or nested query are not supported

-**Return value**: A string which includes the data block distribution of the specified table or stable, i.e. the histogram of rows stored in the data blocks of the table or stable.
+**Return value**: A string which includes the data block distribution of the specified table or STable, i.e. the histogram of rows stored in the data blocks of the table or STable.

```text title="Result"
summary:
@@ -235,7 +235,7 @@ summary:

**More explanation about above example**:

-- Histogram about the rows stored in the data blocks of the table or stable: the value of rows for 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 95%, and 99%
+- Histogram about the rows stored in the data blocks of the table or STable: the value of rows for 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 95%, and 99%
- Minimum number of rows stored in a data block, i.e. Min=[392(Rows)]
- Maximum number of rows stored in a data block, i.e. Max=[800(Rows)]
- Average number of rows stored in a data block, i.e. Avg=[666(Rows)]
@@ -347,7 +347,7 @@ The maximum length of regular expression string is 128 bytes. Configuration para

## JOIN

-From version 2.2.0.0, inner join is fully supported in TDengine. More specifically, the inner join between table and table, that between stable and stable, and that between sub query and sub query are supported.
+From version 2.2.0.0, inner join is fully supported in TDengine. More specifically, the inner join between table and table, that between STable and STable, and that between sub query and sub query are supported.

Only primary key, i.e. timestamp, can be used in the join operation between table and table. For example:

```sql
SELECT *
FROM temp_tb_1 t1, pressure_tb_1 t2
WHERE t1.ts = t2.ts
```

-In the join operation between stable and stable, besides the primary key, i.e. timestamp, tags can also be used.
+In the join operation between STable and STable, besides the primary key, i.e. timestamp, tags can also be used.
For example: ```sql SELECT * -FROM temp_stable t1, temp_stable t2 +FROM temp_STable t1, temp_STable t2 WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0; ``` @@ -370,7 +370,7 @@ Similary, join operation can be performed on the result set of multiple sub quer :::note Restrictions on join operation: -- The number of tables or stables in single join operation can't exceed 10. +- The number of tables or STables in single join operation can't exceed 10. - `FILL` is not allowed in the query statement that includes JOIN operation. - Arithmetic operation is not allowed on the result set of join operation. - `GROUP BY` is not allowed on a part of tables that participate in join operation. @@ -394,7 +394,7 @@ SELECT ... FROM (SELECT ... FROM ...) ...; - Only one layer of nesting is allowed, that means no sub query is allowed in a sub query - The result set returned by the inner query will be used as a "virtual table" by the outer query, the "virtual table" can be renamed using `AS` keyword for easy reference in the outer query. - Sub query is not allowed in continuous query. -- JOIN operation is allowed between tables/stables inside both inner and outer queries. Join operation can be performed on the result set of the inner query. +- JOIN operation is allowed between tables/STables inside both inner and outer queries. Join operation can be performed on the result set of the inner query. - UNION operation is not allowed in either inner query or outer query. - The functionalities that can be used in the inner query is same as non-nested query. - `ORDER BY` inside the inner query doesn't make any sense but will slow down the query performance significantly, so please avoid such usage. diff --git a/docs-en/12-taos-sql/07-function.md b/docs-en/12-taos-sql/07-function.md index fd26ec4814..1badb5915e 100644 --- a/docs-en/12-taos-sql/07-function.md +++ b/docs-en/12-taos-sql/07-function.md @@ -48,13 +48,13 @@ Query OK, 1 row(s) in set (0.001075s) SELECT AVG(field_name) FROM tb_name [WHERE clause]; ``` -**Description**:Get the average value of a column in a table or stable +**Description**:Get the average value of a column in a table or STable **Return value type**:Double precision floating number **Applicable column types**:Data types except for timestamp, binary, nchar and bool -**Applicable table types**:table, stable +**Applicable table types**:table, STable **Examples**: @@ -84,11 +84,11 @@ SELECT TWA(field_name) FROM tb_name WHERE clause; **Applicable column types**:Data types except for timestamp, binary, nchar and bool -**Applicable table types**:table, stable +**Applicable table types**:table, STable **More explanations**: -- From version 2.1.3.0, function TWA can be used on stble with `GROUP BY`, i.e. timelines generated by `GROUP BY tbname` on a stable. +- From version 2.1.3.0, function TWA can be used on stble with `GROUP BY`, i.e. timelines generated by `GROUP BY tbname` on a STable. ### IRATE @@ -102,11 +102,11 @@ SELECT IRATE(field_name) FROM tb_name WHERE clause; **Applicable column types**:Data types except for timestamp, binary, nchar and bool -**Applicable table types**:table, stable +**Applicable table types**:table, STable **More explanations**: -- From version 2.1.3.0, function IRATE can be used on stble with `GROUP BY`, i.e. timelines generated by `GROUP BY tbname` on a stable. +- From version 2.1.3.0, function IRATE can be used on stble with `GROUP BY`, i.e. timelines generated by `GROUP BY tbname` on a STable. 
### SUM @@ -114,13 +114,13 @@ SELECT IRATE(field_name) FROM tb_name WHERE clause; SELECT SUM(field_name) FROM tb_name [WHERE clause]; ``` -**Description**:The sum of a specific column in a table or stable +**Description**:The sum of a specific column in a table or STable **Return value type**:Double precision floating number or long integer **Applicable column types**:Data types except for timestamp, binary, nchar and bool -**Applicable table types**:table, stable +**Applicable table types**:table, STable **Examples**: @@ -144,13 +144,13 @@ Query OK, 1 row(s) in set (0.000980s) SELECT STDDEV(field_name) FROM tb_name [WHERE clause]; ``` -**Description**:Standard deviation of a specific column in a table or stable +**Description**:Standard deviation of a specific column in a table or STable **Return value type**:Double precision floating number **Applicable column types**:Data types except for timestamp, binary, nchar and bool -**Applicable table types**:table, stable (starting from version 2.0.15.1) +**Applicable table types**:table, STable (starting from version 2.0.15.1) **Examples**: @@ -270,13 +270,13 @@ When any selective function is used, timestamp column or tag columns including ` SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause]; ``` -**Description**:The minimum value of a specific column in a table or stable +**Description**:The minimum value of a specific column in a table or STable **Return value type**:Same as the data type of the column being operated **Applicable column types**:Data types except for timestamp, binary, nchar and bool -**Applicable table types**:table, stable +**Applicable table types**:table, STable **Examples**: @@ -300,13 +300,13 @@ Query OK, 1 row(s) in set (0.000950s) SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**:The maximum value of a specific column of a table or stable +**Description**:The maximum value of a specific column of a table or STable **Return value type**:Same as the data type of the column being operated **Applicable column types**:Data types except for timestamp, binary, nchar and bool -**Applicable table types**:table, stable +**Applicable table types**:table, STable **Examples**: @@ -330,13 +330,13 @@ Query OK, 1 row(s) in set (0.000987s) SELECT FIRST(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**:The first non-null value of a specific column in a table or stable +**Description**:The first non-null value of a specific column in a table or STable **Return value type**:Same as the column being operated **Applicable column types**:Any data type -**Applicable table types**:table, stable +**Applicable table types**:table, STable **More explanations**: @@ -366,19 +366,19 @@ Query OK, 1 row(s) in set (0.001023s) SELECT LAST(field_name) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**:The last non-NULL value of a specific column in a table or stable +**Description**:The last non-NULL value of a specific column in a table or STable **Return value type**:Same as the column being operated **Applicable column types**:Any data type -**Applicable table types**:table, stable +**Applicable table types**:table, STable **More explanations**: - LAST(\*) can be used to get the last non-NULL value of all columns - If the values of a column in the result set are all NULL, NULL is returned for that column; if all columns in the result are all NULL, no result will be returned. 
-- When it's used on a stable, if there are multiple values with the timestamp in the result set, one of them will be returned randomly and it's not guaranteed that the same value is returned if the same query is run multiple times. +- When it's used on a STable, if there are multiple values with the timestamp in the result set, one of them will be returned randomly and it's not guaranteed that the same value is returned if the same query is run multiple times. **Examples**: @@ -402,13 +402,13 @@ Query OK, 1 row(s) in set (0.000843s) SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**: The greatest _k_ values of a specific column in a table or stable. If a value has multiple occurrences in the column but counting all of them in will exceed the upper limit _k_, then a part of them will be returned randomly. +**Description**: The greatest _k_ values of a specific column in a table or STable. If a value has multiple occurrences in the column but counting all of them in will exceed the upper limit _k_, then a part of them will be returned randomly. **Return value type**:Same as the column being operated **Applicable column types**:Data types except for timestamp, binary, nchar and bool -**Applicable table types**:table, stable +**Applicable table types**:table, STable **More explanations**: @@ -441,13 +441,13 @@ Query OK, 2 row(s) in set (0.000810s) SELECT BOTTOM(field_name, K) FROM { tb_name | stb_name } [WHERE clause]; ``` -**Description**:The least _k_ values of a specific column in a table or stable. If a value has multiple occurrences in the column but counting all of them in will exceed the upper limit _k_, then a part of them will be returned randomly. +**Description**:The least _k_ values of a specific column in a table or STable. If a value has multiple occurrences in the column but counting all of them in will exceed the upper limit _k_, then a part of them will be returned randomly. **Return value type**:Same as the column being operated **Applicable column types**: Data types except for timestamp, binary, nchar and bool -**Applicable table types**: table, stable +**Applicable table types**: table, STable **More explanations**: @@ -512,7 +512,7 @@ FROM { tb_name | stb_name } [WHERE clause] **Applicable column types**: Data types except for timestamp, binary, nchar and bool -**Applicable table types**: table, stable +**Applicable table types**: table, STable **More explanations** @@ -548,17 +548,17 @@ Query OK, 1 row(s) in set (0.011639s) SELECT LAST_ROW(field_name) FROM { tb_name | stb_name }; ``` -**Description**: The last row of a table or stable +**Description**: The last row of a table or STable **Return value type**: Same as the column being operated **Applicable column types**: Any data type -**Applicable table types**: table, stable +**Applicable table types**: table, STable **More explanations**: -- When it's used against a stable, multiple rows with the same and largest timestamp may exist, in this case one of them is returned randomly and it's not guaranteed that the result is same if the query is run multiple times. +- When it's used against a STable, multiple rows with the same and largest timestamp may exist, in this case one of them is returned randomly and it's not guaranteed that the result is same if the query is run multiple times. - Can't be used with `INTERVAL`. 
**Examples**: @@ -589,7 +589,7 @@ SELECT INTERP(field_name) FROM { tb_name | stb_name } [WHERE where_condition] [ **Applicable column types**: Numeric data types -**Applicable table types**: table, stable, nested query +**Applicable table types**: table, STable, nested query **More explanations** @@ -598,7 +598,7 @@ SELECT INTERP(field_name) FROM { tb_name | stb_name } [WHERE where_condition] [ - The output time range of `INTERP` is specified by `RANGE(timestamp1,timestamp2)` parameter, with timestamp1<=timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified. If `RANGE` is not specified, then the timestamp of the first row that matches the filter condition is treated as timestamp1, the timestamp of the last row that matches the filter condition is treated as timestamp2. - The number of rows in the result set of `INTERP` is determined by the parameter `EVERY`. Starting from timestamp1, one interpolation is performed for every time interval specified `EVERY` parameter. If `EVERY` parameter is not used, the time windows will be considered as no ending timestamp, i.e. there is only one time window from timestamp1. - Interpolation is performed based on `FILL` parameter. No interpolation is performed if `FILL` is not used, that means either the original data that matches is returned or nothing is returned. -- `INTERP` can only be used to interpolate in single timeline. So it must be used with `group by tbname` when it's used on a stable. It can't be used with `GROUP BY` when it's used in the inner query of a nested query. +- `INTERP` can only be used to interpolate in single timeline. So it must be used with `group by tbname` when it's used on a STable. It can't be used with `GROUP BY` when it's used in the inner query of a nested query. - The result of `INTERP` is not influenced by `ORDER BY TIMESTAMP`, which impacts the output order only.. **Examples**: Based on the `meters` schema used throughout the documents @@ -645,13 +645,13 @@ SELECT INTERP(field_name) FROM { tb_name | stb_name } WHERE ts='timestamp' [FILL **Applicable column types**: Numeric data type -**Applicable table types**: table, stable +**Applicable table types**: table, STable **More explanations**: - It can be used from version 2.0.15.0 - Time slice must be specified. If there is no data matching the specified time slice, interpolation is performed based on `FILL` parameter. Conditions such as tags or `tbname` can be used `Where` clause can be used to filter data. -- The timestamp specified must be within the time range of the data rows of the table or stable. If it is beyond the valid time range, nothing is returned even with `FILL` parameter. +- The timestamp specified must be within the time range of the data rows of the table or STable. If it is beyond the valid time range, nothing is returned even with `FILL` parameter. - `INTERP` can be used to query only single time point once. `INTERP` can be used with `EVERY` to get the interpolation value every time interval. - **Examples**: @@ -741,7 +741,7 @@ SELECT UNIQUE(field_name) FROM {tb_name | stb_name} [WHERE clause]; **More explanations**: -- It can be used against table or stable, but can't be used together with time window, like `interval`, `state_window` or `session_window` . +- It can be used against table or STable, but can't be used together with time window, like `interval`, `state_window` or `session_window` . 
- Considering the number of result sets is unpredictable, it's suggested to limit the distinct values under 100,000 to control the memory usage, otherwise error will be returned. **Examples**: @@ -785,12 +785,12 @@ SELECT {DIFF(field_name, ignore_negative) | DIFF(field_name)} FROM tb_name [WHER **Applicable column types**: Data types except for timestamp, binary, nchar and bool -**Applicable table types**: table, stable +**Applicable table types**: table, STable **More explanations**: - The number of result rows is the number of rows subtracted by one, no output for the first row -- From version 2.1.30, `DIFF` can be used on stable with `GROUP by tbname` +- From version 2.1.30, `DIFF` can be used on STable with `GROUP by tbname` - From version 2.6.0, `ignore_negative` parameter is supported **Examples**: @@ -816,12 +816,12 @@ SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHER **Applicable column types**: Data types except for timestamp, binary, nchar and bool -**Applicable table types**: table, stable +**Applicable table types**: table, STable **More explanations**: - It is available from version 2.1.3.0, the number of result rows is the number of total rows in the time range subtracted by one, no output for the first row.\ -- It can be used together with `GROUP BY tbname` against a stable. +- It can be used together with `GROUP BY tbname` against a STable. **Examples**: @@ -849,7 +849,7 @@ SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause]; **Applicable column types**: Data types except for binary, nchar, and bool -**Applicable table types**: table, stable +**Applicable table types**: table, STable **More explanations**: Can be used on a column of TIMESTAMP type, the result is the time range size.可 @@ -881,7 +881,7 @@ SELECT CEIL(field_name) FROM { tb_name | stb_name } [WHERE clause]; **Applicable data types**: Data types except for timestamp, binary, nchar, bool -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: inner query and outer query @@ -913,9 +913,9 @@ SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause]; ### CSUM - ```sql +```sql SELECT CSUM(field_name) FROM { tb_name | stb_name } [WHERE clause] - ``` +``` **Description**: The cumulative sum of each row for a specific column. The number of output rows is same as that of the input rows. @@ -923,24 +923,24 @@ SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause]; **Applicable data types**: Data types except for timestamp, binary, nchar, and bool -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query **More explanations**: -- Can't be used on tags when it's used on stable +- Can't be used on tags when it's used on STable - Arithmetic operation can't be performed on the result of `csum` function - Can only be used with aggregate functions -- `Group by tbname` must be used together on a stable to force the result on a single timeline +- `Group by tbname` must be used together on a STable to force the result on a single timeline **Applicable versions**: From 2.3.0.x ### MAVG - ```sql +```sql SELECT MAVG(field_name, K) FROM { tb_name | stb_name } [WHERE clause] - ``` +``` **Description**: The moving average of continuous _k_ values of a specific column. If the number of input rows is less than _k_, nothing is returned. The applicable range is _k_ is [1,1000]. 
@@ -950,37 +950,37 @@ SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause];

**Applicable nested query**: Inner query and Outer query

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**More explanations**:

- Arithmetic operation can't be performed on the result of `MAVG`.
- Can only be used with data columns, can't be used with tags.
- Can't be used with aggregate functions.
-- Must be used with `GROUP BY tbname` when it's used on a stable to force the result on each single timeline.该
+- Must be used with `GROUP BY tbname` when it's used on a STable to force the result on each single timeline.

**Applicable versions**: From 2.3.0.x

### SAMPLE

- ```sql
+```sql
SELECT SAMPLE(field_name, K) FROM { tb_name | stb_name } [WHERE clause]
- ```
+```

**Description**: _k_ sampling values of a specific column. The applicable range of _k_ is [1,10000].

**Return value type**: Same as the column being operated on, plus the associated timestamp

-**Applicable data types**: Any data type except for tags of stable
+**Applicable data types**: Any data type except for tags of STable

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**Applicable nested query**: Inner query and Outer query

**More explanations**:

- Arithmetic operation can't be performed on the result of the `SAMPLE` function
-- Must be used with `Group by tbname` when it's used on a stable to force the result on each single timeline
+- Must be used with `GROUP BY tbname` when it's used on a STable to force the result on each single timeline

**Applicable versions**: From 2.3.0.x

@@ -996,7 +996,7 @@ SELECT ASIN(field_name) FROM { tb_name | stb_name } [WHERE clause]

**Applicable data types**: Data types except for timestamp, binary, nchar, bool

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**Applicable nested query**: Inner query and Outer query

@@ -1019,7 +1019,7 @@ SELECT ACOS(field_name) FROM { tb_name | stb_name } [WHERE clause]

**Applicable data types**: Data types except for timestamp, binary, nchar, bool

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**Applicable nested query**: Inner query and Outer query

@@ -1044,7 +1044,7 @@ SELECT ATAN(field_name) FROM { tb_name | stb_name } [WHERE clause]

**Applicable data types**: Data types except for timestamp, binary, nchar, bool

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**Applicable nested query**: Inner query and Outer query

@@ -1069,7 +1069,7 @@ SELECT SIN(field_name) FROM { tb_name | stb_name } [WHERE clause]

**Applicable data types**: Data types except for timestamp, binary, nchar, bool

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**Applicable nested query**: Inner query and Outer query

@@ -1094,7 +1094,7 @@ SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause]

**Applicable data types**: Data types except for timestamp, binary, nchar, bool

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**Applicable nested query**: Inner query and Outer query

@@ -1119,7 +1119,7 @@ SELECT TAN(field_name) FROM { tb_name | stb_name } [WHERE clause]

**Applicable data types**: Data types except for timestamp, binary, nchar, bool

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**Applicable nested query**: Inner query and Outer query

@@ -1142,7 +1142,7 @@ SELECT
POW(field_name, power) FROM { tb_name | stb_name } [WHERE clause] **Applicable data types**: Data types except for timestamp, binary, nchar, bool -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query @@ -1165,7 +1165,7 @@ SELECT LOG(field_name, base) FROM { tb_name | stb_name } [WHERE clause] **Applicable data types**: Data types except for timestamp, binary, nchar, bool -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query @@ -1188,7 +1188,7 @@ SELECT ABS(field_name) FROM { tb_name | stb_name } [WHERE clause] **Applicable data types**: Data types except for timestamp, binary, nchar, bool -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query @@ -1211,7 +1211,7 @@ SELECT SQRT(field_name) FROM { tb_name | stb_name } [WHERE clause] **Applicable data types**: Data types except for timestamp, binary, nchar, bool -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query @@ -1261,7 +1261,7 @@ SELECT CONCAT(str1|column1, str2|column2, ...) FROM { tb_name | stb_name } [WHER **Applicable data types**: The input data must be in either all BINARY or in all NCHAR; can't be used on tag columns -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query @@ -1279,7 +1279,7 @@ SELECT CONCAT_WS(separator, str1|column1, str2|column2, ...) FROM { tb_name | st **Applicable data types**: The input data must be in either all BINARY or in all NCHAR; can't be used on tag columns -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query @@ -1301,7 +1301,7 @@ SELECT LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause] **Applicable data types**: BINARY or NCHAR, can't be used on tags -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query @@ -1323,7 +1323,7 @@ SELECT CHAR_LENGTH(str|column) FROM { tb_name | stb_name } [WHERE clause] **Applicable data types**: BINARY or NCHAR, can't be used on tags -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query @@ -1345,7 +1345,7 @@ SELECT LOWER(str|column) FROM { tb_name | stb_name } [WHERE clause] **Applicable data types**: BINARY or NCHAR, can't be used on tags -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query @@ -1367,7 +1367,7 @@ SELECT UPPER(str|column) FROM { tb_name | stb_name } [WHERE clause] **Applicable data types**: BINARY or NCHAR, can't be used on tags -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query @@ -1389,7 +1389,7 @@ SELECT LTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause] **Applicable data types**: BINARY or NCHAR, can't be used on tags -**Applicable table types**: table, stable +**Applicable table types**: table, STable **Applicable nested query**: Inner query and Outer query @@ -1411,7 +1411,7 @@ SELECT RTRIM(str|column) FROM { tb_name | stb_name } [WHERE clause] 
**Applicable data types**: BINARY or NCHAR, can't be used on tags

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**Applicable nested query**: Inner query and Outer query

@@ -1433,7 +1433,7 @@ SELECT SUBSTR(str,pos[,len]) FROM { tb_name | stb_name } [WHERE clause]

**Applicable data types**: BINARY or NCHAR, can't be used on tags

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**Applicable nested query**: Inner query and Outer query

@@ -1457,7 +1457,7 @@ SELECT field_name [+|-|*|/|%][Value|field_name] FROM { tb_name | stb_name } [WH

**Applicable column types**: Data types except for timestamp, binary, nchar, bool

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**More explanations**:

@@ -1493,7 +1493,7 @@ SELECT STATECOUNT(field_name, oper, val) FROM { tb_name | stb_name } [WHERE clau

**Applicable data types**: Data types except for timestamp, binary, nchar, bool

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**Applicable nested query**: Outer query only

@@ -1501,7 +1501,7 @@ SELECT STATECOUNT(field_name, oper, val) FROM { tb_name | stb_name } [WHERE clau

**More explanations**:

-- Must be used together with `GROUP BY tbname` when it's used on a stable to force the result into each single timeline]
+- Must be used together with `GROUP BY tbname` when it's used on a STable to force the result into each single timeline
- Can't be used with window operation, like interval/state_window/session_window

**Examples**:

@@ -1548,7 +1548,7 @@ SELECT stateDuration(field_name, oper, val, unit) FROM { tb_name | stb_name } [W

**Applicable data types**: Data types except for timestamp, binary, nchar, bool

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**Applicable nested query**: Outer query only

@@ -1556,7 +1556,7 @@ SELECT stateDuration(field_name, oper, val, unit) FROM { tb_name | stb_name } [W

**More explanations**:

-- Must be used together with `GROUP BY tbname` when it's used on a stable to force the result into each single timeline]
+- Must be used together with `GROUP BY tbname` when it's used on a STable to force the result into each single timeline
- Can't be used with window operation, like interval/state_window/session_window

**Examples**:

@@ -1603,7 +1603,7 @@ INSERT INTO tb_name VALUES (NOW(), ...);

**Applicable column types**: TIMESTAMP only

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**More explanations**:

@@ -1650,7 +1650,7 @@ INSERT INTO tb_name VALUES (TODAY(), ...);

**Applicable column types**: TIMESTAMP only

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**More explanations**:

@@ -1695,7 +1695,7 @@ SELECT TIMEZONE() FROM { tb_name | stb_name } [WHERE clause];

**Applicable column types**: None

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**Examples**:

@@ -1719,7 +1719,7 @@ SELECT TO_ISO8601(ts_val | ts_col) FROM { tb_name | stb_name } [WHERE clause];

**Applicable column types**: TIMESTAMP, constant or a column

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**More explanations**:

@@ -1754,7 +1754,7 @@ SELECT TO_UNIXTIMESTAMP(datetime_string | ts_col) FROM { tb_name | stb_name } [W

**Applicable column types**: Constant or column of BINARY/NCHAR

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**More
explanations**:

@@ -1789,7 +1789,7 @@ SELECT TIMETRUNCATE(ts_val | datetime_string | ts_col, time_unit) FROM { tb_name

**Applicable column types**: UNIX timestamp constant, string constant of date/time format, or a column of timestamp

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**More explanations**:

@@ -1833,7 +1833,7 @@ SELECT TIMEDIFF(ts_val1 | datetime_string1 | ts_col1, ts_val2 | datetime_string2

**Applicable column types**: UNIX timestamp constant, string constant of date/time format, or a column of TIMESTAMP type

-**Applicable table types**: table, stable
+**Applicable table types**: table, STable

**More explanations**:
diff --git a/docs-en/12-taos-sql/08-interval.md b/docs-en/12-taos-sql/08-interval.md
index 5a3b130a30..7bf2bd207e 100644
--- a/docs-en/12-taos-sql/08-interval.md
+++ b/docs-en/12-taos-sql/08-interval.md
@@ -28,7 +28,7 @@ When the time length specified by `SLIDING` is same as that specified by `INTERV

## Status Window

-In case of using integer, bool, or string to represent the device status at a moment, the continuous rows with same status belong to same status window. Once the status changes, the status window closes. As shown in the following figure,there are two status windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12]. Status window is not applicable to stable for now.
+In case of using integer, bool, or string to represent the device status at a moment, the continuous rows with the same status belong to the same status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12]. Status window is not applicable to STable for now.

![Status Window](/img/sql/timewindow-3.png)

@@ -48,7 +48,7 @@ The primary key, i.e. timestamp, is used to determine which session window the r

![Session Window](/img/sql/timewindow-2.png)

-If the time interval between two continuous rows are withint the time interval specified by `tol_value` they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on stable for now.
+If the time interval between two continuous rows is within the time interval specified by `tol_value`, they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now.

## More On Window Aggregate

@@ -89,7 +89,7 @@ SELECT function_list FROM stb_name

1. Huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000.
2. The result set is in ascending order of timestamp when aggregating by time window.
-3. If aggregate by window is used on stable, the aggregate function is performed on all the rows matching the filter conditions. If `GROUP BY` is not used in the query, the result set will be returned in ascending order of timestamp; otherwise the result set is not exactly in the order of ascending timestamp in each group.
+3. If aggregate by window is used on STable, the aggregate function is performed on all the rows matching the filter conditions.
If `GROUP BY` is not used in the query, the result set will be returned in ascending order of timestamp; otherwise the result set is not exactly in the order of ascending timestamp in each group.

:::

Aggregate by time window is also used in continuous query, please refer to [Continuous Query](/develop/continuous-query).
diff --git a/docs-en/12-taos-sql/09-limit.md b/docs-en/12-taos-sql/09-limit.md
index 3744440a1b..d695f9d999 100644
--- a/docs-en/12-taos-sql/09-limit.md
+++ b/docs-en/12-taos-sql/09-limit.md
@@ -26,7 +26,7 @@ The legal character set is `[a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/]`.

- Maximum number of tags is 128. There must be at least 1 tag. The total length of tag values should not exceed 16K bytes.
- Maximum length of a single SQL statement is 1048576 bytes, i.e. 1 MB. It can be configured in the parameter `maxSQLLength` on the client side; the applicable range is [65480, 1048576].
- At most 4096 columns (or 1024 prior to 2.1.7.0) can be returned by `SELECT`; functions in the query statement may also constitute columns. An error will be returned if the limit is exceeded.
-- Maximum numbers of databases, stables, tables are only depending on the system resources.
+- The maximum numbers of databases, STables and tables depend only on the system resources.
- Maximum length of database name is 32 bytes; it can't include "." or special characters.
- Maximum replica number of database is 3
- Maximum length of user name is 23 bytes

@@ -47,10 +47,10 @@ The legal character set is `[a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/]`.

## Restrictions of `ORDER BY`

- Only one `order by` is allowed for normal table and sub table.
-- At most two `order by` are allowed for stable, and the second one must be `ts`.
+- At most two `order by` are allowed for STable, and the second one must be `ts`.
- `order by tag` must be used with `group by tag` on the same tag; this rule is also applicable to `tbname`.
-- `order by column` must be used with `group by column` or `top/bottom` on same column. This rule is applicable to table and stable.
-- `order by ts` is applicable to table and stable.
+- `order by column` must be used with `group by column` or `top/bottom` on the same column. This rule is applicable to table and STable.
+- `order by ts` is applicable to table and STable.
- If `order by ts` is used with `group by`, the result set is sorted using `ts` in each group.

## Restrictions of Table/Column Names
diff --git a/docs-en/12-taos-sql/10-json.md b/docs-en/12-taos-sql/10-json.md
index c931c025ce..bd9606a84e 100644
--- a/docs-en/12-taos-sql/10-json.md
+++ b/docs-en/12-taos-sql/10-json.md
@@ -8,7 +8,7 @@ title: JSON Type
1. Tag of JSON type

   ```sql
-   create stable s1 (ts timestamp, v1 int) tags (info json);
+   create STable s1 (ts timestamp, v1 int) tags (info json);

   create table s1_1 using s1 tags ('{"k1": "v1"}');
   ```

@@ -78,6 +78,6 @@ title: JSON Type

   For example, below SQL statements are not supported.
```sql
-select jtag->'key' from (select jtag from stable);
-select jtag->'key' from (select jtag from stable) where jtag->'key'>0;
+select jtag->'key' from (select jtag from STable);
+select jtag->'key' from (select jtag from STable) where jtag->'key'>0;
```
diff --git a/docs-en/12-taos-sql/12-keywords.md b/docs-en/12-taos-sql/12-keywords.md
index aa976d8d26..b124024feb 100644
--- a/docs-en/12-taos-sql/12-keywords.md
+++ b/docs-en/12-taos-sql/12-keywords.md
@@ -5,7 +5,7 @@ title: Reserved Keywords

## Reserved Keywords

-There are about 200 keywords reserved by TDengine, they can't be used as the name of database, stable or table with either upper case, lower case or mixed case.
+There are about 200 keywords reserved by TDengine; they can't be used as the name of a database, STable or table, whether in upper case, lower case or mixed case.

**Keywords List**

@@ -47,5 +47,5 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
| CONFLICT | GROUP | NE | SLIMIT | VGROUPS |
| CONNECTION | GT | NONE | SMALLINT | VIEW |
| CONNECTIONS | HAVING | NOT | SOFFSET | VNODES |
-| CONNS | ID | NOTNULL | STABLE | WAL |
-| COPY | IF | NOW | STABLES | WHERE |
+| CONNS | ID | NOTNULL | STABLE | WAL |
+| COPY | IF | NOW | STABLES | WHERE |
diff --git a/docs-en/13-operation/08-export.md b/docs-en/13-operation/08-export.md
index c1b123c8f8..0e84ee0d25 100644
--- a/docs-en/13-operation/08-export.md
+++ b/docs-en/13-operation/08-export.md
@@ -7,14 +7,14 @@ There are two ways of exporting data from a TDengine cluster, one is SQL stateme

## Export Using SQL

-If you want to export the data of a table or a stable, please execute below SQL statement in TDengine CLI.
+If you want to export the data of a table or a STable, please execute the SQL statement below in TDengine CLI.

```sql
select * from <tb_name> >> data.csv;
```

-The data of table or stable specified by `tb_name` will be exported into a file named `data.csv` in CSV format.
+The data of the table or STable specified by `tb_name` will be exported into a file named `data.csv` in CSV format.

## Export Using taosdump

-With `taosdump`, you can choose to export the data of all databases, a database, a table or a stable, you can also choose export the data within a time range, or even only export the schema definition of a table. For the details of using `taosdump` please refer to [Tool for exporting and importing data: taosdump](/reference/taosdump).
+With `taosdump`, you can choose to export the data of all databases, a database, a table or a STable; you can also choose to export the data within a time range, or even export only the schema definition of a table. For the details of using `taosdump` please refer to [Tool for exporting and importing data: taosdump](/reference/taosdump).
diff --git a/docs-en/13-operation/11-optimize.md b/docs-en/13-operation/11-optimize.md
index 55146d4759..a3c5df633b 100644
--- a/docs-en/13-operation/11-optimize.md
+++ b/docs-en/13-operation/11-optimize.md
@@ -15,7 +15,7 @@ Please be noted that a lot of disk I/O is required for defragementation operatio

## Optimize Storage Parameters

-The data in different use cases may have different characteristics, such as the days to keep, number of replicas, collection interval, record size, number of collecting points, compression or not, etc. To achieve best efficiency in storage, the parameters in below table can be used, all of them can be either configured in `taos.cfg` as default configuration or in the command `create database`.
For detailed definition of these parameters please refer to [Configuration Parameters](/reference/config/).
+The data in different use cases may have different characteristics, such as the days to keep, number of replicas, collection interval, record size, number of collection points, compression or not, etc. To achieve the best storage efficiency, the parameters in the table below can be used; all of them can be configured either in `taos.cfg` as the default configuration or in the `create database` command. For detailed definition of these parameters please refer to [Configuration Parameters](/reference/config/).

| # | Parameter | Unit | Definition | **Value Range** | **Default Value** |
| --- | --------- | ---- | ------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------- | ----------------- |
--
GitLab
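To illustrate how these storage parameters are applied per database, here is a minimal sketch; the database name `power_archive` and the specific values are assumptions for illustration, not tuned recommendations.

```sql
-- Assumed example: keep data for 180 days, enable two-stage compression,
-- and use a single replica; these settings override the taos.cfg defaults
-- for this database only.
CREATE DATABASE power_archive KEEP 180 COMP 2 REPLICA 1;

-- Some parameters can also be adjusted after creation, e.g. extending retention:
ALTER DATABASE power_archive KEEP 365;
```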