Commit bcd4ecfd authored by D dingbo

Merge branch 'develop' into docs/dingbo/spellcheck

......@@ -130,15 +130,7 @@ Connection = DriverManager.getConnection(url, properties);
### 13. JDBC error: the executed SQL is not a DML or a DDL?
Please upgrade to the latest JDBC driver
```xml
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.27</version>
</dependency>
```
Please upgrade to the latest JDBC driver; see the [Java connector](/reference/connector/java)
### 14. taos connect failed, reason: invalid timestamp
......
......@@ -4,7 +4,7 @@ sidebar_label: Documentation Home
slug: /
---
TDengine is a high-performance, scalable time series database with SQL support. This document is TDengine user manual. It mainly introduces the basic concepts, installation, features, SQL, APIs, operation, maintenance, kernel design, etc. It’s written mainly for architects, developers and system administrators.
TDengine is a [high-performance](https://tdengine.com/fast), [scalable](https://tdengine.com/scalable) time series database with [SQL support](https://tdengine.com/sql-support). This document is the TDengine user manual. It mainly introduces the basic concepts, installation, features, SQL, APIs, operation, maintenance, kernel design, etc. It's written mainly for architects, developers and system administrators.
TDengine makes full use of the characteristics of time series data, proposes the concepts of "one table for one data collection point" and "super table", and designs an innovative storage engine, which greatly improves the efficiency of data ingestion, querying and storage. To understand the new concepts and use TDengine in the right way, please read [“concepts”](./concept) thoroughly.
......
import PkgList from "/components/PkgList";
TDengine 的安装非常简单,从下载到安装成功仅仅只要几秒钟。
It's very easy to install TDengine and would take you only a few minutes from downloading to finishing installation.
为方便使用,从 2.4.0.10 开始,标准的服务端安装包包含了 taos、taosd、taosAdapter、taosdump、taosBenchmark、TDinsight 安装脚本和示例代码;如果您只需要用到服务端程序和客户端连接的 C/C++ 语言支持,也可以仅下载 lite 版本的安装包。
For the convenience of users, from version 2.4.0.10, the standard server side installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark` and sample code. If only the `taosd` server and C/C++ connector are required, you can also choose to download the lite package.
在安装包格式上,我们提供 tar.gz, rpm 和 deb 格式,为企业客户提供 tar.gz 格式安装包,以方便在特定操作系统上使用。需要注意的是,rpm 和 deb 包不含 taosdump、taosBenchmark 和 TDinsight 安装脚本,这些工具需要通过安装 taosTool 包获得。
Three kinds of packages are provided: tar.gz, rpm and deb. In particular, the tar.gz package is provided for the convenience of enterprise customers on different kinds of operating systems; it includes `taosdump` and the TDinsight installation script, which are normally only provided in the taos-tools rpm and deb packages.
发布版本包括稳定版和 Beta 版,Beta 版含有更多新功能。正式上线或测试建议安装稳定版。您可以根据需要选择下载:
Between two major releases, some beta versions may be delivered for users to try new features; for production deployment or formal testing, installing the stable version is recommended.
<PkgList type={0}/>
具体的安装方法,请参见[安装包的安装和卸载](/operation/pkg-install)。
For details, please refer to [Install and Uninstall](/operation/pkg-install).
To see the details of versions, please refer to [Download List](https://www.taosdata.com/all-downloads) and [Release Notes](https://github.com/taosdata/TDengine/releases).
下载其他组件、最新 Beta 版及之前版本的安装包,请点击[这里](https://www.taosdata.com/all-downloads)
查看 Release Notes, 请点击[这里](https://github.com/taosdata/TDengine/releases)
......@@ -153,7 +153,7 @@ The second parameter `keep` is used to specify whether to keep the subscription
Now let's see the effect of the above sample code, assuming the prerequisites below have been met.
- The sample code has been downloaded to the local system
- TDengine has been installed and launched properly on the same system
- The database, STable and sub tables required by the sample code have been created, as in the sketch below
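A minimal sketch of that last prerequisite, assuming the sample works against a database named `power` and a STable named `meters` with the schema below; adjust the names, columns and tags to whatever the sample code actually uses:
```sql
-- assumed database, STable and one sub table for the subscription sample
CREATE DATABASE IF NOT EXISTS power;
USE power;
CREATE TABLE IF NOT EXISTS meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
CREATE TABLE IF NOT EXISTS d1001 USING meters TAGS ('Beijing.Chaoyang', 2);
INSERT INTO d1001 VALUES (NOW, 10.3, 219, 0.31);
```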
......
......@@ -128,7 +128,7 @@ CREATE AGGREGATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) [ BUFSIZE B ];
- ids(X): the function name to be used in SQL statements, must be consistent with the function name defined by `udfNormalFunc`
- ids(Y): the absolute path of the DLL file including the implementation of the UDF; the path needs to be quoted with single or double quotes
- typename(Z): the output data type, the value is the literal string of the type
- B: the size of the intermediate buffer, in bytes; it's an optional parameter and the range is [0, 512]
For details about how to use the intermediate result, please refer to the example program [demo.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/demo.c).
......
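For illustration only, registering such an aggregate UDF could look like the following; the function name `abs_max`, the library path and the buffer size are placeholders, not values taken from this document:
```sql
-- register an aggregate UDF compiled into a shared library (path is a placeholder)
CREATE AGGREGATE FUNCTION abs_max AS "/usr/local/taos/udf/abs_max.so" OUTPUTTYPE BIGINT BUFSIZE 128;
-- once created, it can be used like a built-in aggregate function
SELECT abs_max(voltage) FROM meters;
```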
......@@ -109,6 +109,6 @@ SHOW DNODES;
If the status of the newly added dnode is offline, please check:
- Whether the `taosd` process is running properly or not
- Check the log file `taosdlog.0` to see whether the fqdn and port are correct or not
The above process can be repeated to add more dnodes in the cluster.
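For reference, the two statements typically involved look like this; the FQDN below is a placeholder:
```sql
-- add a dnode by its end point (FQDN:port), then verify its status
CREATE DNODE "h2.example.com:6030";
SHOW DNODES;
```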
......@@ -206,7 +206,7 @@ It can be seen from above output that vgId 18 has been moved from dnode 3 to dno
:::note
- Manual load balancing can only be performed when automatic load balancing is disabled, i.e. `balance` is set to 0.
- Only a vnode in normal state, i.e. master or slave, can be moved; a vnode can't be moved while it is in offline, unsynced or syncing state.
- Before moving a vnode, it's necessary to make sure the target dnode has enough resources: CPU, memory and disk.
......
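A sketch of the manual move described above, assuming the `ALTER DNODE ... BALANCE` syntax from the cluster chapter and the IDs from the example (vgroup 18 moved away from dnode 3):
```sql
-- move the vnode of vgroup 18 from dnode 3 to dnode 1; only works while balance is 0
ALTER DNODE 3 BALANCE "VNODE:18-DNODE:1";
-- check the resulting vnode distribution
SHOW VGROUPS;
```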
......@@ -19,7 +19,7 @@ CREATE DATABASE db_name PRECISION 'ns';
In TDengine, the data types below can be used when specifying a column or tag.
| # | **类型** | **Bytes** | **说明** |
| # | **type** | **Bytes** | **Description** |
| --- | :-------: | --------- | ------------------------- |
| 1 | TIMESTAMP | 8 | Default precision is millisecond; microsecond and nanosecond are also supported |
| 2 | INT | 4 | Integer, the value range is [-2^31+1, 2^31-1], while -2^31 is treated as NULL |
......
......@@ -78,7 +78,7 @@ It's not necessary to provide values for all tags when creating tables automatica
INSERT INTO d21001 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:33.196', 10.15, 217, 0.33);
```
Multiple rows can also be inserted into the same table in a single SQL statement this way.
```sql
INSERT INTO d21001 USING meters TAGS ('Beijing.Chaoyang', 2) VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
......@@ -113,7 +113,7 @@ From version 2.1.5.0, tables can be automatically created using a super table as
INSERT INTO d21001 USING meters TAGS ('Beijing.Chaoyang', 2) FILE '/tmp/csvfile.csv';
```
Multiple tables can be automatically created and data inserted into them in a single SQL statement, as below:
```sql
INSERT INTO d21001 USING meters TAGS ('Beijing.Chaoyang', 2) FILE '/tmp/csvfile_21001.csv'
......
......@@ -312,7 +312,7 @@ Logical operations in below table can be used in `where` clause to filter the re
| like | match a wildcard string | **`binary`** **`nchar`** |
| match/nmatch | filter regex | **`binary`** **`nchar`** |
**使用说明**:
**Explanations**:
- Operator `<\>` is equal to `!=`; please note that this operator can't be used on the first column of any table, i.e. the timestamp column.
- Operator `like` is used together with wildcards to match strings
......
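Two hedged examples of these operators, assuming a STable `meters` with a binary tag `location`:
```sql
-- wildcard match: '%' matches any number of characters, '_' matches exactly one
SELECT * FROM meters WHERE location LIKE 'Beijing.%';
-- regular-expression match on a binary/nchar column
SELECT * FROM meters WHERE location MATCH '^Beijing\.';
```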
......@@ -167,13 +167,13 @@ Query OK, 1 row(s) in set (0.000915s)
SELECT LEASTSQUARES(field_name, start_val, step_val) FROM tb_name [WHERE clause];
```
**Description**:统计表中某列的值是主键(时间戳)的拟合直线方程.start_val 是自变量初始值,step_val 是自变量的步长值.
**Description**: The linear regression of the specified column against the timestamp column (primary key); `start_val` is the initial value of the independent variable and `step_val` is the step size of the independent variable.
**Return value type**:A string in the format of "(slope, intercept)"
**Return value type**: A string in the format of "(slope, intercept)"
**Applicable column types**:Data types except for timestamp, binary, nchar and bool
**Applicable column types**: Data types except for timestamp, binary, nchar and bool
**Applicable table types**:table only
**Applicable table types**: table only
**Examples**:
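An illustrative query (not the original example from this page), assuming a table `d1001` with a FLOAT column `current`:
```sql
-- fit a straight line through `current`, with the independent variable
-- starting at 0 and increasing by 1 for each row
SELECT LEASTSQUARES(current, 0, 1) FROM d1001;
```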
......@@ -736,7 +736,7 @@ SELECT UNIQUE(field_name) FROM {tb_name | stb_name} [WHERE clause];
**Applicable column types**: Any data types except for timestamp
**支持版本**: From version 2.6.0.0
**Applicable versions**: From version 2.6.0.0
**More explanations**:
......@@ -809,7 +809,7 @@ Query OK, 2 row(s) in set (0.001162s)
SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHERE clause];
```
**Description**: The derivative of a specific column. The time rage can be specified by parameter `time_interval` 参数指定, the minimum allowed time range is 1 second (1s); the value of `ignore_negative` can be 0 or 1, 1 means negative values are ignored.
**Description**: The derivative of a specific column. The time range can be specified by parameter `time_interval`; the minimum allowed time range is 1 second (1s). The value of `ignore_negative` can be 0 or 1; 1 means negative values are ignored.
**Return value type**: Double precision floating point
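A hedged example, assuming a table `d1001` with a FLOAT column `current`:
```sql
-- per-second rate of change of `current`, ignoring intervals where the value decreased
SELECT DERIVATIVE(current, 1s, 1) FROM d1001;
```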
......@@ -850,7 +850,7 @@ SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause];
**Applicable table types**: table, STable
**More explanations**: Can be used on a column of TIMESTAMP type; the result is the time range size.
**Examples**:
......@@ -955,8 +955,8 @@ SELECT ROUND(field_name) FROM { tb_name | stb_name } [WHERE clause];
- Arithmetic operation can't be performed on the result of `MAVG`.
- Can only be used with data columns, can't be used with tags.
- Can't be used with aggregate functions.\(Aggregation)函数一起使用;
- Can't be used with aggregate functions.
- Must be used with `GROUP BY tbname` when it's used on a STable to force the result on each single timeline.
**Applicable versions**: From 2.3.0.x
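A hedged example of the `GROUP BY tbname` restriction above, assuming a STable `meters` with a FLOAT column `current`:
```sql
-- 5-point moving average computed separately on each sub table's timeline
SELECT MAVG(current, 5) FROM meters GROUP BY tbname;
```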
......@@ -1012,7 +1012,7 @@ SELECT ASIN(field_name) FROM { tb_name | stb_name } [WHERE clause]
SELECT ACOS(field_name) FROM { tb_name | stb_name } [WHERE clause]
```
**Description**: The anti-cosine of a specific column
**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
......@@ -1037,7 +1037,7 @@ SELECT ATAN(field_name) FROM { tb_name | stb_name } [WHERE clause]
**Description**: The anti-tangent of a specific column
**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
......@@ -1062,7 +1062,7 @@ SELECT SIN(field_name) FROM { tb_name | stb_name } [WHERE clause]
**Description**: The sine of a specific column
**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
......@@ -1087,7 +1087,7 @@ SELECT COS(field_name) FROM { tb_name | stb_name } [WHERE clause]
**Description**: The cosine of a specific column
**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
......@@ -1112,7 +1112,7 @@ SELECT TAN(field_name) FROM { tb_name | stb_name } [WHERE clause]
**Description**: The tangent of a specific column
**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
......@@ -1183,7 +1183,7 @@ SELECT ABS(field_name) FROM { tb_name | stb_name } [WHERE clause]
**Description**: The absolute value of a specific column
**Return value type**: UBIGINT if the input value is an integer; DOUBLE if the input value is FLOAT/DOUBLE
**Applicable data types**: Data types except for timestamp, binary, nchar, bool
......
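A hedged example, assuming a table `d1001` with a numeric column `current`:
```sql
-- absolute value of each reading
SELECT ABS(current) FROM d1001;
```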
......@@ -12,9 +12,9 @@ The number of vgroups created for each database is the same as the number of CPU cor
Database Memory Size = maxVgroupsPerDb * replica * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
```
For example, assuming the default value of `maxVgroupPerDB` is 64, the default value of `cache` 16M, the default value of `blocks` is 6, there are 100,000 tables in a DB, the replica number is 1, total length of tag values is 256 bytes, the total memory required for this DB is: 并且一个 DB 中有 10 万张表,单副本,标签总长度是 256 字节,则这个 DB 总的内存需求为:64 \* 1 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 6792M.
For example, assuming the default value of `maxVgroupsPerDb` is 64, the default value of `cache` is 16M, the default value of `blocks` is 6, there are 100,000 tables in a DB, the replica number is 1, and the total length of tag values is 256 bytes, the total memory required for this DB is: 64 \* 1 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 6792M.
In real operation of TDengine, we are more concerned about the memory used by each TDengine server process `taosd`.
```
taosd_memory = vnode_memory + mnode_memory + query_memory
......
......@@ -10,7 +10,7 @@ TDengine CLI `taos` supports `source <filename>` command for executing the SQL s
## Import from Data File
In TDengine CLI, data can be imported from a CSV file into an existing table. The data in a single CSV file must belong to the same table and must be consistent with the schema of that table. The SQL statement is as below:
```sql
insert into tb1 file 'path/data.csv';
......
......@@ -108,7 +108,7 @@ From version 2.2.0.0, the above command can be executed on Linux Shell to test t
The parameter `debugFlag` is used to control the log level of the `taosd` server process. The default value is 131; for debugging it needs to be raised to 135 or 143.
Once this parameter is set to 135 or 143, the log file grows very quickly, especially when there is a huge volume of data insertion and query requests. If all the logs were stored together, some important information could easily be missed, so on the server side important information is stored separately from other logs:
- Logs at INFO, WARNING and ERROR level are stored in `taosinfo` so that important information is easy to find
- Logs at DEBUG (135) and TRACE (143) level, and other information not handled by `taosinfo`, are stored in `taosdlog`
......
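For illustration, the log level of a running dnode can be adjusted with `ALTER DNODE`; treat the exact statement as a sketch and check the configuration chapter for the authoritative form:
```sql
-- raise the log level of dnode 1 to DEBUG (135) at runtime
ALTER DNODE 1 debugFlag 135;
```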
......@@ -565,7 +565,7 @@ public class ParameterBindingDemo {
// set table name
pstmt.setTableName("t5_" + i);
// set tags
pstmt.setTagNString(0, "北京-abc");
pstmt.setTagNString(0, "Beijing-abc");
// set columns
ArrayList<Long> tsList = new ArrayList<>();
......@@ -576,7 +576,7 @@ public class ParameterBindingDemo {
ArrayList<String> f1List = new ArrayList<>();
for (int j = 0; j < numOfRow; j++) {
f1List.add("北京-abc");
f1List.add("Beijing-abc");
}
pstmt.setNString(1, f1List, BINARY_COLUMN_SIZE);
......
......@@ -73,7 +73,7 @@ taos --dump-config
| Attribute | Description |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The port for external access after `taosd` is started |
| Default Value | 6030 |
| Note | REST service is provided by `taosd` before 2.4.0.0 but by `taosAdapter` after 2.4.0.0; the default port of the REST service is 6041 |
......@@ -133,7 +133,7 @@ TDengine uses 13 continuous ports, both TCP and UDP, from the port specified by
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The switch for monitoring inside the server. The workload of the hosts, including CPU, memory, disk, network and HTTP requests, is collected and stored in the system built-in database `LOG` |
| Value Range | 0: monitoring disabled, 1: monitoring enabled 务. |
| Value Range | 0: monitoring disabled, 1: monitoring enabled |
| Default Value | 0 |
### monitorInterval
......@@ -159,13 +159,13 @@ TDengine uses 13 continuous ports, both TCP and UDP, from the port specified by
### queryBufferSize
| Attribute | Description |
| ------------- | --------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The total memory size reserved for all queries |
| Unit | MB |
| Default Value | |
| Note | It can be estimated by "maximum number of concurrent quries" _ "number of tables" _ 170 |
| Attribute | Description |
| ------------- | ---------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The total memory size reserved for all queries |
| Unit | MB |
| Default Value | None |
| Note | It can be estimated as "maximum number of concurrent queries" × "number of tables" × 170 |
### ratioOfQueryCores
......@@ -250,11 +250,11 @@ The locale definition standard on Linux is: <Language\>\_<Region\>.<charset\>, f
### charset
| Attribute | Description |
| ------------- | ---------------------------- |
| Applicable | Server and Client |
| Meaning | Character |
| Default Value | charset set in the system |
| Attribute | Description |
| ------------- | ------------------------- |
| Applicable | Server and Client |
| Meaning | Character |
| Default Value | charset set in the system |
:::info
On Linux, if `charset` is not set in `taos.cfg`, when `taos` is started, the charset is obtained from system locale. If obtaining charset from system locale fails, `taos` would fail to start. So on Linux system, if system locale is set properly, it's not necessary to set `charset` in `taos.cfg`. For example:
......@@ -346,12 +346,12 @@ charset CP936
### walLevel
| Attribute | Description |
| ------------- | ------------------------------------------------------------ |
| Applicable | Server Only |
| Meaning | WAL level |
| Attribute | Description |
| ------------- | ---------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | WAL level |
| Value Range | 0: wal disabled <br/> 1: wal enabled without fsync <br/> 2: wal enabled with fsync |
| Default Value | 1 |
### fsync
......@@ -430,12 +430,12 @@ charset CP936
### quorum
| Attribute | Description |
| ------------- | --------------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The number of required confirmations for data replication in case of multiple replications |
| Value Range | 1,2 |
| Default Value | 1 |
| Attribute | Description |
| ------------- | ------------------------------------------------------------------------------------------ |
| Applicable | Server Only |
| Meaning | The number of required confirmations for data replication in case of multiple replications |
| Value Range | 1,2 |
| Default Value | 1 |
### role
......@@ -552,7 +552,7 @@ charset CP936
| Meaning | The expiration time for dnode online status; if no status is received from a dnode within this time, the dnode is marked as offline |
| Unit | second |
| Value Range | 5-7200000 |
| Default Value | 86400\*10(10 天) |
| Default Value | 86400\*10 (i.e. 10 days) |
## Performance Optimization Parameters
......@@ -569,7 +569,7 @@ charset CP936
| Attribute | Description |
| ------------- | --------------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | Maximum number of query threads |
| Value Range | 0: Only one query thread <br/> 1: Same as number of CPU cores <br/> 2: two times of CPU cores |
| Default Value | 1 |
| Note | This value can be a float number, 0.5 means half of the CPU cores |
......@@ -961,7 +961,7 @@ The parameters described in this section are only applicable in versions prior
| ------------- | ----------------------- |
| Applicable | Client Only |
| Meaning | Log level of jni module |
| Value Range | 同上 |
| Value Range | Same as debugFlag |
| Default Value | |
### odbcDebugFlag
......@@ -1100,12 +1100,12 @@ If the length of value exceeds `maxBinaryDisplayWidth`, then the actual display
### maxRegexStringLen
| Attribute | Description |
| ------------- | ----------------------------------------------------------- |
| Meaning | Maximum length of regular expression 正则表达式最大允许长度 |
| Value Range | [128, 16384] |
| Default Value | 128 |
| Note | From version 2.3.0.0 |
| Attribute | Description |
| ------------- | ------------------------------------ |
| Meaning | Maximum length of regular expression |
| Value Range | [128, 16384] |
| Default Value | 128 |
| Note | From version 2.3.0.0 |
## Other Parameters
......