diff --git a/docs-cn/20-third-party/11-kafka.md b/docs-cn/20-third-party/11-kafka.md
index 54a378c63380aacd763459ddc5b1acc012c88a06..d12d5fab75671d8a1e7356e766d0e8979c6519c2 100644
--- a/docs-cn/20-third-party/11-kafka.md
+++ b/docs-cn/20-third-party/11-kafka.md
@@ -281,7 +281,7 @@ INSERT INTO d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:3
使用 TDengine CLI, 执行 SQL 文件。
```
-taos -f prepare-sorce-data.sql
+taos -f prepare-source-data.sql
```
### 创建 Connector 实例
diff --git a/docs-en/02-intro/index.md b/docs-en/02-intro/index.md
index c3e86fcbee41aa847134958225b1b856354a2444..9f2f6b1b67b428bdf3df7f3ead00766989075d76 100644
--- a/docs-en/02-intro/index.md
+++ b/docs-en/02-intro/index.md
@@ -19,7 +19,7 @@ The major features are listed below:
6. Support for [continuous query](/develop/continuous-query).
7. Support for [data subscription](/develop/subscribe) with the capability to specify filter conditions.
8. Support for [cluster](/cluster/), with the capability of increasing processing power by adding more nodes. High availability is supported by replication.
-9. Provides interactive [command-line intrerface](/reference/taos-shell) for management, maintainence and ad-hoc query.
+9. Provides interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc query.
10. Provides many ways to [import](/operation/import) and [export](/operation/export) data.
11. Provides [monitoring](/operation/monitor) on TDengine running instances.
12. Provides [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages.
@@ -49,7 +49,7 @@ TDengine makes full use of [the characteristics of time series data](https://tde
- **Interactive Console**: TDengine provides convenient console access to the database to run ad hoc queries, maintain the database, or manage the cluster without any programming.
-With TDengine, the total cost of ownership of time-seriess data platform can be greatly reduced. Because 1: with its superior performance, the computing and storage resources are reduced significantly; 2:with SQL support, it can be seamlessly integrated with many third party tools, and learning costs/migration costs are reduced significantly; 3: with its simple architecture and zero management, the operation and maintainence costs are reduced.
+With TDengine, the total cost of ownership of a time-series data platform can be greatly reduced: 1. with its superior performance, the required computing and storage resources are reduced significantly; 2. with SQL support, it can be seamlessly integrated with many third-party tools, and learning and migration costs are reduced significantly; 3. with its simple architecture and zero management, the operation and maintenance costs are reduced.
## Technical Ecosystem
In the time-series data processing platform, TDengine stands in a role like this diagram below:
@@ -58,7 +58,7 @@ In the time-series data processing platform, TDengine stands in a role like this
Figure 1. TDengine Technical Ecosystem
-On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides interactive command-line interface and web interface for management and maintainence.
+On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides interactive command-line interface and web interface for management and maintenance.
## Suited Scenarios
@@ -103,7 +103,7 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
| Minimize learning and maintenance costs | | | √ | In addition to being easily configurable, standard SQL support and the Taos shell for ad hoc queries makes maintenance simpler, allows reuse and reduces learning costs.|
| Abundant talent supply | √ | | | Given the above, and given the extensive training and professional services provided by TDengine, it is easy to migrate from existing solutions or create a new and lasting solution based on TDengine.|
-## Comparision with other databases
+## Comparison with other databases
- [Writing Performance Comparison of TDengine and InfluxDB ](https://tdengine.com/2022/02/23/4975.html)
- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/24/5120.html)
diff --git a/docs-en/04-concept/index.md b/docs-en/04-concept/index.md
index f71674fc0ddc483c1c3371e56bdf17c39506f985..abc553ab6d90042cb2389ba0b71d3b5395dcebfd 100644
--- a/docs-en/04-concept/index.md
+++ b/docs-en/04-concept/index.md
@@ -153,7 +153,7 @@ The relationship between a STable and the subtables created based on this STable
Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation operation, which can greatly reduce the data sets to be scanned, thus greatly improving the performance of data aggregation across multiple DCPs.
-In TDengine, it is recommended to use a substable instead of a regular table for a DCP.
+In TDengine, it is recommended to use a subtable instead of a regular table for a DCP.
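+
+For reference, a minimal C sketch of such a STable query is shown below. It reuses the `meters`/`power` example that appears elsewhere in these docs; the district tag value and the function name are illustrative assumptions only.
+
+```c
+#include <stdio.h>
+#include <taos.h>
+
+// Aggregate over all subtables of the `meters` STable whose `location` tag matches.
+// Assumes `taos` is an already opened connection with the `power` database selected.
+void avg_current_of_district(TAOS *taos) {
+  TAOS_RES *res = taos_query(taos, "SELECT AVG(current) FROM meters WHERE location = 'Beijing.Chaoyang'");
+  if (taos_errno(res) != 0) {
+    printf("failed to execute taos_query. error: %s\n", taos_errstr(res));
+  } else {
+    TAOS_ROW row = taos_fetch_row(res);  // a single aggregated row
+    if (row != NULL && row[0] != NULL) {
+      printf("average current: %f\n", *(double *)row[0]);
+    }
+  }
+  taos_free_result(res);
+}
+```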
## Database
diff --git a/docs-en/07-develop/01-connect/_connect_python.mdx b/docs-en/07-develop/01-connect/_connect_python.mdx
index f6c8bcfee1d92fae2d1ad320002b805dd9951228..44b7586fadbf618231fce7753d3b4b68853a7f57 100644
--- a/docs-en/07-develop/01-connect/_connect_python.mdx
+++ b/docs-en/07-develop/01-connect/_connect_python.mdx
@@ -1,3 +1,3 @@
```python title="Native Connection"
-{{#include docs-examples/python/connect_exmaple.py}}
+{{#include docs-examples/python/connect_example.py}}
```
diff --git a/docs-en/07-develop/02-model/index.mdx b/docs-en/07-develop/02-model/index.mdx
index 2bd6f0cbd9f1c5b62a3f14f03c93c825f0a8cdaf..962a75338f0384ee8facb4682342e25e536e4ecb 100644
--- a/docs-en/07-develop/02-model/index.mdx
+++ b/docs-en/07-develop/02-model/index.mdx
@@ -43,7 +43,7 @@ If you are using versions prior to 2.0.15, the `STable` keyword needs to be repl
Similar to creating a regular table, when creating a STable, name and schema need to be provided too. In the STable schema, the first column must be timestamp (like ts in the example), and other columns (like current, voltage and phase in the example) are the data collected. The type of a column can be integer, float, double, string ,etc. Besides, the schema for tags need to be provided, like location and groupId in the example. The type of a tag can be integer, float, string, etc. The static properties of a data collection point can be defined as tags, like the location, device type, device group ID, manager ID, etc. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
-For each kind of data collection points, a corresponding STable must be created. There may be man y STables in an application. For electrical power system, we need to create a STable respectively for meters, transformers, busbars, switches. There may be multiple kinds of data collection points on a single device, for example there may be one data collection point for electrical data like current and voltage and another point for environmental data like temperature, humidity and wind direction, multiple STables are required for such kind of device.
+For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For an electrical power system, we need to create a STable for each of meters, transformers, busbars and switches. There may also be multiple kinds of data collection points on a single device; for example, there may be one data collection point for electrical data like current and voltage and another for environmental data like temperature, humidity and wind direction, so multiple STables are required for such a device.
At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a STable. If there are more than 4096 of metrics to bo collected for a data collection point, multiple STables are required for such kind of data collection point. There can be multiple databases in system, while one or more STables can exist in a database.
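+
+As a hedged sketch of what this looks like through the C client: the `meters` schema below follows the example used in these docs, while the `environment` STable and its column types are purely illustrative assumptions.
+
+```c
+#include <stdio.h>
+#include <taos.h>
+
+// Run one DDL statement and report any error.
+static void exec_sql(TAOS *taos, const char *sql) {
+  TAOS_RES *res = taos_query(taos, sql);
+  if (taos_errno(res) != 0) {
+    printf("failed to execute \"%s\": %s\n", sql, taos_errstr(res));
+  }
+  taos_free_result(res);
+}
+
+// One STable per kind of data collection point.
+// Assumes `taos` is an open connection with a database already selected.
+void create_stables(TAOS *taos) {
+  // Electrical data collected by smart meters (the docs' running example).
+  exec_sql(taos,
+           "CREATE STABLE IF NOT EXISTS meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) "
+           "TAGS (location BINARY(64), groupId INT)");
+  // Environmental data from the same device is a different kind of data collection point,
+  // so it gets its own STable (hypothetical schema).
+  exec_sql(taos,
+           "CREATE STABLE IF NOT EXISTS environment (ts TIMESTAMP, temperature FLOAT, humidity FLOAT, wind_direction INT) "
+           "TAGS (location BINARY(64), groupId INT)");
+}
+```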
diff --git a/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx b/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx
index fb938d26961f86cefd3b5b9d31e4eb3481e10873..d4f723dcdeb78c54ba31fd4f6aa2528a90376c5f 100644
--- a/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx
+++ b/docs-en/07-develop/03-insert-data/04-opentsdb-json.mdx
@@ -15,7 +15,7 @@ import CJson from "./_c_opts_json.mdx";
## Introduction
-A JSON string is used in OpenTSDB JSON to represent one or more rows of data, for exmaple:
+A JSON string is used in OpenTSDB JSON to represent one or more rows of data, for example:
```json
[
diff --git a/docs-en/07-develop/06-subscribe.mdx b/docs-en/07-develop/06-subscribe.mdx
index 3ed56f98e83b37b2d12ec87f1ad94bfe6c368c8e..56f4ed83d8ebc6f21afbdd2eca2e01f11b313883 100644
--- a/docs-en/07-develop/06-subscribe.mdx
+++ b/docs-en/07-develop/06-subscribe.mdx
@@ -57,10 +57,10 @@ The first step is to create subscription using `taos_subscribe`.
```c
TAOS_SUB* tsub = NULL;
if (async) {
- // create an asynchronized subscription, the callback function will be called every 1s
+ // create an asynchronous subscription, the callback function will be called every 1s
tsub = taos_subscribe(taos, restart, topic, sql, subscribe_callback, &blockFetch, 1000);
} else {
- // create an synchronized subscription, need to call 'taos_consume' manually
+ // create a synchronous subscription, 'taos_consume' needs to be called manually
tsub = taos_subscribe(taos, restart, topic, sql, NULL, NULL, 0);
}
```
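+
+For the synchronous branch, a minimal consume loop might look like the sketch below. It relies only on the `tsub` handle created above and on `taos_consume`/`taos_unsubscribe`; the bounded loop, the row counting and the one-second pause are illustrative choices rather than API requirements.
+
+```c
+#include <stdio.h>
+#include <unistd.h>
+#include <taos.h>
+
+// Drain a synchronous subscription by polling taos_consume.
+// `tsub` is the handle returned by taos_subscribe above.
+void consume_loop(TAOS_SUB *tsub) {
+  int total_rows = 0;
+  for (int i = 0; i < 10; i++) {          // bounded for the sketch; a real program may loop until stopped
+    TAOS_RES *res = taos_consume(tsub);   // returns only the rows that arrived since the previous call
+    if (res == NULL) {
+      printf("taos_consume failed\n");
+      break;
+    }
+    TAOS_ROW row;
+    while ((row = taos_fetch_row(res)) != NULL) {
+      total_rows++;                       // a real application would process the row here
+    }
+    sleep(1);                             // simple pacing so the loop does not spin
+  }
+  printf("total rows consumed: %d\n", total_rows);
+  taos_unsubscribe(tsub, 0);              // 0: do not keep the subscription progress
+}
+```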
diff --git a/docs-en/07-develop/08-udf.md b/docs-en/07-develop/08-udf.md
index 739f4e25094b6bd76c7c7219a7f6d929d5733282..61639e34404477d3bb5785da129a1d922a4d020e 100644
--- a/docs-en/07-develop/08-udf.md
+++ b/docs-en/07-develop/08-udf.md
@@ -20,7 +20,7 @@ Below function template can be used to define your own scalar function.
`udfNormalFunc` is the place holder of function name, a function implemented based on the above template can be used to perform scalar computation on data rows. The parameters are fixed to control the data exchange between UDF and TDengine.
-- Defintions of the parameters:
+- Definitions of the parameters:
- data:input data
- itype:the type of input data, for details please refer to [type definition in column_meta](/reference/rest-api/), for example 4 represents INT
diff --git a/docs-en/07-develop/index.md b/docs-en/07-develop/index.md
index 046959ea4eb00f50c03dc20bba97d766878d9183..e703f8a20a01618c3ebc2396bfea6cda211feb2f 100644
--- a/docs-en/07-develop/index.md
+++ b/docs-en/07-develop/index.md
@@ -8,7 +8,7 @@ To develop an application, if you are going to use TDengine as the tool to proce
2. Design the data model based on your own application scenarios. According to the data characteristics, you can decide to create one or more databases; learns about static labels, collected metrics, create the STable with right schema, and create the subtables.
3. Decide how to insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
4. Based on business requirements, find out what SQL query statements need to be written.
-5. If you want to run real-time analysis based on time series data, including various dashboards, it is recommended that you use the TDengine continuous query feature instead of depolying complex streaming processing systems such as Spark or Flink.
+5. If you want to run real-time analysis based on time series data, including various dashboards, it is recommended that you use the TDengine continuous query feature instead of deploying complex streaming processing systems such as Spark or Flink.
6. If your application has modules that need to consume inserted data, and they need to be notified when new data is inserted, it is recommended that you use the data subscription function provided by TDengine without the need to deploy Kafka.
7. In many scenarios (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem.
diff --git a/docs-en/10-cluster/01-deploy.md b/docs-en/10-cluster/01-deploy.md
index c773a52854f3bf0f0765e9c1bc4fec550d4a2ba8..8c921797ec038fb8afbf382a980b8f7a197fa898 100644
--- a/docs-en/10-cluster/01-deploy.md
+++ b/docs-en/10-cluster/01-deploy.md
@@ -106,7 +106,7 @@ Then on the first dnode, execute `show dnodes` in `taos` to show whether the sec
SHOW DNODES;
```
-If the status of the newly added dnode is offlie, please check:
+If the status of the newly added dnode is offline, please check:
- Whether the `taosd` process is running properly or not
- In the log file `taosdlog.0` to see whether the fqdn and port are correct or not
diff --git a/docs-en/10-cluster/02-cluster-mgmt.md b/docs-en/10-cluster/02-cluster-mgmt.md
index c5c5b251bfd0c9e927e274b8e204e4121c4e93c8..3fcd68b29ce08519af9a0cde11d5361c6b4cd312 100644
--- a/docs-en/10-cluster/02-cluster-mgmt.md
+++ b/docs-en/10-cluster/02-cluster-mgmt.md
@@ -100,7 +100,7 @@ Query OK, 2 row(s) in set (0.001316s)
## Drop DNODE
-Launch TDengine CLI `taos` and execute the command below to drop or remove a dndoe from the cluster. In the command, `dnodeId` can be gotten from `show dnodes`.
+Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, `dnodeId` can be obtained from `show dnodes`.
```sql
DROP DNODE "fqdn:port";
@@ -155,7 +155,7 @@ ALTER DNODE BALANCE "VNODE:-DNODE:";
In the above command, `source-dnodeId` is the original dnodeId where the vnode resides, `dest-dnodeId` specifies the target dnode. vgId (vgroup ID) can be shown by `SHOW VGROUPS `.
-Firstly `show vgroups` is executed to show the vgrup distribution.
+First, `show vgroups` is executed to show the vgroup distribution.
```
taos> show vgroups;
@@ -202,7 +202,7 @@ taos> show vgroups;
Query OK, 8 row(s) in set (0.001242s)
```
-It can be seen from above output that vgId 18 has been moved from dndoe 3 to dnode 1.
+It can be seen from the above output that vgId 18 has been moved from dnode 3 to dnode 1.
:::note
diff --git a/docs-en/12-taos-sql/01-data-type.md b/docs-en/12-taos-sql/01-data-type.md
index 99dabdeb6b466e05825fa67789bddbfdcb233c01..931e3bbac7f0601a9de79d0dfa04ffc94ecced96 100644
--- a/docs-en/12-taos-sql/01-data-type.md
+++ b/docs-en/12-taos-sql/01-data-type.md
@@ -9,7 +9,7 @@ When using TDengine to store and query data, the most important part of the data
- internal function `now` can be used to get the current timestamp of the client side
- the current timestamp of the client side is applied when `now` is used to insert data
- Epoch Time:timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from 1970-01-01 00:00:00.000 (UTC/GMT)
-- timestamp can be applied with add/substract operation, for example `now-2h` means 2 hours back from the time at which query is executed,the unit can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), w(week.。 So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operation.
+- timestamp can be used in add/subtract expressions, for example `now-2h` means 2 hours back from the time at which the query is executed. The unit can be b (nanosecond), u (microsecond), a (millisecond), s (second), m (minute), h (hour), d (day) or w (week). So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for a down sampling operation. A C sketch using such a relative-timestamp filter follows this list.
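+
+Expressed through the C client, the relative-timestamp filter from the last bullet looks like the sketch below; the `t1` table is taken from that bullet and the connection handle is assumed to exist already.
+
+```c
+#include <stdio.h>
+#include <taos.h>
+
+// Count the rows of `t1` between two weeks ago and one week ago, using relative timestamps.
+// Assumes `taos` is an open connection with the right database selected.
+void count_rows_between_weeks(TAOS *taos) {
+  TAOS_RES *res = taos_query(taos, "SELECT * FROM t1 WHERE ts > now-2w AND ts <= now-1w");
+  if (taos_errno(res) != 0) {
+    printf("failed to execute taos_query. error: %s\n", taos_errstr(res));
+  } else {
+    int rows = 0;
+    while (taos_fetch_row(res) != NULL) rows++;
+    printf("rows between two weeks ago and one week ago: %d\n", rows);
+  }
+  taos_free_result(res);
+}
+```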
Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`, like below, the default time precision is millisecond.
@@ -28,7 +28,7 @@ In TDengine, below data types can be used when specifying a column or tag.
| 5 | DOUBLE | 8 | double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308] |
| 6 | BINARY | User Defined | Single-byte string for ASCII visible characters. Length must be specified when defining a column or tag of binary type. The string length can be up to 16374 bytes. The string value must be quoted with single quotes. The literal single quote inside the string must be preceded with back slash like `\'` |
| 7 | SMALLINT | 2 | Short integer, the value range is [-32767, 32767], while -32768 is treated as NULL |
-| 8 | TINYINT | 1 | Single-byte integer, the value range is [-127, 127], while -128 is treated as NLLL |
+| 8 | TINYINT | 1 | Single-byte integer, the value range is [-127, 127], while -128 is treated as NULL |
| 9 | BOOL | 1 | Bool, the value range is {true, false} |
| 10 | NCHAR | User Defined| Multiple-Byte string that can include like Chinese characters. Each character of NCHAR type consumes 4 bytes storage. The string value should be quoted with single quotes. Literal single quote inside the string must be preceded with backslash, like `\’`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. Error will be reported the string value exceeds the length defined. |
| 11 | JSON | | json type can only be used on tag, a tag of json type is excluded with any other tags of any other type |
diff --git a/docs-en/12-taos-sql/03-table.md b/docs-en/12-taos-sql/03-table.md
index 3ec429f9dfe72e59d28df0581d8f118f324e8771..a1524f45f98e8435425a9a937b7f6dc4431b6e06 100644
--- a/docs-en/12-taos-sql/03-table.md
+++ b/docs-en/12-taos-sql/03-table.md
@@ -62,7 +62,7 @@ DROP TABLE [IF EXISTS] tb_name;
## Show All Tables In Current Database
```
-SHOW TABLES [LIKE tb_name_wildcar];
+SHOW TABLES [LIKE tb_name_wildcard];
```
## Show Create Statement of A Table
diff --git a/docs-en/12-taos-sql/04-stable.md b/docs-en/12-taos-sql/04-stable.md
index 8d763ac22f0c64ff898036653c1fd58c6df00298..b7817f90287a6415bee020fb5adc8e6239cc6da4 100644
--- a/docs-en/12-taos-sql/04-stable.md
+++ b/docs-en/12-taos-sql/04-stable.md
@@ -76,7 +76,7 @@ ALTER STable stb_name DROP COLUMN field_name;
ALTER STable stb_name MODIFY COLUMN field_name data_type(length);
```
-This command can be used to change (or incerase, more specifically) the length of a column of variable length types, like BINARY or NCHAR.
+This command can be used to change (or increase, more specifically) the length of a column of variable length types, like BINARY or NCHAR.
## Change Tags of A STable
@@ -110,7 +110,7 @@ The tag name will be changed automatically from all the sub tables crated using
ALTER STable stb_name MODIFY TAG tag_name data_type(length);
```
-This command can be used to change (or incerase, more specifically) the length of a tag of variable length types, like BINARY or NCHAR.
+This command can be used to change (or increase, more specifically) the length of a tag of variable length types, like BINARY or NCHAR.
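+
+A short, hedged example of such tag maintenance through the C client is shown below. The `meters` STable and its `location` tag follow the docs' running example; the `deviceModel` tag is hypothetical and the `ADD TAG` statement is an assumption based on the tag operations mentioned in the note that follows.
+
+```c
+#include <stdio.h>
+#include <taos.h>
+
+// Run one DDL statement and report any error.
+static void exec_ddl(TAOS *taos, const char *sql) {
+  TAOS_RES *res = taos_query(taos, sql);
+  if (taos_errno(res) != 0) {
+    printf("failed to execute \"%s\": %s\n", sql, taos_errstr(res));
+  }
+  taos_free_result(res);
+}
+
+// Tag maintenance on the `meters` STable from the docs' running example.
+// Assumes `taos` is an open connection with the right database selected.
+void adjust_meters_tags(TAOS *taos) {
+  // Increase the length of the BINARY tag `location` (the length can only grow).
+  exec_ddl(taos, "ALTER STABLE meters MODIFY TAG location BINARY(128)");
+  // Add a new tag; it starts as NULL on every existing sub table (hypothetical tag name).
+  exec_ddl(taos, "ALTER STABLE meters ADD TAG deviceModel BINARY(32)");
+}
+```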
:::note
Changing tag value can be applied to only sub tables. All other tag operations, like add tag, remove tag, however, can be applied to only STable. If a new tag is added for a STable, the tag will be added with NULL value for all its sub tables.
diff --git a/docs-en/12-taos-sql/07-function.md b/docs-en/12-taos-sql/07-function.md
index e913fbf5dcfe035c7a89e0c460a34999ad640164..9db5f36f92735c659a3bfae84c67089c62d577a6 100644
--- a/docs-en/12-taos-sql/07-function.md
+++ b/docs-en/12-taos-sql/07-function.md
@@ -87,7 +87,7 @@ SELECT TWA(field_name) FROM tb_name WHERE clause;
**More explanations**:
-- From version 2.1.3.0, function TWA can be used on stble with `GROUP BY`, i.e. timelines generated by `GROUP BY tbname` on a STable.
+- From version 2.1.3.0, function TWA can be used on a STable with `GROUP BY`, i.e. timelines generated by `GROUP BY tbname` on a STable.
### IRATE
@@ -1014,7 +1014,7 @@ SELECT ACOS(field_name) FROM { tb_name | stb_name } [WHERE clause]
**Description**: The anti-cosine of a specific column
-**Return value type**: ouble if the input value is not NULL; or NULL if the input value is NULL
+**Return value type**: Double if the input value is not NULL; or NULL if the input value is NULL
**Applicable data types**: Data types except for timestamp, binary, nchar, bool
@@ -1757,7 +1757,7 @@ SELECT TO_UNIXTIMESTAMP(datetime_string | ts_col) FROM { tb_name | stb_name } [W
**More explanations**:
-- The input string must be compatible with ISO8601/RFC3339 standard, 0 will be returned if the string can't be covnerted
+- The input string must be compatible with ISO8601/RFC3339 standard, 0 will be returned if the string can't be converted
- The precision of the returned timestamp is same as the precision set for the current data base in use
**Examples**:
diff --git a/docs-en/12-taos-sql/08-interval.md b/docs-en/12-taos-sql/08-interval.md
index 7c365fc9a66bff349bc9a13b9954f9c395510bd2..5cc3fa8cb43749fd40b808699f82a8761525cc6a 100644
--- a/docs-en/12-taos-sql/08-interval.md
+++ b/docs-en/12-taos-sql/08-interval.md
@@ -8,7 +8,7 @@ Window related clauses are used to divide the data set to be queried into subset
## Time Window
-`INTERVAL` claused is used to generate time windows of same time interval, `SLIDING` is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining continuous query both the size of time window and the step of forward sliding time need to be specified. As shown in the figure blow, [t0s, t0e] ,[t1s , t1e], [t2s, t2e] are respectively the time range of three time windows on which continuous queries are executed. The time step for which time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time window.
+The `INTERVAL` clause is used to generate time windows of the same time interval, while `SLIDING` is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time window.
![Time Window](/img/sql/timewindow-1.png)
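+
+The sketch below shows one way to issue such a sliding-window query from the C client; the `meters` table and the 10-minute/5-minute window sizes are illustrative assumptions.
+
+```c
+#include <stdio.h>
+#include <taos.h>
+
+// Down-sample the last 24 hours of `meters` data into 10-minute windows that slide
+// forward every 5 minutes, so consecutive windows overlap.
+// Assumes `taos` is an open connection with the right database selected.
+void sliding_window_average(TAOS *taos) {
+  TAOS_RES *res = taos_query(taos,
+      "SELECT AVG(current) FROM meters WHERE ts > now-1d INTERVAL(10m) SLIDING(5m)");
+  if (taos_errno(res) != 0) {
+    printf("failed to execute taos_query. error: %s\n", taos_errstr(res));
+  } else {
+    int windows = 0;
+    while (taos_fetch_row(res) != NULL) windows++;  // one row per time window
+    printf("returned %d time windows\n", windows);
+  }
+  taos_free_result(res);
+}
+```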
@@ -48,7 +48,7 @@ The primary key, i.e. timestamp, is used to determine which session window the r
![Session Window](/img/sql/timewindow-2.png)
-If the time interval between two continuous rows are withint the time interval specified by `tol_value` they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now.
+If the time interval between two consecutive rows is within the time interval specified by `tol_value`, they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now.
## More On Window Aggregate
@@ -73,7 +73,7 @@ SELECT function_list FROM stb_name
### Restrictions
-- Aggregate functions and selection functions can be used in `function_list`, with each function having only one output, for example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple ouput can't be used, for example DIFF or arithmetic operations.
+- Aggregate functions and selection functions can be used in `function_list`, with each function having only one output, for example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple output can't be used, for example DIFF or arithmetic operations.
- `LAST_ROW` can't be used together with window aggregate.
- Scalar functions, like CEIL/FLOOR, can't be used with window aggregate.
- `WHERE` clause can be used to specify the starting and ending time and other filter conditions
diff --git a/docs-en/13-operation/01-pkg-install.md b/docs-en/13-operation/01-pkg-install.md
index 00802506e681a9e27e338fef363e4157379c5a85..a1aad1c3c96c52689e9f68509c27ccce574d2082 100644
--- a/docs-en/13-operation/01-pkg-install.md
+++ b/docs-en/13-operation/01-pkg-install.md
@@ -229,7 +229,7 @@ During the installation process:
:::note
- When TDengine is uninstalled, the configuration /etc/taos/taos.cfg, data directory /var/lib/taos, log directory /var/log/taos are kept. They can be deleted manually with caution because data can't be recovered once
-- When reinstalling TDengine, if the default configuration file /etc/taos/taos.cfg exists, it will be kept and the configuration file in the installation package will be renamed to taos.cfg.orig and stored at /usr/loca/taos/cfg to be used as configuration sample. Otherwise the configuration file in the installation package will be installed to /etc/taos/taos.cfg and used.
+- When reinstalling TDengine, if the default configuration file /etc/taos/taos.cfg exists, it will be kept and the configuration file in the installation package will be renamed to taos.cfg.orig and stored at /usr/local/taos/cfg to be used as configuration sample. Otherwise the configuration file in the installation package will be installed to /etc/taos/taos.cfg and used.
## Start and Stop
diff --git a/docs-en/13-operation/10-monitor.md b/docs-en/13-operation/10-monitor.md
index bb5d18b3b2fec3cd2a5e4ebc333537806699ce1d..019cf4f2948141fac79587429f1fdc3b06623945 100644
--- a/docs-en/13-operation/10-monitor.md
+++ b/docs-en/13-operation/10-monitor.md
@@ -38,7 +38,7 @@ There are two ways to setup Grafana alert notification.
sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -E
```
-- The AliClund SMS alert built in TDengine data source plugin can be enabled with parameter `-s`, the parameters of this way are as follows:
+- The AliCloud SMS alert built into the TDengine data source plugin can be enabled with the parameter `-s`; its parameters are as follows:
- `-I`: AliCloud SMS Key ID
- `-K`: AliCloud SMS Key Secret
diff --git a/docs-en/13-operation/17-diagnose.md b/docs-en/13-operation/17-diagnose.md
index 0543338ec6f50eb5ca8ee0b489a82ccf192535a3..b140d925c07386f93c82d492bb8bcf4d95349f12 100644
--- a/docs-en/13-operation/17-diagnose.md
+++ b/docs-en/13-operation/17-diagnose.md
@@ -115,7 +115,7 @@ Once this parameter is set to 135 or 143, the log file grows very quickly especi
## Client Log
-An independent log file, named as "taoslog+" is generated for each client program, i.e. a client process. The default value of `debugfalg` is also 131 and only log at level of INFO/ERROR/WARNING is recorded, it and needs to be changed to 135 or 143 so that log at DEBUG or TRACE level can be recorded for debugging purpose.
+An independent log file, named "taoslog+", is generated for each client program, i.e. a client process. The default value of `debugFlag` is also 131, so only logs at INFO/ERROR/WARNING level are recorded; it needs to be changed to 135 or 143 so that logs at DEBUG or TRACE level can be recorded for debugging purposes.
The maximum length of a single log file is controlled by parameter `numOfLogLines` and only 2 log files are kept for each `taosd` server process.
diff --git a/docs-en/14-reference/02-rest-api/02-rest-api.mdx b/docs-en/14-reference/02-rest-api/02-rest-api.mdx
index 93cec9a256341679b87a5d46fbd8059de2ef3dd4..f405d551e530a37a5221e71a824f605fba0c0db9 100644
--- a/docs-en/14-reference/02-rest-api/02-rest-api.mdx
+++ b/docs-en/14-reference/02-rest-api/02-rest-api.mdx
@@ -271,7 +271,7 @@ When the HTTP request URL uses `/rest/sqlutc`, the timestamp of the returned res
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.t1' 192.168.0.1:6041/rest/sqlutc
```
-Respones body:
+Response body:
```json
{
diff --git a/docs-en/14-reference/03-connector/03-connector.mdx b/docs-en/14-reference/03-connector/03-connector.mdx
index b4bb5ea1745efa415c8b75f0781ecf77c8d2e236..6be914bdb4b701f478b6b8b27366d6ebb5a39ec8 100644
--- a/docs-en/14-reference/03-connector/03-connector.mdx
+++ b/docs-en/14-reference/03-connector/03-connector.mdx
@@ -46,7 +46,7 @@ Comparing the connector support for TDengine functional features as follows.
| -------------- | -------- | ---------- | ------ | ------ | ----------- | -------- |
| **Connection Management** | Support | Support | Support | Support | Support | Support |
| **Regular Query** | Support | Support | Support | Support | Support | Support |
-| **Continous Query** | Support | Support | Support | Support | Support | Support |
+| **Continuous Query** | Support | Support | Support | Support | Support | Support |
| **Parameter Binding** | Support | Support | Support | Support | Support | Support |
| **Subscription** | Support | Support | Support | Support | Support | Not Supported |
| **Schemaless** | Support | Support | Support | Support | Support | Support |
diff --git a/docs-en/14-reference/03-connector/_preparition.mdx b/docs-en/14-reference/03-connector/_preparation.mdx
similarity index 80%
rename from docs-en/14-reference/03-connector/_preparition.mdx
rename to docs-en/14-reference/03-connector/_preparation.mdx
index 906fd3b66cd8743cfbe9481ed5c4b14d16dba070..07ebdbca3d891ff51a254bc1b83016f1404bb47e 100644
--- a/docs-en/14-reference/03-connector/_preparition.mdx
+++ b/docs-en/14-reference/03-connector/_preparation.mdx
@@ -2,7 +2,7 @@
:::info
-Since the TDengine client driver is written in C, using the native connection requires loading the client driver shared library file, which is usually included in the TDengine installer. You can install either standard TDengine server installation package or [TDengine client installtion package](/get-started/). For Windows development, you need to install the corresponding [Windows client](https://www.taosdata.com/cn/all-downloads/#TDengine-Windows-Client) for TDengine.
+Since the TDengine client driver is written in C, using the native connection requires loading the client driver shared library file, which is usually included in the TDengine installer. You can install either standard TDengine server installation package or [TDengine client installation package](/get-started/). For Windows development, you need to install the corresponding [Windows client](https://www.taosdata.com/cn/all-downloads/#TDengine-Windows-Client) for TDengine.
- libtaos.so: After successful installation of TDengine on a Linux system, the dependent Linux version of the client driver `libtaos.so` file will be automatically linked to `/usr/lib/libtaos.so`, which is included in the Linux scannable path and does not need to be specified separately.
- taos.dll: After installing the client on Windows, the dependent Windows version of the client driver taos.dll file will be automatically copied to the system default search path C:/Windows/System32, again without the need to specify it separately.
diff --git a/docs-en/14-reference/03-connector/_windows_install.mdx b/docs-en/14-reference/03-connector/_windows_install.mdx
index c050509ed5b29d55c2fefca0cba68e7784498642..2819be615ee0a80da9f0324d8d41e9b247e8a7f6 100644
--- a/docs-en/14-reference/03-connector/_windows_install.mdx
+++ b/docs-en/14-reference/03-connector/_windows_install.mdx
@@ -25,7 +25,7 @@ import PkgList from "/components/PkgList";
:::tip
-1. If you use FQDN to connect to the server, you must ensure the local network environment DNS is configured, or add FQDN addressing records in the `hosts` file, e.g., edit C:\Windows\system32\drivers\etc\hosts and add a record like the following: `192.168.1.99 h1.tados.com`..
+1. If you use FQDN to connect to the server, you must ensure the local network environment DNS is configured, or add FQDN addressing records in the `hosts` file, e.g., edit C:\Windows\system32\drivers\etc\hosts and add a record like the following: `192.168.1.99 h1.taos.com`.
2. Uninstall: Run unins000.exe to uninstall the TDengine client driver.
:::
diff --git a/docs-en/14-reference/03-connector/cpp.mdx b/docs-en/14-reference/03-connector/cpp.mdx
index 3a934bda51277582a0df931dc7643516156b4390..4b388d32a9050645e268bb267d16e9a5b8aa4bda 100644
--- a/docs-en/14-reference/03-connector/cpp.mdx
+++ b/docs-en/14-reference/03-connector/cpp.mdx
@@ -160,7 +160,7 @@ The base API is used to do things like create database connections and provide a
- user: user name
- pass: password
- db: database name, if the user does not provide, it can also be connected correctly, the user can create a new database through this connection, if the user provides the database name, it means that the database user has already created, the default use of the database
- - port: the port the tasd program is listening on
+ - port: the port the taosd program is listening on
NULL indicates a failure. The application needs to save the returned parameters for subsequent use.
@@ -215,7 +215,7 @@ The APIs described in this subsection are all synchronous interfaces. After bein
- ` TAOS_FIELD *taos_fetch_fields(TAOS_RES *res)`
- Gets the properties of each column of the query result set (column name, column data type, column length), used in conjunction with `taos_num_fileds()` to parse a tuple (one row) of data returned by `taos_fetch_row()`. The structure of `TAOS_FIELD` is as follows.
+ Gets the properties of each column of the query result set (column name, column data type, column length), used in conjunction with `taos_num_fields()` to parse a tuple (one row) of data returned by `taos_fetch_row()`. The structure of `TAOS_FIELD` is as follows.
```c
typedef struct taosField {
diff --git a/docs-en/14-reference/03-connector/csharp.mdx b/docs-en/14-reference/03-connector/csharp.mdx
index 6214abeea4a964df3959b6689c38d9997d64f2f1..ca4b1b9ecea84a7c05e3c9da77f1b44545d89081 100644
--- a/docs-en/14-reference/03-connector/csharp.mdx
+++ b/docs-en/14-reference/03-connector/csharp.mdx
@@ -8,7 +8,7 @@ title: C# Connector
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-import Preparition from "./_preparition.mdx"
+import Preparation from "./_preparation.mdx"
import CSInsert from "../../07-develop/03-insert-data/_cs_sql.mdx"
import CSInfluxLine from "../../07-develop/03-insert-data/_cs_line.mdx"
import CSOpenTSDBTelnet from "../../07-develop/03-insert-data/_cs_opts_telnet.mdx"
@@ -35,7 +35,7 @@ Please refer to [version support list](/reference/connector#version-support)
## Supported features
-1. Connection Mmanagement
+1. Connection Management
2. General Query
3. Continuous Query
4. Parameter Binding
@@ -183,7 +183,7 @@ namespace TDengineExample
Unhandled exception. System.DllNotFoundException: Unable to load DLL 'taos' or one of its dependencies: The specified module cannot be found.
- This is usually because the program did not find the dependent client driver. The solution is to copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32\` directory on Windows, and create the following softlink on Linux `ln -s /usr/local/taos/driver/libtaos.so.x.x .x.x /usr/lib/libtaos.so` will work.
+ This is usually because the program did not find the dependent client driver. The solution is to copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32\` directory on Windows, or to create the following soft link on Linux: `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so`.
## API Reference
diff --git a/docs-en/14-reference/03-connector/go.mdx b/docs-en/14-reference/03-connector/go.mdx
index 830993140c4a79ab8b76f171d24110631834b33c..fd5930f07ff7184bd8dd5ff19cd3860f9718eaf9 100644
--- a/docs-en/14-reference/03-connector/go.mdx
+++ b/docs-en/14-reference/03-connector/go.mdx
@@ -8,7 +8,7 @@ title: TDengine Go Connector
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-import Preparition from "./_preparition.mdx"
+import Preparation from "./_preparation.mdx"
import GoInsert from "../../07-develop/03-insert-data/_go_sql.mdx"
import GoInfluxLine from "../../07-develop/03-insert-data/_go_line.mdx"
import GoOpenTSDBTelnet from "../../07-develop/03-insert-data/_go_opts_telnet.mdx"
diff --git a/docs-en/14-reference/03-connector/java.mdx b/docs-en/14-reference/03-connector/java.mdx
index 011fa1332c427dd54cd6d7aa87dfda7df3b8353e..328907c4d781bdea8d30623e01d431cedbf8d0fa 100644
--- a/docs-en/14-reference/03-connector/java.mdx
+++ b/docs-en/14-reference/03-connector/java.mdx
@@ -254,7 +254,7 @@ In the above example, a connection is established to `taosdemo.com`, port is 603
The configuration parameters in properties are as follows.
- TSDBDriver.PROPERTY_KEY_USER: Login TDengine user name, default value 'root'.
-- TSDBDriver.PROPERTY_KEY_PASSWORD: user login password, default value 'tasdata'.
+- TSDBDriver.PROPERTY_KEY_PASSWORD: user login password, default value 'taosdata'.
- TSDBDriver.PROPERTY_KEY_BATCH_LOAD: true: pull the result set in batch when executing query; false: pull the result set row by row. The default value is: false.
- TSDBDriver.PROPERTY_KEY_BATCH_ERROR_IGNORE: true: when executing executeBatch of Statement, if there is a SQL execution failure in the middle, continue to execute the following sq. false: no longer execute any statement after the failed SQL. The default value is: false.
- TSDBDriver.PROPERTY_KEY_CONFIG_DIR: Only works when using JDBC native connection. Client configuration file directory path, default value `/etc/taos` on Linux OS, default value `C:/TDengine/cfg` on Windows OS.
@@ -742,7 +742,7 @@ Example usage is as follows.
//query or insert
// ...
- connection.close(); // put back to conneciton pool
+ connection.close(); // put back to connection pool
}
```
@@ -774,7 +774,7 @@ public static void main(String[] args) throws Exception {
//query or insert
// ...
- connection.close(); // put back to conneciton pool
+ connection.close(); // put back to connection pool
}
```
diff --git a/docs-en/14-reference/03-connector/node.mdx b/docs-en/14-reference/03-connector/node.mdx
index f82fea8a9c8c2e273abb2c789f10bb6b3462ae2d..48f724426a96e62e5b56ab4285e5c5fabc95c765 100644
--- a/docs-en/14-reference/03-connector/node.mdx
+++ b/docs-en/14-reference/03-connector/node.mdx
@@ -8,7 +8,7 @@ title: TDengine Node.js Connector
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
-import Preparition from "./_preparition.mdx";
+import Preparation from "./_preparation.mdx";
import NodeInsert from "../../07-develop/03-insert-data/_js_sql.mdx";
import NodeInfluxLine from "../../07-develop/03-insert-data/_js_line.mdx";
import NodeOpenTSDBTelnet from "../../07-develop/03-insert-data/_js_opts_telnet.mdx";
diff --git a/docs-en/14-reference/03-connector/python.mdx b/docs-en/14-reference/03-connector/python.mdx
index 99a84f657b1a71e904c251c38e5e623c896bdf95..2b238173e04e3e13de36b5ac4d91d0cda290ca72 100644
--- a/docs-en/14-reference/03-connector/python.mdx
+++ b/docs-en/14-reference/03-connector/python.mdx
@@ -26,7 +26,7 @@ We recommend using the latest version of `taospy`, regardless what the version o
## Supported features
-- Native connections support all the core features of TDeingine, including connection management, SQL execution, bind interface, subscriptions, and schemaless writing.
+- Native connections support all the core features of TDengine, including connection management, SQL execution, bind interface, subscriptions, and schemaless writing.
- REST connections support features such as connection management and SQL execution. (SQL execution allows you to: manage databases, tables, and supertables, write data, query data, create continuous queries, etc.).
## Installation
@@ -34,7 +34,7 @@ We recommend using the latest version of `taospy`, regardless what the version o
### Preparation
1. Install Python. Python >= 3.6 is recommended. If Python is not available on your system, refer to the [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) to install it.
-2. Install [pip](https://pypi.org/project/pip/). In most cases, the Python installer comes with the pip utility. If not, please refer to [pip docuemntation](https://pip.pypa.io/en/stable/installation/) to install it.
+2. Install [pip](https://pypi.org/project/pip/). In most cases, the Python installer comes with the pip utility. If not, please refer to [pip documentation](https://pip.pypa.io/en/stable/installation/) to install it.
If you use a native connection, you will also need to [Install Client Driver](/reference/connector#Install-Client-Driver). The client install package includes the TDengine client dynamic link library (`libtaos.so` or `taos.dll`) and the TDengine CLI.
@@ -200,8 +200,8 @@ The `connect()` function returns a `taos.TaosConnection` instance. In client-sid
All arguments to the `connect()` function are optional keyword arguments. The following are the connection parameters specified.
- `host`: The host to connect to. The default is localhost.
-- `user`: TDenigne user name. The default is `root`.
-- `password`: TDeingine user password. The default is `taosdata`.
+- `user`: TDengine user name. The default is `root`.
+- `password`: TDengine user password. The default is `taosdata`.
- `port`: The port on which the taosAdapter REST service listens. Default is 6041.
- `timeout`: HTTP request timeout in seconds. The default is `socket._GLOBAL_DEFAULT_TIMEOUT`. Usually, no configuration is needed.
diff --git a/docs-en/14-reference/03-connector/rust.mdx b/docs-en/14-reference/03-connector/rust.mdx
index 04f523472ffbc14c1b991513f14b318388d6a520..2c8fe68c1ca8b091b8d685d8e20942a02ab2c5e8 100644
--- a/docs-en/14-reference/03-connector/rust.mdx
+++ b/docs-en/14-reference/03-connector/rust.mdx
@@ -8,7 +8,7 @@ title: TDengine Rust Connector
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-import Preparition from "./_preparition.mdx"
+import Preparation from "./_preparation.mdx"
import RustInsert from "../../07-develop/03-insert-data/_rust_sql.mdx"
import RustInfluxLine from "../../07-develop/03-insert-data/_rust_line.mdx"
import RustOpenTSDBTelnet from "../../07-develop/03-insert-data/_rust_opts_telnet.mdx"
diff --git a/docs-en/14-reference/08-taos-shell.md b/docs-en/14-reference/08-taos-shell.md
index b3a88f3d5a6124b830847375412faf896cefd12c..fe5e5f2bc29509a4b96646253732076c7a6ee7ea 100644
--- a/docs-en/14-reference/08-taos-shell.md
+++ b/docs-en/14-reference/08-taos-shell.md
@@ -4,11 +4,11 @@ sidebar_label: TDengine CLI
description: Instructions and tips for using the TDengine CLI
---
-The TDengine command-line application (hereafter referred to as `TDengine CLI`) is the most feasility way for users to manipulate and interact with TDengine instances.
+The TDengine command-line application (hereafter referred to as `TDengine CLI`) is the simplest way for users to manipulate and interact with TDengine instances.
## Installation
-If executed on the TDengine server-side, there is no need for additional installation steps to install TDengine CLI as it is already included and installed automatically. To run TDengine CLI on the environemtn which no TDengine server running, the TDengine client installation package needs to be installed first. For details, please refer to [connector](/reference/connector/).
+If executed on the TDengine server-side, there is no need for additional installation steps to install TDengine CLI as it is already included and installed automatically. To run TDengine CLI in an environment where no TDengine server is running, the TDengine client installation package needs to be installed first. For details, please refer to [connector](/reference/connector/).
## Execution
@@ -62,13 +62,13 @@ And many more parameters.
- -f, --file=FILE: Execute the SQL script file in non-interactive mode
- -k, --check=CHECK: Specify the table to be checked
- -l, --pktlen=PKTLEN: Test package size to be used for network testing
-- -n, --netrole=NETROLE: test scope for network connection test, default is `startup`, The value can be `client`, `server`, `rpc`, `startup`, `sync`, `speed`, or `fqdn`.
+- -n, --netrole=NETROLE: test scope for network connection test, default is `startup`. The value can be `client`, `server`, `rpc`, `startup`, `sync`, `speed`, or `fqdn`.
- -r, --raw-time: output the timestamp format as unsigned 64-bits integer (uint64_t in C language)
- -s, --commands=COMMAND: execute SQL commands in non-interactive mode
-- -S, --pkttype=PKTTYPE: Specify the packet type used for network testing. The default is TCP. can be specified as either TCP or UDP when `speed` is specified to netrole parameter
+- -S, --pkttype=PKTTYPE: Specify the packet type used for network testing. The default is TCP; it can be specified as either TCP or UDP when `speed` is specified for the `netrole` parameter
- -T, --thread=THREADNUM: The number of threads to import data in multi-threaded mode
- -s, --commands: Run TDengine CLI commands without entering the terminal
-- -z, --timezone=TIMEZONE: Specify time zone. Default is the value of current configruation file
+- -z, --timezone=TIMEZONE: Specify time zone. Default is the value in the current configuration file
- -V, --version: Print out the current version number
Example.
diff --git a/docs-en/14-reference/12-config/index.md b/docs-en/14-reference/12-config/index.md
index 6248e14d27d675c61a74a83b079daff84b4eaa83..c4e7cc523c400ea5be6610b64f1561246b1bfa24 100644
--- a/docs-en/14-reference/12-config/index.md
+++ b/docs-en/14-reference/12-config/index.md
@@ -90,8 +90,8 @@ TDengine uses continuous 13 ports, both TCP and TCP, from the port specified by
| TCP | 6041 | REST connection between client and server | Prior to 2.4.0.0: serverPort+11; After 2.4.0.0 refer to [taosAdapter](/reference/taosadapter/) |
| TCP | 6042 | Service Port of Arbitrator | The parameter of Arbitrator |
| TCP | 6043 | Service Port of TaosKeeper | The parameter of TaosKeeper |
-| TCP | 6044 | Data access port for StatsD | efer to [taosAdapter](/reference/taosadapter/) |
-| UDP | 6045 | Data access for statsd | efer to [taosAdapter](/reference/taosadapter/) |
+| TCP | 6044 | Data access port for StatsD | Refer to [taosAdapter](/reference/taosadapter/) |
+| UDP | 6045 | Data access for StatsD | Refer to [taosAdapter](/reference/taosadapter/) |
| TCP | 6060 | Port of Monitoring Service in Enterprise version | |
| UDP | 6030-6034 | Communication between client and server | serverPort |
| UDP | 6035-6039 | Communication among server nodes in cluster | serverPort |
@@ -120,7 +120,7 @@ TDengine uses continuous 13 ports, both TCP and TCP, from the port specified by
| Attribute | Description |
| ------------- | ------------------------------------------------------------------- |
| Applicable | Server and Client |
-| Meaning | TCP is used forcely |
+| Meaning | TCP is used forcibly |
| Value Range | 0: disabled 1: enabled |
| Default Value | 0 |
| Note | It's suggested to configure to enable if network is not good enough |
@@ -197,7 +197,7 @@ TDengine uses continuous 13 ports, both TCP and TCP, from the port specified by
| Default Value | TimeZone configured in the host |
:::info
-To handle the data insertion and data query from multiple timezones, Unix Timestamp is used and stored TDengie. The timestamp generated from any timezones at same time is same in Unix timestamp. To make sure the time on client side can be converted to Unix timestamp correctly, the timezone must be set properly.
+To handle data insertion and data query from multiple time zones, Unix Timestamp is used and stored in TDengine. The timestamps generated in any time zone at the same moment are the same when expressed as Unix Timestamps. To make sure the time on the client side can be converted to Unix Timestamp correctly, the timezone must be set properly.
On Linux system, TDengine clients automatically obtain timezone from the host. Alternatively, the timezone can be configured explicitly in configuration file `taos.cfg` like below.
@@ -240,7 +240,7 @@ To avoid the problems of using time strings, Unix timestamp can be used directly
| Default Value | Locale configured in host |
:::info
-A specific type "nchar" is provied in TDengine to store non-ASCII characters such as Chinese, Japanese, Korean. The characters to be stored in nchar type are firstly encoded in UCS4-LE before sending to server side. To store non-ASCII characters correctly, the encoding format of the client side needs to be set properly.
+A specific type "nchar" is provided in TDengine to store non-ASCII characters such as Chinese, Japanese, Korean. The characters to be stored in nchar type are firstly encoded in UCS4-LE before sending to server side. To store non-ASCII characters correctly, the encoding format of the client side needs to be set properly.
The characters input on the client side are encoded using the default system encoding, which is UTF-8 on Linux, or GB18030 or GBK on some systems in Chinese, POSIX in docker, CP936 on Windows in Chinese. The encoding of the operating system in use must be set correctly so that the characters in nchar type can be converted to UCS4-LE.
@@ -700,7 +700,7 @@ charset CP936
| Default Value | 0.0000000000000001 |
| Note | The fractional part lower than this value will be discarded |
-## Continuous Query Prameters
+## Continuous Query Parameters
### stream
diff --git a/docs-en/20-third-party/11-kafka.md b/docs-en/20-third-party/11-kafka.md
index 83d086308d0a593cc90e2b0a0c0945a52ce259ea..b9c7a3814a75a066b498438b6e632690697ae7ca 100644
--- a/docs-en/20-third-party/11-kafka.md
+++ b/docs-en/20-third-party/11-kafka.md
@@ -279,7 +279,7 @@ INSERT INTO d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:3
Use TDengine CLI to execute SQL script
```
-taos -f prepare-sorce-data.sql
+taos -f prepare-source-data.sql
```
### Create Connector instance
diff --git a/docs-en/27-train-faq/01-faq.md b/docs-en/27-train-faq/01-faq.md
index 63ca954d117fb1493992a08990de05677befab97..439775170937ef11fc964914232b2739d688b26f 100644
--- a/docs-en/27-train-faq/01-faq.md
+++ b/docs-en/27-train-faq/01-faq.md
@@ -80,7 +80,7 @@ From version 2.1.7.0, at most 4096 columns can be defined for a table.
Inserting data in batch is a good practice. Single SQL statement can insert data for one or multiple tables in batch.
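+
+A hedged sketch of such a batch insert through the C client is shown below; the sub table names and values reuse the `meters` example from these docs and are illustrative only.
+
+```c
+#include <stdio.h>
+#include <taos.h>
+
+// Insert several rows into two sub tables of `meters` with a single SQL statement.
+// Assumes `taos` is an open connection with the right database selected and that
+// the sub tables d1001/d1002 already exist.
+void batch_insert(TAOS *taos) {
+  const char *sql =
+      "INSERT INTO d1001 VALUES ('2018-10-03 14:38:05.000', 10.3, 219, 0.31) "
+      "('2018-10-03 14:38:15.000', 12.6, 218, 0.33) "
+      "d1002 VALUES ('2018-10-03 14:38:16.650', 10.3, 218, 0.25)";
+  TAOS_RES *res = taos_query(taos, sql);
+  if (taos_errno(res) != 0) {
+    printf("failed to execute taos_query. error: %s\n", taos_errstr(res));
+  } else {
+    printf("affected rows: %d\n", taos_affected_rows(res));
+  }
+  taos_free_result(res);
+}
+```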
-### 9. JDBC Error: the excuted SQL is not a DML or a DDL?
+### 9. JDBC Error: the executed SQL is not a DML or a DDL?
Please upgrade to latest JDBC driver, for details please refer to [Java Connector](/reference/connector/java)
@@ -104,7 +104,7 @@ ALTER LOCAL flag_name flag_value;
-### 13. Hhat to do if go compilation fails?
+### 13. What to do if go compilation fails?
From version 2.3.0.0, a new component named `taosAdapter` is introduced. Its' developed in Go. If you want to compile from source code and meet go compilation problems, try to do below steps to resolve Go environment problems.
diff --git a/docs-en/27-train-faq/03-docker.md b/docs-en/27-train-faq/03-docker.md
index 0bcc39f903c635aed7fe8c850d8b706f6ba92293..ba435a9307c1d6595579a295df83030c58ba0f22 100644
--- a/docs-en/27-train-faq/03-docker.md
+++ b/docs-en/27-train-faq/03-docker.md
@@ -72,7 +72,7 @@ $ docker exec -it tdengine /bin/bash
root@tdengine-server:~/TDengine-server-2.4.0.4#
```
-- **docker exec**: Attach to the continaer
+- **docker exec**: Attach to the container
- **-i**: Interactive mode
- **-t**: Use terminal
- **tdengine**: Container name, up to the output of `docker ps`
@@ -156,7 +156,7 @@ Below is an example output:
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
```
-### Use taosBenchmark on host to access TDenginer server in container
+### Use taosBenchmark on host to access TDengine server in container
1. Run `taosBenchmark`, named as `taosdemo` previously, on the host:
diff --git a/docs-examples/c/async_query_example.c b/docs-examples/c/async_query_example.c
index 77002891bb4c03f7c7e32b329678e8a124f12a99..262757f02b5c52f2d4402d363663db80bb38a54d 100644
--- a/docs-examples/c/async_query_example.c
+++ b/docs-examples/c/async_query_example.c
@@ -155,7 +155,7 @@ void *select_callback(void *param, TAOS_RES *res, int code) {
printHeader(res);
taos_fetch_rows_a(res, fetch_row_callback, _taos);
} else {
- printf("failed to exeuce taos_query. error: %s\n", taos_errstr(res));
+ printf("failed to execute taos_query. error: %s\n", taos_errstr(res));
taos_free_result(res);
taos_close(_taos);
taos_cleanup();
diff --git a/docs-examples/c/connect_example.c b/docs-examples/c/connect_example.c
index ff0891e08267840fd5141d1b4271109d832c1c51..1a23df4806d7ff986898734e1971f6e0cd7c5360 100644
--- a/docs-examples/c/connect_example.c
+++ b/docs-examples/c/connect_example.c
@@ -13,9 +13,9 @@ int main() {
uint16_t port = 0; // 0 means use the default port
TAOS *taos = taos_connect(host, user, passwd, db, port);
if (taos == NULL) {
- int errono = taos_errno(NULL);
+ int errCode = taos_errno(NULL);
char *msg = taos_errstr(NULL);
- printf("%d, %s\n", errono, msg);
+ printf("%d, %s\n", errCode, msg);
} else {
printf("connected\n");
taos_close(taos);
diff --git a/docs-examples/c/error_handle_example.c b/docs-examples/c/error_handle_example.c
index 36bb7f12f77a46230add5af82b68e6fb86ddfe77..e7dedb263df250f6634aa15fab2729cbaf4e5972 100644
--- a/docs-examples/c/error_handle_example.c
+++ b/docs-examples/c/error_handle_example.c
@@ -13,9 +13,9 @@ int main() {
uint16_t port = 0; // 0 means use the default port
TAOS *taos = taos_connect(host, user, passwd, db, port);
if (taos == NULL) {
- int errono = taos_errno(NULL);
+ int errCode = taos_errno(NULL);
char *msg = taos_errstr(NULL);
- printf("%d, %s\n", errono, msg);
+ printf("%d, %s\n", errCode, msg);
} else {
printf("connected\n");
taos_close(taos);
diff --git a/docs-examples/c/query_example.c b/docs-examples/c/query_example.c
index 4314ac4fe2f5b5251af2462bf0b20ebeed7cac5e..f88b2467ceb3d9bbeaf6b3beb6a24befd3e398c6 100644
--- a/docs-examples/c/query_example.c
+++ b/docs-examples/c/query_example.c
@@ -128,7 +128,7 @@ int main() {
}
TAOS_RES *res = taos_query(taos, "SELECT * FROM meters LIMIT 2");
if (taos_errno(res) != 0) {
- printf("failed to exeuce taos_query. error: %s\n", taos_errstr(res));
+ printf("failed to execute taos_query. error: %s\n", taos_errstr(res));
exit(EXIT_FAILURE);
}
printResult(res);
diff --git a/docs-examples/c/subscribe_demo.c b/docs-examples/c/subscribe_demo.c
index b523b4667e08ae8a02f4a470c939091f216d1dcb..2fe62c24eb92d2f57c24b40fc16f47d62ea5e378 100644
--- a/docs-examples/c/subscribe_demo.c
+++ b/docs-examples/c/subscribe_demo.c
@@ -46,7 +46,7 @@ int main() {
exit(EXIT_FAILURE);
}
- int restart = 1; // if the topic already exists, where to subscribe from the begine.
+ int restart = 1; // if the topic already exists, whether to subscribe from the beginning.
const char* topic = "topic-meter-current-bg-10";
const char* sql = "select * from power.meters where current > 10";
void* param = NULL; // additional parameter.
@@ -58,7 +58,7 @@ int main() {
getchar(); // press Enter to stop
printf("total rows consumed: %d\n", nTotalRows);
- int keep = 0; // weather to keep subscribe process
+ int keep = 0; // whether to keep the subscription progress
taos_unsubscribe(tsub, keep);
taos_close(taos);
diff --git a/docs-examples/python/connect_exmaple.py b/docs-examples/python/connect_example.py
similarity index 100%
rename from docs-examples/python/connect_exmaple.py
rename to docs-examples/python/connect_example.py
diff --git a/tests/docs-examples-test/test_python.sh b/tests/docs-examples-test/test_python.sh
index 22297ad92fc4c2efd821aaa197936ec08a89ef31..2b96311b29736951e71851af49f84f074428be72 100755
--- a/tests/docs-examples-test/test_python.sh
+++ b/tests/docs-examples-test/test_python.sh
@@ -9,7 +9,7 @@ cd ../../docs-examples/python
# 1
taos -s "create database if not exists log"
-python3 connect_exmaple.py
+python3 connect_example.py
# 2
taos -s "drop database if exists power"