@@ -284,7 +284,7 @@ SELECT COUNT(*) FROM d1001 WHERE ts >= '2017-7-14 00:00:00' AND ts < '2017-7-14
TDengine 对每个数据采集点单独建表,但在实际应用中经常需要对不同的采集点数据进行聚合。为高效地进行聚合操作,TDengine 引入超级表(STable)的概念。超级表用来代表一特定类型的数据采集点,它是包含多张表的表集合,集合里每张表的模式(schema)完全一致,但每张表都带有自己的静态标签,标签可以有多个,可以随时增加、删除和修改。应用可通过指定标签的过滤条件,对一个 STable 下的全部或部分表进行聚合或统计操作,这样可大大简化应用的开发。其具体流程如下图所示:
-![多表聚合查询原理图](./multi_tables.webp)
+![TDengine Database 多表聚合查询原理图](./multi_tables.webp)
图 5 多表聚合查询原理图
diff --git a/docs-cn/25-application/01-telegraf.md b/docs-cn/25-application/01-telegraf.md
index 5bfc94c53410f6142b3bc24f696334c334cde933..95df8699ef85b02d6e9dba398c787644fc9089b2 100644
--- a/docs-cn/25-application/01-telegraf.md
+++ b/docs-cn/25-application/01-telegraf.md
@@ -16,7 +16,7 @@ IT 运维监测数据通常都是对时间特性比较敏感的数据,例如
本文介绍不需要写一行代码,通过简单修改几行配置文件,就可以快速搭建一个基于 TDengine + Telegraf + Grafana 的 IT 运维系统。架构如下图:
-![IT-DevOps-Solutions-Telegraf.webp](./IT-DevOps-Solutions-Telegraf.webp)
+![TDengine Database IT-DevOps-Solutions-Telegraf](./IT-DevOps-Solutions-Telegraf.webp)
## 安装步骤
@@ -75,7 +75,7 @@ sudo systemctl start telegraf
点击左侧齿轮图标并选择 `Plugins`,应该可以找到 TDengine data source 插件图标。
点击左侧加号图标并选择 `Import`,从 `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json` 下载 dashboard JSON 文件后导入。之后可以看到如下界面的仪表盘:
-![IT-DevOps-Solutions-telegraf-dashboard.webp]./IT-DevOps-Solutions-telegraf-dashboard.webp)
+![TDengine Database IT-DevOps-Solutions-telegraf-dashboard](./IT-DevOps-Solutions-telegraf-dashboard.webp)
## 总结
diff --git a/docs-cn/25-application/02-collectd.md b/docs-cn/25-application/02-collectd.md
index 5966f2d6544c78adb806d51e8a4157ba7dc420e9..78c61bb969092d7040ddcb3d02ce7bd29a784858 100644
--- a/docs-cn/25-application/02-collectd.md
+++ b/docs-cn/25-application/02-collectd.md
@@ -16,7 +16,7 @@ IT 运维监测数据通常都是对时间特性比较敏感的数据,例如
本文介绍不需要写一行代码,通过简单修改几行配置文件,就可以快速搭建一个基于 TDengine + collectd / statsD + Grafana 的 IT 运维系统。架构如下图:
-![IT-DevOps-Solutions-Collectd-StatsD.webp](./IT-DevOps-Solutions-Collectd-StatsD.webp)
+![TDengine Database IT-DevOps-Solutions-Collectd-StatsD](./IT-DevOps-Solutions-Collectd-StatsD.webp)
## 安装步骤
@@ -81,12 +81,12 @@ repeater 部分添加 { host:'', port:
diff --git a/docs-en/07-develop/06-subscribe.mdx b/docs-en/07-develop/06-subscribe.mdx
index 66c8f5129018bee2d9da4a343006d7239cfea856..474841ff8932216d327f39a4f0cb39ba26e6615b 100644
--- a/docs-en/07-develop/06-subscribe.mdx
+++ b/docs-en/07-develop/06-subscribe.mdx
@@ -108,7 +108,7 @@ if (async) {
}
```
-In the above sample code in the else condition, there is an infinite loop. Each time carriage return is entered `taos_consume` is invoked. The return value of `taos_consume` is the selected result set. In the above sample, `print_result` is used to simplify the printing of the result set. Below is the implementation of `print_result`.
+In the else branch of the sample code above, there is an infinite loop. Each time a carriage return is entered, `taos_consume` is invoked. The return value of `taos_consume` is the selected result set. In the above sample, `print_result` is used to simplify the printing of the result set. It is similar to `taos_use_result`. Below is the implementation of `print_result`.
```c
void print_result(TAOS_RES* res, int blockFetch) {
```
diff --git a/docs-en/07-develop/07-cache.md b/docs-en/07-develop/07-cache.md
index 3d42e22eb3eb0369140e2782de5a01b60156423a..743452faff6a2be8466318a7dab61a44e33c3664 100644
--- a/docs-en/07-develop/07-cache.md
+++ b/docs-en/07-develop/07-cache.md
@@ -4,15 +4,15 @@ title: Cache
description: "The latest row of each table is kept in cache to provide high performance query of latest state."
---
-The cache management policy in TDengine is First-In-First-Out (FIFO), which is also known as insert driven cache management policy and different from read driven cache management, i.e. Least-Recent-Used (LRU). It simply stores the latest data in cache and flushes the oldest data in cache to disk when the cache usage reaches a threshold. In IoT use cases, the most cared about data is the latest data, i.e. current state. The cache policy in TDengine is based the nature of IoT data.
+The cache management policy in TDengine is First-In-First-Out (FIFO). FIFO is also known as insert driven cache management policy and it is different from read driven cache management, which is more commonly known as Least-Recently-Used (LRU). FIFO simply stores the latest data in cache and flushes the oldest data in cache to disk, when the cache usage reaches a threshold. In IoT use cases, it is the current state i.e. the latest or most recent data that is important. The cache policy in TDengine, like much of the design and architecture of TDengine, is based on the nature of IoT data.
-Caching the latest data provides the capability of retrieving data in milliseconds. With this capability, TDengine can be configured properly to be used as caching system without deploying another separate caching system to simplify the system architecture and minimize the operation cost. The cache will be emptied after TDengine is restarted, TDengine doesn't reload data from disk into cache like a real key-value caching system.
+Caching the latest data provides the capability of retrieving data in milliseconds. With this capability, TDengine can be configured properly to be used as a caching system without deploying another separate caching system. This simplifies the system architecture and minimizes operational costs. The cache is emptied after TDengine is restarted. TDengine does not reload data from disk into cache, like a key-value caching system.
-The memory space used by TDengine cache is fixed in size, according to the configuration based on application requirement and system resources. Independent memory pool is allocated for and managed by each vnode (virtual node) in TDengine, there is no sharing of memory pools between vnodes. All the tables belonging to a vnode share all the cache memory of the vnode.
+The memory space used by the TDengine cache is fixed in size and configurable. It should be allocated based on application requirements and system resources. An independent memory pool is allocated for and managed by each vnode (virtual node) in TDengine. There is no sharing of memory pools between vnodes. All the tables belonging to a vnode share all the cache memory of the vnode.
-Memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache`, the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. A cache block needs to ensure that each table can store at least dozens of records to be efficient.
+The memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache` and the number of blocks for each vnode is determined by the parameter `blocks`. For each vnode, the total cache size is `cache * blocks`. A cache block needs to ensure that each table can store at least dozens of records, to be efficient.
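The sizing rule and the insert-driven (FIFO) eviction described above can be sketched as follows. This is a conceptual illustration in Python, not TDengine internals; the block size and block count used below are example values, not necessarily the actual defaults.

```python
from collections import deque

def total_vnode_cache(cache_block_size: int, blocks: int) -> int:
    # Per the text above: for each vnode, total cache size = cache * blocks.
    return cache_block_size * blocks

class FifoCache:
    """Toy insert-driven (FIFO) cache: newest rows stay, oldest rows are flushed."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.rows = deque()

    def insert(self, row):
        flushed = None
        if len(self.rows) == self.capacity:
            # Cache full: flush the oldest row "to disk" (FIFO, insert driven),
            # unlike LRU, which would evict the least recently *read* row.
            flushed = self.rows.popleft()
        self.rows.append(row)
        return flushed

# Example: a vnode configured with 16 MB blocks and 6 blocks caches 96 MB in total.
assert total_vnode_cache(16, 6) == 96

cache = FifoCache(capacity=3)
for row in ["r1", "r2", "r3", "r4"]:
    evicted = cache.insert(row)
print(list(cache.rows))  # the three most recent rows remain
print(evicted)           # "r1", the oldest row, was flushed
```

The key point of the sketch is that eviction order depends only on insertion order, which matches the IoT access pattern where the latest data matters most.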
-`last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on monitoring screen. For example the below SQL statement retrieves the latest voltage of all meters in San Francisco of California.
+The `last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on a monitoring screen. For example, the SQL statement below retrieves the latest voltage of all meters in San Francisco, California.
```sql
select last_row(voltage) from meters where location='California.SanFrancisco';
```
diff --git a/docs-en/12-taos-sql/08-interval.md b/docs-en/12-taos-sql/08-interval.md
index bf0904458ce5601fa0b9f611f3fcba6106dc5084..1b5265b44b6b63f8f5472e1e8760d1f45401fc21 100644
--- a/docs-en/12-taos-sql/08-interval.md
+++ b/docs-en/12-taos-sql/08-interval.md
@@ -10,7 +10,7 @@ Window related clauses are used to divide the data set to be queried into subset
`INTERVAL` clause is used to generate time windows of the same time interval, and `SLIDING` is used to specify the time step by which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step by which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time window.
-![Time Window](./timewindow-1.webp)
+![TDengine Database Time Window](./timewindow-1.webp)
`INTERVAL` and `SLIDING` should be used with aggregate functions and select functions. Below SQL statement is illegal because no aggregate or selection function is used with `INTERVAL`.
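The relationship between `INTERVAL` and `SLIDING` can be sketched by enumerating window boundaries. This is a conceptual Python illustration with arbitrary integer time units, not the TDengine query engine.

```python
def time_windows(start, end, interval, sliding):
    """Enumerate the [ws, we) time windows covering [start, end).

    When sliding == interval, consecutive windows do not overlap: the
    sliding window degenerates into a flip (tumbling) window.
    """
    windows = []
    ws = start
    while ws < end:
        windows.append((ws, ws + interval))
        ws += sliding  # the step by which the window moves forward
    return windows

# Flip window: SLIDING == INTERVAL, windows tile the range without overlap.
print(time_windows(0, 30, interval=10, sliding=10))  # [(0, 10), (10, 20), (20, 30)]
# Sliding window: SLIDING < INTERVAL, adjacent windows overlap.
print(time_windows(0, 30, interval=10, sliding=5))
```

An aggregate such as `COUNT(*)` would then be evaluated once per window in the list.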
@@ -30,7 +30,7 @@ When the time length specified by `SLIDING` is the same as that specified by `IN
When an integer, bool, or string is used to represent the status of a device at a moment, continuous rows with the same status belong to the same status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status: [2019-04-28 14:22:07, 2019-04-28 14:22:10] and [2019-04-28 14:22:11, 2019-04-28 14:22:12]. The status window is not applicable to STable for now.
-![Status Window](./timewindow-3.webp)
+![TDengine Database Status Window](./timewindow-3.webp)
`STATE_WINDOW` is used to specify the column based on which to define status window, for example:
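The grouping rule described above can be sketched as follows. This is a conceptual Python illustration only; `STATE_WINDOW` itself is evaluated inside TDengine.

```python
def status_windows(rows):
    """Group (ts, status) rows into status windows.

    A window stays open while consecutive rows carry the same status and
    closes as soon as the status changes.
    """
    windows = []
    for ts, status in rows:
        if windows and windows[-1][2] == status:
            start, _, _ = windows[-1]
            windows[-1] = (start, ts, status)  # same status: extend the open window
        else:
            windows.append((ts, ts, status))   # status changed: open a new window
    return windows

# Seconds within 2019-04-28 14:22, matching the figure: the status flips at :11.
rows = [(7, 1), (8, 1), (9, 1), (10, 1), (11, 2), (12, 2)]
print(status_windows(rows))  # [(7, 10, 1), (11, 12, 2)]
```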
@@ -46,7 +46,7 @@ SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
The primary key, i.e. timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different session windows. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
-![Session Window](./timewindow-2.webp)
+![TDengine Database Session Window](./timewindow-2.webp)
If the time interval between two continuous rows is within the time interval specified by `tol_val`, they belong to the same session window; otherwise a new session window is started automatically. The session window is not supported on STable for now.
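The session grouping rule above can be sketched as follows. This is a conceptual Python illustration with integer-second timestamps; treating a gap exactly equal to `tol_val` as "within" the session is an assumption made here for the sketch.

```python
def session_windows(timestamps, tol_val):
    """Group sorted timestamps into session windows.

    Two adjacent rows belong to the same session window when the gap
    between them is within tol_val; a larger gap starts a new window.
    """
    windows = []
    for ts in sorted(timestamps):
        if windows and ts - windows[-1][1] <= tol_val:
            windows[-1] = (windows[-1][0], ts)  # still within the session
        else:
            windows.append((ts, ts))            # gap too large: new session
    return windows

# Seconds since 2019-04-28 14:22:10, matching the figure's 6 rows:
# the 40-second gap between :22:30 and :23:10 exceeds tol_val = 12,
# so the rows fall into 2 session windows.
rows = [0, 10, 20, 60, 70, 80]
print(session_windows(rows, tol_val=12))  # [(0, 20), (60, 80)]
```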
diff --git a/docs-en/14-reference/03-connector/03-connector.mdx b/docs-en/14-reference/03-connector/03-connector.mdx
index 38eba73d0983951901a26eee3962e89007f6d30a..44685579005c2cebd5e0194a10d457cd1199051e 100644
--- a/docs-en/14-reference/03-connector/03-connector.mdx
+++ b/docs-en/14-reference/03-connector/03-connector.mdx
@@ -4,7 +4,7 @@ title: Connector
TDengine provides a rich set of APIs (application development interface). To facilitate users to develop their applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both native interfaces (taosc) and REST interfaces (not supported in a few languages yet). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
-![image-connector](./connector.webp)
+![TDengine Database image-connector](./connector.webp)
## Supported platforms
diff --git a/docs-en/14-reference/03-connector/java.mdx b/docs-en/14-reference/03-connector/java.mdx
index 530798af1143d2e611369579a945de295d248ab0..1c84c0b1cacb454ca4e35266a1d362a2d2a038fb 100644
--- a/docs-en/14-reference/03-connector/java.mdx
+++ b/docs-en/14-reference/03-connector/java.mdx
@@ -11,7 +11,7 @@ import TabItem from '@theme/TabItem';
'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions including data writing, querying, subscription, schemaless writing, and bind interface. The other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). REST connections differ slightly from native connections in the set of features implemented.
-![tdengine-connector](tdengine-jdbc-connector.webp)
+![TDengine Database tdengine-connector](tdengine-jdbc-connector.webp)
The preceding diagram shows two ways for a Java app to access TDengine via connector:
diff --git a/docs-en/14-reference/04-taosadapter.md b/docs-en/14-reference/04-taosadapter.md
index de42e8a883d8b195b9d342f761e39458e557dfac..4478ced10e4b47c69ecd2a7e6a935599eb03660c 100644
--- a/docs-en/14-reference/04-taosadapter.md
+++ b/docs-en/14-reference/04-taosadapter.md
@@ -24,7 +24,7 @@ taosAdapter provides the following features.
## taosAdapter architecture diagram
-![taosAdapter Architecture](taosAdapter-architecture.webp)
+![TDengine Database taosAdapter Architecture](taosAdapter-architecture.webp)
## taosAdapter Deployment Method
diff --git a/docs-en/14-reference/07-tdinsight/index.md b/docs-en/14-reference/07-tdinsight/index.md
index dc337bf9fff2a9b60ea2f1c5110185a8ac683098..e945d581c93b2ad1d7f0c32639eb3ba524e35161 100644
--- a/docs-en/14-reference/07-tdinsight/index.md
+++ b/docs-en/14-reference/07-tdinsight/index.md
@@ -233,33 +233,33 @@ The default username/password is `admin`. Grafana will require a password change
Point to the **Configurations** -> **Data Sources** menu, and click the **Add data source** button.
-![Add data source button](./assets/howto-add-datasource-button.webp)
+![TDengine Database TDinsight Add data source button](./assets/howto-add-datasource-button.webp)
Search for and select **TDengine**.
-![Add datasource](./assets/howto-add-datasource-tdengine.webp)
+![TDengine Database TDinsight Add datasource](./assets/howto-add-datasource-tdengine.webp)
Configure the TDengine datasource.
-![Datasource Configuration](./assets/howto-add-datasource.webp)
+![TDengine Database TDinsight Datasource Configuration](./assets/howto-add-datasource.webp)
Save and test. It will report 'TDengine Data source is working' under normal circumstances.
-![datasource test](./assets/howto-add-datasource-test.webp)
+![TDengine Database TDinsight datasource test](./assets/howto-add-datasource-test.webp)
### Importing dashboards
Point to **+** / **Create** - **import** (or `/dashboard/import` url).
-![Import Dashboard and Configuration](./assets/import_dashboard.webp)
+![TDengine Database TDinsight Import Dashboard and Configuration](./assets/import_dashboard.webp)
Type the dashboard ID `15167` in the **Import via grafana.com** location and **Load**.
-![Import via grafana.com](./assets/import-dashboard-15167.webp)
+![TDengine Database TDinsight Import via grafana.com](./assets/import-dashboard-15167.webp)
Once the import is complete, the full page view of TDinsight is shown below.
-![show](./assets/TDinsight-full.webp)
+![TDengine Database TDinsight show](./assets/TDinsight-full.webp)
## TDinsight dashboard details
@@ -269,7 +269,7 @@ Details of the metrics are as follows.
### Cluster Status
-![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.webp)
+![TDengine Database TDinsight cluster status](./assets/TDinsight-1-cluster-status.webp)
This section contains the current information and status of the cluster, the alert information is also here (from left to right, top to bottom).
@@ -289,7 +289,7 @@ This section contains the current information and status of the cluster, the ale
### DNodes Status
-![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.webp)
+![TDengine Database TDinsight dnodes overview](./assets/TDinsight-2-dnodes.webp)
- **DNodes Status**: simple table view of `show dnodes`.
- **DNodes Lifetime**: the time elapsed since the dnode was created.
@@ -298,14 +298,14 @@ This section contains the current information and status of the cluster, the ale
### MNode Overview
-![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.webp)
+![TDengine Database TDinsight mnodes overview](./assets/TDinsight-3-mnodes.webp)
1. **MNodes Status**: a simple table view of `show mnodes`.
2. **MNodes Number**: similar to `DNodes Number`, the number of MNodes changes.
### Request
-![tdinsight-requests](./assets/TDinsight-4-requests.webp)
+![TDengine Database TDinsight tdinsight requests](./assets/TDinsight-4-requests.webp)
1. **Requests Rate(Inserts per Second)**: average number of inserts per second.
2. **Requests (Selects)**: number of query requests and their rate of change (count per second).
@@ -313,7 +313,7 @@ This section contains the current information and status of the cluster, the ale
### Database
-![tdinsight-database](./assets/TDinsight-5-database.webp)
+![TDengine Database TDinsight database](./assets/TDinsight-5-database.webp)
Database usage, repeated for each value of the variable `$database` i.e. multiple rows per database.
@@ -325,7 +325,7 @@ Database usage, repeated for each value of the variable `$database` i.e. multipl
### DNode Resource Usage
-![dnode-usage](./assets/TDinsight-6-dnode-usage.webp)
+![TDengine Database TDinsight dnode usage](./assets/TDinsight-6-dnode-usage.webp)
Data node resource usage display, with multiple rows repeated for the variable `$fqdn`, i.e. one row per data node. It includes:
@@ -346,13 +346,13 @@ Data node resource usage display with repeated multiple rows for the variable `$
### Login History
-![Login History](./assets/TDinsight-7-login-history.webp)
+![TDengine Database TDinsight Login History](./assets/TDinsight-7-login-history.webp)
Currently, only the number of logins per minute is reported.
### Monitoring taosAdapter
-![taosadapter](./assets/TDinsight-8-taosadapter.webp)
+![TDengine Database TDinsight monitor taosadapter](./assets/TDinsight-8-taosadapter.webp)
Supports monitoring of taosAdapter request statistics and status details. It includes:
diff --git a/docs-en/20-third-party/01-grafana.mdx b/docs-en/20-third-party/01-grafana.mdx
index 7239710e0aebdd95977d9b73a5a1a9fccd656542..ce45a12a04be3b2d07c1efd9248772b875ff0e41 100644
--- a/docs-en/20-third-party/01-grafana.mdx
+++ b/docs-en/20-third-party/01-grafana.mdx
@@ -62,15 +62,15 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource
Users can log in to the Grafana server (username/password: admin/admin) directly through the URL `http://localhost:3000` and add a datasource through `Configuration -> Data Sources` on the left side, as shown in the following figure.
-![img](./grafana/add_datasource1.webp)
+![TDengine Database TDinsight plugin add datasource 1](./grafana/add_datasource1.webp)
Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it, as shown in the following figure.
-![img](./grafana/add_datasource2.webp)
+![TDengine Database TDinsight plugin add datasource 2](./grafana/add_datasource2.webp)
Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration.
-![img](./grafana/add_datasource3.webp)
+![TDengine Database TDinsight plugin add datasource 3](./grafana/add_datasource3.webp)
- Host: IP address of the server where the components of the TDengine cluster provide REST service (offered by taosd before 2.4 and by taosAdapter since 2.4) and the port number of the TDengine REST service (6041), by default use `http://localhost:6041`.
- User: TDengine user name.
@@ -78,13 +78,13 @@ Enter the datasource configuration page, and follow the default prompts to modif
Click `Save & Test` to test. A successful result is shown below.
-![img](./grafana/add_datasource4.webp)
+![TDengine Database TDinsight plugin add datasource 4](./grafana/add_datasource4.webp)
### Create Dashboard
Go back to the main interface to create the Dashboard, click Add Query to enter the panel query page:
-![img](./grafana/create_dashboard1.webp)
+![TDengine Database TDinsight plugin create dashboard 1](./grafana/create_dashboard1.webp)
As shown above, select the `TDengine` data source in the `Query` and enter the corresponding SQL in the query box below for query.
@@ -94,7 +94,7 @@ As shown above, select the `TDengine` data source in the `Query` and enter the c
Follow the default prompt to query the average system memory usage for the specified interval on the server where the current TDengine deployment is located as follows.
-![img](./grafana/create_dashboard2.webp)
+![TDengine Database TDinsight plugin create dashboard 2](./grafana/create_dashboard2.webp)
> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).
diff --git a/docs-en/20-third-party/09-emq-broker.md b/docs-en/20-third-party/09-emq-broker.md
index 560c6463b59b00a362023d6cfa44cf833419a9ea..ae393bb085dbe84477ca577dfeb468b29e8bc40c 100644
--- a/docs-en/20-third-party/09-emq-broker.md
+++ b/docs-en/20-third-party/09-emq-broker.md
@@ -44,25 +44,25 @@ Since the configuration interface of EMQX differs from version to version, here
Use your browser to open the URL `http://IP:18083` and log in to EMQX Dashboard. The initial installation username is `admin` and the password is: `public`.
-![img](./emqx/login-dashboard.webp)
+![TDengine Database EMQX login dashboard](./emqx/login-dashboard.webp)
### Creating Rule
Select "Rule" in the "Rule Engine" on the left and click the "Create" button:
-![img](./emqx/rule-engine.webp)
+![TDengine Database EMQX rule engine](./emqx/rule-engine.webp)
### Edit SQL fields
-![img](./emqx/create-rule.webp)
+![TDengine Database EMQX create rule](./emqx/create-rule.webp)
### Add "action handler"
-![img](./emqx/add-action-handler.webp)
+![TDengine Database EMQX add action handler](./emqx/add-action-handler.webp)
### Add "Resource"
-![img](./emqx/create-resource.webp)
+![TDengine Database EMQX create resource](./emqx/create-resource.webp)
Select "Data to Web Service" and click the "New Resource" button.
@@ -70,13 +70,13 @@ Select "Data to Web Service" and click the "New Resource" button.
Select "Data to Web Service" and fill in the request URL as the address and port of the server running taosAdapter (default is 6041). Leave the other properties at their default values.
-![img](./emqx/edit-resource.webp)
+![TDengine Database EMQX edit resource](./emqx/edit-resource.webp)
### Edit "action"
Edit the resource configuration to add the key/value pairing for Authorization. Please refer to the [TDengine REST API documentation](https://docs.taosdata.com/reference/rest-api/) for details on authorization. Enter the rule engine replacement template in the message body.
-![img](./emqx/edit-action.webp)
+![TDengine Database EMQX edit action](./emqx/edit-action.webp)
## Compose program to mock data
@@ -163,7 +163,7 @@ Edit the resource configuration to add the key/value pairing for Authorization.
Note: `CLIENT_NUM` in the code can be set to a smaller value at the beginning of the test, in case the hardware is not capable of handling a larger number of concurrent clients.
-![img](./emqx/client-num.webp)
+![TDengine Database EMQX client num](./emqx/client-num.webp)
## Execute tests to simulate sending MQTT data
@@ -172,19 +172,19 @@ npm install mqtt mockjs --save ---registry=https://registry.npm.taobao.org
node mock.js
```
-![img](./emqx/run-mock.webp)
+![TDengine Database EMQX run mock](./emqx/run-mock.webp)
## Verify that EMQX is receiving data
Refresh the EMQX Dashboard rules engine interface to see how many records were received correctly:
-![img](./emqx/check-rule-matched.webp)
+![TDengine Database EMQX rule matched](./emqx/check-rule-matched.webp)
## Verify that data is written to TDengine
Use the TDengine CLI program to log in and query the appropriate databases and tables to verify that the data is being written to TDengine correctly:
-![img](./emqx/check-result-in-taos.webp)
+![TDengine Database EMQX result in taos](./emqx/check-result-in-taos.webp)
Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine.
Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
diff --git a/docs-en/20-third-party/11-kafka.md b/docs-en/20-third-party/11-kafka.md
index 2da9a86b7d3def338497c9c0f3481918b566aaed..155635c231d04634afdd2012177684227b003653 100644
--- a/docs-en/20-third-party/11-kafka.md
+++ b/docs-en/20-third-party/11-kafka.md
@@ -9,11 +9,11 @@ TDengine Kafka Connector contains two plugins: TDengine Source Connector and TDe
Kafka Connect is a component of Apache Kafka that enables other systems, such as databases, cloud services, file systems, etc., to connect to Kafka easily. Data can flow from other software to Kafka via Kafka Connect, and from Kafka to other systems via Kafka Connect. Plugins that read data from other software are called Source Connectors, and plugins that write data to other software are called Sink Connectors. Neither the Source Connector nor the Sink Connector connects directly to the Kafka Broker: the Source Connector transfers data to Kafka Connect, and the Sink Connector receives data from Kafka Connect.
-![](kafka/Kafka_Connect.webp)
+![TDengine Database Kafka Connector -- Kafka Connect](kafka/Kafka_Connect.webp)
TDengine Source Connector is used to read data from TDengine in real-time and send it to Kafka Connect. Users can use The TDengine Sink Connector to receive data from Kafka Connect and write it to TDengine.
-![](kafka/streaming-integration-with-kafka-connect.webp)
+![TDengine Database Kafka Connector -- streaming integration with kafka connect](kafka/streaming-integration-with-kafka-connect.webp)
## What is Confluent?
@@ -26,7 +26,7 @@ Confluent adds many extensions to Kafka. include:
5. GUI for managing and monitoring Kafka - Confluent Control Center
Some of these extensions are available in the community version of Confluent. Some are only available in the enterprise version.
-![](kafka/confluentPlatform.webp)
+![TDengine Database Kafka Connector -- Confluent platform](kafka/confluentPlatform.webp)
Confluent Enterprise Edition provides the `confluent` command-line tool to manage various components.
diff --git a/docs-en/21-tdinternal/01-arch.md b/docs-en/21-tdinternal/01-arch.md
index 2c430908e410c7ae8e6f09a3f7e2d059f906fda5..16d4b7afe26107e251a542ee24b644c1d372def0 100644
--- a/docs-en/21-tdinternal/01-arch.md
+++ b/docs-en/21-tdinternal/01-arch.md
@@ -11,7 +11,7 @@ The design of TDengine is based on the assumption that any hardware or software
The logical structure of the TDengine distributed architecture is shown in the following diagram:
-![TDengine architecture diagram](structure.webp)
+![TDengine Database architecture diagram](structure.webp)
Figure 1: TDengine architecture diagram
A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDengine client driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
@@ -54,7 +54,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc
To explain the relationship between vnode, mnode, TAOSC and application and their respective roles, the following is an analysis of a typical data writing process.
-![typical process of TDengine](message.webp)
+![typical process of TDengine Database](message.webp)
Figure 2: Typical process of TDengine
1. Application initiates a request to insert data through JDBC, ODBC, or other APIs.
@@ -123,7 +123,7 @@ If a database has N replicas, thus a virtual node group has N virtual nodes, but
Master Vnode uses a writing process as follows:
-![TDengine Master Writing Process](write_master.webp)
+![TDengine Database Master Writing Process](write_master.webp)
Figure 3: TDengine Master writing process
1. Master vnode receives the application data insertion request, verifies, and moves to next step;
@@ -137,7 +137,7 @@ Master Vnode uses a writing process as follows:
For a slave vnode, the write process as follows:
-![TDengine Slave Writing Process](write_slave.webp)
+![TDengine Database Slave Writing Process](write_slave.webp)
Figure 4: TDengine Slave Writing Process
1. Slave vnode receives a data insertion request forwarded by Master vnode;
@@ -267,7 +267,7 @@ For the data collected by device D1001, the number of records per hour is counte
TDengine creates a separate table for each data collection point, but in practical applications it is often necessary to aggregate data from different data collection points. In order to perform aggregation operations efficiently, TDengine introduces the concept of STable. A STable is used to represent a specific type of data collection point. It is a table set containing multiple tables. The schema of each table in the set is the same, but each table has its own static tags. There can be multiple tags, and they can be added, deleted and modified at any time. Applications can aggregate or statistically operate on all or a subset of tables under a STable by specifying tag filters, thus greatly simplifying the development of applications. The process is shown in the following figure:
-![Diagram of multi-table aggregation query](multi_tables.webp)
+![TDengine Database Diagram of multi-table aggregation query](multi_tables.webp)
Figure 5: Diagram of multi-table aggregation query
1. Application sends a query condition to system;
diff --git a/docs-en/25-application/01-telegraf.md b/docs-en/25-application/01-telegraf.md
index 07ab289ac2bbf44c219535fe128db69b34465c01..6a57145cd3d82ca5ec1ab828bfc7b6270bbe9d47 100644
--- a/docs-en/25-application/01-telegraf.md
+++ b/docs-en/25-application/01-telegraf.md
@@ -16,7 +16,7 @@ Current mainstream IT DevOps system usually include a data collection module, a
This article introduces how to quickly build a TDengine + Telegraf + Grafana based IT DevOps visualization system without writing even a single line of code and by simply modifying a few lines of configuration files. The architecture is as follows.
-![IT-DevOps-Solutions-Telegraf.webp](./IT-DevOps-Solutions-Telegraf.webp)
+![TDengine Database IT-DevOps-Solutions-Telegraf](./IT-DevOps-Solutions-Telegraf.webp)
## Installation steps
@@ -73,9 +73,9 @@ sudo systemctl start telegraf
Log in to the Grafana interface using a web browser at `IP:3000`, with the system's initial username and password being `admin/admin`.
Click on the gear icon on the left and select `Plugins`, you should find the TDengine data source plugin icon.
-Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard- v0.1.0.json`, download the dashboard JSON file and import it. You will then see the dashboard in the following screen.
+Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json`, download the dashboard JSON file and import it. You will then see the dashboard in the following screen.
-![IT-DevOps-Solutions-telegraf-dashboard.webp](./IT-DevOps-Solutions-telegraf-dashboard.webp)
+![TDengine Database IT-DevOps-Solutions-telegraf-dashboard](./IT-DevOps-Solutions-telegraf-dashboard.webp)
## Wrap-up
diff --git a/docs-en/25-application/02-collectd.md b/docs-en/25-application/02-collectd.md
index 0ddea2855497f1dfdfce7a2aa6749e0c5ba1b9ff..963881eafa6e5085eab951c1b1ab54faeba1fa7b 100644
--- a/docs-en/25-application/02-collectd.md
+++ b/docs-en/25-application/02-collectd.md
@@ -17,7 +17,7 @@ The new version of TDengine supports multiple data protocols and can accept data
This article introduces how to quickly build an IT DevOps visualization system based on TDengine + collectd / StatsD + Grafana without writing even a single line of code but by simply modifying a few lines of configuration files. The architecture is shown in the following figure.
-![IT-DevOps-Solutions-Collectd-StatsD.webp](./IT-DevOps-Solutions-Collectd-StatsD.webp)
+![TDengine Database IT-DevOps-Solutions-Collectd-StatsD](./IT-DevOps-Solutions-Collectd-StatsD.webp)
## Installation Steps
@@ -83,19 +83,19 @@ Click on the gear icon on the left and select `Plugins`, you should find the TDe
Download the dashboard JSON from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left and select `Import`, then follow the instructions to import the JSON file. After that, you can see the dashboard in the following screen.
-![IT-DevOps-Solutions-collectd-dashboard.webp](./IT-DevOps-Solutions-collectd-dashboard.webp)
+![TDengine Database IT-DevOps-Solutions-collectd-dashboard](./IT-DevOps-Solutions-collectd-dashboard.webp)
#### import collectd dashboard
Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left side, select `Import`, and follow the interface prompts to select the JSON file to import. After that, you can see the dashboard with the following interface.
-![IT-DevOps-Solutions-collectd-dashboard.webp](./IT-DevOps-Solutions-collectd-dashboard.webp)
+![TDengine Database IT-DevOps-Solutions-collectd-dashboard](./IT-DevOps-Solutions-collectd-dashboard.webp)
#### Importing the StatsD dashboard
Download the dashboard json from `https://github.com/taosdata/grafanaplugin/blob/master/examples/statsd/dashboards/statsd-with-tdengine-v0.1.0.json`. Click on the plus icon on the left and select `Import`, and follow the interface prompts to import the JSON file. You will then see the dashboard in the following screen.
-![IT-DevOps-Solutions-statsd-dashboard.webp](./IT-DevOps-Solutions-statsd-dashboard.webp)
+![TDengine Database IT-DevOps-Solutions-statsd-dashboard](./IT-DevOps-Solutions-statsd-dashboard.webp)
## Wrap-up
diff --git a/docs-en/25-application/03-immigrate.md b/docs-en/25-application/03-immigrate.md
index 68d8a2b8cc25c80b8a647332df66874bee344715..69166bf78b66a23af35af726f2e5c477195a3595 100644
--- a/docs-en/25-application/03-immigrate.md
+++ b/docs-en/25-application/03-immigrate.md
@@ -32,7 +32,7 @@ We will explain how to migrate OpenTSDB applications to TDengine quickly, secure
The following figure (Figure 1) shows the system's overall architecture for a typical DevOps application scenario.
**Figure 1. Typical architecture in a DevOps scenario**
-![IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp "Figure 1. Typical architecture in a DevOps scenario")
+![TDengine Database IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp "Figure 1. Typical architecture in a DevOps scenario")
In this application scenario, Agent tools are deployed in the application environment to collect machine metrics, network metrics, and application metrics; data collectors aggregate the information collected by the agents; storage systems persist and manage the data; and tools such as Grafana visualize the monitoring data.
@@ -75,7 +75,7 @@ After writing the data to TDengine properly, you can adapt Grafana to visualize
TDengine provides two sets of Dashboard templates by default, and users only need to import the templates from the Grafana directory into Grafana to activate their use.
**Figure 2. Importing Grafana Templates**
-![](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp "Figure 2. Importing a Grafana Template")
+![TDengine Database IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp "Figure 2. Importing a Grafana Template")
After the above steps, you have completed the migration from OpenTSDB to TDengine. As you can see, the whole process is straightforward: no code needs to be written, and only a few configuration files need to be adjusted to complete the migration.
@@ -88,7 +88,7 @@ In most DevOps scenarios, if you have a small OpenTSDB cluster (3 or fewer nodes
Suppose your application is particularly complex, or the application domain is not a DevOps scenario. You can continue reading subsequent chapters for a more comprehensive and in-depth look at the advanced topics of migrating an OpenTSDB application to TDengine.
**Figure 3. System architecture after migration**
-![IT-DevOps-Solutions-Immigrate-TDengine-Arch](./IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp "Figure 3. System architecture after migration completion")
+![TDengine Database IT-DevOps-Solutions-Immigrate-TDengine-Arch](./IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp "Figure 3. System architecture after migration completion")
## Migration evaluation and strategy for other scenarios
diff --git a/docs-examples/c/insert_example.c b/docs-examples/c/insert_example.c
index ca12be9314efbda707dbd05449c746794c209743..ce8fdc5b9372aec7b02d3c9254ec25c4c4f62adc 100644
--- a/docs-examples/c/insert_example.c
+++ b/docs-examples/c/insert_example.c
@@ -36,10 +36,10 @@ int main() {
executeSQL(taos, "CREATE DATABASE power");
executeSQL(taos, "USE power");
executeSQL(taos, "CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)");
- executeSQL(taos, "INSERT INTO d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)"
- "d1002 USING meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)"
- "d1003 USING meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)"
- "d1004 USING meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)");
+ executeSQL(taos, "INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)"
+ "d1002 USING meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)"
+ "d1003 USING meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)"
+ "d1004 USING meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)");
taos_close(taos);
taos_cleanup();
}
diff --git a/docs-examples/c/json_protocol_example.c b/docs-examples/c/json_protocol_example.c
index 182fd201308facc80c76f36cfa57580784d70413..9d276127a64c3d74322e30587ab2e319c29cbf65 100644
--- a/docs-examples/c/json_protocol_example.c
+++ b/docs-examples/c/json_protocol_example.c
@@ -29,11 +29,11 @@ int main() {
executeSQL(taos, "USE test");
char *line =
"[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": "
- "\"Beijing.Chaoyang\", \"groupid\": 2}},{\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, "
- "\"value\": 219, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}},{\"metric\": \"meters.current\", "
- "\"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": "
+ "\"California.SanFrancisco\", \"groupid\": 2}},{\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, "
+ "\"value\": 219, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}},{\"metric\": \"meters.current\", "
+ "\"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": "
"2}},{\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": "
- "\"Beijing.Haidian\", \"groupid\": 1}}]";
+ "\"California.LosAngeles\", \"groupid\": 1}}]";
char *lines[] = {line};
TAOS_RES *res = taos_schemaless_insert(taos, lines, 1, TSDB_SML_JSON_PROTOCOL, TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
diff --git a/docs-examples/c/line_example.c b/docs-examples/c/line_example.c
index 8dd4b1a5075369625645959da0476b76b9fbf290..ce39f8d9df744082a450ce246529bf56adebd1e0 100644
--- a/docs-examples/c/line_example.c
+++ b/docs-examples/c/line_example.c
@@ -27,10 +27,10 @@ int main() {
executeSQL(taos, "DROP DATABASE IF EXISTS test");
executeSQL(taos, "CREATE DATABASE test");
executeSQL(taos, "USE test");
- char *lines[] = {"meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"};
+ char *lines[] = {"meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"};
TAOS_RES *res = taos_schemaless_insert(taos, lines, 4, TSDB_SML_LINE_PROTOCOL, TSDB_SML_TIMESTAMP_MILLI_SECONDS);
if (taos_errno(res) != 0) {
printf("failed to insert schema-less data, reason: %s\n", taos_errstr(res));
diff --git a/docs-examples/c/multi_bind_example.c b/docs-examples/c/multi_bind_example.c
index fe11df9caad3e216fbd0b1ff2f40a54fe3ba86e5..02e6568e9e88ac8703a4993ed406e770d23c2438 100644
--- a/docs-examples/c/multi_bind_example.c
+++ b/docs-examples/c/multi_bind_example.c
@@ -52,7 +52,7 @@ void insertData(TAOS *taos) {
checkErrorCode(stmt, code, "failed to execute taos_stmt_prepare");
// bind table name and tags
TAOS_BIND tags[2];
- char *location = "Beijing.Chaoyang";
+ char *location = "California.SanFrancisco";
int groupId = 2;
tags[0].buffer_type = TSDB_DATA_TYPE_BINARY;
tags[0].buffer_length = strlen(location);
diff --git a/docs-examples/c/query_example.c b/docs-examples/c/query_example.c
index f88b2467ceb3d9bbeaf6b3beb6a24befd3e398c6..fcae95bcd45a282eaa3ae911b4115e6300c6af8e 100644
--- a/docs-examples/c/query_example.c
+++ b/docs-examples/c/query_example.c
@@ -139,5 +139,5 @@ int main() {
// output:
// ts current voltage phase location groupid
-// 1648432611249 10.300000 219 0.310000 Beijing.Chaoyang 2
-// 1648432611749 12.600000 218 0.330000 Beijing.Chaoyang 2
\ No newline at end of file
+// 1648432611249 10.300000 219 0.310000 California.SanFrancisco 2
+// 1648432611749 12.600000 218 0.330000 California.SanFrancisco 2
\ No newline at end of file
diff --git a/docs-examples/c/stmt_example.c b/docs-examples/c/stmt_example.c
index fab1506f953ef68050e4318406fa2ba1a0202929..28dae5f9d5ea2faec0aa3c0a784d39e252651c65 100644
--- a/docs-examples/c/stmt_example.c
+++ b/docs-examples/c/stmt_example.c
@@ -59,7 +59,7 @@ void insertData(TAOS *taos) {
checkErrorCode(stmt, code, "failed to execute taos_stmt_prepare");
// bind table name and tags
TAOS_BIND tags[2];
- char* location = "Beijing.Chaoyang";
+ char* location = "California.SanFrancisco";
int groupId = 2;
tags[0].buffer_type = TSDB_DATA_TYPE_BINARY;
tags[0].buffer_length = strlen(location);
diff --git a/docs-examples/c/telnet_line_example.c b/docs-examples/c/telnet_line_example.c
index 913d433f6aec07b3bce115d45536ffa4b45a0481..da62da4ba492856b0d73a564c1bf9cdd60b5b742 100644
--- a/docs-examples/c/telnet_line_example.c
+++ b/docs-examples/c/telnet_line_example.c
@@ -28,14 +28,14 @@ int main() {
executeSQL(taos, "CREATE DATABASE test");
executeSQL(taos, "USE test");
char *lines[] = {
- "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+ "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
};
TAOS_RES *res = taos_schemaless_insert(taos, lines, 8, TSDB_SML_TELNET_PROTOCOL, TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
if (taos_errno(res) != 0) {
diff --git a/docs-examples/csharp/InfluxDBLineExample.cs b/docs-examples/csharp/InfluxDBLineExample.cs
index 7aad08825209db568d61e5963ec7a00034ab7ca7..7b4453f4ac0b14dd76d166e395bdacb46a5d3fbc 100644
--- a/docs-examples/csharp/InfluxDBLineExample.cs
+++ b/docs-examples/csharp/InfluxDBLineExample.cs
@@ -9,10 +9,10 @@ namespace TDengineExample
IntPtr conn = GetConnection();
PrepareDatabase(conn);
string[] lines = {
- "meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"
+ "meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"
};
IntPtr res = TDengine.SchemalessInsert(conn, lines, lines.Length, (int)TDengineSchemalessProtocol.TSDB_SML_LINE_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_MILLI_SECONDS);
if (TDengine.ErrorNo(res) != 0)
diff --git a/docs-examples/csharp/OptsJsonExample.cs b/docs-examples/csharp/OptsJsonExample.cs
index d774a325afa1a8d93eb858f23dcd97dd29f8653d..2c41acc5c9628befda7eb4ad5c30af5b921de948 100644
--- a/docs-examples/csharp/OptsJsonExample.cs
+++ b/docs-examples/csharp/OptsJsonExample.cs
@@ -8,10 +8,10 @@ namespace TDengineExample
{
IntPtr conn = GetConnection();
PrepareDatabase(conn);
- string[] lines = { "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": 2}}," +
- " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}}, " +
- "{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": 2}}," +
- " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}}]"
+ string[] lines = { "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
+ " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}, " +
+ "{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
+ " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}]"
};
IntPtr res = TDengine.SchemalessInsert(conn, lines, 1, (int)TDengineSchemalessProtocol.TSDB_SML_JSON_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
diff --git a/docs-examples/csharp/OptsTelnetExample.cs b/docs-examples/csharp/OptsTelnetExample.cs
index 81608c32213fa0618a2ca6e0769aacf8e9c8e64d..bb752db1afbbb2ef68df9ca25314c8b91cd9a266 100644
--- a/docs-examples/csharp/OptsTelnetExample.cs
+++ b/docs-examples/csharp/OptsTelnetExample.cs
@@ -9,14 +9,14 @@ namespace TDengineExample
IntPtr conn = GetConnection();
PrepareDatabase(conn);
string[] lines = {
- "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+ "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
};
IntPtr res = TDengine.SchemalessInsert(conn, lines, lines.Length, (int)TDengineSchemalessProtocol.TSDB_SML_TELNET_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
if (TDengine.ErrorNo(res) != 0)
diff --git a/docs-examples/csharp/QueryExample.cs b/docs-examples/csharp/QueryExample.cs
index f00e391100c7ce42177e2987f5b0b32dc02262c4..97f0c456d412e2ed608c345ba87469d3f5ccfc15 100644
--- a/docs-examples/csharp/QueryExample.cs
+++ b/docs-examples/csharp/QueryExample.cs
@@ -158,5 +158,5 @@ namespace TDengineExample
// Connect to TDengine success
// fieldCount=6
// ts current voltage phase location groupid
-// 1648432611249 10.3 219 0.31 Beijing.Chaoyang 2
-// 1648432611749 12.6 218 0.33 Beijing.Chaoyang 2
\ No newline at end of file
+// 1648432611249 10.3 219 0.31 California.SanFrancisco 2
+// 1648432611749 12.6 218 0.33 California.SanFrancisco 2
\ No newline at end of file
diff --git a/docs-examples/csharp/SQLInsertExample.cs b/docs-examples/csharp/SQLInsertExample.cs
index fa2e2a50daf06f4d948479e7f5b0df82c517f809..d5462c1062e01fd5c93bac983696d0350117ad92 100644
--- a/docs-examples/csharp/SQLInsertExample.cs
+++ b/docs-examples/csharp/SQLInsertExample.cs
@@ -15,10 +15,10 @@ namespace TDengineExample
CheckRes(conn, res, "failed to change database");
res = TDengine.Query(conn, "CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)");
CheckRes(conn, res, "failed to create stable");
- var sql = "INSERT INTO d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000) " +
- "d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) " +
- "d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) " +
- "d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000)('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
+ var sql = "INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000) " +
+ "d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) " +
+ "d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) " +
+ "d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000)('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
res = TDengine.Query(conn, sql);
CheckRes(conn, res, "failed to insert data");
int affectedRows = TDengine.AffectRows(res);
diff --git a/docs-examples/csharp/StmtInsertExample.cs b/docs-examples/csharp/StmtInsertExample.cs
index d6e00dd4ac54ab8dbfc33b93896d19fc585e7642..6ade424b95d64529b7a40a782de13e3106d0c78a 100644
--- a/docs-examples/csharp/StmtInsertExample.cs
+++ b/docs-examples/csharp/StmtInsertExample.cs
@@ -21,7 +21,7 @@ namespace TDengineExample
CheckStmtRes(res, "failed to prepare stmt");
// 2. bind table name and tags
- TAOS_BIND[] tags = new TAOS_BIND[2] { TaosBind.BindBinary("Beijing.Chaoyang"), TaosBind.BindInt(2) };
+ TAOS_BIND[] tags = new TAOS_BIND[2] { TaosBind.BindBinary("California.SanFrancisco"), TaosBind.BindInt(2) };
res = TDengine.StmtSetTbnameTags(stmt, "d1001", tags);
CheckStmtRes(res, "failed to bind table name and tags");
diff --git a/docs-examples/go/insert/json/main.go b/docs-examples/go/insert/json/main.go
index 47d9e9984adc05896fb9954ad3deffde3764b836..6be375270e32a5091c015f88de52c9dda2246b59 100644
--- a/docs-examples/go/insert/json/main.go
+++ b/docs-examples/go/insert/json/main.go
@@ -25,10 +25,10 @@ func main() {
defer conn.Close()
prepareDatabase(conn)
- payload := `[{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
- {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "Beijing.Haidian", "groupid": 1}},
- {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
- {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "Beijing.Haidian", "groupid": 1}}]`
+ payload := `[{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
+ {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "California.LosAngeles", "groupid": 1}},
+ {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
+ {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]`
err = conn.OpenTSDBInsertJsonPayload(payload)
if err != nil {
diff --git a/docs-examples/go/insert/line/main.go b/docs-examples/go/insert/line/main.go
index bbc41468fe5f13d3e6f896445bb88f3eba584d0f..c17e1a5270850e6a8b497e0dbec4ae714ee1e2d6 100644
--- a/docs-examples/go/insert/line/main.go
+++ b/docs-examples/go/insert/line/main.go
@@ -25,10 +25,10 @@ func main() {
defer conn.Close()
prepareDatabase(conn)
var lines = []string{
- "meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250",
+ "meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250",
}
err = conn.InfluxDBInsertLines(lines, "ms")
diff --git a/docs-examples/go/insert/sql/main.go b/docs-examples/go/insert/sql/main.go
index 91386855334c1930af721e0b4f43395c6a6d8e82..6cd5f860e65f4fffd139668f69cc1772f5310eae 100644
--- a/docs-examples/go/insert/sql/main.go
+++ b/docs-examples/go/insert/sql/main.go
@@ -19,10 +19,10 @@ func createStable(taos *sql.DB) {
}
func insertData(taos *sql.DB) {
- sql := `INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
- power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
- power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
- power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)`
+ sql := `INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+ power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+ power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+ power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)`
result, err := taos.Exec(sql)
if err != nil {
fmt.Println("failed to insert, err:", err)
diff --git a/docs-examples/go/insert/stmt/main.go b/docs-examples/go/insert/stmt/main.go
index c50200ebb427c4c64c2737cb8fe4c3d287551a34..7093fdf1e52bc5a14fc92cec995fd81e70717d9f 100644
--- a/docs-examples/go/insert/stmt/main.go
+++ b/docs-examples/go/insert/stmt/main.go
@@ -37,7 +37,7 @@ func main() {
checkErr(err, "failed to create prepare statement")
// bind table name and tags
- tagParams := param.NewParam(2).AddBinary([]byte("Beijing.Chaoyang")).AddInt(2)
+ tagParams := param.NewParam(2).AddBinary([]byte("California.SanFrancisco")).AddInt(2)
err = stmt.SetTableNameWithTags("d1001", tagParams)
checkErr(err, "failed to execute SetTableNameWithTags")
diff --git a/docs-examples/go/insert/telnet/main.go b/docs-examples/go/insert/telnet/main.go
index 879e6d5cece74fd0b7c815dd34614dca3c9d4544..91fafbe71adbf60d9341b903f5a25708b7011852 100644
--- a/docs-examples/go/insert/telnet/main.go
+++ b/docs-examples/go/insert/telnet/main.go
@@ -25,14 +25,14 @@ func main() {
defer conn.Close()
prepareDatabase(conn)
var lines = []string{
- "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+ "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
}
err = conn.OpenTSDBInsertTelnetLines(lines)
diff --git a/docs-examples/java/src/main/java/com/taos/example/JSONProtocolExample.java b/docs-examples/java/src/main/java/com/taos/example/JSONProtocolExample.java
index cb83424576a4fd7dfa09ea297294ed77b66bd12d..c8e649482fbd747cdc238daa9e7a237cf63295b6 100644
--- a/docs-examples/java/src/main/java/com/taos/example/JSONProtocolExample.java
+++ b/docs-examples/java/src/main/java/com/taos/example/JSONProtocolExample.java
@@ -23,10 +23,10 @@ public class JSONProtocolExample {
}
private static String getJSONData() {
- return "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": 2}}," +
- " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}}, " +
- "{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": 2}}," +
- " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}}]";
+ return "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
+ " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}, " +
+ "{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
+ " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}]";
}
public static void main(String[] args) throws SQLException {
diff --git a/docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java b/docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java
index 8a2eabe0a91f7966cc3cc6b7dfeeb71b71b88d92..990922b7a516bd32a7e299f5743bd1b5e321868a 100644
--- a/docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java
+++ b/docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java
@@ -12,11 +12,11 @@ import java.sql.Statement;
public class LineProtocolExample {
// format: measurement,tag_set field_set timestamp
private static String[] lines = {
- "meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000", // micro
+ "meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000", // micro
// seconds
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
};
private static Connection getConnection() throws SQLException {
diff --git a/docs-examples/java/src/main/java/com/taos/example/RestInsertExample.java b/docs-examples/java/src/main/java/com/taos/example/RestInsertExample.java
index de89f26cbe38f9343d60aeb8d3e9ce7f67c2e764..af97fe4373ca964260e5614f133f359e229b0e15 100644
--- a/docs-examples/java/src/main/java/com/taos/example/RestInsertExample.java
+++ b/docs-examples/java/src/main/java/com/taos/example/RestInsertExample.java
@@ -16,28 +16,28 @@ public class RestInsertExample {
private static List<String> getRawData() {
return Arrays.asList(
- "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,Beijing.Chaoyang,2",
- "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,Beijing.Chaoyang,2",
- "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,Beijing.Chaoyang,2",
- "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,Beijing.Chaoyang,3",
- "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,Beijing.Haidian,2",
- "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,Beijing.Haidian,2",
- "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,Beijing.Haidian,3",
- "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,Beijing.Haidian,3"
+ "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
+ "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2",
+ "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,California.SanFrancisco,2",
+ "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,California.SanFrancisco,3",
+ "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2",
+ "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,California.LosAngeles,2",
+ "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,California.LosAngeles,3",
+ "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,California.LosAngeles,3"
);
}
/**
* The generated SQL is:
- * INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)
- * power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:15.000',12.60000,218,0.33000)
- * power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:16.800',12.30000,221,0.31000)
- * power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES('2018-10-03 14:38:16.650',10.30000,218,0.25000)
- * power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 14:38:05.500',11.80000,221,0.28000)
- * power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 14:38:16.600',13.40000,223,0.29000)
- * power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 14:38:05.000',10.80000,223,0.29000)
- * power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 14:38:06.500',11.50000,221,0.35000)
+ * INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)
+ * power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:15.000',12.60000,218,0.33000)
+ * power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:16.800',12.30000,221,0.31000)
+ * power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 14:38:16.650',10.30000,218,0.25000)
+ * power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 14:38:05.500',11.80000,221,0.28000)
+ * power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 14:38:16.600',13.40000,223,0.29000)
+ * power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 14:38:05.000',10.80000,223,0.29000)
+ * power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 14:38:06.500',11.50000,221,0.35000)
*/
private static String getSQL() {
StringBuilder sb = new StringBuilder("INSERT INTO ");
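The Javadoc above shows the SQL that `getSQL` assembles from the raw CSV rows. As a cross-check, the same transformation can be sketched in standalone Python (illustrative only, not the Java implementation; field order follows the CSV rows above):

```python
def build_insert_sql(rows):
    """Turn 'table,ts,current,voltage,phase,location,groupid' CSV rows
    into one multi-table INSERT of the shape shown in the Javadoc."""
    clauses = []
    for row in rows:
        table, ts, current, voltage, phase, location, groupid = row.split(",")
        clauses.append(
            f"power.{table} USING power.meters TAGS({location}, {groupid}) "
            f"VALUES('{ts}',{current},{voltage},{phase})"
        )
    return "INSERT INTO " + "\n".join(clauses)

sql = build_insert_sql(
    ["d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2"]
)
assert "TAGS(California.SanFrancisco, 2)" in sql
```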
diff --git a/docs-examples/java/src/main/java/com/taos/example/RestQueryExample.java b/docs-examples/java/src/main/java/com/taos/example/RestQueryExample.java
index b1a1d224c6d9af2b83ac039726dcdb49a33ec2b0..a3581a1f4733e8bf3e3f561bb6cab5a725d8a1c0 100644
--- a/docs-examples/java/src/main/java/com/taos/example/RestQueryExample.java
+++ b/docs-examples/java/src/main/java/com/taos/example/RestQueryExample.java
@@ -51,5 +51,5 @@ public class RestQueryExample {
// possible output:
// avg(voltage) location
-// 222.0 Beijing.Haidian
-// 219.0 Beijing.Chaoyang
+// 222.0 California.LosAngeles
+// 219.0 California.SanFrancisco
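The expected output above can be sanity-checked without a database: the voltages in the shared sample data set average to exactly these values per location. A small Python sketch, with the voltage values copied from the insert examples in this PR:

```python
from collections import defaultdict

# (location, voltage) pairs taken from the d1001-d1004 sample rows
rows = [
    ("California.SanFrancisco", 219), ("California.SanFrancisco", 218),
    ("California.SanFrancisco", 221), ("California.SanFrancisco", 218),
    ("California.LosAngeles", 221), ("California.LosAngeles", 223),
    ("California.LosAngeles", 223), ("California.LosAngeles", 221),
]

voltages = defaultdict(list)
for location, voltage in rows:
    voltages[location].append(voltage)

averages = {loc: sum(v) / len(v) for loc, v in voltages.items()}
assert averages["California.LosAngeles"] == 222.0
assert averages["California.SanFrancisco"] == 219.0
```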
diff --git a/docs-examples/java/src/main/java/com/taos/example/StmtInsertExample.java b/docs-examples/java/src/main/java/com/taos/example/StmtInsertExample.java
index 2a7ccebf41cae1a22d7516966e2c6ffb10011b64..bbcc92b22f67c31384b0fb7a082975eaac2ff2bc 100644
--- a/docs-examples/java/src/main/java/com/taos/example/StmtInsertExample.java
+++ b/docs-examples/java/src/main/java/com/taos/example/StmtInsertExample.java
@@ -30,14 +30,14 @@ public class StmtInsertExample {
private static List<String> getRawData() {
return Arrays.asList(
- "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,Beijing.Chaoyang,2",
- "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,Beijing.Chaoyang,2",
- "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,Beijing.Chaoyang,2",
- "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,Beijing.Chaoyang,3",
- "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,Beijing.Haidian,2",
- "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,Beijing.Haidian,2",
- "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,Beijing.Haidian,3",
- "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,Beijing.Haidian,3"
+ "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
+ "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2",
+ "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,California.SanFrancisco,2",
+ "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,California.SanFrancisco,3",
+ "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2",
+ "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,California.LosAngeles,2",
+ "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,California.LosAngeles,3",
+ "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,California.LosAngeles,3"
);
}
diff --git a/docs-examples/java/src/main/java/com/taos/example/TelnetLineProtocolExample.java b/docs-examples/java/src/main/java/com/taos/example/TelnetLineProtocolExample.java
index 1431eccf16dabaac20f60ae7e971ef49707ba509..4c9368288df74f829121aeab5b925d1d083d29f0 100644
--- a/docs-examples/java/src/main/java/com/taos/example/TelnetLineProtocolExample.java
+++ b/docs-examples/java/src/main/java/com/taos/example/TelnetLineProtocolExample.java
@@ -11,14 +11,14 @@ import java.sql.Statement;
public class TelnetLineProtocolExample {
// format: <metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
- private static String[] lines = { "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+ private static String[] lines = { "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
};
private static Connection getConnection() throws SQLException {
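The telnet rows above follow the OpenTSDB put shape (metric, timestamp, value, then tag pairs). A minimal Python sketch (illustrative only, not part of any connector) that splits such a row into its components:

```python
def parse_telnet(line: str):
    """Split one OpenTSDB-telnet-style row:
    <metric> <timestamp> <value> <tagk>=<tagv> ..."""
    metric, ts, value, *tag_pairs = line.split(" ")
    tags = dict(pair.split("=", 1) for pair in tag_pairs)
    return metric, int(ts), float(value), tags

metric, ts, value, tags = parse_telnet(
    "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2"
)
assert metric == "meters.current"
assert ts == 1648432611249
assert value == 10.3
assert tags == {"location": "California.SanFrancisco", "groupid": "2"}
```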
diff --git a/docs-examples/java/src/test/java/com/taos/test/TestAll.java b/docs-examples/java/src/test/java/com/taos/test/TestAll.java
index 92fe14a49d5f5ea5d7ea5f1d809867b3de0cc9d2..42db24485afec05298159f7b0c3a4e15835d98ed 100644
--- a/docs-examples/java/src/test/java/com/taos/test/TestAll.java
+++ b/docs-examples/java/src/test/java/com/taos/test/TestAll.java
@@ -23,16 +23,16 @@ public class TestAll {
String jdbcUrl = "jdbc:TAOS://localhost:6030?user=root&password=taosdata";
try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
try (Statement stmt = conn.createStatement()) {
- String sql = "INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)\n" +
- " power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 15:38:15.000',12.60000,218,0.33000)\n" +
- " power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 15:38:16.800',12.30000,221,0.31000)\n" +
- " power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES('2018-10-03 15:38:16.650',10.30000,218,0.25000)\n" +
- " power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 15:38:05.500',11.80000,221,0.28000)\n" +
- " power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 15:38:16.600',13.40000,223,0.29000)\n" +
- " power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 15:38:05.000',10.80000,223,0.29000)\n" +
- " power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 15:38:06.000',10.80000,223,0.29000)\n" +
- " power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 15:38:07.000',10.80000,223,0.29000)\n" +
- " power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 15:38:08.500',11.50000,221,0.35000)";
+ String sql = "INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)\n" +
+ " power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 15:38:15.000',12.60000,218,0.33000)\n" +
+ " power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 15:38:16.800',12.30000,221,0.31000)\n" +
+ " power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 15:38:16.650',10.30000,218,0.25000)\n" +
+ " power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 15:38:05.500',11.80000,221,0.28000)\n" +
+ " power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 15:38:16.600',13.40000,223,0.29000)\n" +
+ " power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:05.000',10.80000,223,0.29000)\n" +
+ " power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:06.000',10.80000,223,0.29000)\n" +
+ " power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:07.000',10.80000,223,0.29000)\n" +
+ " power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:08.500',11.50000,221,0.35000)";
stmt.execute(sql);
}
diff --git a/docs-examples/node/nativeexample/influxdb_line_example.js b/docs-examples/node/nativeexample/influxdb_line_example.js
index a9fc6d11df0b335b92bb3292baaa017cb4bc42ea..2050bee54506a3ee6fe7d89de97b3b41334dd4a6 100644
--- a/docs-examples/node/nativeexample/influxdb_line_example.js
+++ b/docs-examples/node/nativeexample/influxdb_line_example.js
@@ -13,10 +13,10 @@ function createDatabase() {
function insertData() {
const lines = [
- "meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250",
+ "meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250",
];
cursor.schemalessInsert(
lines,
diff --git a/docs-examples/node/nativeexample/insert_example.js b/docs-examples/node/nativeexample/insert_example.js
index 85a353f889176655654d8c39c9a905054d3b6622..ade9d83158362cbf00a856b43a973de31def7601 100644
--- a/docs-examples/node/nativeexample/insert_example.js
+++ b/docs-examples/node/nativeexample/insert_example.js
@@ -11,10 +11,10 @@ try {
cursor.execute(
"CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)"
);
- var sql = `INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
-power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
-power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
-power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)`;
+ var sql = `INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)`;
cursor.execute(sql);
} finally {
cursor.close();
diff --git a/docs-examples/node/nativeexample/multi_bind_example.js b/docs-examples/node/nativeexample/multi_bind_example.js
index d52581ec8e10c6edfbc8fc8f7ca78512b5c93d74..6ef8b30c097393fef8c6a2837f8683c736b363f1 100644
--- a/docs-examples/node/nativeexample/multi_bind_example.js
+++ b/docs-examples/node/nativeexample/multi_bind_example.js
@@ -25,7 +25,7 @@ function insertData() {
// bind table name and tags
let tagBind = new taos.TaosBind(2);
- tagBind.bindBinary("Beijing.Chaoyang");
+ tagBind.bindBinary("California.SanFrancisco");
tagBind.bindInt(2);
cursor.stmtSetTbnameTags("d1001", tagBind.getBind());
diff --git a/docs-examples/node/nativeexample/opentsdb_json_example.js b/docs-examples/node/nativeexample/opentsdb_json_example.js
index 6d436a8e9ebe0230bba22064e8fb6c180c14b5d1..2d78444a3f805bc77ab5e11925a28dd18fe221fe 100644
--- a/docs-examples/node/nativeexample/opentsdb_json_example.js
+++ b/docs-examples/node/nativeexample/opentsdb_json_example.js
@@ -17,25 +17,25 @@ function insertData() {
metric: "meters.current",
timestamp: 1648432611249,
value: 10.3,
- tags: { location: "Beijing.Chaoyang", groupid: 2 },
+ tags: { location: "California.SanFrancisco", groupid: 2 },
},
{
metric: "meters.voltage",
timestamp: 1648432611249,
value: 219,
- tags: { location: "Beijing.Haidian", groupid: 1 },
+ tags: { location: "California.LosAngeles", groupid: 1 },
},
{
metric: "meters.current",
timestamp: 1648432611250,
value: 12.6,
- tags: { location: "Beijing.Chaoyang", groupid: 2 },
+ tags: { location: "California.SanFrancisco", groupid: 2 },
},
{
metric: "meters.voltage",
timestamp: 1648432611250,
value: 221,
- tags: { location: "Beijing.Haidian", groupid: 1 },
+ tags: { location: "California.LosAngeles", groupid: 1 },
},
];
diff --git a/docs-examples/node/nativeexample/opentsdb_telnet_example.js b/docs-examples/node/nativeexample/opentsdb_telnet_example.js
index 01e79c2dcacd923cd708d1d228959a628d0ff26a..7f80f558838e18f07ad79e580e7d08638b74e940 100644
--- a/docs-examples/node/nativeexample/opentsdb_telnet_example.js
+++ b/docs-examples/node/nativeexample/opentsdb_telnet_example.js
@@ -13,14 +13,14 @@ function createDatabase() {
function insertData() {
const lines = [
- "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+ "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
];
cursor.schemalessInsert(
lines,
diff --git a/docs-examples/node/nativeexample/param_bind_example.js b/docs-examples/node/nativeexample/param_bind_example.js
index 9117f46c3eeabd9009b72fa9d4a8503e65884242..c7e04c71a0d19ff8666f3d43fe09109009741266 100644
--- a/docs-examples/node/nativeexample/param_bind_example.js
+++ b/docs-examples/node/nativeexample/param_bind_example.js
@@ -24,7 +24,7 @@ function insertData() {
// bind table name and tags
let tagBind = new taos.TaosBind(2);
- tagBind.bindBinary("Beijing.Chaoyang");
+ tagBind.bindBinary("California.SanFrancisco");
tagBind.bindInt(2);
cursor.stmtSetTbnameTags("d1001", tagBind.getBind());
diff --git a/docs-examples/php/connect.php b/docs-examples/php/connect.php
index 5af77b9768e5c5ac4b774b433479a4ac8902beda..b825b447805a3923248042d2cdff79c51bdcdbe3 100644
--- a/docs-examples/php/connect.php
+++ b/docs-examples/php/connect.php
@@ -4,7 +4,7 @@ use TDengine\Connection;
use TDengine\Exception\TDengineException;
try {
- // 实例化
+ // instantiate
$host = 'localhost';
$port = 6030;
$username = 'root';
@@ -12,9 +12,9 @@ try {
$dbname = null;
$connection = new Connection($host, $port, $username, $password, $dbname);
- // 连接
+ // connect
$connection->connect();
} catch (TDengineException $e) {
- // 连接失败捕获异常
+ // throw exception
throw $e;
}
diff --git a/docs-examples/php/insert.php b/docs-examples/php/insert.php
index 0d9cfc4843a2ec3e72d0ad128fa4c2650d6b9cf6..6e38fa0c46d31aa0a939d471ccbd255cfa453a16 100644
--- a/docs-examples/php/insert.php
+++ b/docs-examples/php/insert.php
@@ -4,7 +4,7 @@ use TDengine\Connection;
use TDengine\Exception\TDengineException;
try {
- // 实例化
+ // instantiate
$host = 'localhost';
$port = 6030;
$username = 'root';
@@ -12,22 +12,22 @@ try {
$dbname = 'power';
$connection = new Connection($host, $port, $username, $password, $dbname);
- // 连接
+ // connect
$connection->connect();
- // 插入
+ // insert
$connection->query('CREATE DATABASE if not exists power');
$connection->query('CREATE STABLE if not exists meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)');
$resource = $connection->query(<<<'SQL'
- INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
- power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
- power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
- power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
+ INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+ power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+ power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+ power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
SQL);
- // 影响行数
+ // get affected rows
var_dump($resource->affectedRows());
} catch (TDengineException $e) {
- // 捕获异常
+ // throw exception
throw $e;
}
diff --git a/docs-examples/php/insert_stmt.php b/docs-examples/php/insert_stmt.php
index 5d4b4809d215d781807c21172982feff2171fe07..99a9a6aef3f69a8880316355e17396e06ca985c9 100644
--- a/docs-examples/php/insert_stmt.php
+++ b/docs-examples/php/insert_stmt.php
@@ -4,7 +4,7 @@ use TDengine\Connection;
use TDengine\Exception\TDengineException;
try {
- // 实例化
+ // instantiate
$host = 'localhost';
$port = 6030;
$username = 'root';
@@ -12,18 +12,18 @@ try {
$dbname = 'power';
$connection = new Connection($host, $port, $username, $password, $dbname);
- // 连接
+ // connect
$connection->connect();
- // 插入
+ // insert
$connection->query('CREATE DATABASE if not exists power');
$connection->query('CREATE STABLE if not exists meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)');
$stmt = $connection->prepare('INSERT INTO ? USING meters TAGS(?, ?) VALUES(?, ?, ?, ?)');
- // 设置表名和标签
+ // set table name and tags
$stmt->setTableNameTags('d1001', [
// the supported formats are the same as for parameter binding
- [TDengine\TSDB_DATA_TYPE_BINARY, 'Beijing.Chaoyang'],
+ [TDengine\TSDB_DATA_TYPE_BINARY, 'California.SanFrancisco'],
[TDengine\TSDB_DATA_TYPE_INT, 2],
]);
@@ -41,9 +41,9 @@ try {
]);
$resource = $stmt->execute();
- // 影响行数
+ // get affected rows
var_dump($resource->affectedRows());
} catch (TDengineException $e) {
- // 捕获异常
+ // throw exception
throw $e;
}
diff --git a/docs-examples/php/query.php b/docs-examples/php/query.php
index 4e86a2cec7426887686049977a8647e786ac2744..2607940ea06a70eaa30e4c165c05bd72aa89857c 100644
--- a/docs-examples/php/query.php
+++ b/docs-examples/php/query.php
@@ -4,7 +4,7 @@ use TDengine\Connection;
use TDengine\Exception\TDengineException;
try {
- // 实例化
+ // instantiate
$host = 'localhost';
$port = 6030;
$username = 'root';
@@ -12,12 +12,12 @@ try {
$dbname = 'power';
$connection = new Connection($host, $port, $username, $password, $dbname);
- // 连接
+ // connect
$connection->connect();
$resource = $connection->query('SELECT ts, current FROM meters LIMIT 2');
var_dump($resource->fetch());
} catch (TDengineException $e) {
- // 捕获异常
+ // throw exception
throw $e;
}
diff --git a/docs-examples/python/bind_param_example.py b/docs-examples/python/bind_param_example.py
index 503a2eb5dd91a3516f87a4d3c1c3218cb6505236..6a67434f876f159cf32069a55e9527ca19034640 100644
--- a/docs-examples/python/bind_param_example.py
+++ b/docs-examples/python/bind_param_example.py
@@ -2,14 +2,14 @@ import taos
from datetime import datetime
# note: lines have already been sorted by table name
-lines = [('d1001', '2018-10-03 14:38:05.000', 10.30000, 219, 0.31000, 'Beijing.Chaoyang', 2),
- ('d1001', '2018-10-03 14:38:15.000', 12.60000, 218, 0.33000, 'Beijing.Chaoyang', 2),
- ('d1001', '2018-10-03 14:38:16.800', 12.30000, 221, 0.31000, 'Beijing.Chaoyang', 2),
- ('d1002', '2018-10-03 14:38:16.650', 10.30000, 218, 0.25000, 'Beijing.Chaoyang', 3),
- ('d1003', '2018-10-03 14:38:05.500', 11.80000, 221, 0.28000, 'Beijing.Haidian', 2),
- ('d1003', '2018-10-03 14:38:16.600', 13.40000, 223, 0.29000, 'Beijing.Haidian', 2),
- ('d1004', '2018-10-03 14:38:05.000', 10.80000, 223, 0.29000, 'Beijing.Haidian', 3),
- ('d1004', '2018-10-03 14:38:06.500', 11.50000, 221, 0.35000, 'Beijing.Haidian', 3)]
+lines = [('d1001', '2018-10-03 14:38:05.000', 10.30000, 219, 0.31000, 'California.SanFrancisco', 2),
+ ('d1001', '2018-10-03 14:38:15.000', 12.60000, 218, 0.33000, 'California.SanFrancisco', 2),
+ ('d1001', '2018-10-03 14:38:16.800', 12.30000, 221, 0.31000, 'California.SanFrancisco', 2),
+ ('d1002', '2018-10-03 14:38:16.650', 10.30000, 218, 0.25000, 'California.SanFrancisco', 3),
+ ('d1003', '2018-10-03 14:38:05.500', 11.80000, 221, 0.28000, 'California.LosAngeles', 2),
+ ('d1003', '2018-10-03 14:38:16.600', 13.40000, 223, 0.29000, 'California.LosAngeles', 2),
+ ('d1004', '2018-10-03 14:38:05.000', 10.80000, 223, 0.29000, 'California.LosAngeles', 3),
+ ('d1004', '2018-10-03 14:38:06.500', 11.50000, 221, 0.35000, 'California.LosAngeles', 3)]
def get_ts(ts: str):
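The `# note` above matters because statement binding attaches a batch of rows to one table at a time, so rows for the same table must be contiguous. A hedged sketch of the pre-sort and grouping step (pure Python, no `taos` dependency), using a few of the sample rows:

```python
from itertools import groupby

rows = [
    ('d1003', '2018-10-03 14:38:05.500', 11.8, 221, 0.28, 'California.LosAngeles', 2),
    ('d1001', '2018-10-03 14:38:05.000', 10.3, 219, 0.31, 'California.SanFrancisco', 2),
    ('d1001', '2018-10-03 14:38:15.000', 12.6, 218, 0.33, 'California.SanFrancisco', 2),
]

# sort once by table name, then bind each table's batch in turn
rows.sort(key=lambda r: r[0])
batches = {table: list(group) for table, group in groupby(rows, key=lambda r: r[0])}
assert list(batches) == ['d1001', 'd1003']
assert len(batches['d1001']) == 2
```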
diff --git a/docs-examples/python/connect_rest_examples.py b/docs-examples/python/connect_rest_examples.py
index a043d506b965bc31179dbb6f38749d196ab338ff..94e7d5f467aeceae77ab0d9f4a5dce28fecf0722 100644
--- a/docs-examples/python/connect_rest_examples.py
+++ b/docs-examples/python/connect_rest_examples.py
@@ -16,10 +16,10 @@ cursor.execute("CREATE DATABASE power")
cursor.execute("CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)")
# insert data
-cursor.execute("""INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
- power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
- power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
- power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""")
+cursor.execute("""INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+ power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+ power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+ power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""")
print("inserted row count:", cursor.rowcount)
# query data
diff --git a/docs-examples/python/json_protocol_example.py b/docs-examples/python/json_protocol_example.py
index 5bb4d629bccf3d79e74b381d6259de86d6522315..bdf324f7061c964e3d913351635d9f7c4f052d0a 100644
--- a/docs-examples/python/json_protocol_example.py
+++ b/docs-examples/python/json_protocol_example.py
@@ -3,12 +3,12 @@ import json
import taos
from taos import SmlProtocol, SmlPrecision
-lines = [{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
+lines = [{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219,
- "tags": {"location": "Beijing.Haidian", "groupid": 1}},
+ "tags": {"location": "California.LosAngeles", "groupid": 1}},
{"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6,
- "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
- {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "Beijing.Haidian", "groupid": 1}}]
+ "tags": {"location": "California.SanFrancisco", "groupid": 2}},
+ {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]
def get_connection():
diff --git a/docs-examples/python/line_protocol_example.py b/docs-examples/python/line_protocol_example.py
index 02baeb2104f9f48984b4d34afb5e67af641d4e32..735e8e7eb8aed1a8133de7a6de50bd50d076c472 100644
--- a/docs-examples/python/line_protocol_example.py
+++ b/docs-examples/python/line_protocol_example.py
@@ -1,10 +1,10 @@
import taos
from taos import SmlProtocol, SmlPrecision
-lines = ["meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000",
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
+lines = ["meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
]
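Note that these rows carry microsecond timestamps (three more digits than the millisecond timestamps in the JSON and telnet examples), so the precision passed to the schemaless insert must match. A standalone Python sketch (no `taos` dependency; assumes no escaped characters) parsing one such row:

```python
def parse_line(line: str):
    """Split one InfluxDB line-protocol row into measurement,
    tags, fields and timestamp (escaping not handled)."""
    head, field_set, ts = line.rsplit(" ", 2)
    measurement, *tag_pairs = head.split(",")
    tags = dict(pair.split("=", 1) for pair in tag_pairs)
    fields = dict(pair.split("=", 1) for pair in field_set.split(","))
    return measurement, tags, fields, int(ts)

m, tags, fields, ts = parse_line(
    "meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000"
)
assert m == "meters"
assert tags["location"] == "California.LosAngeles"
assert fields["voltage"] == "221"
assert ts == 1648432611249000  # microsecond epoch
```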
diff --git a/docs-examples/python/multi_bind_example.py b/docs-examples/python/multi_bind_example.py
index 1714121d72705ab8d619a41f3463af4aa3193871..205ba69fb267ae1781415e4f0995b41f908ceb17 100644
--- a/docs-examples/python/multi_bind_example.py
+++ b/docs-examples/python/multi_bind_example.py
@@ -3,10 +3,10 @@ from datetime import datetime
# ANCHOR: bind_batch
table_tags = {
- "d1001": ('Beijing.Chaoyang', 2),
- "d1002": ('Beijing.Chaoyang', 3),
- "d1003": ('Beijing.Haidian', 2),
- "d1004": ('Beijing.Haidian', 3)
+ "d1001": ('California.SanFrancisco', 2),
+ "d1002": ('California.SanFrancisco', 3),
+ "d1003": ('California.LosAngeles', 2),
+ "d1004": ('California.LosAngeles', 3)
}
table_values = {
diff --git a/docs-examples/python/native_insert_example.py b/docs-examples/python/native_insert_example.py
index 94d4888a8f5330b9e39d5ae051fcb68f9825505f..3b6b73cb2236c8d9d11019349f99f79135a5c1d6 100644
--- a/docs-examples/python/native_insert_example.py
+++ b/docs-examples/python/native_insert_example.py
@@ -1,13 +1,13 @@
import taos
-lines = ["d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,Beijing.Chaoyang,2",
- "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,Beijing.Haidian,3",
- "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,Beijing.Haidian,2",
- "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,Beijing.Haidian,3",
- "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,Beijing.Chaoyang,3",
- "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,Beijing.Chaoyang,2",
- "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,Beijing.Chaoyang,2",
- "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,Beijing.Haidian,2"]
+lines = ["d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
+ "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,California.LosAngeles,3",
+ "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2",
+ "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,California.LosAngeles,3",
+ "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,California.SanFrancisco,3",
+ "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2",
+ "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,California.SanFrancisco,2",
+ "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,California.LosAngeles,2"]
def get_connection() -> taos.TaosConnection:
@@ -25,10 +25,10 @@ def create_stable(conn: taos.TaosConnection):
# The generated SQL is:
-# INSERT INTO d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
-# d1002 USING meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
-# d1003 USING meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
-# d1004 USING meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
+# INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+# d1002 USING meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+# d1003 USING meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+# d1004 USING meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
def get_sql():
global lines
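The commented SQL above groups the CSV rows in `lines` by table before emitting one multi-table INSERT. A rough sketch of that grouping step, as a hypothetical stand-in for the example's `get_sql()` (exact whitespace of the real output may differ):

```python
def lines_to_sql(lines):
    """Group CSV rows (table,ts,current,voltage,phase,location,groupid) by
    table/tags and emit one multi-table INSERT statement."""
    groups = {}  # (table, tags) -> list of VALUES tuples
    for row in lines:
        table, ts, current, voltage, phase, location, groupid = row.split(",")
        key = (table, f"({location}, {groupid})")
        groups.setdefault(key, []).append(f"('{ts}', {current}, {voltage}, {phase})")
    clauses = [f"{table} USING meters TAGS{tags} VALUES " + " ".join(vals)
               for (table, tags), vals in groups.items()]
    return "INSERT INTO " + "\n".join(clauses)

sql = lines_to_sql(["d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
                    "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2"])
```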
diff --git a/docs-examples/python/query_example.py b/docs-examples/python/query_example.py
index 6d33c49c968d9210b475931b5d8cecca0ceff3e3..de5f26784cbd1f523c996458f326ecb90c778da3 100644
--- a/docs-examples/python/query_example.py
+++ b/docs-examples/python/query_example.py
@@ -14,8 +14,8 @@ def query_api_demo(conn: taos.TaosConnection):
# field count: 7
# meta of files[1]: {name: ts, type: 9, bytes: 8}
# ======================Iterate on result=========================
-# ('d1001', datetime.datetime(2018, 10, 3, 14, 38, 5), 10.300000190734863, 219, 0.3100000023841858, 'Beijing.Chaoyang', 2)
-# ('d1001', datetime.datetime(2018, 10, 3, 14, 38, 15), 12.600000381469727, 218, 0.33000001311302185, 'Beijing.Chaoyang', 2)
+# ('d1001', datetime.datetime(2018, 10, 3, 14, 38, 5), 10.300000190734863, 219, 0.3100000023841858, 'California.SanFrancisco', 2)
+# ('d1001', datetime.datetime(2018, 10, 3, 14, 38, 15), 12.600000381469727, 218, 0.33000001311302185, 'California.SanFrancisco', 2)
# ANCHOR_END: iter
# ANCHOR: fetch_all
diff --git a/docs-examples/python/telnet_line_protocol_example.py b/docs-examples/python/telnet_line_protocol_example.py
index 072835109ee238940e6fe5880b72b2b04e0157fa..d812e186af86be6811ee7774f10458e46df1f39f 100644
--- a/docs-examples/python/telnet_line_protocol_example.py
+++ b/docs-examples/python/telnet_line_protocol_example.py
@@ -2,14 +2,14 @@ import taos
from taos import SmlProtocol, SmlPrecision
# format: <metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
-lines = ["meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+lines = ["meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
]
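Each entry above is an OpenTSDB telnet record, `<metric> <timestamp> <value> <tagk>=<tagv> ...`. As a minimal sketch (the helper `make_telnet_line` is illustrative only, not part of the `taos` package):

```python
def make_telnet_line(metric, ts, value, tags):
    """Render one OpenTSDB telnet record: metric timestamp value tagk=tagv ..."""
    tag_str = " ".join(f"{k}={v}" for k, v in tags.items())
    return f"{metric} {ts} {value} {tag_str}"

line = make_telnet_line("meters.current", 1648432611249, 10.3,
                        {"location": "California.SanFrancisco", "groupid": 2})
# line equals the first entry of `lines` above
```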
diff --git a/docs-examples/rust/nativeexample/examples/stmt_example.rs b/docs-examples/rust/nativeexample/examples/stmt_example.rs
index a791a4135984a33dded145e8175d7ade57de8d77..190f8c1ef6d50a8e9c925178c1a9d31c22e3d4df 100644
--- a/docs-examples/rust/nativeexample/examples/stmt_example.rs
+++ b/docs-examples/rust/nativeexample/examples/stmt_example.rs
@@ -12,7 +12,7 @@ async fn main() -> Result<(), Error> {
stmt.set_tbname_tags(
"d1001",
[
- Field::Binary(BString::from("Beijing.Chaoyang")),
+ Field::Binary(BString::from("California.SanFrancisco")),
Field::Int(2),
],
)?;
diff --git a/docs-examples/rust/restexample/examples/insert_example.rs b/docs-examples/rust/restexample/examples/insert_example.rs
index d7acc98d096fb3cd6bea22d6c5f6f0f5caea50af..9261536f627c297fc707708f88f57eed647dbf3e 100644
--- a/docs-examples/rust/restexample/examples/insert_example.rs
+++ b/docs-examples/rust/restexample/examples/insert_example.rs
@@ -5,10 +5,10 @@ async fn main() -> Result<(), Error> {
let taos = TaosCfg::default().connect().expect("fail to connect");
taos.create_database("power").await?;
taos.exec("CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)").await?;
- let sql = "INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
- power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
- power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
- power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
+ let sql = "INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+ power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+ power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+ power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
let result = taos.query(sql).await?;
println!("{:?}", result);
Ok(())
diff --git a/docs-examples/rust/schemalessexample/examples/influxdb_line_example.rs b/docs-examples/rust/schemalessexample/examples/influxdb_line_example.rs
index e93888cc83d12f3bec7370a66e8a85d38cec42ad..64d1a3c9ac6037c16e3e1c3be0258e19cce632a0 100644
--- a/docs-examples/rust/schemalessexample/examples/influxdb_line_example.rs
+++ b/docs-examples/rust/schemalessexample/examples/influxdb_line_example.rs
@@ -5,10 +5,10 @@ fn main() {
let taos = TaosCfg::default().connect().expect("fail to connect");
taos.raw_query("CREATE DATABASE test").unwrap();
taos.raw_query("USE test").unwrap();
- let lines = ["meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"];
+ let lines = ["meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"];
let affected_rows = taos
.schemaless_insert(
&lines,
diff --git a/docs-examples/rust/schemalessexample/examples/opentsdb_json_example.rs b/docs-examples/rust/schemalessexample/examples/opentsdb_json_example.rs
index 1d66bd1f2b1bcbe82dc3ee3e8e25ea4c521c81f0..e61691596704c8aaf979081429802df6e5aa86f9 100644
--- a/docs-examples/rust/schemalessexample/examples/opentsdb_json_example.rs
+++ b/docs-examples/rust/schemalessexample/examples/opentsdb_json_example.rs
@@ -6,10 +6,10 @@ fn main() {
taos.raw_query("CREATE DATABASE test").unwrap();
taos.raw_query("USE test").unwrap();
let lines = [
- r#"[{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
- {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "Beijing.Haidian", "groupid": 1}},
- {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
- {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "Beijing.Haidian", "groupid": 1}}]"#,
+ r#"[{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
+ {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "California.LosAngeles", "groupid": 1}},
+ {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
+ {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]"#,
];
let affected_rows = taos
diff --git a/docs-examples/rust/schemalessexample/examples/opentsdb_telnet_example.rs b/docs-examples/rust/schemalessexample/examples/opentsdb_telnet_example.rs
index 18d7500714d9e41b1bebd490199d296ead3dc7c4..c8cab7655a24806e5c7659af80e83da383539c55 100644
--- a/docs-examples/rust/schemalessexample/examples/opentsdb_telnet_example.rs
+++ b/docs-examples/rust/schemalessexample/examples/opentsdb_telnet_example.rs
@@ -6,14 +6,14 @@ fn main() {
taos.raw_query("CREATE DATABASE test").unwrap();
taos.raw_query("USE test").unwrap();
let lines = [
- "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+ "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
];
let affected_rows = taos
.schemaless_insert(
diff --git a/include/common/tmsg.h b/include/common/tmsg.h
index cd5df2f78a4d1e50619e08e8e16fee16c5669154..4436c84830e5956c8d02aac1c13ecdd3a31c64b9 100644
--- a/include/common/tmsg.h
+++ b/include/common/tmsg.h
@@ -1667,6 +1667,10 @@ typedef struct {
int32_t tSerializeSMDropCgroupReq(void* buf, int32_t bufLen, SMDropCgroupReq* pReq);
int32_t tDeserializeSMDropCgroupReq(void* buf, int32_t bufLen, SMDropCgroupReq* pReq);
+typedef struct {
+ int8_t reserved;
+} SMDropCgroupRsp;
+
typedef struct {
char name[TSDB_TABLE_FNAME_LEN];
int8_t alterType;
@@ -1728,9 +1732,9 @@ int32_t tDecodeSVDropStbReq(SDecoder* pCoder, SVDropStbReq* pReq);
#define TD_CREATE_IF_NOT_EXISTS 0x1
typedef struct SVCreateTbReq {
int32_t flags;
+ char* name;
tb_uid_t uid;
int64_t ctime;
- char* name;
int32_t ttl;
int8_t type;
union {
diff --git a/include/common/tmsgdef.h b/include/common/tmsgdef.h
index 81c907ca9e015e74894c1e26f804c43d4c5b8d25..51a15c1489cf94d755dfdda386edae8c2ae4a708 100644
--- a/include/common/tmsgdef.h
+++ b/include/common/tmsgdef.h
@@ -151,6 +151,7 @@ enum {
TD_DEF_MSG_TYPE(TDMT_MND_MQ_CONSUMER_LOST, "mnode-mq-consumer-lost", SMqConsumerLostMsg, NULL)
TD_DEF_MSG_TYPE(TDMT_MND_MQ_CONSUMER_RECOVER, "mnode-mq-consumer-recover", SMqConsumerRecoverMsg, NULL)
TD_DEF_MSG_TYPE(TDMT_MND_MQ_DO_REBALANCE, "mnode-mq-do-rebalance", SMqDoRebalanceMsg, NULL)
+  TD_DEF_MSG_TYPE(TDMT_MND_MQ_DROP_CGROUP, "mnode-mq-drop-cgroup", SMDropCgroupReq, SMDropCgroupRsp)
TD_DEF_MSG_TYPE(TDMT_MND_MQ_COMMIT_OFFSET, "mnode-mq-commit-offset", SMqCMCommitOffsetReq, SMqCMCommitOffsetRsp)
TD_DEF_MSG_TYPE(TDMT_MND_CREATE_STREAM, "mnode-create-stream", SCMCreateStreamReq, SCMCreateStreamRsp)
TD_DEF_MSG_TYPE(TDMT_MND_ALTER_STREAM, "mnode-alter-stream", NULL, NULL)
diff --git a/include/libs/executor/executor.h b/include/libs/executor/executor.h
index 9cafb4ee04543f1978f68c982a5208fcde2c25a4..5379a8f712cde79c29ca23e2baac2ac4985450e7 100644
--- a/include/libs/executor/executor.h
+++ b/include/libs/executor/executor.h
@@ -61,7 +61,7 @@ qTaskInfo_t qCreateStreamExecTaskInfo(void* msg, void* streamReadHandle);
* @param type
* @return
*/
-int32_t qSetStreamInput(qTaskInfo_t tinfo, const void* input, int32_t type);
+int32_t qSetStreamInput(qTaskInfo_t tinfo, const void* input, int32_t type, bool assignUid);
/**
* Set multiple input data blocks for the stream scan.
@@ -71,7 +71,7 @@ int32_t qSetStreamInput(qTaskInfo_t tinfo, const void* input, int32_t type);
* @param type
* @return
*/
-int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numOfBlocks, int32_t type);
+int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numOfBlocks, int32_t type, bool assignUid);
/**
* Update the table id list, add or remove.
diff --git a/include/libs/transport/trpc.h b/include/libs/transport/trpc.h
index 70977bba871dd109d8e3d7a9b747df2e5435fa58..839194da94e5a184ab11b446077e334f085d68b5 100644
--- a/include/libs/transport/trpc.h
+++ b/include/libs/transport/trpc.h
@@ -124,6 +124,7 @@ void rpcSendRedirectRsp(void *pConn, const SEpSet *pEpSet);
void rpcSendRequestWithCtx(void *thandle, const SEpSet *pEpSet, SRpcMsg *pMsg, int64_t *rid, SRpcCtx *ctx);
int32_t rpcGetConnInfo(void *thandle, SRpcConnInfo *pInfo);
void rpcSendRecv(void *shandle, SEpSet *pEpSet, SRpcMsg *pReq, SRpcMsg *pRsp);
+void rpcSetDefaultAddr(void *thandle, const char *ip, const char *fqdn);
#ifdef __cplusplus
}
diff --git a/include/util/taoserror.h b/include/util/taoserror.h
index a924719cf9d1355ce745267b39481b9dbd349faf..2b2ce31673c5fc261187ea4a3dfc120228aa8400 100644
--- a/include/util/taoserror.h
+++ b/include/util/taoserror.h
@@ -268,6 +268,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_MND_OFFSET_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x03E9)
#define TSDB_CODE_MND_CONSUMER_NOT_READY TAOS_DEF_ERROR_CODE(0, 0x03EA)
#define TSDB_CODE_MND_TOPIC_SUBSCRIBED TAOS_DEF_ERROR_CODE(0, 0x03EB)
+#define TSDB_CODE_MND_CGROUP_USED TAOS_DEF_ERROR_CODE(0, 0x03EC)
// mnode-stream
#define TSDB_CODE_MND_STREAM_ALREADY_EXIST TAOS_DEF_ERROR_CODE(0, 0x03F0)
diff --git a/include/util/tlog.h b/include/util/tlog.h
index 47ac01aacfafc71d5f2ebd48f16c0d22b1c2d0eb..988d9c6890832d17a7e9acd2b496e3ef6ba63d90 100644
--- a/include/util/tlog.h
+++ b/include/util/tlog.h
@@ -62,6 +62,7 @@ extern int32_t fsDebugFlag;
extern int32_t metaDebugFlag;
extern int32_t fnDebugFlag;
extern int32_t smaDebugFlag;
+extern int32_t idxDebugFlag;
int32_t taosInitLog(const char *logName, int32_t maxFiles);
void taosCloseLog();
diff --git a/source/client/src/clientMsgHandler.c b/source/client/src/clientMsgHandler.c
index dfce01dd6356f19da8dce1b8de9c2eb9e9ca42e4..f15315fe6055127f13b15849f897d8edda5a381b 100644
--- a/source/client/src/clientMsgHandler.c
+++ b/source/client/src/clientMsgHandler.c
@@ -58,7 +58,12 @@ int32_t processConnectRsp(void* param, const SDataBuf* pMsg, int32_t code) {
return code;
}
- if (connectRsp.dnodeNum > 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &connectRsp.epSet)) {
+ if (connectRsp.dnodeNum == 1) {
+ SEpSet srcEpSet = getEpSet_s(&pTscObj->pAppInfo->mgmtEp);
+ SEpSet dstEpSet = connectRsp.epSet;
+ rpcSetDefaultAddr(pTscObj->pAppInfo->pTransporter, srcEpSet.eps[srcEpSet.inUse].fqdn,
+ dstEpSet.eps[dstEpSet.inUse].fqdn);
+ } else if (connectRsp.dnodeNum > 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &connectRsp.epSet)) {
updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, &connectRsp.epSet);
}
@@ -126,9 +131,10 @@ int32_t processUseDbRsp(void* param, const SDataBuf* pMsg, int32_t code) {
if (usedbRsp.vgVersion >= 0) {
uint64_t clusterId = pRequest->pTscObj->pAppInfo->clusterId;
- int32_t code1 = catalogGetHandle(clusterId, &pCatalog);
+ int32_t code1 = catalogGetHandle(clusterId, &pCatalog);
if (code1 != TSDB_CODE_SUCCESS) {
- tscWarn("0x%" PRIx64 "catalogGetHandle failed, clusterId:%" PRIx64 ", error:%s", pRequest->requestId, clusterId, tstrerror(code1));
+ tscWarn("0x%" PRIx64 "catalogGetHandle failed, clusterId:%" PRIx64 ", error:%s", pRequest->requestId, clusterId,
+ tstrerror(code1));
} else {
catalogRemoveDB(pCatalog, usedbRsp.db, usedbRsp.uid);
}
@@ -158,7 +164,7 @@ int32_t processUseDbRsp(void* param, const SDataBuf* pMsg, int32_t code) {
if (output.dbVgroup) taosHashCleanup(output.dbVgroup->vgHash);
taosMemoryFreeClear(output.dbVgroup);
- tscError("0x%" PRIx64" failed to build use db output since %s", pRequest->requestId, terrstr());
+ tscError("0x%" PRIx64 " failed to build use db output since %s", pRequest->requestId, terrstr());
} else if (output.dbVgroup) {
struct SCatalog* pCatalog = NULL;
diff --git a/source/common/src/tglobal.c b/source/common/src/tglobal.c
index 1b61a0bc606aa9fd479cf996668756d2b88f4702..d0a2ddd9bb6379d702b8c4d46c60085d3fa05b0c 100644
--- a/source/common/src/tglobal.c
+++ b/source/common/src/tglobal.c
@@ -79,9 +79,10 @@ uint16_t tsTelemPort = 80;
// schemaless
char tsSmlTagName[TSDB_COL_NAME_LEN] = "_tag_null";
-char tsSmlChildTableName[TSDB_TABLE_NAME_LEN] = ""; //user defined child table name can be specified in tag value.
- //If set to empty system will generate table name using MD5 hash.
-bool tsSmlDataFormat = true; // true means that the name and order of cols in each line are the same(only for influx protocol)
+char tsSmlChildTableName[TSDB_TABLE_NAME_LEN] = ""; // user defined child table name can be specified in tag value.
+ // If set to empty system will generate table name using MD5 hash.
+bool tsSmlDataFormat =
+ true; // true means that the name and order of cols in each line are the same(only for influx protocol)
// query
int32_t tsQueryPolicy = 1;
@@ -292,6 +293,7 @@ int32_t taosAddClientLogCfg(SConfig *pCfg) {
if (cfgAddInt32(pCfg, "jniDebugFlag", jniDebugFlag, 0, 255, 1) != 0) return -1;
if (cfgAddInt32(pCfg, "simDebugFlag", 143, 0, 255, 1) != 0) return -1;
if (cfgAddInt32(pCfg, "debugFlag", 0, 0, 255, 1) != 0) return -1;
+ if (cfgAddInt32(pCfg, "idxDebugFlag", 0, 0, 255, 1) != 0) return -1;
return 0;
}
@@ -307,6 +309,7 @@ static int32_t taosAddServerLogCfg(SConfig *pCfg) {
if (cfgAddInt32(pCfg, "fsDebugFlag", fsDebugFlag, 0, 255, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "fnDebugFlag", fnDebugFlag, 0, 255, 0) != 0) return -1;
if (cfgAddInt32(pCfg, "smaDebugFlag", smaDebugFlag, 0, 255, 0) != 0) return -1;
+ if (cfgAddInt32(pCfg, "idxDebugFlag", idxDebugFlag, 0, 255, 0) != 0) return -1;
return 0;
}
@@ -479,6 +482,7 @@ static void taosSetClientLogCfg(SConfig *pCfg) {
rpcDebugFlag = cfgGetItem(pCfg, "rpcDebugFlag")->i32;
tmrDebugFlag = cfgGetItem(pCfg, "tmrDebugFlag")->i32;
jniDebugFlag = cfgGetItem(pCfg, "jniDebugFlag")->i32;
+ idxDebugFlag = cfgGetItem(pCfg, "idxDebugFlag")->i32;
}
static void taosSetServerLogCfg(SConfig *pCfg) {
@@ -493,6 +497,7 @@ static void taosSetServerLogCfg(SConfig *pCfg) {
fsDebugFlag = cfgGetItem(pCfg, "fsDebugFlag")->i32;
fnDebugFlag = cfgGetItem(pCfg, "fnDebugFlag")->i32;
smaDebugFlag = cfgGetItem(pCfg, "smaDebugFlag")->i32;
+ idxDebugFlag = cfgGetItem(pCfg, "idxDebugFlag")->i32;
}
static int32_t taosSetClientCfg(SConfig *pCfg) {
diff --git a/source/common/src/tmsg.c b/source/common/src/tmsg.c
index 58c9243999e8b3a15d59bf1abdd37ee44b5e6cd5..b74e1d72c6936708d3ebe36b63aa8a29949251f6 100644
--- a/source/common/src/tmsg.c
+++ b/source/common/src/tmsg.c
@@ -3836,10 +3836,9 @@ int tEncodeSVCreateTbReq(SEncoder *pCoder, const SVCreateTbReq *pReq) {
if (tStartEncode(pCoder) < 0) return -1;
if (tEncodeI32v(pCoder, pReq->flags) < 0) return -1;
+ if (tEncodeCStr(pCoder, pReq->name) < 0) return -1;
if (tEncodeI64(pCoder, pReq->uid) < 0) return -1;
if (tEncodeI64(pCoder, pReq->ctime) < 0) return -1;
-
- if (tEncodeCStr(pCoder, pReq->name) < 0) return -1;
if (tEncodeI32(pCoder, pReq->ttl) < 0) return -1;
if (tEncodeI8(pCoder, pReq->type) < 0) return -1;
@@ -3862,10 +3861,9 @@ int tDecodeSVCreateTbReq(SDecoder *pCoder, SVCreateTbReq *pReq) {
if (tStartDecode(pCoder) < 0) return -1;
if (tDecodeI32v(pCoder, &pReq->flags) < 0) return -1;
+ if (tDecodeCStr(pCoder, &pReq->name) < 0) return -1;
if (tDecodeI64(pCoder, &pReq->uid) < 0) return -1;
if (tDecodeI64(pCoder, &pReq->ctime) < 0) return -1;
-
- if (tDecodeCStr(pCoder, &pReq->name) < 0) return -1;
if (tDecodeI32(pCoder, &pReq->ttl) < 0) return -1;
if (tDecodeI8(pCoder, &pReq->type) < 0) return -1;
diff --git a/source/dnode/mnode/impl/inc/mndDef.h b/source/dnode/mnode/impl/inc/mndDef.h
index 6318f2e3f2b790b5aff25e774c762a8bb5f4c4da..9c62214b0eec6e0736dc20fb7653ca91188bfe84 100644
--- a/source/dnode/mnode/impl/inc/mndDef.h
+++ b/source/dnode/mnode/impl/inc/mndDef.h
@@ -94,6 +94,7 @@ typedef enum {
TRN_TYPE_ALTER_STREAM = 1027,
TRN_TYPE_CONSUMER_LOST = 1028,
TRN_TYPE_CONSUMER_RECOVER = 1029,
+ TRN_TYPE_DROP_CGROUP = 1030,
TRN_TYPE_BASIC_SCOPE_END,
TRN_TYPE_GLOBAL_SCOPE = 2000,
diff --git a/source/dnode/mnode/impl/inc/mndOffset.h b/source/dnode/mnode/impl/inc/mndOffset.h
index 900181858bd724873ea948d450e830cc83643463..f7569b964875bbffe90c8fc5525fda8f68b688b8 100644
--- a/source/dnode/mnode/impl/inc/mndOffset.h
+++ b/source/dnode/mnode/impl/inc/mndOffset.h
@@ -39,6 +39,7 @@ static FORCE_INLINE int32_t mndMakePartitionKey(char *key, const char *cgroup, c
int32_t mndDropOffsetByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb);
int32_t mndDropOffsetByTopic(SMnode *pMnode, STrans *pTrans, const char *topic);
+int32_t mndDropOffsetBySubKey(SMnode *pMnode, STrans *pTrans, const char *subKey);
bool mndOffsetFromTopic(SMqOffsetObj *pOffset, const char *topic);
diff --git a/source/dnode/mnode/impl/inc/mndSubscribe.h b/source/dnode/mnode/impl/inc/mndSubscribe.h
index 50cede62ce424ae855f46ba0f359b5088058e4d1..d91c2bd4c3f69063420f3a775f6183e3eaa3824d 100644
--- a/source/dnode/mnode/impl/inc/mndSubscribe.h
+++ b/source/dnode/mnode/impl/inc/mndSubscribe.h
@@ -33,6 +33,7 @@ int32_t mndMakeSubscribeKey(char *key, const char *cgroup, const char *topicName
int32_t mndDropSubByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb);
int32_t mndDropSubByTopic(SMnode *pMnode, STrans *pTrans, const char *topic);
+int32_t mndSetDropSubCommitLogs(SMnode *pMnode, STrans *pTrans, SMqSubscribeObj *pSub);
#ifdef __cplusplus
}
diff --git a/source/dnode/mnode/impl/src/mndOffset.c b/source/dnode/mnode/impl/src/mndOffset.c
index dca07f6a6d2910630a939d119b6d21e287112866..01516d03f28f168b71ea5272bf983c181a059bcd 100644
--- a/source/dnode/mnode/impl/src/mndOffset.c
+++ b/source/dnode/mnode/impl/src/mndOffset.c
@@ -58,6 +58,12 @@ bool mndOffsetFromTopic(SMqOffsetObj *pOffset, const char *topic) {
return false;
}
+bool mndOffsetFromSubKey(SMqOffsetObj *pOffset, const char *subKey) {
+ int32_t i = 0;
+ while (pOffset->key[i] != ':') i++;
+ if (strcmp(&pOffset->key[i + 1], subKey) == 0) return true;
+ return false;
+}
SSdbRaw *mndOffsetActionEncode(SMqOffsetObj *pOffset) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
void *buf = NULL;
@@ -303,7 +309,35 @@ int32_t mndDropOffsetByTopic(SMnode *pMnode, STrans *pTrans, const char *topic)
continue;
}
- if (mndSetDropOffsetRedoLogs(pMnode, pTrans, pOffset) < 0) {
+ if (mndSetDropOffsetCommitLogs(pMnode, pTrans, pOffset) < 0) {
+ sdbRelease(pSdb, pOffset);
+ goto END;
+ }
+
+ sdbRelease(pSdb, pOffset);
+ }
+
+ code = 0;
+END:
+ return code;
+}
+
+int32_t mndDropOffsetBySubKey(SMnode *pMnode, STrans *pTrans, const char *subKey) {
+ int32_t code = -1;
+ SSdb *pSdb = pMnode->pSdb;
+
+ void *pIter = NULL;
+ SMqOffsetObj *pOffset = NULL;
+ while (1) {
+ pIter = sdbFetch(pSdb, SDB_OFFSET, pIter, (void **)&pOffset);
+ if (pIter == NULL) break;
+
+ if (!mndOffsetFromSubKey(pOffset, subKey)) {
+ sdbRelease(pSdb, pOffset);
+ continue;
+ }
+
+ if (mndSetDropOffsetCommitLogs(pMnode, pTrans, pOffset) < 0) {
sdbRelease(pSdb, pOffset);
goto END;
}
diff --git a/source/dnode/mnode/impl/src/mndSubscribe.c b/source/dnode/mnode/impl/src/mndSubscribe.c
index 0ece5d29e525a649e8a02b0890eb7ae951008f7b..17cf5d43b575dbe2840bec24a29e97dc399ccd7d 100644
--- a/source/dnode/mnode/impl/src/mndSubscribe.c
+++ b/source/dnode/mnode/impl/src/mndSubscribe.c
@@ -42,6 +42,7 @@ static int32_t mndSubActionDelete(SSdb *pSdb, SMqSubscribeObj *);
static int32_t mndSubActionUpdate(SSdb *pSdb, SMqSubscribeObj *pOldSub, SMqSubscribeObj *pNewSub);
static int32_t mndProcessRebalanceReq(SRpcMsg *pMsg);
+static int32_t mndProcessDropCgroupReq(SRpcMsg *pMsg);
static int32_t mndProcessSubscribeInternalRsp(SRpcMsg *pMsg);
static int32_t mndRetrieveSubscribe(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBlock, int32_t rows);
@@ -75,6 +76,7 @@ int32_t mndInitSubscribe(SMnode *pMnode) {
mndSetMsgHandle(pMnode, TDMT_VND_MQ_VG_CHANGE_RSP, mndProcessSubscribeInternalRsp);
mndSetMsgHandle(pMnode, TDMT_VND_MQ_VG_DELETE_RSP, mndProcessSubscribeInternalRsp);
mndSetMsgHandle(pMnode, TDMT_MND_MQ_DO_REBALANCE, mndProcessRebalanceReq);
+  mndSetMsgHandle(pMnode, TDMT_MND_MQ_DROP_CGROUP, mndProcessDropCgroupReq);
mndAddShowRetrieveHandle(pMnode, TSDB_MGMT_TABLE_SUBSCRIPTIONS, mndRetrieveSubscribe);
mndAddShowFreeIterHandle(pMnode, TSDB_MGMT_TABLE_TOPICS, mndCancelGetNextSubscribe);
@@ -581,6 +584,57 @@ static int32_t mndProcessRebalanceReq(SRpcMsg *pMsg) {
return 0;
}
+static int32_t mndProcessDropCgroupReq(SRpcMsg *pReq) {
+ SMnode *pMnode = pReq->info.node;
+ /*SSdb *pSdb = pMnode->pSdb;*/
+ SMDropCgroupReq dropReq = {0};
+
+ if (tDeserializeSMDropCgroupReq(pReq->pCont, pReq->contLen, &dropReq) != 0) {
+ terrno = TSDB_CODE_INVALID_MSG;
+ return -1;
+ }
+
+ SMqSubscribeObj *pSub = mndAcquireSubscribe(pMnode, dropReq.cgroup, dropReq.topic);
+ if (pSub == NULL) {
+ if (dropReq.igNotExists) {
+ mDebug("cgroup:%s on topic:%s, not exist, ignore not exist is set", dropReq.cgroup, dropReq.topic);
+ return 0;
+ } else {
+ terrno = TSDB_CODE_MND_SUBSCRIBE_NOT_EXIST;
+ mError("topic:%s, cgroup:%s, failed to drop since %s", dropReq.topic, dropReq.cgroup, terrstr());
+ return -1;
+ }
+ }
+
+  if (taosHashGetSize(pSub->consumerHash) != 0) {
+    terrno = TSDB_CODE_MND_CGROUP_USED;
+    mError("cgroup:%s on topic:%s, failed to drop since %s", dropReq.cgroup, dropReq.topic, terrstr());
+    mndReleaseSubscribe(pMnode, pSub);
+    return -1;
+  }
+
+  STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_DROP_CGROUP, pReq);
+  if (pTrans == NULL) {
+    mError("cgroup:%s on topic:%s, failed to drop since %s", dropReq.cgroup, dropReq.topic, terrstr());
+    mndReleaseSubscribe(pMnode, pSub);
+    return -1;
+  }
+
+ mDebug("trans:%d, used to drop cgroup:%s on topic %s", pTrans->id, dropReq.cgroup, dropReq.topic);
+
+  if (mndDropOffsetBySubKey(pMnode, pTrans, pSub->key) < 0) {
+    mError("cgroup:%s on topic:%s, failed to drop offset since %s", dropReq.cgroup, dropReq.topic, terrstr());
+    mndReleaseSubscribe(pMnode, pSub);
+    mndTransDrop(pTrans);
+    return -1;
+  }
+
+  if (mndSetDropSubCommitLogs(pMnode, pTrans, pSub) < 0) {
+    mError("cgroup:%s on topic:%s, failed to drop since %s", dropReq.cgroup, dropReq.topic, terrstr());
+    mndReleaseSubscribe(pMnode, pSub);
+    mndTransDrop(pTrans);
+    return -1;
+  }
+
+  if (mndTransPrepare(pMnode, pTrans) != 0) {
+    mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
+    mndReleaseSubscribe(pMnode, pSub);
+    mndTransDrop(pTrans);
+    return -1;
+  }
+
+  mndReleaseSubscribe(pMnode, pSub);
+  mndTransDrop(pTrans);
+
+  return TSDB_CODE_ACTION_IN_PROGRESS;
+}
+
void mndCleanupSubscribe(SMnode *pMnode) {}
static SSdbRaw *mndSubActionEncode(SMqSubscribeObj *pSub) {
@@ -735,7 +789,7 @@ static int32_t mndSetDropSubRedoLogs(SMnode *pMnode, STrans *pTrans, SMqSubscrib
return 0;
}
-static int32_t mndSetDropSubCommitLogs(SMnode *pMnode, STrans *pTrans, SMqSubscribeObj *pSub) {
+int32_t mndSetDropSubCommitLogs(SMnode *pMnode, STrans *pTrans, SMqSubscribeObj *pSub) {
SSdbRaw *pCommitRaw = mndSubActionEncode(pSub);
if (pCommitRaw == NULL) return -1;
if (mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) return -1;
diff --git a/source/dnode/vnode/src/inc/vnodeInt.h b/source/dnode/vnode/src/inc/vnodeInt.h
index d38ff716abecd9b553823a08299709e204392353..ba25c5e2866995d71c1c7cdee2473a87b609d2fe 100644
--- a/source/dnode/vnode/src/inc/vnodeInt.h
+++ b/source/dnode/vnode/src/inc/vnodeInt.h
@@ -91,6 +91,7 @@ int metaAlterTable(SMeta* pMeta, int64_t version, SVAlterTbReq* pReq
SSchemaWrapper* metaGetTableSchema(SMeta* pMeta, tb_uid_t uid, int32_t sver, bool isinline);
STSchema* metaGetTbTSchema(SMeta* pMeta, tb_uid_t uid, int32_t sver);
int metaGetTableEntryByName(SMetaReader* pReader, const char* name);
+tb_uid_t metaGetTableEntryUidByName(SMeta* pMeta, const char* name);
int metaGetTbNum(SMeta* pMeta);
SMCtbCursor* metaOpenCtbCursor(SMeta* pMeta, tb_uid_t uid);
void metaCloseCtbCursor(SMCtbCursor* pCtbCur);
diff --git a/source/dnode/vnode/src/meta/metaQuery.c b/source/dnode/vnode/src/meta/metaQuery.c
index c19190e68a6bd54a106a9de1278d8870989864dc..184b640bddb82122b95abdcf9b3934d27a1d860c 100644
--- a/source/dnode/vnode/src/meta/metaQuery.c
+++ b/source/dnode/vnode/src/meta/metaQuery.c
@@ -81,6 +81,19 @@ int metaGetTableEntryByName(SMetaReader *pReader, const char *name) {
return metaGetTableEntryByUid(pReader, uid);
}
+tb_uid_t metaGetTableEntryUidByName(SMeta *pMeta, const char *name) {
+ void *pData = NULL;
+ int nData = 0;
+ tb_uid_t uid = 0;
+
+ if (tdbTbGet(pMeta->pNameIdx, name, strlen(name) + 1, &pData, &nData) == 0) {
+ uid = *(tb_uid_t *)pData;
+ tdbFree(pData);
+ }
+
+ return uid;
+}
+
int metaReadNext(SMetaReader *pReader) {
SMeta *pMeta = pReader->pMeta;
diff --git a/source/dnode/vnode/src/sma/smaRollup.c b/source/dnode/vnode/src/sma/smaRollup.c
index 88af049d0bd298e58e51286e0980fd13a7872734..df10d9d53361b013b267fdeb5cb445f4c3575bfa 100644
--- a/source/dnode/vnode/src/sma/smaRollup.c
+++ b/source/dnode/vnode/src/sma/smaRollup.c
@@ -374,7 +374,7 @@ static FORCE_INLINE int32_t tdExecuteRSmaImpl(SSma *pSma, const void *pMsg, int3
smaDebug("vgId:%d execute rsma %" PRIi8 " task for qTaskInfo:%p suid:%" PRIu64, SMA_VID(pSma), level, taskInfo, suid);
- qSetStreamInput(taskInfo, pMsg, inputType);
+ qSetStreamInput(taskInfo, pMsg, inputType, true);
while (1) {
SSDataBlock *output = NULL;
uint64_t ts;
diff --git a/source/dnode/vnode/src/tq/tq.c b/source/dnode/vnode/src/tq/tq.c
index 9941b00ff73cbe95a9a7ead1baf619f910bdef18..192016166a4d386aa6873955d9411efe32df2412 100644
--- a/source/dnode/vnode/src/tq/tq.c
+++ b/source/dnode/vnode/src/tq/tq.c
@@ -264,7 +264,7 @@ int32_t tqPushMsgNew(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_
if (pExec->subType == TOPIC_SUB_TYPE__TABLE) {
qTaskInfo_t task = pExec->task[workerId];
ASSERT(task);
- qSetStreamInput(task, pReq, STREAM_DATA_TYPE_SUBMIT_BLOCK);
+ qSetStreamInput(task, pReq, STREAM_DATA_TYPE_SUBMIT_BLOCK, false);
while (1) {
SSDataBlock* pDataBlock = NULL;
uint64_t ts = 0;
@@ -510,7 +510,7 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg, int32_t workerId) {
if (pExec->subType == TOPIC_SUB_TYPE__TABLE) {
qTaskInfo_t task = pExec->task[workerId];
ASSERT(task);
- qSetStreamInput(task, pCont, STREAM_DATA_TYPE_SUBMIT_BLOCK);
+ qSetStreamInput(task, pCont, STREAM_DATA_TYPE_SUBMIT_BLOCK, false);
while (1) {
SSDataBlock* pDataBlock = NULL;
uint64_t ts = 0;
diff --git a/source/dnode/vnode/src/tsdb/tsdbSma.c b/source/dnode/vnode/src/tsdb/tsdbSma.c
index 18cf18dbad32bb1a780d098c0343c8c7894f700b..ea23858f3e592a9d675a8efc6f6db59c373ca025 100644
--- a/source/dnode/vnode/src/tsdb/tsdbSma.c
+++ b/source/dnode/vnode/src/tsdb/tsdbSma.c
@@ -2015,7 +2015,7 @@ static FORCE_INLINE int32_t tsdbExecuteRSmaImpl(STsdb *pTsdb, const void *pMsg,
tsdbDebug("vgId:%d execute rsma %" PRIi8 " task for qTaskInfo:%p suid:%" PRIu64, REPO_ID(pTsdb), level, taskInfo,
suid);
- qSetStreamInput(taskInfo, pMsg, inputType);
+ qSetStreamInput(taskInfo, pMsg, inputType, false);
while (1) {
SSDataBlock *output = NULL;
uint64_t ts;
diff --git a/source/dnode/vnode/src/vnd/vnodeSvr.c b/source/dnode/vnode/src/vnd/vnodeSvr.c
index 8d67dd334635e7072fc8ad5912a7e44c4c50012b..40f75804dd36e23c06f4bcc189f355aea6b71a56 100644
--- a/source/dnode/vnode/src/vnd/vnodeSvr.c
+++ b/source/dnode/vnode/src/vnd/vnodeSvr.c
@@ -38,9 +38,11 @@ int32_t vnodePreprocessReq(SVnode *pVnode, SRpcMsg *pMsg) {
tDecodeI32v(&dc, &nReqs);
for (int32_t iReq = 0; iReq < nReqs; iReq++) {
tb_uid_t uid = tGenIdPI64();
+ char *name = NULL;
tStartDecode(&dc);
tDecodeI32v(&dc, NULL);
+ tDecodeCStr(&dc, &name);
*(int64_t *)(dc.data + dc.pos) = uid;
*(int64_t *)(dc.data + dc.pos + 8) = ctime;
@@ -64,12 +66,18 @@ int32_t vnodePreprocessReq(SVnode *pVnode, SRpcMsg *pMsg) {
if (pBlock == NULL) break;
if (msgIter.schemaLen > 0) {
- uid = tGenIdPI64();
+ char *name = NULL;
tDecoderInit(&dc, pBlock->data, msgIter.schemaLen);
tStartDecode(&dc);
tDecodeI32v(&dc, NULL);
+ tDecodeCStr(&dc, &name);
+
+ uid = metaGetTableEntryUidByName(pVnode->pMeta, name);
+ if (uid == 0) {
+ uid = tGenIdPI64();
+ }
*(int64_t *)(dc.data + dc.pos) = uid;
*(int64_t *)(dc.data + dc.pos + 8) = ctime;
pBlock->uid = htobe64(uid);
diff --git a/source/libs/executor/inc/executorimpl.h b/source/libs/executor/inc/executorimpl.h
index dc613ddd86294a97408c6338cbb8a12a9d73859e..300149a22df641c361daadc675591f244d09097c 100644
--- a/source/libs/executor/inc/executorimpl.h
+++ b/source/libs/executor/inc/executorimpl.h
@@ -371,6 +371,7 @@ typedef struct SessionWindowSupporter {
SStreamAggSupporter* pStreamAggSup;
int64_t gap;
} SessionWindowSupporter;
+
typedef struct SStreamBlockScanInfo {
SArray* pBlockLists; // multiple SSDatablock.
SSDataBlock* pRes; // result SSDataBlock
@@ -379,7 +380,6 @@ typedef struct SStreamBlockScanInfo {
int32_t blockType; // current block type
int32_t validBlockIndex; // Is current data has returned?
SColumnInfo* pCols; // the output column info
- uint64_t numOfRows; // total scanned rows
uint64_t numOfExec; // execution times
void* streamBlockReader;// stream block reader handle
SArray* pColMatchInfo; //
@@ -394,8 +394,9 @@ typedef struct SStreamBlockScanInfo {
SOperatorInfo* pOperatorDumy;
SInterval interval; // if the upstream is an interval operator, the interval info is also kept here.
SCatchSupporter childAggSup;
- SArray* childIds;
+ SArray* childIds;
SessionWindowSupporter sessionSup;
+ bool assignBlockUid; // assign block uid to groupId, temporarily used for generating rollup SMA.
} SStreamBlockScanInfo;
typedef struct SSysTableScanInfo {
diff --git a/source/libs/executor/src/executor.c b/source/libs/executor/src/executor.c
index 2811c8dce84918bc61339597150b15f56690b99d..fd62849e56805c22472a5ea438140ec655e20df0 100644
--- a/source/libs/executor/src/executor.c
+++ b/source/libs/executor/src/executor.c
@@ -19,7 +19,7 @@
#include "tdatablock.h"
#include "vnode.h"
-static int32_t doSetStreamBlock(SOperatorInfo* pOperator, void* input, size_t numOfBlocks, int32_t type, char* id) {
+static int32_t doSetStreamBlock(SOperatorInfo* pOperator, void* input, size_t numOfBlocks, int32_t type, bool assignUid, char* id) {
ASSERT(pOperator != NULL);
if (pOperator->operatorType != QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
if (pOperator->numOfDownstream == 0) {
@@ -32,11 +32,12 @@ static int32_t doSetStreamBlock(SOperatorInfo* pOperator, void* input, size_t nu
return TSDB_CODE_QRY_APP_ERROR;
}
pOperator->status = OP_NOT_OPENED;
- return doSetStreamBlock(pOperator->pDownstream[0], input, numOfBlocks, type, id);
+ return doSetStreamBlock(pOperator->pDownstream[0], input, numOfBlocks, type, assignUid, id);
} else {
pOperator->status = OP_NOT_OPENED;
SStreamBlockScanInfo* pInfo = pOperator->info;
+ pInfo->assignBlockUid = assignUid;
// the block type can not be changed in the streamscan operators
if (pInfo->blockType == 0) {
@@ -67,11 +68,11 @@ static int32_t doSetStreamBlock(SOperatorInfo* pOperator, void* input, size_t nu
}
}
-int32_t qSetStreamInput(qTaskInfo_t tinfo, const void* input, int32_t type) {
- return qSetMultiStreamInput(tinfo, input, 1, type);
+int32_t qSetStreamInput(qTaskInfo_t tinfo, const void* input, int32_t type, bool assignUid) {
+ return qSetMultiStreamInput(tinfo, input, 1, type, assignUid);
}
-int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numOfBlocks, int32_t type) {
+int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numOfBlocks, int32_t type, bool assignUid) {
if (tinfo == NULL) {
return TSDB_CODE_QRY_APP_ERROR;
}
@@ -82,7 +83,7 @@ int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numO
SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
- int32_t code = doSetStreamBlock(pTaskInfo->pRoot, (void**)pBlocks, numOfBlocks, type, GET_TASKID(pTaskInfo));
+ int32_t code = doSetStreamBlock(pTaskInfo->pRoot, (void**)pBlocks, numOfBlocks, type, assignUid, GET_TASKID(pTaskInfo));
if (code != TSDB_CODE_SUCCESS) {
qError("%s failed to set the stream block data", GET_TASKID(pTaskInfo));
} else {
diff --git a/source/libs/executor/src/scanoperator.c b/source/libs/executor/src/scanoperator.c
index 32187c81a765f481532a9f1ce30f530cbbdbc29b..40373d554262cea7ecddc70283cf76136cd3673d 100644
--- a/source/libs/executor/src/scanoperator.c
+++ b/source/libs/executor/src/scanoperator.c
@@ -878,6 +878,12 @@ static SSDataBlock* doStreamBlockScan(SOperatorInfo* pOperator) {
pInfo->pRes->info.uid = uid;
pInfo->pRes->info.type = STREAM_NORMAL;
+ // for generating rollup SMA results, each block is treated as an independent time series.
+ // TODO temporarily used, when the statement of "partition by tbname" is ready, remove this
+ if (pInfo->assignBlockUid) {
+ pInfo->pRes->info.groupId = uid;
+ }
+
int32_t numOfCols = pInfo->pRes->info.numOfCols;
for (int32_t i = 0; i < numOfCols; ++i) {
SColMatchInfo* pColMatchInfo = taosArrayGet(pInfo->pColMatchInfo, i);
@@ -918,7 +924,7 @@ static SSDataBlock* doStreamBlockScan(SOperatorInfo* pOperator) {
// record the scan action.
pInfo->numOfExec++;
- pInfo->numOfRows += pBlockInfo->rows;
+ pOperator->resultInfo.totalRows += pBlockInfo->rows;
if (rows == 0) {
pOperator->status = OP_EXEC_DONE;
diff --git a/source/libs/function/src/builtinsimpl.c b/source/libs/function/src/builtinsimpl.c
index 3b453a8b1f19f5ab02e5f20489cf2281db9ec44b..f2adbe0053533cc1e12a9f10990aa0c1e461b90e 100644
--- a/source/libs/function/src/builtinsimpl.c
+++ b/source/libs/function/src/builtinsimpl.c
@@ -225,6 +225,7 @@ typedef struct SUniqueInfo {
int32_t numOfPoints;
uint8_t colType;
int16_t colBytes;
+ bool hasNull; // null is not hashable, handle separately
SHashObj *pHash;
char pItems[];
} SUniqueInfo;
@@ -3860,6 +3861,7 @@ bool uniqueFunctionSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResInfo) {
pInfo->numOfPoints = 0;
pInfo->colType = pCtx->resDataInfo.type;
pInfo->colBytes = pCtx->resDataInfo.bytes;
+ pInfo->hasNull = false;
if (pInfo->pHash != NULL) {
taosHashClear(pInfo->pHash);
} else {
@@ -3869,8 +3871,22 @@ bool uniqueFunctionSetup(SqlFunctionCtx* pCtx, SResultRowEntryInfo* pResInfo) {
}
static void doUniqueAdd(SUniqueInfo* pInfo, char *data, TSKEY ts, bool isNull) {
- int32_t hashKeyBytes = IS_VAR_DATA_TYPE(pInfo->colType) ? varDataTLen(data) : pInfo->colBytes;
+ // handle null elements
+ if (isNull == true) {
+ int32_t size = sizeof(SUniqueItem) + pInfo->colBytes;
+ SUniqueItem *pItem = (SUniqueItem *)(pInfo->pItems + pInfo->numOfPoints * size);
+ if (pInfo->hasNull == false && pItem->isNull == false) {
+ pItem->timestamp = ts;
+ pItem->isNull = true;
+ pInfo->numOfPoints++;
+ pInfo->hasNull = true;
+ } else if (pItem->timestamp > ts && pItem->isNull == true) {
+ pItem->timestamp = ts;
+ }
+ return;
+ }
+ int32_t hashKeyBytes = IS_VAR_DATA_TYPE(pInfo->colType) ? varDataTLen(data) : pInfo->colBytes;
SUniqueItem *pHashItem = taosHashGet(pInfo->pHash, data, hashKeyBytes);
if (pHashItem == NULL) {
int32_t size = sizeof(SUniqueItem) + pInfo->colBytes;
@@ -3884,6 +3900,7 @@ static void doUniqueAdd(SUniqueInfo* pInfo, char *data, TSKEY ts, bool isNull) {
pHashItem->timestamp = ts;
}
+ return;
}
int32_t uniqueFunction(SqlFunctionCtx* pCtx) {
@@ -3910,7 +3927,11 @@ int32_t uniqueFunction(SqlFunctionCtx* pCtx) {
for (int32_t i = 0; i < pInfo->numOfPoints; ++i) {
SUniqueItem *pItem = (SUniqueItem *)(pInfo->pItems + i * (sizeof(SUniqueItem) + pInfo->colBytes));
- colDataAppend(pOutput, i, pItem->data, false);
+ if (pItem->isNull == true) {
+ colDataAppendNULL(pOutput, i);
+ } else {
+ colDataAppend(pOutput, i, pItem->data, false);
+ }
if (pTsOutput != NULL) {
colDataAppendInt64(pTsOutput, i, &pItem->timestamp);
}
diff --git a/source/libs/index/inc/indexCache.h b/source/libs/index/inc/indexCache.h
index aff2e0e836c0f2aae9a1fe63dd984cd4f5eb7850..6cbe2532cc5b7532e011f14f76dea49437087006 100644
--- a/source/libs/index/inc/indexCache.h
+++ b/source/libs/index/inc/indexCache.h
@@ -38,7 +38,7 @@ typedef struct IndexCache {
MemTable *mem, *imm;
SIndex* index;
char* colName;
- int32_t version;
+ int64_t version;
int64_t occupiedMem;
int8_t type;
uint64_t suid;
@@ -47,12 +47,12 @@ typedef struct IndexCache {
TdThreadCond finished;
} IndexCache;
-#define CACHE_VERSION(cache) atomic_load_32(&cache->version)
+#define CACHE_VERSION(cache) atomic_load_64(&cache->version)
typedef struct CacheTerm {
// key
char* colVal;
- int32_t version;
+ int64_t version;
// value
uint64_t uid;
int8_t colType;
diff --git a/source/libs/index/inc/indexInt.h b/source/libs/index/inc/indexInt.h
index 0bdcb131b69befd518b233e38a2653a17e67bde8..81d43daf133ab0613b3cc56ec68d82e87bc0326c 100644
--- a/source/libs/index/inc/indexInt.h
+++ b/source/libs/index/inc/indexInt.h
@@ -34,6 +34,15 @@
extern "C" {
#endif
+// clang-format off
+#define indexFatal(...) do { if (idxDebugFlag & DEBUG_FATAL) { taosPrintLog("INDEX FATAL ", DEBUG_FATAL, 255, __VA_ARGS__); }} while (0)
+#define indexError(...) do { if (idxDebugFlag & DEBUG_ERROR) { taosPrintLog("INDEX ERROR ", DEBUG_ERROR, 255, __VA_ARGS__); }} while (0)
+#define indexWarn(...) do { if (idxDebugFlag & DEBUG_WARN) { taosPrintLog("INDEX WARN ", DEBUG_WARN, 255, __VA_ARGS__); }} while (0)
+#define indexInfo(...) do { if (idxDebugFlag & DEBUG_INFO) { taosPrintLog("INDEX ", DEBUG_INFO, 255, __VA_ARGS__); } } while (0)
+#define indexDebug(...) do { if (idxDebugFlag & DEBUG_DEBUG) { taosPrintLog("INDEX ", DEBUG_DEBUG, sDebugFlag, __VA_ARGS__);} } while (0)
+#define indexTrace(...) do { if (idxDebugFlag & DEBUG_TRACE) { taosPrintLog("INDEX ", DEBUG_TRACE, sDebugFlag, __VA_ARGS__);} } while (0)
+// clang-format on
+
typedef enum { LT, LE, GT, GE } RangeType;
typedef enum { kTypeValue, kTypeDeletion } STermValueType;
@@ -134,15 +143,6 @@ int32_t indexSerialCacheKey(ICacheKey* key, char* buf);
// int32_t indexSerialKey(ICacheKey* key, char* buf);
// int32_t indexSerialTermKey(SIndexTerm* itm, char* buf);
-// clang-format off
-#define indexFatal(...) do { if (sDebugFlag & DEBUG_FATAL) { taosPrintLog("INDEX FATAL ", DEBUG_FATAL, 255, __VA_ARGS__); }} while (0)
-#define indexError(...) do { if (sDebugFlag & DEBUG_ERROR) { taosPrintLog("INDEX ERROR ", DEBUG_ERROR, 255, __VA_ARGS__); }} while (0)
-#define indexWarn(...) do { if (sDebugFlag & DEBUG_WARN) { taosPrintLog("INDEX WARN ", DEBUG_WARN, 255, __VA_ARGS__); }} while (0)
-#define indexInfo(...) do { if (sDebugFlag & DEBUG_INFO) { taosPrintLog("INDEX ", DEBUG_INFO, 255, __VA_ARGS__); } } while (0)
-#define indexDebug(...) do { if (sDebugFlag & DEBUG_DEBUG) { taosPrintLog("INDEX ", DEBUG_DEBUG, sDebugFlag, __VA_ARGS__);} } while (0)
-#define indexTrace(...) do { if (sDebugFlag & DEBUG_TRACE) { taosPrintLog("INDEX ", DEBUG_TRACE, sDebugFlag, __VA_ARGS__);} } while (0)
-// clang-format on
-
#define INDEX_TYPE_CONTAIN_EXTERN_TYPE(ty, exTy) (((ty >> 4) & (exTy)) != 0)
#define INDEX_TYPE_GET_TYPE(ty) (ty & 0x0F)
diff --git a/source/libs/index/inc/indexTfile.h b/source/libs/index/inc/indexTfile.h
index 85ed397b0ac5d14984a4020b265fbdcf6951c68e..af32caa8219016cd6562423466d5f8a44eeb0229 100644
--- a/source/libs/index/inc/indexTfile.h
+++ b/source/libs/index/inc/indexTfile.h
@@ -28,12 +28,12 @@ extern "C" {
// tfile header content
// |<---suid--->|<---version--->|<-------colName------>|<---type-->|<--fstOffset->|
-// |<-uint64_t->|<---int32_t--->|<--TSDB_COL_NAME_LEN-->|<-uint8_t->|<---int32_t-->|
+// |<-uint64_t->|<---int64_t--->|<--TSDB_COL_NAME_LEN-->|<-uint8_t->|<---int32_t-->|
#pragma pack(push, 1)
typedef struct TFileHeader {
uint64_t suid;
- int32_t version;
+ int64_t version;
char colName[TSDB_COL_NAME_LEN]; //
uint8_t colType;
int32_t fstOffset;
@@ -74,9 +74,10 @@ typedef struct TFileReader {
} TFileReader;
typedef struct IndexTFile {
- char* path;
- TFileCache* cache;
- TFileWriter* tw;
+ char* path;
+ TFileCache* cache;
+ TFileWriter* tw;
+ TdThreadMutex mtx;
} IndexTFile;
typedef struct TFileWriterOpt {
@@ -101,14 +102,14 @@ void tfileCachePut(TFileCache* tcache, ICacheKey* key, TFileReader* read
TFileReader* tfileGetReaderByCol(IndexTFile* tf, uint64_t suid, char* colName);
-TFileReader* tfileReaderOpen(char* path, uint64_t suid, int32_t version, const char* colName);
+TFileReader* tfileReaderOpen(char* path, uint64_t suid, int64_t version, const char* colName);
TFileReader* tfileReaderCreate(WriterCtx* ctx);
void tfileReaderDestroy(TFileReader* reader);
int tfileReaderSearch(TFileReader* reader, SIndexTermQuery* query, SIdxTempResult* tr);
void tfileReaderRef(TFileReader* reader);
void tfileReaderUnRef(TFileReader* reader);
-TFileWriter* tfileWriterOpen(char* path, uint64_t suid, int32_t version, const char* colName, uint8_t type);
+TFileWriter* tfileWriterOpen(char* path, uint64_t suid, int64_t version, const char* colName, uint8_t type);
void tfileWriterClose(TFileWriter* tw);
TFileWriter* tfileWriterCreate(WriterCtx* ctx, TFileHeader* header);
void tfileWriterDestroy(TFileWriter* tw);
diff --git a/source/libs/index/src/index.c b/source/libs/index/src/index.c
index 6add788a896f8149f49d9f224538d5b3ab4e5b57..500f5706491b61e05deea65d567b68ecc8cb1694 100644
--- a/source/libs/index/src/index.c
+++ b/source/libs/index/src/index.c
@@ -557,20 +557,18 @@ void iterateValueDestroy(IterateValue* value, bool destroy) {
static int64_t indexGetAvaialbleVer(SIndex* sIdx, IndexCache* cache) {
ICacheKey key = {.suid = cache->suid, .colName = cache->colName, .nColName = strlen(cache->colName)};
int64_t ver = CACHE_VERSION(cache);
- taosThreadMutexLock(&sIdx->mtx);
- TFileReader* trd = tfileCacheGet(((IndexTFile*)sIdx->tindex)->cache, &key);
- if (trd != NULL) {
- if (ver < trd->header.version) {
- ver = trd->header.version + 1;
- } else {
- ver += 1;
- }
- indexInfo("header: %d, ver: %" PRId64 "", trd->header.version, ver);
- tfileReaderUnRef(trd);
- } else {
- indexInfo("not found reader base %p", trd);
+
+ IndexTFile* tf = (IndexTFile*)(sIdx->tindex);
+
+ taosThreadMutexLock(&tf->mtx);
+ TFileReader* rd = tfileCacheGet(tf->cache, &key);
+ taosThreadMutexUnlock(&tf->mtx);
+
+ if (rd != NULL) {
+ ver = (ver > rd->header.version ? ver : rd->header.version) + 1;
+ indexInfo("header: %" PRId64 ", ver: %" PRId64 "", rd->header.version, ver);
}
- taosThreadMutexUnlock(&sIdx->mtx);
+ if (rd != NULL) tfileReaderUnRef(rd);
return ver;
}
static int indexGenTFile(SIndex* sIdx, IndexCache* cache, SArray* batch) {
@@ -597,13 +595,14 @@ static int indexGenTFile(SIndex* sIdx, IndexCache* cache, SArray* batch) {
}
indexInfo("success to create tfile, reopen it, %s", reader->ctx->file.buf);
+ IndexTFile* tf = (IndexTFile*)sIdx->tindex;
+
TFileHeader* header = &reader->header;
ICacheKey key = {.suid = cache->suid, .colName = header->colName, .nColName = strlen(header->colName)};
- taosThreadMutexLock(&sIdx->mtx);
- IndexTFile* ifile = (IndexTFile*)sIdx->tindex;
- tfileCachePut(ifile->cache, &key, reader);
- taosThreadMutexUnlock(&sIdx->mtx);
+ taosThreadMutexLock(&tf->mtx);
+ tfileCachePut(tf->cache, &key, reader);
+ taosThreadMutexUnlock(&tf->mtx);
return ret;
END:
if (tw != NULL) {
diff --git a/source/libs/index/src/indexCache.c b/source/libs/index/src/indexCache.c
index d704e3876e4979cdf8c1354e9b3d2ef23bf91132..6e52c4b1ba03ecd77cc4476022d61d160ae34890 100644
--- a/source/libs/index/src/indexCache.c
+++ b/source/libs/index/src/indexCache.c
@@ -80,7 +80,7 @@ static int32_t cacheSearchTerm(void* cache, SIndexTerm* term, SIdxTempResult* tr
CacheTerm* pCt = taosMemoryCalloc(1, sizeof(CacheTerm));
pCt->colVal = term->colVal;
- pCt->version = atomic_load_32(&pCache->version);
+ pCt->version = atomic_load_64(&pCache->version);
char* key = indexCacheTermGet(pCt);
@@ -133,7 +133,7 @@ static int32_t cacheSearchCompareFunc(void* cache, SIndexTerm* term, SIdxTempRes
CacheTerm* pCt = taosMemoryCalloc(1, sizeof(CacheTerm));
pCt->colVal = term->colVal;
- pCt->version = atomic_load_32(&pCache->version);
+ pCt->version = atomic_load_64(&pCache->version);
char* key = indexCacheTermGet(pCt);
@@ -185,7 +185,7 @@ static int32_t cacheSearchTerm_JSON(void* cache, SIndexTerm* term, SIdxTempResul
CacheTerm* pCt = taosMemoryCalloc(1, sizeof(CacheTerm));
pCt->colVal = term->colVal;
- pCt->version = atomic_load_32(&pCache->version);
+ pCt->version = atomic_load_64(&pCache->version);
char* exBuf = NULL;
if (INDEX_TYPE_CONTAIN_EXTERN_TYPE(term->colType, TSDB_DATA_TYPE_JSON)) {
@@ -259,7 +259,7 @@ static int32_t cacheSearchCompareFunc_JSON(void* cache, SIndexTerm* term, SIdxTe
CacheTerm* pCt = taosMemoryCalloc(1, sizeof(CacheTerm));
pCt->colVal = term->colVal;
- pCt->version = atomic_load_32(&pCache->version);
+ pCt->version = atomic_load_64(&pCache->version);
int8_t dType = INDEX_TYPE_GET_TYPE(term->colType);
int skip = 0;
@@ -356,7 +356,7 @@ void indexCacheDebug(IndexCache* cache) {
CacheTerm* ct = (CacheTerm*)SL_GET_NODE_DATA(node);
if (ct != NULL) {
// TODO, add more debug info
- indexInfo("{colVal: %s, version: %d} \t", ct->colVal, ct->version);
+ indexInfo("{colVal: %s, version: %" PRId64 "} \t", ct->colVal, ct->version);
}
}
tSkipListDestroyIter(iter);
@@ -377,7 +377,7 @@ void indexCacheDebug(IndexCache* cache) {
CacheTerm* ct = (CacheTerm*)SL_GET_NODE_DATA(node);
if (ct != NULL) {
// TODO, add more debug info
- indexInfo("{colVal: %s, version: %d} \t", ct->colVal, ct->version);
+ indexInfo("{colVal: %s, version: %" PRId64 "} \t", ct->colVal, ct->version);
}
}
tSkipListDestroyIter(iter);
@@ -529,7 +529,7 @@ int indexCachePut(void* cache, SIndexTerm* term, uint64_t uid) {
ct->colVal = (char*)taosMemoryCalloc(1, sizeof(char) * (term->nColVal + 1));
memcpy(ct->colVal, term->colVal, term->nColVal);
}
- ct->version = atomic_add_fetch_32(&pCache->version, 1);
+ ct->version = atomic_add_fetch_64(&pCache->version, 1);
// set value
ct->uid = uid;
ct->operaType = term->operType;
@@ -663,7 +663,11 @@ static int32_t indexCacheTermCompare(const void* l, const void* r) {
// compare colVal
int32_t cmp = strcmp(lt->colVal, rt->colVal);
if (cmp == 0) {
- return rt->version - lt->version;
+ if (rt->version == lt->version) {
+ cmp = 0;
+ } else {
+ cmp = rt->version < lt->version ? -1 : 1;
+ }
}
return cmp;
}
diff --git a/source/libs/index/src/indexTfile.c b/source/libs/index/src/indexTfile.c
index 3d85646bd25596e7d3a666b99287d6b5e3d5e902..3de556e8b50c27f11687ea6b45fcf5da9675fed3 100644
--- a/source/libs/index/src/indexTfile.c
+++ b/source/libs/index/src/indexTfile.c
@@ -54,9 +54,9 @@ static SArray* tfileGetFileList(const char* path);
static int tfileRmExpireFile(SArray* result);
static void tfileDestroyFileName(void* elem);
static int tfileCompare(const void* a, const void* b);
-static int tfileParseFileName(const char* filename, uint64_t* suid, char* col, int* version);
-static void tfileGenFileName(char* filename, uint64_t suid, const char* col, int version);
-static void tfileGenFileFullName(char* fullname, const char* path, uint64_t suid, const char* col, int32_t version);
+static int tfileParseFileName(const char* filename, uint64_t* suid, char* col, int64_t* version);
+static void tfileGenFileName(char* filename, uint64_t suid, const char* col, int64_t version);
+static void tfileGenFileFullName(char* fullname, const char* path, uint64_t suid, const char* col, int64_t version);
/*
* search from tfile
*/
@@ -151,13 +151,10 @@ TFileReader* tfileCacheGet(TFileCache* tcache, ICacheKey* key) {
char buf[128] = {0};
int32_t sz = indexSerialCacheKey(key, buf);
assert(sz < sizeof(buf));
- indexInfo("Try to get key: %s", buf);
TFileReader** reader = taosHashGet(tcache->tableCache, buf, sz);
if (reader == NULL || *reader == NULL) {
- indexInfo("failed to get key: %s", buf);
return NULL;
}
- indexInfo("Get key: %s file: %s", buf, (*reader)->ctx->file.buf);
tfileReaderRef(*reader);
return *reader;
@@ -168,11 +165,11 @@ void tfileCachePut(TFileCache* tcache, ICacheKey* key, TFileReader* reader) {
// remove last version index reader
TFileReader** p = taosHashGet(tcache->tableCache, buf, sz);
if (p != NULL && *p != NULL) {
- TFileReader* oldReader = *p;
+ TFileReader* oldRdr = *p;
taosHashRemove(tcache->tableCache, buf, sz);
- indexInfo("found %s, remove file %s", buf, oldReader->ctx->file.buf);
- oldReader->remove = true;
- tfileReaderUnRef(oldReader);
+ indexInfo("found %s, should remove file %s", buf, oldRdr->ctx->file.buf);
+ oldRdr->remove = true;
+ tfileReaderUnRef(oldRdr);
}
taosHashPut(tcache->tableCache, buf, sz, &reader, sizeof(void*));
tfileReaderRef(reader);
@@ -215,6 +212,12 @@ void tfileReaderDestroy(TFileReader* reader) {
// T_REF_INC(reader);
fstDestroy(reader->fst);
+ // log before writerCtxDestroy frees the ctx (and its file.buf)
+ if (reader->remove) {
+ indexInfo("%s is removed", reader->ctx->file.buf);
+ } else {
+ indexInfo("%s is not removed", reader->ctx->file.buf);
+ }
writerCtxDestroy(reader->ctx, reader->remove);
+
taosMemoryFree(reader);
}
static int32_t tfSearchTerm(void* reader, SIndexTerm* tem, SIdxTempResult* tr) {
@@ -512,7 +515,7 @@ int tfileReaderSearch(TFileReader* reader, SIndexTermQuery* query, SIdxTempResul
return ret;
}
-TFileWriter* tfileWriterOpen(char* path, uint64_t suid, int32_t version, const char* colName, uint8_t colType) {
+TFileWriter* tfileWriterOpen(char* path, uint64_t suid, int64_t version, const char* colName, uint8_t colType) {
char fullname[256] = {0};
tfileGenFileFullName(fullname, path, suid, colName, version);
// indexInfo("open write file name %s", fullname);
@@ -529,7 +532,7 @@ TFileWriter* tfileWriterOpen(char* path, uint64_t suid, int32_t version, const c
return tfileWriterCreate(wcx, &tfh);
}
-TFileReader* tfileReaderOpen(char* path, uint64_t suid, int32_t version, const char* colName) {
+TFileReader* tfileReaderOpen(char* path, uint64_t suid, int64_t version, const char* colName) {
char fullname[256] = {0};
tfileGenFileFullName(fullname, path, suid, colName, version);
@@ -657,7 +660,7 @@ IndexTFile* indexTFileCreate(const char* path) {
tfileCacheDestroy(cache);
return NULL;
}
-
+ taosThreadMutexInit(&tfile->mtx, NULL);
tfile->cache = cache;
return tfile;
}
@@ -665,6 +668,7 @@ void indexTFileDestroy(IndexTFile* tfile) {
if (tfile == NULL) {
return;
}
+ taosThreadMutexDestroy(&tfile->mtx);
tfileCacheDestroy(tfile->cache);
taosMemoryFree(tfile);
}
@@ -680,7 +684,10 @@ int indexTFileSearch(void* tfile, SIndexTermQuery* query, SIdxTempResult* result
SIndexTerm* term = query->term;
ICacheKey key = {.suid = term->suid, .colType = term->colType, .colName = term->colName, .nColName = term->nColName};
+
+ taosThreadMutexLock(&pTfile->mtx);
TFileReader* reader = tfileCacheGet(pTfile->cache, &key);
+ taosThreadMutexUnlock(&pTfile->mtx);
if (reader == NULL) {
return 0;
}
@@ -780,8 +787,13 @@ TFileReader* tfileGetReaderByCol(IndexTFile* tf, uint64_t suid, char* colName) {
if (tf == NULL) {
return NULL;
}
- ICacheKey key = {.suid = suid, .colType = TSDB_DATA_TYPE_BINARY, .colName = colName, .nColName = strlen(colName)};
- return tfileCacheGet(tf->cache, &key);
+ TFileReader* rd = NULL;
+ ICacheKey key = {.suid = suid, .colType = TSDB_DATA_TYPE_BINARY, .colName = colName, .nColName = strlen(colName)};
+
+ taosThreadMutexLock(&tf->mtx);
+ rd = tfileCacheGet(tf->cache, &key);
+ taosThreadMutexUnlock(&tf->mtx);
+ return rd;
}
static int tfileUidCompare(const void* a, const void* b) {
@@ -1013,7 +1025,7 @@ void tfileReaderUnRef(TFileReader* reader) {
static SArray* tfileGetFileList(const char* path) {
char buf[128] = {0};
uint64_t suid;
- uint32_t version;
+ int64_t version;
SArray* files = taosArrayInit(4, sizeof(void*));
TdDirPtr pDir = taosOpenDir(path);
@@ -1053,19 +1065,19 @@ static int tfileCompare(const void* a, const void* b) {
return strcmp(as, bs);
}
-static int tfileParseFileName(const char* filename, uint64_t* suid, char* col, int* version) {
- if (3 == sscanf(filename, "%" PRIu64 "-%[^-]-%d.tindex", suid, col, version)) {
+static int tfileParseFileName(const char* filename, uint64_t* suid, char* col, int64_t* version) {
+ if (3 == sscanf(filename, "%" PRIu64 "-%[^-]-%" PRId64 ".tindex", suid, col, version)) {
// read suid & colid & version success
return 0;
}
return -1;
}
// tfile name suid-colId-version.tindex
-static void tfileGenFileName(char* filename, uint64_t suid, const char* col, int version) {
- sprintf(filename, "%" PRIu64 "-%s-%d.tindex", suid, col, version);
+static void tfileGenFileName(char* filename, uint64_t suid, const char* col, int64_t version) {
+ sprintf(filename, "%" PRIu64 "-%s-%" PRId64 ".tindex", suid, col, version);
return;
}
-static void tfileGenFileFullName(char* fullname, const char* path, uint64_t suid, const char* col, int32_t version) {
+static void tfileGenFileFullName(char* fullname, const char* path, uint64_t suid, const char* col, int64_t version) {
char filename[128] = {0};
tfileGenFileName(filename, suid, col, version);
sprintf(fullname, "%s/%s", path, filename);
diff --git a/source/libs/index/test/indexTests.cc b/source/libs/index/test/indexTests.cc
index f848cee86b4af0376af61640eb01a07eb1c22371..2d06002af854b1860faf7985fd23e68275207c46 100644
--- a/source/libs/index/test/indexTests.cc
+++ b/source/libs/index/test/indexTests.cc
@@ -279,7 +279,7 @@ static void initLog() {
const int32_t maxLogFileNum = 10;
tsAsyncLog = 0;
- sDebugFlag = 143;
+ idxDebugFlag = 143;
strcpy(tsLogDir, logDir.c_str());
taosRemoveDir(tsLogDir);
taosMkDir(tsLogDir);
@@ -387,7 +387,7 @@ class TFileObj {
std::string path(path_);
int colId = 2;
char buf[64] = {0};
- sprintf(buf, "%" PRIu64 "-%d-%d.tindex", header.suid, colId_, header.version);
+ sprintf(buf, "%" PRIu64 "-%d-%" PRId64 ".tindex", header.suid, colId_, header.version);
path.append("/").append(buf);
fileName_ = path;
@@ -794,10 +794,10 @@ class IndexObj {
}
int sz = taosArrayGetSize(result);
indexMultiTermQueryDestroy(mq);
- taosArrayDestroy(result);
assert(sz == 1);
uint64_t* ret = (uint64_t*)taosArrayGet(result, 0);
assert(val == *ret);
+ taosArrayDestroy(result);
return sz;
}
@@ -953,8 +953,8 @@ TEST_F(IndexEnv2, testIndex_TrigeFlush) {
}
static void single_write_and_search(IndexObj* idx) {
- int target = idx->SearchOne("tag1", "Hello");
- target = idx->SearchOne("tag2", "Test");
+ // int target = idx->SearchOne("tag1", "Hello");
+ // target = idx->SearchOne("tag2", "Test");
}
static void multi_write_and_search(IndexObj* idx) {
idx->PutOne("tag1", "Hello");
diff --git a/source/libs/index/test/index_executor_tests.cpp b/source/libs/index/test/index_executor_tests.cpp
index b0c2a983d1b5f60b50e4f5734a8c99fb3729d80e..b88ffe5b8bdb2058a66d1e56020206643c246e42 100644
--- a/source/libs/index/test/index_executor_tests.cpp
+++ b/source/libs/index/test/index_executor_tests.cpp
@@ -24,11 +24,7 @@
#pragma GCC diagnostic ignored "-Wunused-variable"
#pragma GCC diagnostic ignored "-Wsign-compare"
-#include "executor.h"
-#include "executorimpl.h"
-#include "indexoperator.h"
-#include "os.h"
-
+#include "index.h"
#include "stub.h"
#include "taos.h"
#include "tcompare.h"
diff --git a/source/libs/index/test/jsonUT.cc b/source/libs/index/test/jsonUT.cc
index 8a837c5700da2b8c70d083d5f282933844091673..cd5a5d9b0f192883f67e9dfecdbcb3854669fdf3 100644
--- a/source/libs/index/test/jsonUT.cc
+++ b/source/libs/index/test/jsonUT.cc
@@ -24,7 +24,7 @@ static void initLog() {
const int32_t maxLogFileNum = 10;
tsAsyncLog = 0;
- sDebugFlag = 143;
+ idxDebugFlag = 143;
strcpy(tsLogDir, logDir.c_str());
taosRemoveDir(tsLogDir);
taosMkDir(tsLogDir);
diff --git a/source/libs/stream/src/tstream.c b/source/libs/stream/src/tstream.c
index dc0fbf2bbef00c5bb81bc03182e31edb1f729894..7e4f83a693cf9301da29493ea984828c2731552a 100644
--- a/source/libs/stream/src/tstream.c
+++ b/source/libs/stream/src/tstream.c
@@ -141,13 +141,13 @@ static int32_t streamTaskExecImpl(SStreamTask* pTask, void* data, SArray* pRes)
SStreamDataSubmit* pSubmit = (SStreamDataSubmit*)data;
ASSERT(pSubmit->type == STREAM_INPUT__DATA_SUBMIT);
- qSetStreamInput(exec, pSubmit->data, STREAM_DATA_TYPE_SUBMIT_BLOCK);
+ qSetStreamInput(exec, pSubmit->data, STREAM_DATA_TYPE_SUBMIT_BLOCK, false);
} else if (pTask->inputType == STREAM_INPUT__DATA_BLOCK) {
SStreamDataBlock* pBlock = (SStreamDataBlock*)data;
ASSERT(pBlock->type == STREAM_INPUT__DATA_BLOCK);
SArray* blocks = pBlock->blocks;
- qSetMultiStreamInput(exec, blocks->pData, blocks->size, STREAM_DATA_TYPE_SSDATA_BLOCK);
+ qSetMultiStreamInput(exec, blocks->pData, blocks->size, STREAM_DATA_TYPE_SSDATA_BLOCK, false);
}
// exec
diff --git a/source/libs/tdb/src/inc/tdbInt.h b/source/libs/tdb/src/inc/tdbInt.h
index 9f0267da93fca6db1b35844e77fdf8877eb33847..6524e3c9bcd873180378b5cfea2404b1a461ac7b 100644
--- a/source/libs/tdb/src/inc/tdbInt.h
+++ b/source/libs/tdb/src/inc/tdbInt.h
@@ -55,8 +55,8 @@ typedef u32 SPgno;
#define TDB_PUT_U24(p, v) \
do { \
int tv = (v); \
- (p)[2] = tv & 0xff; \
- (p)[1] = (tv >> 8) & 0xff; \
+ (p)[1] = tv & 0xff; \
+ (p)[2] = (tv >> 8) & 0xff; \
(p)[0] = (tv >> 16) & 0xff; \
} while (0)
diff --git a/source/libs/transport/inc/transComm.h b/source/libs/transport/inc/transComm.h
index 18a85865df1da125a25815a1030ab448bd2c6c01..5ba6c4029eab7d0f530e104de90bb288b96de082 100644
--- a/source/libs/transport/inc/transComm.h
+++ b/source/libs/transport/inc/transComm.h
@@ -104,6 +104,13 @@ typedef SRpcCtxVal STransCtxVal;
typedef SRpcInfo STrans;
typedef SRpcConnInfo STransHandleInfo;
+/* convert fqdn to ip */
+typedef struct SCvtAddr {
+ char ip[TSDB_FQDN_LEN];
+ char fqdn[TSDB_FQDN_LEN];
+ bool cvt;
+} SCvtAddr;
+
typedef struct {
SEpSet epSet; // ip list provided by app
void* ahandle; // handle provided by app
@@ -115,6 +122,7 @@ typedef struct {
STransCtx appCtx; //
STransMsg* pRsp; // for synchronous API
tsem_t* pSem; // for synchronous API
+ SCvtAddr cvtAddr;
int hThrdIdx;
} STransConnCtx;
@@ -155,7 +163,7 @@ typedef struct {
#pragma pack(pop)
-typedef enum { Normal, Quit, Release, Register } STransMsgType;
+typedef enum { Normal, Quit, Release, Register, Update } STransMsgType;
typedef enum { ConnNormal, ConnAcquire, ConnRelease, ConnBroken, ConnInPool } ConnStatus;
#define container_of(ptr, type, member) ((type*)((char*)(ptr)-offsetof(type, member)))
@@ -209,6 +217,22 @@ SAsyncPool* transCreateAsyncPool(uv_loop_t* loop, int sz, void* arg, AsyncCB cb)
void transDestroyAsyncPool(SAsyncPool* pool);
int transSendAsync(SAsyncPool* pool, queue* mq);
+#define TRANS_DESTROY_ASYNC_POOL_MSG(pool, msgType, freeFunc) \
+ do { \
+ for (int i = 0; i < pool->nAsync; i++) { \
+ uv_async_t* async = &(pool->asyncs[i]); \
+ SAsyncItem* item = async->data; \
+ while (!QUEUE_IS_EMPTY(&item->qmsg)) { \
+ tTrace("destroy msg in async pool "); \
+ queue* h = QUEUE_HEAD(&item->qmsg); \
+ QUEUE_REMOVE(h); \
+ msgType* msg = QUEUE_DATA(h, msgType, q); \
+ if (msg != NULL) { \
+ freeFunc(msg); \
+ } \
+ } \
+ } \
+ } while (0)
int transInitBuffer(SConnBuffer* buf);
int transClearBuffer(SConnBuffer* buf);
int transDestroyBuffer(SConnBuffer* buf);
@@ -231,6 +255,7 @@ void transSendRecv(void* shandle, const SEpSet* pEpSet, STransMsg* pMsg, STransM
void transSendResponse(const STransMsg* msg);
void transRegisterMsg(const STransMsg* msg);
int transGetConnInfo(void* thandle, STransHandleInfo* pInfo);
+void transSetDefaultAddr(void* shandle, const char* ip, const char* fqdn);
void* transInitServer(uint32_t ip, uint32_t port, char* label, int numOfThreads, void* fp, void* shandle);
void* transInitClient(uint32_t ip, uint32_t port, char* label, int numOfThreads, void* fp, void* shandle);
diff --git a/source/libs/transport/src/trans.c b/source/libs/transport/src/trans.c
index 9e71c87fa5289d2af6d71639c313d208fe6d9b37..84b0156e3697996a81f7743940b04d73c20d0a05 100644
--- a/source/libs/transport/src/trans.c
+++ b/source/libs/transport/src/trans.c
@@ -27,6 +27,14 @@ void (*taosUnRefHandle[])(void* handle) = {transUnrefSrvHandle, transUnrefCliHan
void (*transReleaseHandle[])(void* handle) = {transReleaseSrvHandle, transReleaseCliHandle};
+static int32_t transValidLocalFqdn(const char* localFqdn, uint32_t* ip) {
+ *ip = taosGetIpv4FromFqdn(localFqdn);
+ if (*ip == 0xFFFFFFFF) {
+ terrno = TSDB_CODE_RPC_FQDN_ERROR;
+ return -1;
+ }
+ return 0;
+}
void* rpcOpen(const SRpcInit* pInit) {
SRpcInfo* pRpc = taosMemoryCalloc(1, sizeof(SRpcInfo));
if (pRpc == NULL) {
@@ -35,7 +43,6 @@ void* rpcOpen(const SRpcInit* pInit) {
if (pInit->label) {
tstrncpy(pRpc->label, pInit->label, strlen(pInit->label) + 1);
}
-
// register callback handle
pRpc->cfp = pInit->cfp;
pRpc->retry = pInit->rfp;
@@ -48,10 +55,8 @@ void* rpcOpen(const SRpcInit* pInit) {
uint32_t ip = 0;
if (pInit->connType == TAOS_CONN_SERVER) {
- ip = taosGetIpv4FromFqdn(pInit->localFqdn);
- if (ip == 0xFFFFFFFF) {
- tError("invalid fqdn: %s", pInit->localFqdn);
- terrno = TSDB_CODE_RPC_FQDN_ERROR;
+ if (transValidLocalFqdn(pInit->localFqdn, &ip) != 0) {
+ tError("invalid fqdn: %s, errmsg: %s", pInit->localFqdn, terrstr());
taosMemoryFree(pRpc);
return NULL;
}
@@ -149,6 +154,11 @@ void rpcReleaseHandle(void* handle, int8_t type) {
(*transReleaseHandle[type])(handle);
}
+void rpcSetDefaultAddr(void* thandle, const char* ip, const char* fqdn) {
+ // later
+ transSetDefaultAddr(thandle, ip, fqdn);
+}
+
int32_t rpcInit() {
// impl later
return 0;
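The `transValidLocalFqdn` helper factored out above reports failure through `terrno` plus the sentinel `0xFFFFFFFF`, so `rpcOpen` can log `terrstr()` instead of a hard-coded message. A hedged standalone equivalent using `getaddrinfo` (the real `taosGetIpv4FromFqdn` implementation is not shown in this diff; its contract is inferred from the check in `rpcOpen`):

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netdb.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for taosGetIpv4FromFqdn: resolve an FQDN to an IPv4 address,
 * 0xFFFFFFFF on failure (assumed contract). */
static uint32_t getIpv4FromFqdn(const char* fqdn) {
  struct addrinfo hints, *res = NULL;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_INET;
  if (getaddrinfo(fqdn, NULL, &hints, &res) != 0 || res == NULL) return 0xFFFFFFFF;
  uint32_t ip = ((struct sockaddr_in*)res->ai_addr)->sin_addr.s_addr;
  freeaddrinfo(res);
  return ip;
}

/* Mirrors transValidLocalFqdn's shape: 0 on success, -1 on failure
 * (the real helper also sets terrno = TSDB_CODE_RPC_FQDN_ERROR). */
static int validLocalFqdn(const char* fqdn, uint32_t* ip) {
  *ip = getIpv4FromFqdn(fqdn);
  return (*ip == 0xFFFFFFFF) ? -1 : 0;
}

int main(void) {
  uint32_t ip = 0;
  assert(validLocalFqdn("localhost", &ip) == 0);          /* resolvable */
  assert(validLocalFqdn("no-such-host.invalid", &ip) == -1); /* .invalid never resolves */
  printf("ok\n");
  return 0;
}
```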
diff --git a/source/libs/transport/src/transCli.c b/source/libs/transport/src/transCli.c
index 0313ea583220a72ad94ba5e833385089763551f5..3220e229a628bb0f8fc419c7c496da2c1212dc2b 100644
--- a/source/libs/transport/src/transCli.c
+++ b/source/libs/transport/src/transCli.c
@@ -63,7 +63,10 @@ typedef struct SCliThrdObj {
SDelayQueue* delayQueue;
uint64_t nextTimeout; // next timeout
void* pTransInst; //
- bool quit;
+
+ SCvtAddr cvtAddr;
+
+ bool quit;
} SCliThrdObj;
typedef struct SCliObj {
@@ -103,6 +106,7 @@ static void cliDestroyConn(SCliConn* pConn, bool clear /*clear tcp handle o
static void cliDestroy(uv_handle_t* handle);
static void cliSend(SCliConn* pConn);
+void cliMayCvtFqdnToIp(SEpSet* pEpSet, SCvtAddr* pCvtAddr);
/*
* set TCP connection timeout per-socket level
*/
@@ -116,7 +120,9 @@ static void cliHandleExcept(SCliConn* conn);
static void cliHandleReq(SCliMsg* pMsg, SCliThrdObj* pThrd);
static void cliHandleQuit(SCliMsg* pMsg, SCliThrdObj* pThrd);
static void cliHandleRelease(SCliMsg* pMsg, SCliThrdObj* pThrd);
-static void (*cliAsyncHandle[])(SCliMsg* pMsg, SCliThrdObj* pThrd) = {cliHandleReq, cliHandleQuit, cliHandleRelease};
+static void cliHandleUpdate(SCliMsg* pMsg, SCliThrdObj* pThrd);
+static void (*cliAsyncHandle[])(SCliMsg* pMsg, SCliThrdObj* pThrd) = {cliHandleReq, cliHandleQuit, cliHandleRelease,
+ NULL, cliHandleUpdate};
static void cliSendQuit(SCliThrdObj* thrd);
static void destroyUserdata(STransMsg* userdata);
@@ -697,6 +703,12 @@ static void cliHandleRelease(SCliMsg* pMsg, SCliThrdObj* pThrd) {
transUnrefCliHandle(conn);
}
}
+static void cliHandleUpdate(SCliMsg* pMsg, SCliThrdObj* pThrd) {
+ STransConnCtx* pCtx = pMsg->ctx;
+
+ pThrd->cvtAddr = pCtx->cvtAddr;
+ destroyCmsg(pMsg);
+}
SCliConn* cliGetConn(SCliMsg* pMsg, SCliThrdObj* pThrd) {
SCliConn* conn = NULL;
@@ -716,7 +728,17 @@ SCliConn* cliGetConn(SCliMsg* pMsg, SCliThrdObj* pThrd) {
}
return conn;
}
-
+void cliMayCvtFqdnToIp(SEpSet* pEpSet, SCvtAddr* pCvtAddr) {
+ if (pCvtAddr->cvt == false) {
+ return;
+ }
+ for (int i = 0; i < pEpSet->numOfEps && pEpSet->numOfEps == 1; i++) {
+ if (strncmp(pEpSet->eps[i].fqdn, pCvtAddr->fqdn, TSDB_FQDN_LEN) == 0) {
+ memset(pEpSet->eps[i].fqdn, 0, TSDB_FQDN_LEN);
+ memcpy(pEpSet->eps[i].fqdn, pCvtAddr->ip, TSDB_FQDN_LEN);
+ }
+ }
+}
void cliHandleReq(SCliMsg* pMsg, SCliThrdObj* pThrd) {
uint64_t et = taosGetTimestampUs();
uint64_t el = et - pMsg->st;
@@ -726,6 +748,8 @@ void cliHandleReq(SCliMsg* pMsg, SCliThrdObj* pThrd) {
STransConnCtx* pCtx = pMsg->ctx;
STrans* pTransInst = pThrd->pTransInst;
+ cliMayCvtFqdnToIp(&pCtx->epSet, &pThrd->cvtAddr);
+
SCliConn* conn = cliGetConn(pMsg, pThrd);
if (conn != NULL) {
conn->hThrdIdx = pCtx->hThrdIdx;
@@ -855,7 +879,6 @@ static SCliThrdObj* createThrdObj() {
pThrd->timer.data = pThrd;
pThrd->pool = createConnPool(4);
-
transDQCreate(pThrd->loop, &pThrd->delayQueue);
pThrd->quit = false;
@@ -869,6 +892,7 @@ static void destroyThrdObj(SCliThrdObj* pThrd) {
taosThreadJoin(pThrd->thread, NULL);
CLI_RELEASE_UV(pThrd->loop);
taosThreadMutexDestroy(&pThrd->msgMtx);
+ TRANS_DESTROY_ASYNC_POOL_MSG(pThrd->asyncPool, SCliMsg, destroyCmsg);
transDestroyAsyncPool(pThrd->asyncPool);
transDQDestroy(pThrd->delayQueue);
@@ -1088,4 +1112,32 @@ void transSendRecv(void* shandle, const SEpSet* pEpSet, STransMsg* pReq, STransM
taosMemoryFree(pSem);
}
+/*
+ * broadcast the fqdn -> ip mapping to every client thread
+ **/
+void transSetDefaultAddr(void* ahandle, const char* ip, const char* fqdn) {
+ STrans* pTransInst = ahandle;
+
+ SCvtAddr cvtAddr = {0};
+ if (ip != NULL && fqdn != NULL) {
+ memcpy(cvtAddr.ip, ip, strlen(ip));
+ memcpy(cvtAddr.fqdn, fqdn, strlen(fqdn));
+ cvtAddr.cvt = true;
+ }
+ for (int i = 0; i < pTransInst->numOfThreads; i++) {
+ STransConnCtx* pCtx = taosMemoryCalloc(1, sizeof(STransConnCtx));
+ pCtx->hThrdIdx = i;
+ pCtx->cvtAddr = cvtAddr;
+
+ SCliMsg* cliMsg = taosMemoryCalloc(1, sizeof(SCliMsg));
+ cliMsg->ctx = pCtx;
+ cliMsg->type = Update;
+
+ SCliThrdObj* thrd = ((SCliObj*)pTransInst->tcphandle)->pThreadObj[i];
+ tDebug("update epset at thread:%d, threadID:%" PRId64 "", i, thrd->thread);
+
+ tsem_t* pSem = pCtx->pSem;
+ transSendAsync(thrd->asyncPool, &(cliMsg->q));
+ }
+}
#endif
diff --git a/source/libs/transport/src/transComm.c b/source/libs/transport/src/transComm.c
index 526f896ad2cbf0edd5db4f368a92cd8f6ee70707..be07fbd26455a0f4b47a91d999ff5d4c8e35bebb 100644
--- a/source/libs/transport/src/transComm.c
+++ b/source/libs/transport/src/transComm.c
@@ -190,6 +190,7 @@ SAsyncPool* transCreateAsyncPool(uv_loop_t* loop, int sz, void* arg, AsyncCB cb)
}
return pool;
}
+
void transDestroyAsyncPool(SAsyncPool* pool) {
for (int i = 0; i < pool->nAsync; i++) {
uv_async_t* async = &(pool->asyncs[i]);
diff --git a/source/libs/transport/src/transSrv.c b/source/libs/transport/src/transSrv.c
index 36f5cf98150e5636b43eb35b819d5bcd9288fe6a..9018eaacf600a9f8ceedde86672b2362039fbd0e 100644
--- a/source/libs/transport/src/transSrv.c
+++ b/source/libs/transport/src/transSrv.c
@@ -146,7 +146,7 @@ static void uvHandleRelease(SSrvMsg* msg, SWorkThrdObj* thrd);
static void uvHandleResp(SSrvMsg* msg, SWorkThrdObj* thrd);
static void uvHandleRegister(SSrvMsg* msg, SWorkThrdObj* thrd);
static void (*transAsyncHandle[])(SSrvMsg* msg, SWorkThrdObj* thrd) = {uvHandleResp, uvHandleQuit, uvHandleRelease,
- uvHandleRegister};
+ uvHandleRegister, NULL};
static int32_t exHandlesMgt;
@@ -923,7 +923,7 @@ void* transInitServer(uint32_t ip, uint32_t port, char* label, int numOfThreads,
}
if (false == taosValidIpAndPort(srv->ip, srv->port)) {
terrno = TAOS_SYSTEM_ERROR(errno);
- tError("invalid ip/port, reason: %s", terrstr());
+ tError("invalid ip/port, %d:%d, reason: %s", srv->ip, srv->port, terrstr());
goto End;
}
if (false == addHandleToAcceptloop(srv)) {
@@ -1036,6 +1036,7 @@ void destroyWorkThrd(SWorkThrdObj* pThrd) {
}
taosThreadJoin(pThrd->thread, NULL);
SRV_RELEASE_UV(pThrd->loop);
+ TRANS_DESTROY_ASYNC_POOL_MSG(pThrd->asyncPool, SSrvMsg, destroySmsg);
transDestroyAsyncPool(pThrd->asyncPool);
taosMemoryFree(pThrd->loop);
taosMemoryFree(pThrd);
diff --git a/source/util/src/tlog.c b/source/util/src/tlog.c
index e8a1ceb18b5acdf4c113aacb85f72c0f52b005cd..94b6d0a06cc381a8a09407445e7f54d8f2ce478a 100644
--- a/source/util/src/tlog.c
+++ b/source/util/src/tlog.c
@@ -39,7 +39,7 @@
#define LOG_BUF_MUTEX(x) ((x)->buffMutex)
typedef struct {
- char *buffer;
+ char * buffer;
int32_t buffStart;
int32_t buffEnd;
int32_t buffSize;
@@ -58,7 +58,7 @@ typedef struct {
int32_t openInProgress;
pid_t pid;
char logName[LOG_FILE_NAME_LEN];
- SLogBuff *logHandle;
+ SLogBuff * logHandle;
TdThreadMutex logMutex;
} SLogObj;
@@ -96,6 +96,7 @@ int32_t fsDebugFlag = 135;
int32_t metaDebugFlag = 135;
int32_t fnDebugFlag = 135;
int32_t smaDebugFlag = 135;
+int32_t idxDebugFlag = 135;
int64_t dbgEmptyW = 0;
int64_t dbgWN = 0;
@@ -103,7 +104,7 @@ int64_t dbgSmallWN = 0;
int64_t dbgBigWN = 0;
int64_t dbgWSize = 0;
-static void *taosAsyncOutputLog(void *param);
+static void * taosAsyncOutputLog(void *param);
static int32_t taosPushLogBuffer(SLogBuff *pLogBuf, const char *msg, int32_t msgLen);
static SLogBuff *taosLogBuffNew(int32_t bufSize);
static void taosCloseLogByFd(TdFilePtr pFile);
@@ -701,7 +702,7 @@ int32_t taosCompressFile(char *srcFileName, char *destFileName) {
int32_t compressSize = 163840;
int32_t ret = 0;
int32_t len = 0;
- char *data = taosMemoryMalloc(compressSize);
+ char * data = taosMemoryMalloc(compressSize);
// gzFile dstFp = NULL;
// srcFp = fopen(srcFileName, "r");
@@ -759,6 +760,7 @@ void taosSetAllDebugFlag(int32_t flag) {
fsDebugFlag = flag;
fnDebugFlag = flag;
smaDebugFlag = flag;
+ idxDebugFlag = flag;
uInfo("all debug flag are set to %d", flag);
}
diff --git a/tests/script/general/alter/cached_schema_after_alter.sim b/tests/script/general/alter/cached_schema_after_alter.sim
index 96ee4390845450d53508cc90c48a3148a0a827dd..043f360856e4b4f0533bf4dc5e4be7cea71c3325 100644
--- a/tests/script/general/alter/cached_schema_after_alter.sim
+++ b/tests/script/general/alter/cached_schema_after_alter.sim
@@ -1,9 +1,6 @@
system sh/stop_dnodes.sh
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c wallevel -v 2
system sh/exec.sh -n dnode1 -s start
-sleep 2000
sql connect
$db = csaa_db
diff --git a/tests/script/general/alter/dnode.sim b/tests/script/general/alter/dnode.sim
index 7b31218fc231cfdbb79ca97573cfc6f6f149037d..64e8a17de02c956a937aa1001ac4d5873a6bed21 100644
--- a/tests/script/general/alter/dnode.sim
+++ b/tests/script/general/alter/dnode.sim
@@ -1,10 +1,6 @@
system sh/stop_dnodes.sh
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 2
system sh/exec.sh -n dnode1 -s start
-
-sleep 2000
sql connect
print ======== step1
diff --git a/tests/script/general/alter/import.sim b/tests/script/general/alter/import.sim
index aef0a258b24563e915cd8aa3dd42f6623a29170a..175e084b7f1aa73a1c8b599752fd0b7de59efda7 100644
--- a/tests/script/general/alter/import.sim
+++ b/tests/script/general/alter/import.sim
@@ -1,13 +1,8 @@
system sh/stop_dnodes.sh
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c wallevel -v 2
-system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode1 -c mnodeEqualVnodeNum -v 4
print ========= start dnode1 as master
system sh/exec.sh -n dnode1 -s start
-sleep 2000
sql connect
print ======== step1
diff --git a/tests/script/general/alter/insert1.sim b/tests/script/general/alter/insert1.sim
index 12ab09beb989dd963a9e8c9c3ff5926e78d8b0ac..82781f2fe5cadf0488c5107e9e54b06364629680 100644
--- a/tests/script/general/alter/insert1.sim
+++ b/tests/script/general/alter/insert1.sim
@@ -1,10 +1,6 @@
system sh/stop_dnodes.sh
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c wallevel -v 2
system sh/exec.sh -n dnode1 -s start
-
-sleep 2000
sql connect
print ======== step1
diff --git a/tests/script/general/alter/insert2.sim b/tests/script/general/alter/insert2.sim
index dcd9f500304f906ddddb33bd1a04c5943c232d49..a30175f3980cc117ec052ebb13a2e0b31b2cb316 100644
--- a/tests/script/general/alter/insert2.sim
+++ b/tests/script/general/alter/insert2.sim
@@ -1,10 +1,6 @@
system sh/stop_dnodes.sh
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c wallevel -v 2
system sh/exec.sh -n dnode1 -s start
-
-sleep 2000
sql connect
print ======== step1
diff --git a/tests/script/general/alter/metrics.sim b/tests/script/general/alter/metrics.sim
index fd0b210cd1b452b2a35ebcd9f74aec98c3817b03..ec8c980c16adcf512975e54fa492d3c22b12c195 100644
--- a/tests/script/general/alter/metrics.sim
+++ b/tests/script/general/alter/metrics.sim
@@ -1,10 +1,6 @@
system sh/stop_dnodes.sh
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 2
system sh/exec.sh -n dnode1 -s start
-
-sleep 2000
sql connect
print ======== step1
diff --git a/tests/script/general/alter/table.sim b/tests/script/general/alter/table.sim
index 06704eeca6b3149b47ddc2ffb90aaab9df934bd8..cd0397760276c775d170e90831f6674880cb8f81 100644
--- a/tests/script/general/alter/table.sim
+++ b/tests/script/general/alter/table.sim
@@ -1,10 +1,6 @@
system sh/stop_dnodes.sh
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 2
system sh/exec.sh -n dnode1 -s start
-
-sleep 2000
sql connect
print ======== step1
diff --git a/tests/script/general/alter/testSuite.sim b/tests/script/general/alter/testSuite.sim
deleted file mode 100644
index cfac68144c080593499159eec81325924e7f25e6..0000000000000000000000000000000000000000
--- a/tests/script/general/alter/testSuite.sim
+++ /dev/null
@@ -1,7 +0,0 @@
-run general/alter/cached_schema_after_alter.sim
-run general/alter/count.sim
-run general/alter/import.sim
-run general/alter/insert1.sim
-run general/alter/insert2.sim
-run general/alter/metrics.sim
-run general/alter/table.sim
\ No newline at end of file
diff --git a/tests/script/jenkins/basic.txt b/tests/script/jenkins/basic.txt
index 623182fddf6e517ebeab028dd4183cda8264dbb4..f3bb043f52fc1103a79584b15011e5583f1ac09a 100644
--- a/tests/script/jenkins/basic.txt
+++ b/tests/script/jenkins/basic.txt
@@ -88,7 +88,6 @@
./test.sh -f tsim/tmq/topic.sim
# --- stable
-./test.sh -f tsim/stable/alter1.sim
./test.sh -f tsim/stable/disk.sim
./test.sh -f tsim/stable/dnode3.sim
./test.sh -f tsim/stable/metrics.sim
@@ -98,8 +97,12 @@
./test.sh -f tsim/stable/vnode3.sim
./test.sh -f tsim/stable/column_add.sim
./test.sh -f tsim/stable/column_drop.sim
-#./test.sh -f tsim/stable/column_modify.sim
-
+./test.sh -f tsim/stable/column_modify.sim
+./test.sh -f tsim/stable/tag_add.sim
+./test.sh -f tsim/stable/tag_drop.sim
+./test.sh -f tsim/stable/tag_modify.sim
+./test.sh -f tsim/stable/tag_rename.sim
+./test.sh -f tsim/stable/alter_comment.sim
# --- for multi process mode
./test.sh -f tsim/user/basic1.sim -m
@@ -120,4 +123,10 @@
# --- valgrind
./test.sh -f tsim/valgrind/checkError.sim -v
+# --- sync
+./test.sh -f tsim/sync/3Replica1VgElect.sim
+./test.sh -f tsim/sync/3Replica5VgElect.sim
+./test.sh -f tsim/sync/oneReplica1VgElect.sim
+./test.sh -f tsim/sync/oneReplica5VgElect.sim
+
#======================b1-end===============
diff --git a/tests/script/tsim/stable/alter1.sim b/tests/script/tsim/stable/alter_comment.sim
similarity index 99%
rename from tests/script/tsim/stable/alter1.sim
rename to tests/script/tsim/stable/alter_comment.sim
index 1205f50f6ea144de6f5fae06ef7569a60b47e0cb..cfcbb9a1daa046c894bbfe47f4684ded5faf79a6 100644
--- a/tests/script/tsim/stable/alter1.sim
+++ b/tests/script/tsim/stable/alter_comment.sim
@@ -166,4 +166,5 @@ if $data[0][6] != abcde then
return -1
endi
+return
system sh/exec.sh -n dnode1 -s stop -x SIGINT
diff --git a/tests/script/general/alter/count.sim b/tests/script/tsim/stable/alter_count.sim
similarity index 96%
rename from tests/script/general/alter/count.sim
rename to tests/script/tsim/stable/alter_count.sim
index fc936668b8ea08f9cd08874ad98668a4d8904315..9c9ece7ee4725a5e6da2c292a2c5d2acaa31e75b 100644
--- a/tests/script/general/alter/count.sim
+++ b/tests/script/tsim/stable/alter_count.sim
@@ -1,13 +1,8 @@
system sh/stop_dnodes.sh
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c wallevel -v 2
-system sh/cfg.sh -n dnode1 -c numOfMnodes -v 1
-system sh/cfg.sh -n dnode1 -c mnodeEqualVnodeNum -v 4
print ========= start dnode1 as master
system sh/exec.sh -n dnode1 -s start
-sleep 2000
sql connect
print ======== step1
@@ -141,10 +136,11 @@ endi
print ============= step10
system sh/exec.sh -n dnode1 -s stop -x SIGINT
-sleep 3000
system sh/exec.sh -n dnode1 -s start
-sleep 3000
+sql connect
+sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from d1.tb;
+sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from d1.tb;
sql select count(a), count(b), count(c), count(d), count(e), count(f), count(g), count(h) from tb
if $data00 != 24 then
return -1
diff --git a/tests/script/tsim/stable/column_modify.sim b/tests/script/tsim/stable/column_modify.sim
index 732e449c4aea74f5df310a9af71411e99eeb9f25..16e7ff8f67f1d9818947c54f6929728b086f44ab 100644
--- a/tests/script/tsim/stable/column_modify.sim
+++ b/tests/script/tsim/stable/column_modify.sim
@@ -47,7 +47,7 @@ endi
print ========== step2 describe
sql describe db.ctb
-if $rows != 7 then
+if $rows != 6 then
return -1
endi
if $data[0][0] != ts then
@@ -75,4 +75,32 @@ if $data[5][0] != t3 then
return -1
endi
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
+system sh/exec.sh -n dnode1 -s start
+
+sql connect
+sql select * from db.ctb
+
+if $rows != 2 then
+ return -1
+endi
+#if $data[0][1] != 1 then
+# return -1
+#endi
+#if $data[0][2] != 1234 then
+# return -1
+#endi
+#if $data[0][3] != 101 then
+# return -1
+#endi
+#if $data[1][1] != 1 then
+# return -1
+#endi
+#if $data[1][2] != 12345 then
+# return -1
+#endi
+#if $data[1][3] != 101 then
+# return -1
+#endi
+
system sh/exec.sh -n dnode1 -s stop -x SIGINT
\ No newline at end of file
diff --git a/tests/script/tsim/stable/disk.sim b/tests/script/tsim/stable/disk.sim
index c1ced6ae1076b3b1cc5e8a79f31188c076a93f59..eeaa8293a505a7af3b774eb2e0d3b7fab5b6fe49 100644
--- a/tests/script/tsim/stable/disk.sim
+++ b/tests/script/tsim/stable/disk.sim
@@ -1,17 +1,9 @@
system sh/stop_dnodes.sh
-
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
-system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4
-system sh/cfg.sh -n dnode1 -c maxTablesPerVnode -v 4
system sh/exec.sh -n dnode1 -s start
-
-sleep 2000
sql connect
print ======================== dnode1 start
-
$dbPrefix = d_db
$tbPrefix = d_tb
$mtPrefix = d_mt
@@ -59,9 +51,8 @@ endi
sleep 1000
system sh/exec.sh -n dnode1 -s stop -x SIGINT
-sleep 3000
+sleep 1000
system sh/exec.sh -n dnode1 -s start
-sleep 6000
sql use $db
sql show vgroups
diff --git a/tests/script/tsim/stable/dnode3.sim b/tests/script/tsim/stable/dnode3.sim
index 706c4aa499ce3cebaedcbb71c24a9473a9069c9a..03e8df26b7543e61f0e8e52a1fd5bd8ab9de5e0f 100644
--- a/tests/script/tsim/stable/dnode3.sim
+++ b/tests/script/tsim/stable/dnode3.sim
@@ -1,19 +1,9 @@
system sh/stop_dnodes.sh
-
system sh/deploy.sh -n dnode1 -i 1
system sh/deploy.sh -n dnode2 -i 2
system sh/deploy.sh -n dnode3 -i 3
system sh/deploy.sh -n dnode4 -i 4
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
-system sh/cfg.sh -n dnode2 -c walLevel -v 1
-system sh/cfg.sh -n dnode3 -c walLevel -v 1
-system sh/cfg.sh -n dnode4 -c walLevel -v 1
-# system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4
-# system sh/cfg.sh -n dnode2 -c maxtablesPerVnode -v 4
-# system sh/cfg.sh -n dnode3 -c maxtablesPerVnode -v 4
-# system sh/cfg.sh -n dnode4 -c maxtablesPerVnode -v 4
system sh/exec.sh -n dnode1 -s start
-
sql connect
sql create dnode $hostname PORT 7200
diff --git a/tests/script/tsim/stable/metrics.sim b/tests/script/tsim/stable/metrics.sim
index e68d95511cfd3c4ea556e34ffed5111f05064405..26323b4a92539ed62fdd060cc7e73dfafec70101 100644
--- a/tests/script/tsim/stable/metrics.sim
+++ b/tests/script/tsim/stable/metrics.sim
@@ -1,10 +1,6 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
-system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4
system sh/exec.sh -n dnode1 -s start
-
-sleep 1000
sql connect
$dbPrefix = m_me_db
diff --git a/tests/script/tsim/stable/refcount.sim b/tests/script/tsim/stable/refcount.sim
index fffa6f75a4adfe2b52b1a7d1b587f6bf7a182ba4..d77c8e08900c1b0eeeee95bbfc4c6a4540558e6b 100644
--- a/tests/script/tsim/stable/refcount.sim
+++ b/tests/script/tsim/stable/refcount.sim
@@ -1,11 +1,6 @@
system sh/stop_dnodes.sh
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
-system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4
system sh/exec.sh -n dnode1 -s start
-
-sleep 2000
sql connect
print =============== step1
diff --git a/tests/script/tsim/stable/show.sim b/tests/script/tsim/stable/show.sim
index 823aefe9d86954dc8a3af85359ec02a475182aae..d3ab75adf5ac08dbd4c2a8a0870cfe4fbfd62a4d 100644
--- a/tests/script/tsim/stable/show.sim
+++ b/tests/script/tsim/stable/show.sim
@@ -1,14 +1,9 @@
system sh/stop_dnodes.sh
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
system sh/exec.sh -n dnode1 -s start
-
-sleep 2000
sql connect
print ======================== create stable
-
sql create database d1
sql use d1
diff --git a/tests/script/tsim/stable/tag_add.sim b/tests/script/tsim/stable/tag_add.sim
new file mode 100644
index 0000000000000000000000000000000000000000..01cc7bc36c9f9dc5f69198f4f0282b0f15531fe8
--- /dev/null
+++ b/tests/script/tsim/stable/tag_add.sim
@@ -0,0 +1,193 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/exec.sh -n dnode1 -s start
+sql connect
+
+print ========== prepare stb and ctb
+sql create database db vgroups 1
+sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 binary(16)) comment "abd"
+sql create table db.ctb using db.stb tags(101, "102")
+sql insert into db.ctb values(now, 1, "2")
+
+sql show db.stables
+if $rows != 1 then
+ return -1
+endi
+if $data[0][0] != stb then
+ return -1
+endi
+if $data[0][1] != db then
+ return -1
+endi
+if $data[0][3] != 3 then
+ return -1
+endi
+if $data[0][4] != 2 then
+ return -1
+endi
+if $data[0][6] != abd then
+ return -1
+endi
+
+sql show db.tables
+if $rows != 1 then
+ return -1
+endi
+if $data[0][0] != ctb then
+ return -1
+endi
+if $data[0][1] != db then
+ return -1
+endi
+if $data[0][3] != 3 then
+ return -1
+endi
+if $data[0][4] != stb then
+ return -1
+endi
+if $data[0][6] != 2 then
+ return -1
+endi
+if $data[0][9] != CHILD_TABLE then
+ return -1
+endi
+
+sql select * from db.stb
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[0][4] != 102 then
+ return -1
+endi
+
+sql_error alter table db.stb add tag ts int
+sql_error alter table db.stb add tag t1 int
+sql_error alter table db.stb add tag t2 int
+sql_error alter table db.stb add tag c1 int
+sql_error alter table db.stb add tag c2 int
+
+print ========== step1 add tag t3
+sql alter table db.stb add tag t3 int
+
+sql show db.stables
+if $data[0][3] != 3 then
+ return -1
+endi
+
+sql show db.tables
+if $data[0][3] != 3 then
+ return -1
+endi
+
+sql describe db.ctb
+if $rows != 6 then
+ return -1
+endi
+if $data[5][0] != t3 then
+ return -1
+endi
+if $data[5][1] != INT then
+ return -1
+endi
+if $data[5][2] != 4 then
+ return -1
+endi
+
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[0][4] != 102 then
+ return -1
+endi
+if $data[0][5] != NULL then
+ return -1
+endi
+
+print ========== step2 add tag t4
+sql alter table db.stb add tag t4 bigint
+sql select * from db.stb
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[0][4] != 102 then
+ return -1
+endi
+if $data[0][5] != NULL then
+ return -1
+endi
+if $data[0][6] != NULL then
+ return -1
+endi
+
+sql_error create table db.ctb2 using db.stb tags(101, "102")
+sql create table db.ctb2 using db.stb tags(101, "102", 103, 104)
+sql insert into db.ctb2 values(now, 1, "2")
+
+sql select * from db.stb where tbname = 'ctb2';
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[0][4] != 102 then
+ return -1
+endi
+if $data[0][5] != 103 then
+ return -1
+endi
+if $data[0][6] != 104 then
+ return -1
+endi
+
+print ========== step3 describe
+sql describe db.ctb
+if $rows != 7 then
+ return -1
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
\ No newline at end of file
diff --git a/tests/script/tsim/stable/tag_drop.sim b/tests/script/tsim/stable/tag_drop.sim
new file mode 100644
index 0000000000000000000000000000000000000000..afac59daff9b8d3d2713517f9bf7523e2c612b6c
--- /dev/null
+++ b/tests/script/tsim/stable/tag_drop.sim
@@ -0,0 +1,337 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/exec.sh -n dnode1 -s start
+sql connect
+
+print ========== prepare stb and ctb
+sql create database db vgroups 1
+sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 binary(16)) comment "abd"
+sql create table db.ctb using db.stb tags(101, "102")
+sql insert into db.ctb values(now, 1, "2")
+
+sql show db.stables
+if $rows != 1 then
+ return -1
+endi
+if $data[0][0] != stb then
+ return -1
+endi
+if $data[0][1] != db then
+ return -1
+endi
+if $data[0][3] != 3 then
+ return -1
+endi
+if $data[0][4] != 2 then
+ return -1
+endi
+if $data[0][6] != abd then
+ return -1
+endi
+
+sql show db.tables
+if $rows != 1 then
+ return -1
+endi
+if $data[0][0] != ctb then
+ return -1
+endi
+if $data[0][1] != db then
+ return -1
+endi
+if $data[0][3] != 3 then
+ return -1
+endi
+if $data[0][4] != stb then
+ return -1
+endi
+if $data[0][6] != 2 then
+ return -1
+endi
+if $data[0][9] != CHILD_TABLE then
+ return -1
+endi
+
+sql select * from db.stb
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[0][4] != 102 then
+ return -1
+endi
+
+sql_error alter table db.stb drop tag ts int
+sql_error alter table db.stb drop tag t3 int
+sql_error alter table db.stb drop tag t4 int
+sql_error alter table db.stb drop tag c1 int
+sql_error alter table db.stb drop tag c2 int
+
+print ========== step1 drop tag t2
+sql alter table db.stb drop tag t2
+
+sql show db.stables
+if $data[0][4] != 1 then
+ return -1
+endi
+
+sql describe db.ctb
+if $rows != 4 then
+ return -1
+endi
+if $data[4][0] != null then
+ return -1
+endi
+
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[0][4] != null then
+ return -1
+endi
+
+print ========== step2 add tag t3
+sql alter table db.stb add tag t3 int
+
+sql show db.stables
+if $data[0][4] != 2 then
+ return -1
+endi
+
+sql describe db.ctb
+if $rows != 5 then
+ return -1
+endi
+if $data[4][0] != t3 then
+ return -1
+endi
+if $data[4][1] != INT then
+ return -1
+endi
+if $data[4][2] != 4 then
+ return -1
+endi
+
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[0][4] != NULL then
+ return -1
+endi
+
+print ========== step3 add tag t4
+sql alter table db.stb add tag t4 bigint
+sql select * from db.stb
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[0][4] != NULL then
+ return -1
+endi
+if $data[0][5] != NULL then
+ return -1
+endi
+if $data[0][6] != null then
+ return -1
+endi
+
+sql_error create table db.ctb2 using db.stb tags(101, "102")
+sql create table db.ctb2 using db.stb tags(201, 202, 203)
+sql insert into db.ctb2 values(now, 1, "2")
+
+sql select * from db.stb where tbname = 'ctb2';
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 201 then
+ return -1
+endi
+if $data[0][4] != 202 then
+ return -1
+endi
+if $data[0][5] != 203 then
+ return -1
+endi
+
+print ========== step4 describe
+sql describe db.ctb
+if $rows != 6 then
+ return -1
+endi
+
+print ========== step5 add tag2
+sql alter table db.stb add tag t2 bigint
+sql select * from db.stb where tbname = 'ctb2';
+sql select * from db.stb where tbname = 'ctb2';
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 201 then
+ return -1
+endi
+if $data[0][4] != 202 then
+ return -1
+endi
+if $data[0][5] != 203 then
+ return -1
+endi
+if $data[0][6] != NULL then
+ return -1
+endi
+
+sql_error create table db.ctb2 using db.stb tags(101, "102")
+sql_error create table db.ctb2 using db.stb tags(201, 202, 203)
+sql create table db.ctb3 using db.stb tags(301, 302, 303, 304)
+sql insert into db.ctb3 values(now, 1, "2")
+
+sql select * from db.stb where tbname = 'ctb3';
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 301 then
+ return -1
+endi
+if $data[0][4] != 302 then
+ return -1
+endi
+if $data[0][5] != 303 then
+ return -1
+endi
+if $data[0][6] != 304 then
+ return -1
+endi
+
+print ========== step6 describe
+sql describe db.ctb
+if $rows != 7 then
+ return -1
+endi
+
+if $data[3][0] != t1 then
+ return -1
+endi
+if $data[4][0] != t3 then
+ return -1
+endi
+if $data[5][0] != t4 then
+ return -1
+endi
+if $data[6][0] != t2 then
+ return -1
+endi
+if $data[6][1] != BIGINT then
+ return -1
+endi
+
+print ========== step7 drop tag t1
+sql alter table db.stb drop tag t1
+
+sql show db.stables
+if $data[0][4] != 3 then
+ return -1
+endi
+
+sql describe db.ctb
+if $rows != 6 then
+ return -1
+endi
+
+sql select * from db.stb where tbname = 'ctb3';
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 2 then
+ return -1
+endi
+if $data[0][3] != 302 then
+ return -1
+endi
+if $data[0][4] != 303 then
+ return -1
+endi
+if $data[0][5] != 304 then
+ return -1
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
\ No newline at end of file
diff --git a/tests/script/tsim/stable/tag_modify.sim b/tests/script/tsim/stable/tag_modify.sim
new file mode 100644
index 0000000000000000000000000000000000000000..62e4c7b28255ee085250cb4fc43612116fc50be0
--- /dev/null
+++ b/tests/script/tsim/stable/tag_modify.sim
@@ -0,0 +1,123 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/exec.sh -n dnode1 -s start
+sql connect
+
+print ========== prepare stb and ctb
+sql create database db vgroups 1
+sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 binary(4)) comment "abd"
+
+sql_error alter table db.stb MODIFY tag c2 binary(3)
+sql_error alter table db.stb MODIFY tag c2 int
+sql_error alter table db.stb MODIFY tag c1 int
+sql_error alter table db.stb MODIFY tag ts int
+sql_error alter table db.stb MODIFY tag t2 binary(3)
+sql_error alter table db.stb MODIFY tag t2 int
+sql_error alter table db.stb MODIFY tag t1 int
+sql create table db.ctb using db.stb tags(101, "12345")
+sql insert into db.ctb values(now, 1, "1234")
+
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 1234 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[0][4] != 1234 then
+ return -1
+endi
+
+print ========== step1 modify tag
+sql alter table db.stb MODIFY tag t2 binary(5)
+sql select * from db.stb
+
+sql create table db.ctb2 using db.stb tags(101, "12345")
+sql insert into db.ctb2 values(now, 1, "1234")
+
+sql select * from db.stb where tbname = 'ctb2';
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 1234 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[0][4] != 12345 then
+ return -1
+endi
+
+print ========== step2 describe
+sql describe db.ctb2
+if $rows != 5 then
+ return -1
+endi
+if $data[0][0] != ts then
+ return -1
+endi
+if $data[1][0] != c1 then
+ return -1
+endi
+if $data[2][0] != c2 then
+ return -1
+endi
+if $data[3][0] != t1 then
+ return -1
+endi
+if $data[4][0] != t2 then
+ return -1
+endi
+if $data[4][1] != VARCHAR then
+ return -1
+endi
+if $data[4][2] != 5 then
+ return -1
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
+system sh/exec.sh -n dnode1 -s start
+
+sql connect
+sql describe db.ctb2
+if $rows != 5 then
+ return -1
+endi
+if $data[0][0] != ts then
+ return -1
+endi
+if $data[1][0] != c1 then
+ return -1
+endi
+if $data[2][0] != c2 then
+ return -1
+endi
+if $data[3][0] != t1 then
+ return -1
+endi
+if $data[4][0] != t2 then
+ return -1
+endi
+if $data[4][1] != VARCHAR then
+ return -1
+endi
+if $data[4][2] != 5 then
+ return -1
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
\ No newline at end of file
diff --git a/tests/script/tsim/stable/tag_rename.sim b/tests/script/tsim/stable/tag_rename.sim
new file mode 100644
index 0000000000000000000000000000000000000000..2f67a3ab2c51d8c8499219ea8779b23797d9d0af
--- /dev/null
+++ b/tests/script/tsim/stable/tag_rename.sim
@@ -0,0 +1,120 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/exec.sh -n dnode1 -s start
+sql connect
+
+print ========== prepare stb and ctb
+sql create database db vgroups 1
+sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 binary(4)) comment "abd"
+
+sql_error alter table db.stb rename tag c2 c3
+sql_error alter table db.stb rename tag c2 c3
+sql_error alter table db.stb rename tag c1 c3
+sql_error alter table db.stb rename tag ts c3
+sql_error alter table db.stb rename tag t2 t1
+sql_error alter table db.stb rename tag t2 t2
+sql_error alter table db.stb rename tag t1 t2
+sql create table db.ctb using db.stb tags(101, "12345")
+sql insert into db.ctb values(now, 1, "1234")
+
+sql select * from db.stb
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 1234 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[0][4] != 1234 then
+ return -1
+endi
+
+print ========== step1 rename tag
+sql alter table db.stb rename tag t1 t3
+sql select * from db.stb
+sql select * from db.stb
+
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+
+if $rows != 1 then
+ return -1
+endi
+if $data[0][1] != 1 then
+ return -1
+endi
+if $data[0][2] != 1234 then
+ return -1
+endi
+if $data[0][3] != 101 then
+ return -1
+endi
+if $data[0][4] != 1234 then
+ return -1
+endi
+
+print ========== step2 describe
+sql describe db.ctb
+if $rows != 5 then
+ return -1
+endi
+if $data[0][0] != ts then
+ return -1
+endi
+if $data[1][0] != c1 then
+ return -1
+endi
+if $data[2][0] != c2 then
+ return -1
+endi
+if $data[3][0] != t3 then
+ return -1
+endi
+if $data[4][0] != t2 then
+ return -1
+endi
+if $data[4][1] != VARCHAR then
+ return -1
+endi
+if $data[4][2] != 4 then
+ return -1
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
+system sh/exec.sh -n dnode1 -s start
+
+sql connect
+sql describe db.ctb
+if $rows != 5 then
+ return -1
+endi
+if $data[0][0] != ts then
+ return -1
+endi
+if $data[1][0] != c1 then
+ return -1
+endi
+if $data[2][0] != c2 then
+ return -1
+endi
+if $data[3][0] != t3 then
+ return -1
+endi
+if $data[4][0] != t2 then
+ return -1
+endi
+if $data[4][1] != VARCHAR then
+ return -1
+endi
+if $data[4][2] != 4 then
+ return -1
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
\ No newline at end of file
diff --git a/tests/script/tsim/stable/values.sim b/tests/script/tsim/stable/values.sim
index e5e3118e12634f41b0d124d3ba379b8f93df442f..88eca28a12c6a48c5c39178f194e8836864e71d8 100644
--- a/tests/script/tsim/stable/values.sim
+++ b/tests/script/tsim/stable/values.sim
@@ -1,16 +1,9 @@
system sh/stop_dnodes.sh
-
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
-system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4
system sh/exec.sh -n dnode1 -s start
-
-sleep 2000
sql connect
print ======================== dnode1 start
-
sql create database vdb0
sql create table vdb0.mt (ts timestamp, tbcol int) TAGS(tgcol int)
diff --git a/tests/script/tsim/stable/vnode3.sim b/tests/script/tsim/stable/vnode3.sim
index 97a8203883cc5f427ccc355cf5898b1e3ebe6cd2..186d0f5eea254aeb451f48c3cbf7d0d094723c09 100644
--- a/tests/script/tsim/stable/vnode3.sim
+++ b/tests/script/tsim/stable/vnode3.sim
@@ -1,16 +1,9 @@
system sh/stop_dnodes.sh
-
system sh/deploy.sh -n dnode1 -i 1
-system sh/cfg.sh -n dnode1 -c walLevel -v 1
-system sh/cfg.sh -n dnode1 -c maxtablesPerVnode -v 4
-system sh/cfg.sh -n dnode1 -c maxTablesPerVnode -v 4
system sh/exec.sh -n dnode1 -s start
-
-sleep 2000
sql connect
print ======================== dnode1 start
-
$dbPrefix = v3_db
$tbPrefix = v3_tb
$mtPrefix = v3_mt
diff --git a/tests/script/tsim/sync/3Replica1VgElect.sim b/tests/script/tsim/sync/3Replica1VgElect.sim
new file mode 100644
index 0000000000000000000000000000000000000000..61b3b09288faecf857c5d33e7a34ac3544c4db67
--- /dev/null
+++ b/tests/script/tsim/sync/3Replica1VgElect.sim
@@ -0,0 +1,478 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/deploy.sh -n dnode2 -i 2
+system sh/deploy.sh -n dnode3 -i 3
+system sh/deploy.sh -n dnode4 -i 4
+
+system sh/cfg.sh -n dnode1 -c supportVnodes -v 0
+
+system sh/exec.sh -n dnode1 -s start
+system sh/exec.sh -n dnode2 -s start
+system sh/exec.sh -n dnode3 -s start
+system sh/exec.sh -n dnode4 -s start
+
+$loop_cnt = 0
+check_dnode_ready:
+ $loop_cnt = $loop_cnt + 1
+ sleep 200
+ if $loop_cnt == 10 then
+ print ====> dnode not ready!
+ return -1
+ endi
+sql show dnodes
+print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+if $data[0][0] != 1 then
+ return -1
+endi
+if $data[0][4] != ready then
+ goto check_dnode_ready
+endi
+
+sql connect
+sql create dnode $hostname port 7200
+sql create dnode $hostname port 7300
+sql create dnode $hostname port 7400
+
+$loop_cnt = 0
+check_dnode_ready_1:
+$loop_cnt = $loop_cnt + 1
+sleep 200
+if $loop_cnt == 10 then
+ print ====> dnodes not ready!
+ return -1
+endi
+sql show dnodes
+print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+if $data[0][4] != ready then
+ goto check_dnode_ready_1
+endi
+if $data[1][4] != ready then
+ goto check_dnode_ready_1
+endi
+if $data[2][4] != ready then
+ goto check_dnode_ready_1
+endi
+if $data[3][4] != ready then
+ goto check_dnode_ready_1
+endi
+
+$replica = 3
+$vgroups = 1
+
+print ============= create database
+sql create database db replica $replica vgroups $vgroups
+
+$loop_cnt = 0
+check_db_ready:
+$loop_cnt = $loop_cnt + 1
+sleep 200
+if $loop_cnt == 100 then
+ print ====> db not ready!
+ return -1
+endi
+sql show databases
+print ===> rows: $rows
+print $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6] $data[2][7] $data[2][8] $data[2][9] $data[2][10] $data[2][11] $data[2][12] $data[2][13] $data[2][14] $data[2][15] $data[2][16] $data[2][17] $data[2][18] $data[2][19]
+if $rows != 3 then
+ return -1
+endi
+if $data[2][19] != ready then
+ goto check_db_ready
+endi
+
+sql use db
+
+$loop_cnt = 0
+check_vg_ready:
+$loop_cnt = $loop_cnt + 1
+sleep 200
+if $loop_cnt == 300 then
+ print ====> vgroups not ready!
+ return -1
+endi
+
+sql show vgroups
+print ===> rows: $rows
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11]
+
+if $rows != $vgroups then
+ return -1
+endi
+
+if $data[0][4] == LEADER then
+ if $data[0][6] == FOLLOWER then
+ if $data[0][8] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][3]
+ endi
+ endi
+elif $data[0][6] == LEADER then
+ if $data[0][4] == FOLLOWER then
+ if $data[0][8] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][5]
+ endi
+ endi
+elif $data[0][8] == LEADER then
+ if $data[0][4] == FOLLOWER then
+ if $data[0][6] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][7]
+ endi
+ endi
+else
+ goto check_vg_ready
+endi
+
+
+vg_ready:
+print ====> create stable/child table
+sql create table stb (ts timestamp, c1 int, c2 float, c3 binary(10)) tags (t1 int)
+
+sql show stables
+if $rows != 1 then
+ return -1
+endi
+
+$ctbPrefix = ctb
+$ntbPrefix = ntb
+$tbNum = 10
+$i = 0
+while $i < $tbNum
+ $ctb = $ctbPrefix . $i
+ sql create table $ctb using stb tags( $i )
+ $ntb = $ntbPrefix . $i
+ sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10))
+ $i = $i + 1
+endw
+
+$totalTblNum = $tbNum * 2
+sleep 1000
+sql show tables
+print ====> expect $totalTblNum tables, and $rows created in fact
+if $rows != $totalTblNum then
+ return -1
+endi
+
+start_switch_leader:
+
+$switch_loop_cnt = 0
+sql show vgroups
+$dnodeId = $data[0][3]
+$dnodeId = dnode . $dnodeId
+
+switch_leader_to_offine_loop:
+
+print $dnodeId
+print ====> stop $dnodeId
+system sh/exec.sh -n $dnodeId -s stop -x SIGINT
+
+
+$loop_cnt = 0
+$loop_cnt = $loop_cnt + 1
+sleep 201
+if $loop_cnt == 300 then
+ print ====> vgroups switch fail!!!
+ return -1
+endi
+sql show vgroups
+print ===> rows: $rows
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11]
+
+if $rows != $vgroups then
+ return -1
+endi
+
+
+vg_offline_1:
+
+print ====> start $dnodeId
+system sh/exec.sh -n $dnodeId -s start
+
+$switch_loop_cnt = $switch_loop_cnt + 1
+print $switch_loop_cnt
+
+if $switch_loop_cnt == 1 then
+ sql show vgroups
+ $dnodeId = $data[1][3]
+ $dnodeId = dnode . $dnodeId
+ goto switch_leader_to_offine_loop
+elif $switch_loop_cnt == 2 then
+ sql show vgroups
+ $dnodeId = $data[2][3]
+ $dnodeId = dnode . $dnodeId
+ goto switch_leader_to_offine_loop
+elif $switch_loop_cnt == 3 then
+ sql show vgroups
+ $dnodeId = $data[3][3]
+ $dnodeId = dnode . $dnodeId
+ goto switch_leader_to_offine_loop
+elif $switch_loop_cnt == 4 then
+ sql show vgroups
+ $dnodeId = $data[4][3]
+ $dnodeId = dnode . $dnodeId
+ goto switch_leader_to_offine_loop
+else
+ goto stop_leader_to_offine_loop
+endi
+
+stop_leader_to_offine_loop:
+
+$loop_cnt = 0
+check_vg_ready1:
+$loop_cnt = $loop_cnt + 1
+print $loop_cnt
+sleep 202
+if $loop_cnt == 300 then
+ print ====> vgroups not ready!
+ return -1
+endi
+
+sql show vgroups
+print ===> rows: $rows
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11]
+
+if $rows != $vgroups then
+ return -1
+endi
+
+if $data[0][4] == LEADER then
+ if $data[0][6] == FOLLOWER then
+ if $data[0][8] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][3]
+ endi
+ endi
+elif $data[0][6] == LEADER then
+ if $data[0][4] == FOLLOWER then
+ if $data[0][8] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][5]
+ endi
+ endi
+elif $data[0][8] == LEADER then
+ if $data[0][4] == FOLLOWER then
+ if $data[0][6] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][7]
+ endi
+ endi
+else
+ goto check_vg_ready1
+endi
+
+
+print ====> final test: create stable/child table
+sql create table stb1 (ts timestamp, c1 int, c2 float, c3 binary(10)) tags (t1 int)
+
+
+sql show stables
+if $rows != 2 then
+ return -1
+endi
+
+$ctbPrefix = ctb1
+$ntbPrefix = ntb1
+$tbNum = 10
+$i = 0
+while $i < $tbNum
+ $ctb = $ctbPrefix . $i
+ sql create table $ctb using stb1 tags( $i )
+ $ntb = $ntbPrefix . $i
+ sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10))
+ $i = $i + 1
+endw
+
+sleep 1000
+sql show stables
+if $rows != 2 then
+ return -1
+endi
+
+sql show tables
+if $rows != 40 then
+ return -1
+endi
+
+
+
+system sh/deploy.sh -n dnode5 -i 5
+system sh/exec.sh -n dnode5 -s start
+
+sql connect
+sql create dnode $hostname port 7500
+
+$loop_cnt = 0
+check_dnode_ready3:
+ $loop_cnt = $loop_cnt + 1
+ sleep 200
+ if $loop_cnt == 100 then
+ print ====> dnode not ready!
+ return -1
+ endi
+
+sql show dnodes
+print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+print ===> $rows $data[4][0] $data[4][1] $data[4][2] $data[4][3] $data[4][4] $data[4][5] $data[4][6]
+
+if $rows != 5 then
+ return -1
+endi
+
+if $data[4][4] != ready then
+ goto check_dnode_ready3
+endi
+
+
+
+# restart clusters
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
+system sh/exec.sh -n dnode2 -s stop -x SIGINT
+system sh/exec.sh -n dnode3 -s stop -x SIGINT
+system sh/exec.sh -n dnode4 -s stop -x SIGINT
+system sh/exec.sh -n dnode5 -s stop -x SIGINT
+
+
+
+system sh/exec.sh -n dnode1 -s start
+system sh/exec.sh -n dnode2 -s start
+system sh/exec.sh -n dnode3 -s start
+system sh/exec.sh -n dnode4 -s start
+system sh/exec.sh -n dnode5 -s start
+
+
+$loop_cnt = 0
+check_dnode_ready_2:
+ $loop_cnt = $loop_cnt + 1
+ sleep 200
+ if $loop_cnt == 10 then
+ print ====> dnode not ready!
+ return -1
+ endi
+sql show dnodes
+print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+if $data[0][0] != 1 then
+ return -1
+endi
+
+if $data[0][4] != ready then
+ goto check_dnode_ready_2
+endi
+if $data[1][4] != ready then
+ goto check_dnode_ready_2
+endi
+if $data[2][4] != ready then
+ goto check_dnode_ready_2
+endi
+if $data[3][4] != ready then
+ goto check_dnode_ready_2
+endi
+
+sql use db;
+$ctbPrefix = ctb2
+$ntbPrefix = ntb2
+$tbNum = 10
+$i = 0
+while $i < $tbNum
+ $ctb = $ctbPrefix . $i
+ sql create table $ctb using stb1 tags( $i )
+ $ntb = $ntbPrefix . $i
+ sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10))
+ $i = $i + 1
+endw
+
+sleep 1000
+sql use db
+sql show stables
+if $rows != 2 then
+ return -1
+endi
+
+sql show tables
+print $rows
+if $rows != 60 then
+ return -1
+endi
+
+
+
+$replica = 3
+$vgroups = 5
+
+print ============= create database
+sql create database db1 replica $replica vgroups $vgroups
+
+$loop_cnt = 0
+check_db_ready1:
+$loop_cnt = $loop_cnt + 1
+sleep 200
+if $loop_cnt == 100 then
+ print ====> db not ready!
+ return -1
+endi
+sql show databases
+print ===> rows: $rows
+print $data(db1)[0] $data(db1)[1] $data(db1)[2] $data(db1)[3] $data(db1)[4] $data(db1)[5] $data(db1)[6] $data(db1)[7] $data(db1)[8] $data(db1)[9] $data(db1)[10] $data(db1)[11] $data(db1)[12] $data(db1)[13] $data(db1)[14] $data(db1)[15] $data(db1)[16] $data(db1)[17] $data(db1)[18] $data(db1)[19]
+if $rows != 4 then
+ return -1
+endi
+if $data(db1)[19] != ready then
+ goto check_db_ready1
+endi
+
+
+sql use db1
+
+$loop_cnt = 0
+check_vg_ready3:
+$loop_cnt = $loop_cnt + 1
+print $loop_cnt
+sleep 202
+if $loop_cnt == 300 then
+ print ====> vgroups not ready!
+ return -1
+endi
+
+sql show vgroups
+print ===> rows: $rows
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11]
+if $rows != $vgroups then
+ return -1
+endi
+
+if $data[0][4] == LEADER then
+ if $data[0][6] == FOLLOWER then
+ if $data[0][8] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][3]
+ endi
+ endi
+elif $data[0][6] == LEADER then
+ if $data[0][4] == FOLLOWER then
+ if $data[0][8] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][5]
+ endi
+ endi
+elif $data[0][8] == LEADER then
+ if $data[0][4] == FOLLOWER then
+ if $data[0][6] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][7]
+ endi
+ endi
+else
+ goto check_vg_ready3
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
+system sh/exec.sh -n dnode2 -s stop -x SIGINT
+system sh/exec.sh -n dnode3 -s stop -x SIGINT
+system sh/exec.sh -n dnode4 -s stop -x SIGINT
+
+
+
diff --git a/tests/script/tsim/sync/3Replica5VgElect.sim b/tests/script/tsim/sync/3Replica5VgElect.sim
new file mode 100644
index 0000000000000000000000000000000000000000..4041263e55fa06b93ecea9930ba1dbd728579ce7
--- /dev/null
+++ b/tests/script/tsim/sync/3Replica5VgElect.sim
@@ -0,0 +1,755 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/deploy.sh -n dnode2 -i 2
+system sh/deploy.sh -n dnode3 -i 3
+system sh/deploy.sh -n dnode4 -i 4
+
+system sh/cfg.sh -n dnode1 -c supportVnodes -v 0
+
+system sh/exec.sh -n dnode1 -s start
+system sh/exec.sh -n dnode2 -s start
+system sh/exec.sh -n dnode3 -s start
+system sh/exec.sh -n dnode4 -s start
+
+$loop_cnt = 0
+check_dnode_ready:
+ $loop_cnt = $loop_cnt + 1
+ sleep 200
+ if $loop_cnt == 10 then
+ print ====> dnode not ready!
+ return -1
+ endi
+sql show dnodes
+print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+if $data[0][0] != 1 then
+ return -1
+endi
+if $data[0][4] != ready then
+ goto check_dnode_ready
+endi
+
+sql connect
+sql create dnode $hostname port 7200
+sql create dnode $hostname port 7300
+sql create dnode $hostname port 7400
+
+$loop_cnt = 0
+check_dnode_ready_1:
+$loop_cnt = $loop_cnt + 1
+sleep 200
+if $loop_cnt == 10 then
+ print ====> dnodes not ready!
+ return -1
+endi
+sql show dnodes
+print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+if $data[0][4] != ready then
+ goto check_dnode_ready_1
+endi
+if $data[1][4] != ready then
+ goto check_dnode_ready_1
+endi
+if $data[2][4] != ready then
+ goto check_dnode_ready_1
+endi
+if $data[3][4] != ready then
+ goto check_dnode_ready_1
+endi
+
+$replica = 3
+$vgroups = 5
+
+print ============= create database
+sql create database db replica $replica vgroups $vgroups
+
+$loop_cnt = 0
+check_db_ready:
+$loop_cnt = $loop_cnt + 1
+sleep 200
+if $loop_cnt == 100 then
+ print ====> db not ready!
+ return -1
+endi
+sql show databases
+print ===> rows: $rows
+print $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6] $data[2][7] $data[2][8] $data[2][9] $data[2][10] $data[2][11] $data[2][12] $data[2][13] $data[2][14] $data[2][15] $data[2][16] $data[2][17] $data[2][18] $data[2][19]
+if $rows != 3 then
+ return -1
+endi
+if $data[2][19] != ready then
+ goto check_db_ready
+endi
+
+sql use db
+
+$loop_cnt = 0
+check_vg_ready:
+$loop_cnt = $loop_cnt + 1
+sleep 200
+if $loop_cnt == 300 then
+ print ====> vgroups not ready!
+ return -1
+endi
+
+sql show vgroups
+print ===> rows: $rows
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6] $data[1][7] $data[1][8] $data[1][9] $data[1][10] $data[1][11]
+print $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6] $data[2][7] $data[2][8] $data[2][9] $data[2][10] $data[2][11]
+print $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6] $data[3][7] $data[3][8] $data[3][9] $data[3][10] $data[3][11]
+print $data[4][0] $data[4][1] $data[4][2] $data[4][3] $data[4][4] $data[4][5] $data[4][6] $data[4][7] $data[4][8] $data[4][9] $data[4][10] $data[4][11]
+if $rows != $vgroups then
+ return -1
+endi
+
+if $data[0][4] == LEADER then
+ if $data[0][6] == FOLLOWER then
+ if $data[0][8] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][3]
+ endi
+ endi
+elif $data[0][6] == LEADER then
+ if $data[0][4] == FOLLOWER then
+ if $data[0][8] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][5]
+ endi
+ endi
+elif $data[0][8] == LEADER then
+ if $data[0][4] == FOLLOWER then
+ if $data[0][6] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][7]
+ endi
+ endi
+else
+ goto check_vg_ready
+endi
+
+if $data[1][4] == LEADER then
+ if $data[1][6] == FOLLOWER then
+ if $data[1][8] == FOLLOWER then
+ print ---- vgroup $data[1][0] leader locate on dnode $data[1][3]
+ endi
+ endi
+elif $data[1][6] == LEADER then
+ if $data[1][4] == FOLLOWER then
+ if $data[1][8] == FOLLOWER then
+ print ---- vgroup $data[1][0] leader locate on dnode $data[1][5]
+ endi
+ endi
+elif $data[1][8] == LEADER then
+ if $data[1][4] == FOLLOWER then
+ if $data[1][6] == FOLLOWER then
+ print ---- vgroup $data[1][0] leader locate on dnode $data[1][7]
+ endi
+ endi
+else
+ goto check_vg_ready
+endi
+
+if $data[2][4] == LEADER then
+ if $data[2][6] == FOLLOWER then
+ if $data[2][8] == FOLLOWER then
+ print ---- vgroup $data[2][0] leader locate on dnode $data[2][3]
+ endi
+ endi
+elif $data[2][6] == LEADER then
+ if $data[2][4] == FOLLOWER then
+ if $data[2][8] == FOLLOWER then
+ print ---- vgroup $data[2][0] leader locate on dnode $data[2][5]
+ endi
+ endi
+elif $data[2][8] == LEADER then
+ if $data[2][4] == FOLLOWER then
+ if $data[2][6] == FOLLOWER then
+ print ---- vgroup $data[2][0] leader locate on dnode $data[2][7]
+ endi
+ endi
+else
+ goto check_vg_ready
+endi
+
+if $data[3][4] == LEADER then
+ if $data[3][6] == FOLLOWER then
+ if $data[3][8] == FOLLOWER then
+ print ---- vgroup $data[3][0] leader locate on dnode $data[3][3]
+ endi
+ endi
+elif $data[3][6] == LEADER then
+ if $data[3][4] == FOLLOWER then
+ if $data[3][8] == FOLLOWER then
+ print ---- vgroup $data[3][0] leader locate on dnode $data[3][5]
+ endi
+ endi
+elif $data[3][8] == LEADER then
+ if $data[3][4] == FOLLOWER then
+ if $data[3][6] == FOLLOWER then
+ print ---- vgroup $data[3][0] leader locate on dnode $data[3][7]
+ endi
+ endi
+else
+ goto check_vg_ready
+endi
+
+if $data[4][4] == LEADER then
+ if $data[4][6] == FOLLOWER then
+ if $data[4][8] == FOLLOWER then
+ print ---- vgroup $data[4][0] leader locate on dnode $data[4][3]
+ endi
+ endi
+elif $data[4][6] == LEADER then
+ if $data[4][4] == FOLLOWER then
+ if $data[4][8] == FOLLOWER then
+ print ---- vgroup $data[4][0] leader locate on dnode $data[4][5]
+ endi
+ endi
+elif $data[4][8] == LEADER then
+ if $data[4][4] == FOLLOWER then
+ if $data[4][6] == FOLLOWER then
+ print ---- vgroup $data[4][0] leader locate on dnode $data[4][7]
+ endi
+ endi
+else
+ goto check_vg_ready
+endi
+
+vg_ready:
+print ====> create stable/child table
+sql create table stb (ts timestamp, c1 int, c2 float, c3 binary(10)) tags (t1 int)
+
+sql show stables
+if $rows != 1 then
+ return -1
+endi
+
+$ctbPrefix = ctb
+$ntbPrefix = ntb
+$tbNum = 10
+$i = 0
+while $i < $tbNum
+ $ctb = $ctbPrefix . $i
+ sql create table $ctb using stb tags( $i )
+ $ntb = $ntbPrefix . $i
+ sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10))
+ $i = $i + 1
+endw
+
+$totalTblNum = $tbNum * 2
+sleep 1000
+sql show tables
+print ====> expect $totalTblNum tables, and $rows created in fact
+if $rows != $totalTblNum then
+ return -1
+endi
+
+start_switch_leader:
+
+$switch_loop_cnt = 0
+sql show vgroups
+$dnodeId = $data[0][3]
+$dnodeId = dnode . $dnodeId
+
+switch_leader_to_offine_loop:
+
+print $dnodeId
+print ====> stop $dnodeId
+system sh/exec.sh -n $dnodeId -s stop -x SIGINT
+
+
+$loop_cnt = 0
+$loop_cnt = $loop_cnt + 1
+sleep 201
+if $loop_cnt == 300 then
+ print ====> vgroups switch fail!!!
+ return -1
+endi
+sql show vgroups
+print ===> rows: $rows
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6] $data[1][7] $data[1][8] $data[1][9] $data[1][10] $data[1][11]
+print $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6] $data[2][7] $data[2][8] $data[2][9] $data[2][10] $data[2][11]
+print $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6] $data[3][7] $data[3][8] $data[3][9] $data[3][10] $data[3][11]
+print $data[4][0] $data[4][1] $data[4][2] $data[4][3] $data[4][4] $data[4][5] $data[4][6] $data[4][7] $data[4][8] $data[4][9] $data[4][10] $data[4][11]
+if $rows != $vgroups then
+ return -1
+endi
+
+
+vg_offline_1:
+
+print ====> start $dnodeId
+system sh/exec.sh -n $dnodeId -s start
+
+$switch_loop_cnt = $switch_loop_cnt + 1
+print $switch_loop_cnt
+
+if $switch_loop_cnt == 1 then
+ sql show vgroups
+ $dnodeId = $data[1][3]
+ $dnodeId = dnode . $dnodeId
+ goto switch_leader_to_offine_loop
+elif $switch_loop_cnt == 2 then
+ sql show vgroups
+ $dnodeId = $data[2][3]
+ $dnodeId = dnode . $dnodeId
+ goto switch_leader_to_offine_loop
+elif $switch_loop_cnt == 3 then
+ sql show vgroups
+ $dnodeId = $data[3][3]
+ $dnodeId = dnode . $dnodeId
+ goto switch_leader_to_offine_loop
+elif $switch_loop_cnt == 4 then
+ sql show vgroups
+ $dnodeId = $data[4][3]
+ $dnodeId = dnode . $dnodeId
+ goto switch_leader_to_offine_loop
+else
+ goto stop_leader_to_offine_loop
+endi
+
+stop_leader_to_offine_loop:
+
+$loop_cnt = 0
+check_vg_ready1:
+$loop_cnt = $loop_cnt + 1
+print $loop_cnt
+sleep 202
+if $loop_cnt == 300 then
+ print ====> vgroups not ready!
+ return -1
+endi
+
+sql show vgroups
+print ===> rows: $rows
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6] $data[1][7] $data[1][8] $data[1][9] $data[1][10] $data[1][11]
+print $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6] $data[2][7] $data[2][8] $data[2][9] $data[2][10] $data[2][11]
+print $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6] $data[3][7] $data[3][8] $data[3][9] $data[3][10] $data[3][11]
+print $data[4][0] $data[4][1] $data[4][2] $data[4][3] $data[4][4] $data[4][5] $data[4][6] $data[4][7] $data[4][8] $data[4][9] $data[4][10] $data[4][11]
+if $rows != $vgroups then
+ return -1
+endi
+
+if $data[0][4] == LEADER then
+ if $data[0][6] == FOLLOWER then
+ if $data[0][8] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][3]
+ endi
+ endi
+elif $data[0][6] == LEADER then
+ if $data[0][4] == FOLLOWER then
+ if $data[0][8] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][5]
+ endi
+ endi
+elif $data[0][8] == LEADER then
+ if $data[0][4] == FOLLOWER then
+ if $data[0][6] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][7]
+ endi
+ endi
+else
+ goto check_vg_ready1
+endi
+
+if $data[1][4] == LEADER then
+ if $data[1][6] == FOLLOWER then
+ if $data[1][8] == FOLLOWER then
+ print ---- vgroup $data[1][0] leader locate on dnode $data[1][3]
+ endi
+ endi
+elif $data[1][6] == LEADER then
+ if $data[1][4] == FOLLOWER then
+ if $data[1][8] == FOLLOWER then
+ print ---- vgroup $data[1][0] leader locate on dnode $data[1][5]
+ endi
+ endi
+elif $data[1][8] == LEADER then
+ if $data[1][4] == FOLLOWER then
+ if $data[1][6] == FOLLOWER then
+ print ---- vgroup $data[1][0] leader locate on dnode $data[1][7]
+ endi
+ endi
+else
+ goto check_vg_ready1
+endi
+
+if $data[2][4] == LEADER then
+ if $data[2][6] == FOLLOWER then
+ if $data[2][8] == FOLLOWER then
+ print ---- vgroup $data[2][0] leader locate on dnode $data[2][3]
+ endi
+ endi
+elif $data[2][6] == LEADER then
+ if $data[2][4] == FOLLOWER then
+ if $data[2][8] == FOLLOWER then
+ print ---- vgroup $data[2][0] leader locate on dnode $data[2][5]
+ endi
+ endi
+elif $data[2][8] == LEADER then
+ if $data[2][4] == FOLLOWER then
+ if $data[2][6] == FOLLOWER then
+ print ---- vgroup $data[2][0] leader locate on dnode $data[2][7]
+ endi
+ endi
+else
+ goto check_vg_ready1
+endi
+
+if $data[3][4] == LEADER then
+ if $data[3][6] == FOLLOWER then
+ if $data[3][8] == FOLLOWER then
+ print ---- vgroup $data[3][0] leader locate on dnode $data[3][3]
+ endi
+ endi
+elif $data[3][6] == LEADER then
+ if $data[3][4] == FOLLOWER then
+ if $data[3][8] == FOLLOWER then
+ print ---- vgroup $data[3][0] leader locate on dnode $data[3][5]
+ endi
+ endi
+elif $data[3][8] == LEADER then
+ if $data[3][4] == FOLLOWER then
+ if $data[3][6] == FOLLOWER then
+ print ---- vgroup $data[3][0] leader locate on dnode $data[3][7]
+ endi
+ endi
+else
+ goto check_vg_ready1
+endi
+
+if $data[4][4] == LEADER then
+ if $data[4][6] == FOLLOWER then
+ if $data[4][8] == FOLLOWER then
+ print ---- vgroup $data[4][0] leader locate on dnode $data[4][3]
+ endi
+ endi
+elif $data[4][6] == LEADER then
+ if $data[4][4] == FOLLOWER then
+ if $data[4][8] == FOLLOWER then
+ print ---- vgroup $data[4][0] leader locate on dnode $data[4][5]
+ endi
+ endi
+elif $data[4][8] == LEADER then
+ if $data[4][4] == FOLLOWER then
+ if $data[4][6] == FOLLOWER then
+ print ---- vgroup $data[4][0] leader locate on dnode $data[4][7]
+ endi
+ endi
+else
+ goto check_vg_ready1
+endi
+
+
+print ====> final test: create stable/child table
+sql create table stb1 (ts timestamp, c1 int, c2 float, c3 binary(10)) tags (t1 int)
+
+
+sql show stables
+if $rows != 2 then
+ return -1
+endi
+
+$ctbPrefix = ctb1
+$ntbPrefix = ntb1
+$tbNum = 10
+$i = 0
+while $i < $tbNum
+ $ctb = $ctbPrefix . $i
+ sql create table $ctb using stb1 tags( $i )
+ $ntb = $ntbPrefix . $i
+ sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10))
+ $i = $i + 1
+endw
+
+sleep 1000
+sql show stables
+if $rows != 2 then
+ return -1
+endi
+
+sql show tables
+if $rows != 40 then
+ return -1
+endi
+
+
+
+system sh/deploy.sh -n dnode5 -i 5
+system sh/exec.sh -n dnode5 -s start
+
+sql connect
+sql create dnode $hostname port 7500
+
+$loop_cnt = 0
+check_dnode_ready3:
+ $loop_cnt = $loop_cnt + 1
+ sleep 200
+ if $loop_cnt == 100 then
+ print ====> dnode not ready!
+ return -1
+ endi
+
+sql show dnodes
+print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+print ===> $rows $data[4][0] $data[4][1] $data[4][2] $data[4][3] $data[4][4] $data[4][5] $data[4][6]
+
+if $rows != 5 then
+ return -1
+endi
+
+if $data[4][4] != ready then
+ goto check_dnode_ready3
+endi
+
+
+
+# restart clusters
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
+system sh/exec.sh -n dnode2 -s stop -x SIGINT
+system sh/exec.sh -n dnode3 -s stop -x SIGINT
+system sh/exec.sh -n dnode4 -s stop -x SIGINT
+system sh/exec.sh -n dnode5 -s stop -x SIGINT
+
+
+
+system sh/exec.sh -n dnode1 -s start
+system sh/exec.sh -n dnode2 -s start
+system sh/exec.sh -n dnode3 -s start
+system sh/exec.sh -n dnode4 -s start
+system sh/exec.sh -n dnode5 -s start
+
+
+$loop_cnt = 0
+check_dnode_ready_2:
+ $loop_cnt = $loop_cnt + 1
+ sleep 200
+ if $loop_cnt == 10 then
+ print ====> dnode not ready!
+ return -1
+ endi
+sql show dnodes
+print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+if $data[0][0] != 1 then
+ return -1
+endi
+
+if $data[0][4] != ready then
+ goto check_dnode_ready_2
+endi
+if $data[1][4] != ready then
+ goto check_dnode_ready_2
+endi
+if $data[2][4] != ready then
+ goto check_dnode_ready_2
+endi
+if $data[3][4] != ready then
+ goto check_dnode_ready_2
+endi
+
+sql use db;
+$ctbPrefix = ctb2
+$ntbPrefix = ntb2
+$tbNum = 10
+$i = 0
+while $i < $tbNum
+ $ctb = $ctbPrefix . $i
+ sql create table $ctb using stb1 tags( $i )
+ $ntb = $ntbPrefix . $i
+ sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10))
+ $i = $i + 1
+endw
+
+sleep 1000
+sql use db
+sql show stables
+if $rows != 2 then
+ return -1
+endi
+
+sql show tables
+print $rows
+if $rows != 60 then
+ return -1
+endi
+
+
+
+$replica = 3
+$vgroups = 5
+
+print ============= create database
+sql create database db1 replica $replica vgroups $vgroups
+
+$loop_cnt = 0
+check_db_ready1:
+$loop_cnt = $loop_cnt + 1
+sleep 200
+if $loop_cnt == 100 then
+ print ====> db not ready!
+ return -1
+endi
+sql show databases
+print ===> rows: $rows
+print $data(db1)[0] $data(db1)[1] $data(db1)[2] $data(db1)[3] $data(db1)[4] $data(db1)[5] $data(db1)[6] $data(db1)[7] $data(db1)[8] $data(db1)[9] $data(db1)[10] $data(db1)[11] $data(db1)[12] $data(db1)[13] $data(db1)[14] $data(db1)[15] $data(db1)[16] $data(db1)[17] $data(db1)[18] $data(db1)[19]
+if $rows != 4 then
+ return -1
+endi
+if $data(db1)[19] != ready then
+ goto check_db_ready1
+endi
+
+
+sql use db1
+
+$loop_cnt = 0
+check_vg_ready3:
+$loop_cnt = $loop_cnt + 1
+print $loop_cnt
+sleep 202
+if $loop_cnt == 300 then
+ print ====> vgroups not ready!
+ return -1
+endi
+
+sql show vgroups
+print ===> rows: $rows
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6] $data[1][7] $data[1][8] $data[1][9] $data[1][10] $data[1][11]
+print $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6] $data[2][7] $data[2][8] $data[2][9] $data[2][10] $data[2][11]
+print $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6] $data[3][7] $data[3][8] $data[3][9] $data[3][10] $data[3][11]
+print $data[4][0] $data[4][1] $data[4][2] $data[4][3] $data[4][4] $data[4][5] $data[4][6] $data[4][7] $data[4][8] $data[4][9] $data[4][10] $data[4][11]
+if $rows != $vgroups then
+ return -1
+endi
+
+if $data[0][4] == LEADER then
+ if $data[0][6] == FOLLOWER then
+ if $data[0][8] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][3]
+ endi
+ endi
+elif $data[0][6] == LEADER then
+ if $data[0][4] == FOLLOWER then
+ if $data[0][8] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][5]
+ endi
+ endi
+elif $data[0][8] == LEADER then
+ if $data[0][4] == FOLLOWER then
+ if $data[0][6] == FOLLOWER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][7]
+ endi
+ endi
+else
+ goto check_vg_ready3
+endi
+
+if $data[1][4] == LEADER then
+ if $data[1][6] == FOLLOWER then
+ if $data[1][8] == FOLLOWER then
+ print ---- vgroup $data[1][0] leader locate on dnode $data[1][3]
+ endi
+ endi
+elif $data[1][6] == LEADER then
+ if $data[1][4] == FOLLOWER then
+ if $data[1][8] == FOLLOWER then
+ print ---- vgroup $data[1][0] leader locate on dnode $data[1][5]
+ endi
+ endi
+elif $data[1][8] == LEADER then
+ if $data[1][4] == FOLLOWER then
+ if $data[1][6] == FOLLOWER then
+ print ---- vgroup $data[1][0] leader locate on dnode $data[1][7]
+ endi
+ endi
+else
+ goto check_vg_ready3
+endi
+
+if $data[2][4] == LEADER then
+ if $data[2][6] == FOLLOWER then
+ if $data[2][8] == FOLLOWER then
+ print ---- vgroup $data[2][0] leader locate on dnode $data[2][3]
+ endi
+ endi
+elif $data[2][6] == LEADER then
+ if $data[2][4] == FOLLOWER then
+ if $data[2][8] == FOLLOWER then
+ print ---- vgroup $data[2][0] leader locate on dnode $data[2][5]
+ endi
+ endi
+elif $data[2][8] == LEADER then
+ if $data[2][4] == FOLLOWER then
+ if $data[2][6] == FOLLOWER then
+ print ---- vgroup $data[2][0] leader locate on dnode $data[2][7]
+ endi
+ endi
+else
+ goto check_vg_ready3
+endi
+
+if $data[3][4] == LEADER then
+ if $data[3][6] == FOLLOWER then
+ if $data[3][8] == FOLLOWER then
+ print ---- vgroup $data[3][0] leader locate on dnode $data[3][3]
+ endi
+ endi
+elif $data[3][6] == LEADER then
+ if $data[3][4] == FOLLOWER then
+ if $data[3][8] == FOLLOWER then
+ print ---- vgroup $data[3][0] leader locate on dnode $data[3][5]
+ endi
+ endi
+elif $data[3][8] == LEADER then
+ if $data[3][4] == FOLLOWER then
+ if $data[3][6] == FOLLOWER then
+ print ---- vgroup $data[3][0] leader locate on dnode $data[3][7]
+ endi
+ endi
+else
+ goto check_vg_ready3
+endi
+
+if $data[4][4] == LEADER then
+ if $data[4][6] == FOLLOWER then
+ if $data[4][8] == FOLLOWER then
+ print ---- vgroup $data[4][0] leader locate on dnode $data[4][3]
+ endi
+ endi
+elif $data[4][6] == LEADER then
+ if $data[4][4] == FOLLOWER then
+ if $data[4][8] == FOLLOWER then
+ print ---- vgroup $data[4][0] leader locate on dnode $data[4][5]
+ endi
+ endi
+elif $data[4][8] == LEADER then
+ if $data[4][4] == FOLLOWER then
+ if $data[4][6] == FOLLOWER then
+ print ---- vgroup $data[4][0] leader locate on dnode $data[4][7]
+ endi
+ endi
+else
+ goto check_vg_ready3
+endi
+
+# sql drop dnode 5
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
+system sh/exec.sh -n dnode2 -s stop -x SIGINT
+system sh/exec.sh -n dnode3 -s stop -x SIGINT
+system sh/exec.sh -n dnode4 -s stop -x SIGINT
+
+
diff --git a/tests/script/tsim/sync/oneReplica1VgElect.sim b/tests/script/tsim/sync/oneReplica1VgElect.sim
index bb9b3f449640818d888137721350b0cea90eebae..d98b823192b82556f0327f5107d2d359176e19cb 100644
--- a/tests/script/tsim/sync/oneReplica1VgElect.sim
+++ b/tests/script/tsim/sync/oneReplica1VgElect.sim
@@ -31,7 +31,7 @@ if $data[0][4] != ready then
goto check_dnode_ready
endi
-#sql connect
+sql connect
sql create dnode $hostname port 7200
sql create dnode $hostname port 7300
sql create dnode $hostname port 7400
@@ -66,139 +66,94 @@ $vgroups = 1
$replica = 1
print ============= create database
-sql create database db replica $replica vgroups $vgroups
+sql create database db1 replica $replica vgroups $vgroups
$loop_cnt = 0
check_db_ready:
$loop_cnt = $loop_cnt + 1
sleep 200
-if $loop_cnt == 10 then
- print ====> db not ready!
+if $loop_cnt == 100 then
+ print ====> db1 not ready!
return -1
endi
sql show databases
print ===> rows: $rows
-print $data(db)[0] $data(db)[1] $data(db)[2] $data(db)[3] $data(db)[4] $data(db)[5] $data(db)[6] $data(db)[7] $data(db)[8] $data(db)[9] $data(db)[10] $data(db)[11] $data(db)[12]
+print $data(db1)[0] $data(db1)[1] $data(db1)[2] $data(db1)[3] $data(db1)[4] $data(db1)[5] $data(db1)[6] $data(db1)[7] $data(db1)[8] $data(db1)[9] $data(db1)[10] $data(db1)[11] $data(db1)[12]
print $data(db)[13] $data(db)[14] $data(db)[15] $data(db)[16] $data(db)[17] $data(db)[18] $data(db)[19] $data(db)[20]
if $rows != 3 then
return -1
endi
-if $data(db)[19] != ready then
+if $data(db1)[19] != ready then
goto check_db_ready
endi
-sql use db
+sql use db1
$loop_cnt = 0
check_vg_ready:
$loop_cnt = $loop_cnt + 1
sleep 200
-if $loop_cnt == 10 then
+if $loop_cnt == 300 then
print ====> vgroups not ready!
return -1
endi
sql show vgroups
print ===> rows: $rows
-print $data(2)[0] $data(2)[1] $data(2)[2] $data(2)[3] $data(2)[4] $data(2)[5] $data(2)[6] $data(2)[7] $data(2)[8] $data(2)[9] $data(2)[10] $data(2)[11] $data(2)[12] $data(2)[13]
-print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[10][6] $data[0][11] $data[0][12] $data[0][13]
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11] $data[0][12] $data[0][13]
if $rows != $vgroups then
return -1
endi
if $data[0][4] == LEADER then
- if $data[0][6] != NULL then
- goto check_vg_ready
- endi
- if $data[0][8] != NULL then
- goto check_vg_ready
- endi
print ---- vgroup $data[0][0] leader locate on dnode $data[0][3]
- goto vg_ready
-endi
-if $data[0][6] == LEADER then
- if $data[0][4] != NULL then
- goto check_vg_ready
- endi
- if $data[0][8] != NULL then
- goto check_vg_ready
- endi
+ goto vg_ready
+elif $data[0][6] == LEADER then
print ---- vgroup $data[0][0] leader locate on dnode $data[0][5]
- goto vg_ready
-endi
-if $data[0][8] == LEADER then
- if $data[0][4] != NULL then
- goto check_vg_ready
- endi
- if $data[0][6] != NULL then
- goto check_vg_ready
- endi
+ goto vg_ready
+elif $data[0][8] == LEADER then
print ---- vgroup $data[0][0] leader locate on dnode $data[0][7]
- goto vg_ready
+ goto vg_ready
+else
+ goto check_vg_ready
endi
-vg_ready:
-print ====> create stable/child table, insert data, and select
-sql create table if not exists stb (ts timestamp, c1 int, c2 float, c3 binary(10)) tags (t1 int)
+vg_ready:
+print ====> create stable/child table
+sql create table stb (ts timestamp, c1 int, c2 float, c3 binary(10)) tags (t1 int)
sql show stables
if $rows != 1 then
return -1
endi
+
$ctbPrefix = ctb
$ntbPrefix = ntb
$tbNum = 10
-$rowNum = 10
-$tstart = 1640966400000 # 2022-01-01 00:00:00.000
-
$i = 0
while $i < $tbNum
$ctb = $ctbPrefix . $i
sql create table $ctb using stb tags( $i )
$ntb = $ntbPrefix . $i
sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10))
-
- $x = 0
- while $x < $rowNum
- $binary = ' . binary
- $binary = $binary . $i
- $binary = $binary . '
-
- sql insert into $ctb values ($tstart , $i , $x , $binary )
- sql insert into $ntb values ($tstart , 999 , 999 , 'binary-ntb' )
- $tstart = $tstart + 1
- $x = $x + 1
- endw
-
- print ====> insert rows: $rowNum into $ctb and $ntb
-
$i = $i + 1
- $tstart = 1640966400000
endw
$totalTblNum = $tbNum * 2
+sleep 1000
sql show tables
+print ====> expect $totalTblNum and find $rows in fact
if $rows != $totalTblNum then
return -1
endi
-sql select count(*) from ntb0
-print rows: $rows
-print $data[0][0] $data[0][1]
-if $data[0][0] != $rowNum then
- return -1
-endi
+start_switch_leader:
-$totalRowsOfStb = $rowNum * $tbNum
-sql select count(*) from stb
-print rows: $rows
-print $data[0][0] $data[0][1]
-if $data[0][0] != $totalRowsOfStb then
- return -1
-endi
+$switch_loop_cnt = 0
+switch_leader_to_offine_loop:
print ====> finde vnode of leader, and stop the dnode where the vnode is located, and query stb/ntb count(*)
sql show vgroups
-print $data(2)[0] $data(2)[1] $data(2)[2] $data(2)[3] $data(2)[4] $data(2)[5] $data(2)[6] $data(2)[7] $data(2)[8] $data(2)[9] $data(2)[10] $data(2)[11] $data(2)[12] $data(2)[13]
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11] $data[0][12] $data[0][13]
if $data[0][4] == LEADER then
$dnodeId = $data[0][3]
elif $data[0][6] == LEADER then
@@ -213,148 +168,78 @@ endi
$dnodeId = dnode . $dnodeId
print ====> stop $dnodeId
system sh/exec.sh -n $dnodeId -s stop -x SIGINT
+#print ====> start $dnodeId
+#system sh/exec.sh -n $dnodeId -s start
$loop_cnt = 0
check_vg_ready_2:
$loop_cnt = $loop_cnt + 1
sleep 200
-if $loop_cnt == 10 then
+if $loop_cnt == 300 then
print ====> vgroups switch fail!!!
return -1
endi
sql show vgroups
print ===> rows: $rows
-print $data(2)[0] $data(2)[1] $data(2)[2] $data(2)[3] $data(2)[4] $data(2)[5] $data(2)[6] $data(2)[7] $data(2)[8] $data(2)[9] $data(2)[10] $data(2)[11] $data(2)[12] $data(2)[13]
-print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[10][6] $data[0][11] $data[0][12] $data[0][13]
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11] $data[0][12] $data[0][13]
if $rows != $vgroups then
return -1
endi
-if $data[0][4] == LEADER then
- if $data[0][6] != NULL then
- goto check_vg_ready_2
- endi
- if $data[0][8] != NULL then
- goto check_vg_ready_2
- endi
- print ---- vgroup $data[0][0] leader switch to dnode $data[0][3]
- goto vg_ready_2
-endi
-if $data[0][6] == LEADER then
- if $data[0][4] != NULL then
- goto check_vg_ready_2
- endi
- if $data[0][8] != NULL then
- goto check_vg_ready_2
- endi
- print ---- vgroup $data[0][0] leader switch to dnode $data[0][5]
- goto vg_ready_2
-endi
-if $data[0][8] == LEADER then
- if $data[0][4] != NULL then
- goto check_vg_ready_2
- endi
- if $data[0][6] != NULL then
- goto check_vg_ready_2
- endi
- print ---- vgroup $data[0][0] leader switch to dnode $data[0][7]
- goto vg_ready_2
-endi
-vg_ready_2:
-
-sql select count(*) from ntb0
-print rows: $rows
-print $data[0][0] $data[0][1]
-if $data[0][0] != $rowNum then
- return -1
-endi
-sql select count(*) from ctb0
-print rows: $rows
-print $data[0][0] $data[0][1]
-if $data[0][0] != $rowNum then
- return -1
-endi
+if $data[0][4] == OFFLINE then
+ print ---- vgroup $dnodeId leader switch to offline
+ goto vg_offline_1
+elif $data[0][6] == OFFLINE then
+ print ---- vgroup $dnodeId leader switch to offline
+ goto vg_offline_1
+elif $data[0][8] == OFFLINE then
+ print ---- vgroup $dnodeId leader switch to offline
+ goto vg_offline_1
+else
+ goto check_vg_ready_2
+endi
-sql select count(*) from stb
-print rows: $rows
-print $data[0][0] $data[0][1]
-if $data[0][0] != $totalRowsOfStb then
- return -1
-endi
+vg_offline_1:
-print ====> stop and start all dnode(not include the dnode where mnode is located), then query
-system sh/exec.sh -n dnode2 -s stop -x SIGINT
-system sh/exec.sh -n dnode3 -s stop -x SIGINT
-system sh/exec.sh -n dnode4 -s stop -x SIGINT
-system sh/exec.sh -n dnode4 -s start
-system sh/exec.sh -n dnode3 -s start
-system sh/exec.sh -n dnode2 -s start
+print ====> start $dnodeId
+system sh/exec.sh -n $dnodeId -s start
-$loop_cnt = 0
-check_vg_ready_1:
-$loop_cnt = $loop_cnt + 1
+$loop_cnt1 = 0
+check_vg1_ready:
+$loop_cnt1 = $loop_cnt1 + 1
sleep 200
-if $loop_cnt == 10 then
- print ====> after restart dnode, vgroups not ready!
+if $loop_cnt1 == 300 then
+ print ====> vgroups not ready!
return -1
endi
sql show vgroups
print ===> rows: $rows
-print $data(2)[0] $data(2)[1] $data(2)[2] $data(2)[3] $data(2)[4] $data(2)[5] $data(2)[6] $data(2)[7] $data(2)[8] $data(2)[9] $data(2)[10] $data(2)[11] $data(2)[12] $data(2)[13]
-print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[10][6] $data[0][11] $data[0][12] $data[0][13]
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11] $data[0][12] $data[0][13]
if $rows != $vgroups then
return -1
endi
if $data[0][4] == LEADER then
- if $data[0][6] != NULL then
- goto check_vg_ready_1
- endi
- if $data[0][8] != NULL then
- goto check_vg_ready_1
- endi
- goto vg_ready_1
-endi
-if $data[0][6] == LEADER then
- if $data[0][4] != NULL then
- goto check_vg_ready_1
- endi
- if $data[0][8] != NULL then
- goto check_vg_ready_1
- endi
- goto vg_ready_1
-endi
-if $data[0][8] == LEADER then
- if $data[0][4] != NULL then
- goto check_vg_ready_1
- endi
- if $data[0][6] != NULL then
- goto check_vg_ready_1
- endi
- goto vg_ready_1
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][3]
+ goto countinu_loop
+elif $data[0][6] == LEADER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][5]
+ goto countinu_loop
+elif $data[0][8] == LEADER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][7]
+ goto countinu_loop
+else
+ goto check_vg1_ready
endi
-vg_ready_1:
-print ====> after restart dnode2/dnode3/dnode4, query stb/ntb count(*)
-sql select count(*) from ntb0
-print rows: $rows
-print $data[0][0] $data[0][1]
-if $data[0][0] != $rowNum then
- return -1
-endi
+countinu_loop:
-sql select count(*) from ctb0
-print rows: $rows
-print $data[0][0] $data[0][1]
-if $data[0][0] != $rowNum then
- return -1
+$switch_loop_cnt = $switch_loop_cnt + 1
+print $switch_loop_cnt
+if $switch_loop_cnt < 4 then
+ goto switch_leader_to_offine_loop
endi
-sql select count(*) from stb
-print rows: $rows
-print $data[0][0] $data[0][1]
-if $data[0][0] != $totalRowsOfStb then
- return -1
-endi
+stop_leader_to_offine_loop:
system sh/exec.sh -n dnode1 -s stop -x SIGINT
system sh/exec.sh -n dnode2 -s stop -x SIGINT
diff --git a/tests/script/tsim/sync/oneReplica5VgElect.sim b/tests/script/tsim/sync/oneReplica5VgElect.sim
new file mode 100644
index 0000000000000000000000000000000000000000..d6d18093c341dfea8d4d2f1c22fa10cab173d71c
--- /dev/null
+++ b/tests/script/tsim/sync/oneReplica5VgElect.sim
@@ -0,0 +1,417 @@
+system sh/stop_dnodes.sh
+system sh/deploy.sh -n dnode1 -i 1
+system sh/deploy.sh -n dnode2 -i 2
+system sh/deploy.sh -n dnode3 -i 3
+system sh/deploy.sh -n dnode4 -i 4
+
+system sh/cfg.sh -n dnode1 -c supportVnodes -v 0
+
+system sh/exec.sh -n dnode1 -s start
+system sh/exec.sh -n dnode2 -s start
+system sh/exec.sh -n dnode3 -s start
+system sh/exec.sh -n dnode4 -s start
+
+$loop_cnt = 0
+check_dnode_ready:
+ $loop_cnt = $loop_cnt + 1
+ sleep 200
+ if $loop_cnt == 10 then
+ print ====> dnode not ready!
+ return -1
+ endi
+sql show dnodes
+print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+if $data[0][0] != 1 then
+ return -1
+endi
+if $data[0][4] != ready then
+ goto check_dnode_ready
+endi
+
+sql connect
+sql create dnode $hostname port 7200
+sql create dnode $hostname port 7300
+sql create dnode $hostname port 7400
+
+$loop_cnt = 0
+check_dnode_ready_1:
+$loop_cnt = $loop_cnt + 1
+sleep 200
+if $loop_cnt == 10 then
+ print ====> dnodes not ready!
+ return -1
+endi
+sql show dnodes
+print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+if $data[0][4] != ready then
+ goto check_dnode_ready_1
+endi
+if $data[1][4] != ready then
+ goto check_dnode_ready_1
+endi
+if $data[2][4] != ready then
+ goto check_dnode_ready_1
+endi
+if $data[3][4] != ready then
+ goto check_dnode_ready_1
+endi
+
+$replica = 1
+$vgroups = 5
+
+print ============= create database
+sql create database db1 replica $replica vgroups $vgroups
+
+$loop_cnt = 0
+check_db_ready:
+$loop_cnt = $loop_cnt + 1
+sleep 200
+if $loop_cnt == 100 then
+ print ====> db1 not ready!
+ return -1
+endi
+sql show databases
+print ===> rows: $rows
+print $data(db1)[0] $data(db1)[1] $data(db1)[2] $data(db1)[3] $data(db1)[4] $data(db1)[5] $data(db1)[6] $data(db1)[7] $data(db1)[8] $data(db1)[9] $data(db1)[10] $data(db1)[11] $data(db1)[12]
+if $rows != 3 then
+ return -1
+endi
+if $data(db1)[19] != ready then
+ goto check_db_ready
+endi
+
+sql use db1
+
+$loop_cnt = 0
+check_vg_ready:
+$loop_cnt = $loop_cnt + 1
+sleep 200
+if $loop_cnt == 300 then
+ print ====> vgroups not ready!
+ return -1
+endi
+
+sql show vgroups
+print ===> rows: $rows
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11] $data[0][12] $data[0][13]
+if $rows != $vgroups then
+ return -1
+endi
+
+if $data[0][4] == LEADER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][3]
+elif $data[0][6] == LEADER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][5]
+elif $data[0][8] == LEADER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][7]
+else
+ goto check_vg_ready
+endi
+
+if $data[1][4] == LEADER then
+  print ---- vgroup $data[1][0] leader locate on dnode $data[1][3]
+elif $data[1][6] == LEADER then
+  print ---- vgroup $data[1][0] leader locate on dnode $data[1][5]
+elif $data[1][8] == LEADER then
+  print ---- vgroup $data[1][0] leader locate on dnode $data[1][7]
+else
+  goto check_vg_ready
+endi
+
+if $data[2][4] == LEADER then
+  print ---- vgroup $data[2][0] leader locate on dnode $data[2][3]
+elif $data[2][6] == LEADER then
+  print ---- vgroup $data[2][0] leader locate on dnode $data[2][5]
+elif $data[2][8] == LEADER then
+  print ---- vgroup $data[2][0] leader locate on dnode $data[2][7]
+else
+  goto check_vg_ready
+endi
+
+if $data[3][4] == LEADER then
+  print ---- vgroup $data[3][0] leader locate on dnode $data[3][3]
+elif $data[3][6] == LEADER then
+  print ---- vgroup $data[3][0] leader locate on dnode $data[3][5]
+elif $data[3][8] == LEADER then
+  print ---- vgroup $data[3][0] leader locate on dnode $data[3][7]
+else
+  goto check_vg_ready
+endi
+
+if $data[4][4] == LEADER then
+  print ---- vgroup $data[4][0] leader locate on dnode $data[4][3]
+elif $data[4][6] == LEADER then
+  print ---- vgroup $data[4][0] leader locate on dnode $data[4][5]
+elif $data[4][8] == LEADER then
+  print ---- vgroup $data[4][0] leader locate on dnode $data[4][7]
+else
+  goto check_vg_ready
+endi
+
+vg_ready:
+print ====> create stable/child table
+sql create table stb (ts timestamp, c1 int, c2 float, c3 binary(10)) tags (t1 int)
+
+sql show stables
+if $rows != 1 then
+ return -1
+endi
+
+$ctbPrefix = ctb
+$ntbPrefix = ntb
+$tbNum = 10
+$i = 0
+while $i < $tbNum
+ $ctb = $ctbPrefix . $i
+ sql create table $ctb using stb tags( $i )
+ $ntb = $ntbPrefix . $i
+ sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10))
+ $i = $i + 1
+endw
+
+$totalTblNum = $tbNum * 2
+sleep 1000
+sql show tables
+print ====> expect $totalTblNum and find $rows in fact
+if $rows != $totalTblNum then
+ return -1
+endi
+
+start_switch_leader:
+
+$switch_loop_cnt = 0
+sql show vgroups
+$dnodeId = $data[0][3]
+$dnodeId = dnode . $dnodeId
+
+switch_leader_to_offine_loop:
+
+print $dnodeId
+print ====> stop $dnodeId
+system sh/exec.sh -n $dnodeId -s stop -x SIGINT
+
+
+$loop_cnt = 0
+check_vg_ready_2:
+$loop_cnt = $loop_cnt + 1
+sleep 201
+if $loop_cnt == 300 then
+ print ====> vgroups switch fail!!!
+ return -1
+endi
+sql show vgroups
+print ===> rows: $rows
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11] $data[0][12] $data[0][13]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6] $data[1][7] $data[1][8] $data[1][9] $data[1][10] $data[1][11] $data[1][12] $data[1][13]
+print $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6] $data[2][7] $data[2][8] $data[2][9] $data[2][10] $data[2][11] $data[2][12] $data[2][13]
+print $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6] $data[3][7] $data[3][8] $data[3][9] $data[3][10] $data[3][11] $data[3][12] $data[3][13]
+print $data[4][0] $data[4][1] $data[4][2] $data[4][3] $data[4][4] $data[4][5] $data[4][6] $data[4][7] $data[4][8] $data[4][9] $data[4][10] $data[4][11] $data[4][12] $data[4][13]
+if $rows != $vgroups then
+ return -1
+endi
+
+
+vg_offline_1:
+
+print ====> start $dnodeId
+system sh/exec.sh -n $dnodeId -s start
+
+$switch_loop_cnt = $switch_loop_cnt + 1
+print $switch_loop_cnt
+
+if $switch_loop_cnt == 1 then
+ sql show vgroups
+ $dnodeId = $data[1][3]
+ $dnodeId = dnode . $dnodeId
+ goto switch_leader_to_offine_loop
+elif $switch_loop_cnt == 2 then
+ sql show vgroups
+ $dnodeId = $data[2][3]
+ $dnodeId = dnode . $dnodeId
+ goto switch_leader_to_offine_loop
+elif $switch_loop_cnt == 3 then
+ sql show vgroups
+ $dnodeId = $data[3][3]
+ $dnodeId = dnode . $dnodeId
+ goto switch_leader_to_offine_loop
+elif $switch_loop_cnt == 4 then
+ sql show vgroups
+ $dnodeId = $data[4][3]
+ $dnodeId = dnode . $dnodeId
+ goto switch_leader_to_offine_loop
+else
+ goto stop_leader_to_offine_loop
+endi
+
+stop_leader_to_offine_loop:
+
+$loop_cnt = 0
+check_vg_ready1:
+$loop_cnt = $loop_cnt + 1
+print $loop_cnt
+sleep 202
+if $loop_cnt == 300 then
+ print ====> vgroups not ready!
+ return -1
+endi
+
+sql show vgroups
+print ===> rows: $rows
+print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6] $data[0][7] $data[0][8] $data[0][9] $data[0][10] $data[0][11] $data[0][12] $data[0][13]
+print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6] $data[1][7] $data[1][8] $data[1][9] $data[1][10] $data[1][11] $data[1][12] $data[1][13]
+print $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6] $data[2][7] $data[2][8] $data[2][9] $data[2][10] $data[2][11] $data[2][12] $data[2][13]
+print $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6] $data[3][7] $data[3][8] $data[3][9] $data[3][10] $data[3][11] $data[3][12] $data[3][13]
+print $data[4][0] $data[4][1] $data[4][2] $data[4][3] $data[4][4] $data[4][5] $data[4][6] $data[4][7] $data[4][8] $data[4][9] $data[4][10] $data[4][11] $data[4][12] $data[4][13]
+if $rows != $vgroups then
+ return -1
+endi
+
+if $data[0][4] == LEADER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][3]
+elif $data[0][6] == LEADER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][5]
+elif $data[0][8] == LEADER then
+ print ---- vgroup $data[0][0] leader locate on dnode $data[0][7]
+else
+ goto check_vg_ready1
+endi
+
+if $data[1][4] == LEADER then
+ print ---- vgroup $data[1][0] leader locate on dnode $data[1][3]
+elif $data[1][6] == LEADER then
+ print ---- vgroup $data[1][0] leader locate on dnode $data[1][5]
+elif $data[1][8] == LEADER then
+ print ---- vgroup $data[1][0] leader locate on dnode $data[1][7]
+else
+ goto check_vg_ready1
+endi
+
+if $data[2][4] == LEADER then
+ print ---- vgroup $data[2][0] leader locate on dnode $data[2][3]
+elif $data[2][6] == LEADER then
+ print ---- vgroup $data[2][0] leader locate on dnode $data[2][5]
+elif $data[2][8] == LEADER then
+ print ---- vgroup $data[2][0] leader locate on dnode $data[2][7]
+else
+  goto check_vg_ready1
+endi
+
+if $data[3][4] == LEADER then
+ print ---- vgroup $data[3][0] leader locate on dnode $data[3][3]
+elif $data[3][6] == LEADER then
+ print ---- vgroup $data[3][0] leader locate on dnode $data[3][5]
+elif $data[3][8] == LEADER then
+ print ---- vgroup $data[3][0] leader locate on dnode $data[3][7]
+else
+ goto check_vg_ready1
+endi
+
+if $data[4][4] == LEADER then
+ print ---- vgroup $data[4][0] leader locate on dnode $data[4][3]
+elif $data[4][6] == LEADER then
+ print ---- vgroup $data[4][0] leader locate on dnode $data[4][5]
+elif $data[4][8] == LEADER then
+ print ---- vgroup $data[4][0] leader locate on dnode $data[4][7]
+else
+ goto check_vg_ready1
+endi
+
+
+print ====> final test: create stable/child table
+sql create table stb1 (ts timestamp, c1 int, c2 float, c3 binary(10)) tags (t1 int)
+
+
+sql show stables
+if $rows != 2 then
+ return -1
+endi
+
+$ctbPrefix = ctb1
+$ntbPrefix = ntb1
+$tbNum = 10
+$i = 0
+while $i < $tbNum
+ $ctb = $ctbPrefix . $i
+ sql create table $ctb using stb1 tags( $i )
+ $ntb = $ntbPrefix . $i
+ sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10))
+ $i = $i + 1
+endw
+
+
+sql show stables
+if $rows != 2 then
+ return -1
+endi
+
+sql show tables
+if $rows != 40 then
+ return -1
+endi
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
+system sh/exec.sh -n dnode2 -s stop -x SIGINT
+system sh/exec.sh -n dnode3 -s stop -x SIGINT
+system sh/exec.sh -n dnode4 -s stop -x SIGINT
+
+
+
+system sh/exec.sh -n dnode1 -s start
+system sh/exec.sh -n dnode2 -s start
+system sh/exec.sh -n dnode3 -s start
+system sh/exec.sh -n dnode4 -s start
+
+
+
+$loop_cnt = 0
+check_dnode_ready_2:
+ $loop_cnt = $loop_cnt + 1
+ sleep 200
+ if $loop_cnt == 10 then
+ print ====> dnode not ready!
+ return -1
+ endi
+sql show dnodes
+print ===> $rows $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
+print ===> $rows $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
+print ===> $rows $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
+print ===> $rows $data[3][0] $data[3][1] $data[3][2] $data[3][3] $data[3][4] $data[3][5] $data[3][6]
+if $data[0][0] != 1 then
+ return -1
+endi
+
+if $data[0][4] != ready then
+ goto check_dnode_ready_2
+endi
+if $data[1][4] != ready then
+ goto check_dnode_ready_2
+endi
+if $data[2][4] != ready then
+ goto check_dnode_ready_2
+endi
+if $data[3][4] != ready then
+ goto check_dnode_ready_2
+endi
+
+sql use db1
+sql show stables
+if $rows != 2 then
+ return -1
+endi
+
+sql show tables
+if $rows != 40 then
+ return -1
+endi
+
+
+system sh/exec.sh -n dnode1 -s stop -x SIGINT
+system sh/exec.sh -n dnode2 -s stop -x SIGINT
+system sh/exec.sh -n dnode3 -s stop -x SIGINT
+system sh/exec.sh -n dnode4 -s stop -x SIGINT
diff --git a/tests/script/tsim/testsuit.sim b/tests/script/tsim/testsuit.sim
index e32abe4b7ff8850f9818113bed5f006c2182392e..0b1f0df04e9db6af2547cc1da49873082b2682b3 100644
--- a/tests/script/tsim/testsuit.sim
+++ b/tests/script/tsim/testsuit.sim
@@ -77,3 +77,4 @@ run sma/tsmaCreateInsertData.sim
run sma/rsmaCreateInsertQuery.sim
run valgrind/checkError.sim
run bnode/basic1.sim
+
diff --git a/tests/script/tsim/valgrind/checkError.sim b/tests/script/tsim/valgrind/checkError.sim
index 97d16dba9663a77fdf96fe1741d045765a306d42..5790437a671e61dedb90b3384de08b145f2a4cac 100644
--- a/tests/script/tsim/valgrind/checkError.sim
+++ b/tests/script/tsim/valgrind/checkError.sim
@@ -71,7 +71,7 @@ print ====> start to check if there are ERRORS in vagrind log file for each dnod
# -n : dnode[x] be check
system_content sh/checkValgrind.sh -n dnode1
print cmd return result----> [ $system_content ]
-if $system_content <= 1 then
+if $system_content <= 3 then
return 0
endi
diff --git a/tests/system-test/1-insert/test_stmt_insert_query.py b/tests/system-test/1-insert/test_stmt_insert_query.py
new file mode 100644
index 0000000000000000000000000000000000000000..c6faedd35ee9f08e50310e5570a9be284d16ecc4
--- /dev/null
+++ b/tests/system-test/1-insert/test_stmt_insert_query.py
@@ -0,0 +1,261 @@
+###################################################################
+# Copyright (c) 2016 by TAOS Technologies, Inc.
+# All rights reserved.
+#
+# This file is proprietary and confidential to TAOS Technologies.
+# No part of this file may be reproduced, stored, transmitted,
+# disclosed or used in any form or by any means other than as
+# expressly provided by the written permission from Jianhui Tao
+#
+###################################################################
+
+# -*- coding: utf-8 -*-
+
+import sys
+import os
+import threading as thd
+import multiprocessing as mp
+from numpy.lib.function_base import insert
+import taos
+from taos import *
+from util.log import *
+from util.cases import *
+from util.sql import *
+import numpy as np
+import datetime as dt
+from datetime import datetime
+from ctypes import *
+import time
+# constant define
+WAITS = 5 # wait seconds
+
+class TDTestCase:
+ #
+ # --------------- main frame -------------------
+ def caseDescription(self):
+ '''
+        stmt interface insert and query test cases;
+        case1: stmt multi-bind insert and parameterized query
+        case2: stmt insert with tbname/tags binding and query
+ '''
+ return
+
+ def getBuildPath(self):
+ selfPath = os.path.dirname(os.path.realpath(__file__))
+
+ if ("community" in selfPath):
+ projPath = selfPath[:selfPath.find("community")]
+ else:
+ projPath = selfPath[:selfPath.find("tests")]
+        buildPath = ""
+ for root, dirs, files in os.walk(projPath):
+ if ("taosd" in files):
+ rootRealPath = os.path.dirname(os.path.realpath(root))
+ if ("packaging" not in rootRealPath):
+ buildPath = root[:len(root)-len("/build/bin")]
+ break
+ return buildPath
+
+ # init
+ def init(self, conn, logSql):
+ tdLog.debug("start to execute %s" % __file__)
+ tdSql.init(conn.cursor())
+ # tdSql.prepare()
+ # self.create_tables();
+ self.ts = 1500000000000
+
+ # stop
+ def stop(self):
+ tdSql.close()
+ tdLog.success("%s successfully executed" % __file__)
+
+
+ # --------------- case -------------------
+
+
+ def newcon(self,host,cfg):
+ user = "root"
+ password = "taosdata"
+ port =6030
+ con=taos.connect(host=host, user=user, password=password, config=cfg ,port=port)
+ print(con)
+ return con
+
+ def test_stmt_insert_multi(self,conn):
+ # type: (TaosConnection) -> None
+
+ dbname = "pytest_taos_stmt_multi"
+ try:
+ conn.execute("drop database if exists %s" % dbname)
+ conn.execute("create database if not exists %s" % dbname)
+ conn.select_db(dbname)
+
+ conn.execute(
+ "create table if not exists log(ts timestamp, bo bool, nil tinyint, ti tinyint, si smallint, ii int,\
+ bi bigint, tu tinyint unsigned, su smallint unsigned, iu int unsigned, bu bigint unsigned, \
+ ff float, dd double, bb binary(100), nn nchar(100), tt timestamp)",
+ )
+ # conn.load_table_info("log")
+
+ start = datetime.now()
+ stmt = conn.statement("insert into log values(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)")
+
+ params = new_multi_binds(16)
+ params[0].timestamp((1626861392589, 1626861392590, 1626861392591))
+ params[1].bool((True, None, False))
+ params[2].tinyint([-128, -128, None]) # -128 is tinyint null
+ params[3].tinyint([0, 127, None])
+ params[4].smallint([3, None, 2])
+ params[5].int([3, 4, None])
+ params[6].bigint([3, 4, None])
+ params[7].tinyint_unsigned([3, 4, None])
+ params[8].smallint_unsigned([3, 4, None])
+ params[9].int_unsigned([3, 4, None])
+ params[10].bigint_unsigned([3, 4, None])
+ params[11].float([3, None, 1])
+ params[12].double([3, None, 1.2])
+ params[13].binary(["abc", "dddafadfadfadfadfa", None])
+ params[14].nchar(["涛思数据", None, "a long string with 中文字符"])
+ params[15].timestamp([None, None, 1626861392591])
+ # print(type(stmt))
+ stmt.bind_param_batch(params)
+ stmt.execute()
+ end = datetime.now()
+ print("elapsed time: ", end - start)
+ assert stmt.affected_rows == 3
+
+ #query
+ querystmt=conn.statement("select ?,bu from log")
+ queryparam=new_bind_params(1)
+ print(type(queryparam))
+ queryparam[0].binary("ts")
+ querystmt.bind_param(queryparam)
+ querystmt.execute()
+ result=querystmt.use_result()
+ rows=result.fetch_all()
+ print( querystmt.use_result())
+
+ # result = conn.query("select * from log")
+ # rows=result.fetch_all()
+ # rows=result.fetch_all()
+ print(rows)
+ assert rows[1][0] == "ts"
+ assert rows[0][1] == 3
+
+ #query
+ querystmt1=conn.statement("select * from log where bu < ?")
+ queryparam1=new_bind_params(1)
+ print(type(queryparam1))
+ queryparam1[0].int(4)
+ querystmt1.bind_param(queryparam1)
+ querystmt1.execute()
+ result1=querystmt1.use_result()
+ rows1=result1.fetch_all()
+ assert str(rows1[0][0]) == "2021-07-21 17:56:32.589000"
+ assert rows1[0][10] == 3
+
+
+ stmt.close()
+
+ # conn.execute("drop database if exists %s" % dbname)
+ conn.close()
+
+ except Exception as err:
+ # conn.execute("drop database if exists %s" % dbname)
+ conn.close()
+ raise err
+
+ def test_stmt_set_tbname_tag(self,conn):
+ dbname = "pytest_taos_stmt_set_tbname_tag"
+
+ try:
+ conn.execute("drop database if exists %s" % dbname)
+ conn.execute("create database if not exists %s PRECISION 'us' " % dbname)
+ conn.select_db(dbname)
+ conn.execute("create table if not exists log(ts timestamp, bo bool, nil tinyint, ti tinyint, si smallint, ii int,\
+ bi bigint, tu tinyint unsigned, su smallint unsigned, iu int unsigned, bu bigint unsigned, \
+ ff float, dd double, bb binary(100), nn nchar(100), tt timestamp) tags (t1 timestamp, t2 bool,\
+ t3 tinyint, t4 tinyint, t5 smallint, t6 int, t7 bigint, t8 tinyint unsigned, t9 smallint unsigned, \
+ t10 int unsigned, t11 bigint unsigned, t12 float, t13 double, t14 binary(100), t15 nchar(100), t16 timestamp)")
+
+ stmt = conn.statement("insert into ? using log tags (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) \
+ values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)")
+ tags = new_bind_params(16)
+ tags[0].timestamp(1626861392589123, PrecisionEnum.Microseconds)
+ tags[1].bool(True)
+ tags[2].null()
+ tags[3].tinyint(2)
+ tags[4].smallint(3)
+ tags[5].int(4)
+ tags[6].bigint(5)
+ tags[7].tinyint_unsigned(6)
+ tags[8].smallint_unsigned(7)
+ tags[9].int_unsigned(8)
+ tags[10].bigint_unsigned(9)
+ tags[11].float(10.1)
+ tags[12].double(10.11)
+ tags[13].binary("hello")
+ tags[14].nchar("stmt")
+ tags[15].timestamp(1626861392589, PrecisionEnum.Milliseconds)
+ stmt.set_tbname_tags("tb1", tags)
+ params = new_multi_binds(16)
+ params[0].timestamp((1626861392589111, 1626861392590111, 1626861392591111))
+ params[1].bool((True, None, False))
+ params[2].tinyint([-128, -128, None]) # -128 is tinyint null
+ params[3].tinyint([0, 127, None])
+ params[4].smallint([3, None, 2])
+ params[5].int([3, 4, None])
+ params[6].bigint([3, 4, None])
+ params[7].tinyint_unsigned([3, 4, None])
+ params[8].smallint_unsigned([3, 4, None])
+ params[9].int_unsigned([3, 4, None])
+ params[10].bigint_unsigned([3, 4, 5])
+ params[11].float([3, None, 1])
+ params[12].double([3, None, 1.2])
+ params[13].binary(["abc", "dddafadfadfadfadfa", None])
+ params[14].nchar(["涛思数据", None, "a long string with 中文字符"])
+ params[15].timestamp([None, None, 1626861392591])
+
+ stmt.bind_param_batch(params)
+ stmt.execute()
+
+ assert stmt.affected_rows == 3
+
+ #query
+ querystmt1=conn.statement("select * from log where bu < ?")
+ queryparam1=new_bind_params(1)
+ print(type(queryparam1))
+ queryparam1[0].int(5)
+ querystmt1.bind_param(queryparam1)
+ querystmt1.execute()
+ result1=querystmt1.use_result()
+ rows1=result1.fetch_all()
+ assert str(rows1[0][0]) == "2021-07-21 17:56:32.589111"
+ assert rows1[0][10] == 3
+ assert rows1[1][10] == 4
+
+ # conn.execute("drop database if exists %s" % dbname)
+ conn.close()
+
+ except Exception as err:
+ # conn.execute("drop database if exists %s" % dbname)
+ conn.close()
+ raise err
+
+ def run(self):
+ buildPath = self.getBuildPath()
+        config = buildPath + "/../sim/dnode1/cfg/"
+ host="localhost"
+ connectstmt=self.newcon(host,config)
+ print(connectstmt)
+ self.test_stmt_insert_multi(connectstmt)
+ connectstmt=self.newcon(host,config)
+ self.test_stmt_set_tbname_tag(connectstmt)
+
+ return
+
+
+# add case with filename
+#
+tdCases.addWindows(__file__, TDTestCase())
+tdCases.addLinux(__file__, TDTestCase())
\ No newline at end of file
diff --git a/tests/system-test/2-query/To_iso8601.py b/tests/system-test/2-query/To_iso8601.py
index cd22ffb90c1fbf86e81dfabecbcb1ae0e536cd39..57bcca638ce26aace35d76707c12699fe2e8d1c4 100644
--- a/tests/system-test/2-query/To_iso8601.py
+++ b/tests/system-test/2-query/To_iso8601.py
@@ -95,7 +95,7 @@ class TDTestCase:
# tdSql.query("select to_iso8601(-1) from ntb")
tdSql.query("select to_iso8601(9223372036854775807) from ntb")
tdSql.checkRows(3)
-
+ # bug TD-14896
# tdSql.query("select to_iso8601(10000000000) from ntb")
# tdSql.checkData(0,0,None)
# tdSql.query("select to_iso8601(-1) from ntb")
@@ -106,11 +106,6 @@ class TDTestCase:
tdSql.error("select to_iso8601(1.5) from db.ntb")
tdSql.error("select to_iso8601('a') from ntb")
tdSql.error("select to_iso8601(c2) from ntb")
-
-
-
-
-
tdSql.query("select to_iso8601(now) from stb")
tdSql.query("select to_iso8601(now()) from stb")
tdSql.checkRows(3)
@@ -126,7 +121,7 @@ class TDTestCase:
tdSql.checkRows(3)
tdSql.query("select to_iso8601(ts)+'a' from stb ")
tdSql.checkRows(3)
- # tdSql.query()
+
tdSql.query("select to_iso8601(today()) *null from stb")
tdSql.checkRows(3)
tdSql.checkData(0,0,None)
@@ -152,7 +147,9 @@ class TDTestCase:
tdSql.checkRows(3)
tdSql.checkData(0,0,None)
+ # bug TD-14896
# tdSql.query("select to_iso8601(-1) from ntb")
+ # tdSql.checkRows(3)
diff --git a/tests/system-test/2-query/bottom.py b/tests/system-test/2-query/bottom.py
new file mode 100644
index 0000000000000000000000000000000000000000..96ae73c6c46fc447692f8e3eec93bfa668f24887
--- /dev/null
+++ b/tests/system-test/2-query/bottom.py
@@ -0,0 +1,102 @@
+###################################################################
+# Copyright (c) 2016 by TAOS Technologies, Inc.
+# All rights reserved.
+#
+# This file is proprietary and confidential to TAOS Technologies.
+# No part of this file may be reproduced, stored, transmitted,
+# disclosed or used in any form or by any means other than as
+# expressly provided by the written permission from Jianhui Tao
+#
+###################################################################
+
+# -*- coding: utf-8 -*-
+
+from util.log import *
+from util.cases import *
+from util.sql import *
+
+
+
+class TDTestCase:
+ def init(self, conn, logSql):
+ tdLog.debug("start to execute %s" % __file__)
+ tdSql.init(conn.cursor())
+
+ self.rowNum = 10
+ self.ts = 1537146000000
+
+ def run(self):
+ tdSql.prepare()
+
+ tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
+ col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
+ tdSql.execute("create table test1 using test tags('beijing')")
+ for i in range(self.rowNum):
+ tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
+ % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
+
+        # bottom verification
+ tdSql.error("select bottom(ts, 10) from test")
+ tdSql.error("select bottom(col1, 0) from test")
+ tdSql.error("select bottom(col1, 101) from test")
+ tdSql.error("select bottom(col2, 0) from test")
+ tdSql.error("select bottom(col2, 101) from test")
+ tdSql.error("select bottom(col3, 0) from test")
+ tdSql.error("select bottom(col3, 101) from test")
+ tdSql.error("select bottom(col4, 0) from test")
+ tdSql.error("select bottom(col4, 101) from test")
+ tdSql.error("select bottom(col5, 0) from test")
+ tdSql.error("select bottom(col5, 101) from test")
+ tdSql.error("select bottom(col6, 0) from test")
+ tdSql.error("select bottom(col6, 101) from test")
+ tdSql.error("select bottom(col7, 10) from test")
+ tdSql.error("select bottom(col8, 10) from test")
+ tdSql.error("select bottom(col9, 10) from test")
+
+ tdSql.query("select bottom(col1, 2) from test")
+ tdSql.checkRows(2)
+ tdSql.checkEqual(tdSql.queryResult,[(2,),(1,)])
+ tdSql.query("select bottom(col2, 2) from test")
+ tdSql.checkRows(2)
+ tdSql.checkEqual(tdSql.queryResult,[(2,),(1,)])
+
+ tdSql.query("select bottom(col3, 2) from test")
+ tdSql.checkRows(2)
+ tdSql.checkEqual(tdSql.queryResult,[(2,),(1,)])
+
+ tdSql.query("select bottom(col4, 2) from test")
+ tdSql.checkRows(2)
+ tdSql.checkEqual(tdSql.queryResult,[(2,),(1,)])
+
+ tdSql.query("select bottom(col11, 2) from test")
+ tdSql.checkRows(2)
+ tdSql.checkEqual(tdSql.queryResult,[(2,),(1,)])
+
+ tdSql.query("select bottom(col12, 2) from test")
+ tdSql.checkRows(2)
+ tdSql.checkEqual(tdSql.queryResult,[(2,),(1,)])
+
+ tdSql.query("select bottom(col13, 2) from test")
+ tdSql.checkRows(2)
+ tdSql.checkEqual(tdSql.queryResult,[(2,),(1,)])
+
+ tdSql.query("select bottom(col14, 2) from test")
+ tdSql.checkRows(2)
+ tdSql.checkEqual(tdSql.queryResult,[(2,),(1,)])
+ tdSql.query("select ts,bottom(col1, 2) from test1")
+ tdSql.checkRows(2)
+ tdSql.query("select ts,bottom(col1, 2),ts from test group by tbname")
+ tdSql.checkRows(2)
+
+ tdSql.query('select bottom(col2,1) from test interval(1y) order by col2')
+ tdSql.checkData(0,0,1)
+
+ tdSql.error('select * from test where bottom(col2,1)=1')
+
+
+ def stop(self):
+ tdSql.close()
+ tdLog.success("%s successfully executed" % __file__)
+
+tdCases.addWindows(__file__, TDTestCase())
+tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/system-test/2-query/diff.py b/tests/system-test/2-query/diff.py
index 03b3899dc659d79ca8ae0750710fe293b5f83a3b..0d8b0de3dca8d0db11eb98e9b04defff07df741c 100644
--- a/tests/system-test/2-query/diff.py
+++ b/tests/system-test/2-query/diff.py
@@ -15,59 +15,51 @@ class TDTestCase:
self.perfix = 'dev'
self.tables = 10
- def insertData(self):
- print("==============step1")
- tdSql.execute(
- "create table if not exists st (ts timestamp, col int) tags(dev nchar(50))")
-
- for i in range(self.tables):
- tdSql.execute("create table %s%d using st tags(%d)" % (self.perfix, i, i))
- rows = 15 + i
- for j in range(rows):
- tdSql.execute("insert into %s%d values(%d, %d)" %(self.perfix, i, self.ts + i * 20 * 10000 + j * 10000, j))
def run(self):
tdSql.prepare()
- tdSql.execute("create table ntb(ts timestamp,c1 int,c2 double,c3 float)")
- tdSql.execute("insert into ntb values(now,1,1.0,10.5)(now+1s,10,-100.0,5.1)(now+10s,-1,15.1,5.0)")
+ tdSql.execute(
+ "create table ntb(ts timestamp,c1 int,c2 double,c3 float)")
+ tdSql.execute(
+ "insert into ntb values(now,1,1.0,10.5)(now+1s,10,-100.0,5.1)(now+10s,-1,15.1,5.0)")
tdSql.query("select diff(c1,0) from ntb")
tdSql.checkRows(2)
- tdSql.checkData(0,0,9)
- tdSql.checkData(1,0,-11)
+ tdSql.checkData(0, 0, 9)
+ tdSql.checkData(1, 0, -11)
tdSql.query("select diff(c1,1) from ntb")
tdSql.checkRows(2)
- tdSql.checkData(0,0,9)
- tdSql.checkData(1,0,None)
-
+ tdSql.checkData(0, 0, 9)
+ tdSql.checkData(1, 0, None)
+
tdSql.query("select diff(c2,0) from ntb")
tdSql.checkRows(2)
- tdSql.checkData(0,0,-101)
- tdSql.checkData(1,0,115.1)
+ tdSql.checkData(0, 0, -101)
+ tdSql.checkData(1, 0, 115.1)
tdSql.query("select diff(c2,1) from ntb")
tdSql.checkRows(2)
- tdSql.checkData(0,0,None)
- tdSql.checkData(1,0,115.1)
+ tdSql.checkData(0, 0, None)
+ tdSql.checkData(1, 0, 115.1)
tdSql.query("select diff(c3,0) from ntb")
tdSql.checkRows(2)
- tdSql.checkData(0,0,-5.4)
- tdSql.checkData(1,0,-0.1)
+ tdSql.checkData(0, 0, -5.4)
+ tdSql.checkData(1, 0, -0.1)
tdSql.query("select diff(c3,1) from ntb")
tdSql.checkRows(2)
- tdSql.checkData(0,0,None)
- tdSql.checkData(1,0,None)
-
+ tdSql.checkData(0, 0, None)
+ tdSql.checkData(1, 0, None)
tdSql.execute('''create table stb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
tdSql.execute("create table stb_1 using stb tags('beijing')")
- tdSql.execute("insert into stb_1 values(%d, 0, 0, 0, 0, 0.0, 0.0, False, ' ', ' ', 0, 0, 0, 0)" % (self.ts - 1))
-
- # diff verifacation
+ tdSql.execute(
+ "insert into stb_1 values(%d, 0, 0, 0, 0, 0.0, 0.0, False, ' ', ' ', 0, 0, 0, 0)" % (self.ts - 1))
+
+        # diff verification
tdSql.query("select diff(col1) from stb_1")
tdSql.checkRows(0)
-
+
tdSql.query("select diff(col2) from stb_1")
tdSql.checkRows(0)
@@ -87,38 +79,23 @@ class TDTestCase:
tdSql.checkRows(0)
for i in range(self.rowNum):
- tdSql.execute("insert into stb_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
- % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
-
- # tdSql.error("select diff(ts) from stb")
+ tdSql.execute("insert into stb_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
+ % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
+
+ tdSql.error("select diff(ts) from stb")
tdSql.error("select diff(ts) from stb_1")
- # tdSql.error("select diff(col7) from stb")
-
- # tdSql.error("select diff(col8) from stb")
+
+ # tdSql.error("select diff(col7) from stb")
+
+ tdSql.error("select diff(col8) from stb")
tdSql.error("select diff(col8) from stb_1")
- # tdSql.error("select diff(col9) from stb")
+ tdSql.error("select diff(col9) from stb")
tdSql.error("select diff(col9) from stb_1")
tdSql.error("select diff(col11) from stb_1")
tdSql.error("select diff(col12) from stb_1")
tdSql.error("select diff(col13) from stb_1")
tdSql.error("select diff(col14) from stb_1")
-
- tdSql.query("select ts,diff(col1),ts from stb_1")
- tdSql.checkRows(11)
- tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
- tdSql.checkData(1, 0, "2018-09-17 09:00:00.000")
- tdSql.checkData(1, 2, "2018-09-17 09:00:00.000")
- tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
- tdSql.checkData(9, 2, "2018-09-17 09:00:00.009")
-
- # tdSql.query("select ts,diff(col1),ts from stb group by tbname")
- # tdSql.checkRows(10)
- # tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
- # tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
- # tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
- # tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
- # tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
- # tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")
+ tdSql.error("select ts,diff(col1),ts from stb_1")
tdSql.query("select diff(col1) from stb_1")
tdSql.checkRows(10)
@@ -137,10 +114,27 @@ class TDTestCase:
tdSql.query("select diff(col6) from stb_1")
tdSql.checkRows(10)
-
+
+ tdSql.execute('''create table stb1(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
+ col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
+    tdSql.execute("create table stb1_1 using stb1 tags('shanghai')")
+
+ for i in range(self.rowNum):
+ tdSql.execute("insert into stb1_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
+ % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
+ for i in range(self.rowNum):
+ tdSql.execute("insert into stb1_1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
+ % (self.ts - i-1, i-1, i-1, i-1, i-1, -i - 0.1, -i - 0.1, -i % 2, i - 1, i - 1, i + 1, i + 1, i + 1, i + 1))
+ tdSql.query("select diff(col1,0) from stb1_1")
+ tdSql.checkRows(19)
+ tdSql.query("select diff(col1,1) from stb1_1")
+ tdSql.checkRows(19)
+ tdSql.checkData(0,0,None)
+
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
+
tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
\ No newline at end of file
+tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/system-test/2-query/first.py b/tests/system-test/2-query/first.py
new file mode 100644
index 0000000000000000000000000000000000000000..7227d1afb5e22f68af90fb9d2192eb7a4a088c96
--- /dev/null
+++ b/tests/system-test/2-query/first.py
@@ -0,0 +1,152 @@
+###################################################################
+# Copyright (c) 2016 by TAOS Technologies, Inc.
+# All rights reserved.
+#
+# This file is proprietary and confidential to TAOS Technologies.
+# No part of this file may be reproduced, stored, transmitted,
+# disclosed or used in any form or by any means other than as
+# expressly provided by the written permission from Jianhui Tao
+#
+###################################################################
+
+# -*- coding: utf-8 -*-
+
+import sys
+import taos
+from util.log import *
+from util.cases import *
+from util.sql import *
+import numpy as np
+
+
+class TDTestCase:
+ def init(self, conn, logSql):
+ tdLog.debug("start to execute %s" % __file__)
+ tdSql.init(conn.cursor())
+
+ self.rowNum = 10
+ self.ts = 1537146000000
+
+ def run(self):
+ tdSql.prepare()
+
+ tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
+ col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
+ tdSql.execute("create table test1 using test tags('beijing')")
+ tdSql.execute("insert into test1(ts) values(%d)" % (self.ts - 1))
+
+        # first verification
+ # bug TD-15957
+ tdSql.query("select first(*) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 1, None)
+
+ tdSql.query("select first(col1) from test1")
+ tdSql.checkRows(0)
+
+ tdSql.query("select first(col2) from test1")
+ tdSql.checkRows(0)
+
+ tdSql.query("select first(col3) from test1")
+ tdSql.checkRows(0)
+
+ tdSql.query("select first(col4) from test1")
+ tdSql.checkRows(0)
+
+ tdSql.query("select first(col11) from test1")
+ tdSql.checkRows(0)
+
+ tdSql.query("select first(col12) from test1")
+ tdSql.checkRows(0)
+
+ tdSql.query("select first(col13) from test1")
+ tdSql.checkRows(0)
+
+ tdSql.query("select first(col14) from test1")
+ tdSql.checkRows(0)
+
+ tdSql.query("select first(col5) from test1")
+ tdSql.checkRows(0)
+
+ tdSql.query("select first(col6) from test1")
+ tdSql.checkRows(0)
+
+ tdSql.query("select first(col7) from test1")
+ tdSql.checkRows(0)
+
+ tdSql.query("select first(col8) from test1")
+ tdSql.checkRows(0)
+
+ tdSql.query("select first(col9) from test1")
+ tdSql.checkRows(0)
+
+ for i in range(self.rowNum):
+ tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
+ % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
+
+ tdSql.query("select first(*) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 1, 1)
+
+ tdSql.query("select first(col1) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 1)
+
+ tdSql.query("select first(col2) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 1)
+
+ tdSql.query("select first(col3) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 1)
+
+ tdSql.query("select first(col4) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 1)
+
+ tdSql.query("select first(col11) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 1)
+
+ tdSql.query("select first(col12) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 1)
+
+ tdSql.query("select first(col13) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 1)
+
+ tdSql.query("select first(col14) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 1)
+
+ tdSql.query("select first(col5) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 0.1)
+
+ tdSql.query("select first(col6) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 0.1)
+
+ tdSql.query("select first(col7) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, False)
+
+ tdSql.query("select first(col8) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 'taosdata1')
+
+ tdSql.query("select first(col9) from test1")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, '涛思数据1')
+
+
+ tdSql.query("select first(*),last(*) from test1 where ts < 23 interval(1s)")
+ tdSql.checkRows(0)
+
+ def stop(self):
+ tdSql.close()
+ tdLog.success("%s successfully executed" % __file__)
+
+tdCases.addWindows(__file__, TDTestCase())
+tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/system-test/2-query/last.py b/tests/system-test/2-query/last.py
index b491679c627b5bd65c1d4c67ed16b31c792d8a08..4ef13e9142f3a2ebc3ef55f6a2316fd6433908f3 100644
--- a/tests/system-test/2-query/last.py
+++ b/tests/system-test/2-query/last.py
@@ -170,7 +170,96 @@ class TDTestCase:
tdSql.query("select last(col9) from db.stb_1")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '涛思数据10')
+ tdSql.query("select last(col1,col2,col3) from stb_1")
+ tdSql.checkData(0,2,10)
+ tdSql.query("select last(*) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 1, 10)
+ tdSql.query("select last(*) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 1, 10)
+ tdSql.query("select last(col1) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col1) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col2) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col2) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col3) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col3) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col4) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col4) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col11) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col11) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col12) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col12) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col13) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col13) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col14) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col14) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 10)
+ tdSql.query("select last(col5) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 9.1)
+ tdSql.query("select last(col5) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 9.1)
+ tdSql.query("select last(col6) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 9.1)
+ tdSql.query("select last(col6) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 9.1)
+ tdSql.query("select last(col7) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, True)
+ tdSql.query("select last(col7) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, True)
+ tdSql.query("select last(col8) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 'taosdata10')
+ tdSql.query("select last(col8) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, 'taosdata10')
+ tdSql.query("select last(col9) from stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, '涛思数据10')
+ tdSql.query("select last(col9) from db.stb")
+ tdSql.checkRows(1)
+ tdSql.checkData(0, 0, '涛思数据10')
+ tdSql.query("select last(col1,col2,col3) from stb")
+        tdSql.checkData(0, 2, 10)
+
tdSql.execute('''create table ntb(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned)''')
@@ -322,7 +411,12 @@ class TDTestCase:
tdSql.query("select last(col9) from db.ntb")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '涛思数据10')
-
+ tdSql.query("select last(col1,col2,col3) from ntb")
+        tdSql.checkData(0, 2, 10)
+
+ tdSql.error("select col1 from stb where last(col9)='涛思数据10'")
+ tdSql.error("select col1 from ntb where last(col9)='涛思数据10'")
+ tdSql.error("select col1 from stb_1 where last(col9)='涛思数据10'")
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
diff --git a/tests/system-test/2-query/percentile.py b/tests/system-test/2-query/percentile.py
new file mode 100644
index 0000000000000000000000000000000000000000..2122197ad2cfbe2996266840b4fcd615627179b9
--- /dev/null
+++ b/tests/system-test/2-query/percentile.py
@@ -0,0 +1,209 @@
+###################################################################
+# Copyright (c) 2016 by TAOS Technologies, Inc.
+# All rights reserved.
+#
+# This file is proprietary and confidential to TAOS Technologies.
+# No part of this file may be reproduced, stored, transmitted,
+# disclosed or used in any form or by any means other than as
+# expressly provided by the written permission from Jianhui Tao
+#
+###################################################################
+
+# -*- coding: utf-8 -*-
+
+from util.log import *
+from util.cases import *
+from util.sql import *
+import numpy as np
+
+
+class TDTestCase:
+ def init(self, conn, logSql):
+ tdLog.debug("start to execute %s" % __file__)
+ tdSql.init(conn.cursor())
+
+ self.rowNum = 10
+ self.ts = 1537146000000
+
+ def run(self):
+ tdSql.prepare()
+
+ intData = []
+ floatData = []
+
+ tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
+ col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned)''')
+ for i in range(self.rowNum):
+ tdSql.execute("insert into test values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
+ % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
+ intData.append(i + 1)
+ floatData.append(i + 0.1)
+
+        # percentile verification
+ tdSql.error("select percentile(ts ,20) from test")
+ tdSql.error("select apercentile(ts ,20) from test")
+ tdSql.error("select percentile(col7 ,20) from test")
+ tdSql.error("select apercentile(col7 ,20) from test")
+ tdSql.error("select percentile(col8 ,20) from test")
+ tdSql.error("select apercentile(col8 ,20) from test")
+ tdSql.error("select percentile(col9 ,20) from test")
+ tdSql.error("select apercentile(col9 ,20) from test")
+
+ tdSql.query("select percentile(col1, 0) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 0))
+ tdSql.query("select apercentile(col1, 0) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col1, 50) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 50))
+ tdSql.query("select apercentile(col1, 50) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col1, 100) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 100))
+ tdSql.query("select apercentile(col1, 100) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+
+ tdSql.query("select percentile(col2, 0) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 0))
+ tdSql.query("select apercentile(col2, 0) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col2, 50) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 50))
+ tdSql.query("select apercentile(col2, 50) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col2, 100) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 100))
+ tdSql.query("select apercentile(col2, 100) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+
+ tdSql.query("select percentile(col3, 0) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 0))
+ tdSql.query("select apercentile(col3, 0) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col3, 50) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 50))
+ tdSql.query("select apercentile(col3, 50) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col3, 100) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 100))
+ tdSql.query("select apercentile(col3, 100) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+
+ tdSql.query("select percentile(col4, 0) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 0))
+ tdSql.query("select apercentile(col4, 0) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col4, 50) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 50))
+ tdSql.query("select apercentile(col4, 50) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col4, 100) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 100))
+ tdSql.query("select apercentile(col4, 100) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+
+ tdSql.query("select percentile(col11, 0) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 0))
+ tdSql.query("select apercentile(col11, 0) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col11, 50) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 50))
+ tdSql.query("select apercentile(col11, 50) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col11, 100) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 100))
+ tdSql.query("select apercentile(col11, 100) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+
+ tdSql.query("select percentile(col12, 0) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 0))
+ tdSql.query("select apercentile(col12, 0) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col12, 50) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 50))
+ tdSql.query("select apercentile(col12, 50) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col12, 100) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 100))
+ tdSql.query("select apercentile(col12, 100) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+
+ tdSql.query("select percentile(col13, 0) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 0))
+ tdSql.query("select apercentile(col13, 0) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col13, 50) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 50))
+ tdSql.query("select apercentile(col13, 50) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col13, 100) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 100))
+ tdSql.query("select apercentile(col13, 100) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+
+ tdSql.query("select percentile(col14, 0) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 0))
+ tdSql.query("select apercentile(col14, 0) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col14, 50) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 50))
+ tdSql.query("select apercentile(col14, 50) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col14, 100) from test")
+ tdSql.checkData(0, 0, np.percentile(intData, 100))
+ tdSql.query("select apercentile(col14, 100) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+
+ tdSql.query("select percentile(col5, 0) from test")
+ print("query result: %s" % tdSql.getData(0, 0))
+ print("array result: %s" % np.percentile(floatData, 0))
+ tdSql.query("select apercentile(col5, 0) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col5, 50) from test")
+ print("query result: %s" % tdSql.getData(0, 0))
+ print("array result: %s" % np.percentile(floatData, 50))
+ tdSql.query("select apercentile(col5, 50) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col5, 100) from test")
+ print("query result: %s" % tdSql.getData(0, 0))
+ print("array result: %s" % np.percentile(floatData, 100))
+ tdSql.query("select apercentile(col5, 100) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+
+ tdSql.query("select percentile(col6, 0) from test")
+ tdSql.checkData(0, 0, np.percentile(floatData, 0))
+ tdSql.query("select apercentile(col6, 0) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col6, 50) from test")
+ tdSql.checkData(0, 0, np.percentile(floatData, 50))
+ tdSql.query("select apercentile(col6, 50) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+ tdSql.query("select percentile(col6, 100) from test")
+ tdSql.checkData(0, 0, np.percentile(floatData, 100))
+ tdSql.query("select apercentile(col6, 100) from test")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+
+ tdSql.execute("create table meters (ts timestamp, voltage int) tags(loc nchar(20))")
+ tdSql.execute("create table t0 using meters tags('beijing')")
+ tdSql.execute("create table t1 using meters tags('shanghai')")
+ for i in range(self.rowNum):
+ tdSql.execute("insert into t0 values(%d, %d)" % (self.ts + i, i + 1))
+ tdSql.execute("insert into t1 values(%d, %d)" % (self.ts + i, i + 1))
+
+ tdSql.error("select percentile(voltage, 20) from meters")
+ tdSql.query("select apercentile(voltage, 20) from meters")
+ print("apercentile result: %s" % tdSql.getData(0, 0))
+
+
+ tdSql.execute("create table st(ts timestamp, k int)")
+ tdSql.execute("insert into st values(now, -100)(now+1a,-99)")
+ tdSql.query("select apercentile(k, 20) from st")
+ tdSql.checkData(0, 0, -100.00)
+
+
+
+ def stop(self):
+ tdSql.close()
+ tdLog.success("%s successfully executed" % __file__)
+
+tdCases.addWindows(__file__, TDTestCase())
+tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/system-test/2-query/top.py b/tests/system-test/2-query/top.py
new file mode 100644
index 0000000000000000000000000000000000000000..12e81fa1900ffe2633520359f0051a21434611b6
--- /dev/null
+++ b/tests/system-test/2-query/top.py
@@ -0,0 +1,105 @@
+###################################################################
+# Copyright (c) 2016 by TAOS Technologies, Inc.
+# All rights reserved.
+#
+# This file is proprietary and confidential to TAOS Technologies.
+# No part of this file may be reproduced, stored, transmitted,
+# disclosed or used in any form or by any means other than as
+# expressly provided by the written permission from Jianhui Tao
+#
+###################################################################
+
+# -*- coding: utf-8 -*-
+
+from util.log import *
+from util.cases import *
+from util.sql import *
+
+
+class TDTestCase:
+ def init(self, conn, logSql):
+ tdLog.debug("start to execute %s" % __file__)
+ tdSql.init(conn.cursor())
+
+ self.rowNum = 10
+ self.ts = 1537146000000
+
+ def run(self):
+ tdSql.prepare()
+
+
+
+ tdSql.execute('''create table test(ts timestamp, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
+ col7 bool, col8 binary(20), col9 nchar(20), col11 tinyint unsigned, col12 smallint unsigned, col13 int unsigned, col14 bigint unsigned) tags(loc nchar(20))''')
+ tdSql.execute("create table test1 using test tags('beijing')")
+ for i in range(self.rowNum):
+ tdSql.execute("insert into test1 values(%d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
+ % (self.ts + i, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
+
+
+        # top verification
+ tdSql.error("select top(ts, 10) from test")
+ tdSql.error("select top(col1, 0) from test")
+ tdSql.error("select top(col1, 101) from test")
+ tdSql.error("select top(col2, 0) from test")
+ tdSql.error("select top(col2, 101) from test")
+ tdSql.error("select top(col3, 0) from test")
+ tdSql.error("select top(col3, 101) from test")
+ tdSql.error("select top(col4, 0) from test")
+ tdSql.error("select top(col4, 101) from test")
+ tdSql.error("select top(col5, 0) from test")
+ tdSql.error("select top(col5, 101) from test")
+ tdSql.error("select top(col6, 0) from test")
+ tdSql.error("select top(col6, 101) from test")
+ tdSql.error("select top(col7, 10) from test")
+ tdSql.error("select top(col8, 10) from test")
+ tdSql.error("select top(col9, 10) from test")
+ tdSql.error("select top(col11, 0) from test")
+ tdSql.error("select top(col11, 101) from test")
+ tdSql.error("select top(col12, 0) from test")
+ tdSql.error("select top(col12, 101) from test")
+ tdSql.error("select top(col13, 0) from test")
+ tdSql.error("select top(col13, 101) from test")
+ tdSql.error("select top(col14, 0) from test")
+ tdSql.error("select top(col14, 101) from test")
+
+ tdSql.query("select top(col1, 2) from test")
+ tdSql.checkRows(2)
+        tdSql.checkEqual(tdSql.queryResult, [(9,), (10,)])
+ tdSql.query("select top(col2, 2) from test")
+ tdSql.checkRows(2)
+        tdSql.checkEqual(tdSql.queryResult, [(9,), (10,)])
+ tdSql.query("select top(col3, 2) from test")
+ tdSql.checkRows(2)
+        tdSql.checkEqual(tdSql.queryResult, [(9,), (10,)])
+ tdSql.query("select top(col4, 2) from test")
+ tdSql.checkRows(2)
+        tdSql.checkEqual(tdSql.queryResult, [(9,), (10,)])
+ tdSql.query("select top(col11, 2) from test")
+ tdSql.checkRows(2)
+        tdSql.checkEqual(tdSql.queryResult, [(9,), (10,)])
+ tdSql.query("select top(col12, 2) from test")
+ tdSql.checkRows(2)
+        tdSql.checkEqual(tdSql.queryResult, [(9,), (10,)])
+ tdSql.query("select top(col13, 2) from test")
+ tdSql.checkRows(2)
+        tdSql.checkEqual(tdSql.queryResult, [(9,), (10,)])
+ tdSql.query("select top(col14, 2) from test")
+ tdSql.checkRows(2)
+        tdSql.checkEqual(tdSql.queryResult, [(9,), (10,)])
+ tdSql.query("select ts,top(col1, 2),ts from test1")
+ tdSql.checkRows(2)
+
+ tdSql.query("select ts,top(col1, 2),ts from test group by tbname")
+ tdSql.checkRows(2)
+        tdSql.query("select top(col2,1) from test interval(1y) order by col2")
+        tdSql.checkData(0, 0, 10)
+
+        tdSql.error("select * from test where bottom(col2,1)=1")
+
+ def stop(self):
+ tdSql.close()
+ tdLog.success("%s successfully executed" % __file__)
+
+tdCases.addWindows(__file__, TDTestCase())
+tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/system-test/7-tmq/subscribeStb0.py b/tests/system-test/7-tmq/subscribeStb0.py
index 1d56103059e84de3afbe14647f357b152ab291c3..b6a9934d4f43cd4d4cd141d6e701c3c96e42daea 100644
--- a/tests/system-test/7-tmq/subscribeStb0.py
+++ b/tests/system-test/7-tmq/subscribeStb0.py
@@ -360,7 +360,7 @@ class TDTestCase:
'replica': 1, \
'stbName': 'stb1', \
'ctbNum': 10, \
- 'rowsPerTbl': 20000, \
+ 'rowsPerTbl': 30000, \
'batchNum': 50, \
'startTs': 1640966400000} # 2022-01-01 00:00:00.000
parameterDict['cfg'] = cfgPath
@@ -391,7 +391,7 @@ class TDTestCase:
showRow = 1
self.startTmqSimProcess(buildPath,cfgPath,pollDelay,parameterDict["dbName"],showMsg, showRow)
- time.sleep(3)
+ time.sleep(1.5)
tdLog.info("drop som child table of stb1")
dropTblNum = 4
tdSql.query("drop table if exists %s.%s_1"%(parameterDict["dbName"], parameterDict["stbName"]))
@@ -408,7 +408,7 @@ class TDTestCase:
remaindrowcnt = parameterDict["rowsPerTbl"] * (parameterDict["ctbNum"] - dropTblNum)
- if not (totalConsumeRows < expectrowcnt and totalConsumeRows > remaindrowcnt):
+ if not (totalConsumeRows <= expectrowcnt and totalConsumeRows >= remaindrowcnt):
tdLog.info("act consume rows: %d, expect consume rows: between %d and %d"%(totalConsumeRows, remaindrowcnt, expectrowcnt))
tdLog.exit("tmq consume rows error!")
diff --git a/tests/system-test/fulltest.sh b/tests/system-test/fulltest.sh
index dd3ff510d0adabc8454cfd08e0cbaae668ea0711..7c4bdbcfdcae1244b11aa11eb7378450dc7fdd02 100755
--- a/tests/system-test/fulltest.sh
+++ b/tests/system-test/fulltest.sh
@@ -40,12 +40,18 @@ python3 ./test.py -f 2-query/max.py
python3 ./test.py -f 2-query/min.py
python3 ./test.py -f 2-query/count.py
python3 ./test.py -f 2-query/last.py
-#python3 ./test.py -f 2-query/To_iso8601.py
+python3 ./test.py -f 2-query/first.py
+python3 ./test.py -f 2-query/To_iso8601.py
python3 ./test.py -f 2-query/To_unixtimestamp.py
python3 ./test.py -f 2-query/timetruncate.py
-# python3 ./test.py -f 2-query/diff.py
+python3 ./test.py -f 2-query/diff.py
python3 ./test.py -f 2-query/Timediff.py
+#python3 ./test.py -f 2-query/cast.py
+python3 ./test.py -f 2-query/top.py
+python3 ./test.py -f 2-query/bottom.py
+
+
python3 ./test.py -f 2-query/abs.py
python3 ./test.py -f 2-query/ceil.py
python3 ./test.py -f 2-query/floor.py