}
+```
+
+### Import dashboard
+
+Use your web browser to access IP:3000 to log in to the Grafana management interface. The default username and password are admin/admin.
+
+Click the gear icon on the left bar and select 'Plugins'. You can find the icon of the TDengine data source plugin there.
+
+#### Import collectd dashboard
+
+Please download the dashboard JSON file from https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json.
+
+Click the 'plus' icon on the left bar and select 'Import'. You should then see an interface like the following:
+
+
+
+#### Import StatsD dashboard
+
+Please download the dashboard JSON file from https://github.com/taosdata/grafanaplugin/blob/master/examples/statsd/dashboards/statsd-with-tdengine-v0.1.0.json.
+
+Click the 'plus' icon on the left bar and select 'Import'. You should then see an interface like the following:
+
+
+
+## Summary
+
+We demonstrated how to build a full-featured IT DevOps system with TDengine, collectd, StatsD, and Grafana. TDengine has supported schemaless protocol data insertion since version 2.3.0.0. Thanks to TDengine's powerful ecosystem integration capability, users can build a highly efficient and easy-to-maintain IT DevOps system in a few minutes. Please find more detailed documentation about TDengine's high-performance data insertion/query functions and more use cases on TAOS Data's official website.
diff --git a/documentation20/en/14.devops/03.immigrate/docs.md b/documentation20/en/14.devops/03.immigrate/docs.md
new file mode 100644
index 0000000000000000000000000000000000000000..ebba18710f6218ce7043b563a99246ccf62035f9
--- /dev/null
+++ b/documentation20/en/14.devops/03.immigrate/docs.md
@@ -0,0 +1,436 @@
+# Best practices for migrating from OpenTSDB to TDengine
+
+OpenTSDB, a distributed, scalable time-series database built on HBase, was introduced early and, thanks to its first-mover advantage, has been widely used for operations monitoring by DevOps teams. However, in recent years, with the rapid development of new technologies such as cloud computing, microservices, and containerization, enterprise-level services have become increasingly diverse, architectures increasingly complex, and application operating environments increasingly varied, all of which puts growing pressure on system and operations monitoring. In this situation, using OpenTSDB as the monitoring backend for DevOps is increasingly plagued by performance issues and slow feature development, along with rising application deployment costs and falling operational efficiency; these problems become more serious as the system scales up.
+
+Against this backdrop, and to meet the demands of the fast-growing IoT big data market, TAOS Data independently developed the innovative big data processing product TDengine after absorbing the advantages of traditional relational databases, NoSQL databases, stream computing engines, message queues, and other software. TDengine has unique advantages in time-series big data processing and can effectively solve the problems currently encountered by OpenTSDB.
+
+Compared with OpenTSDB, TDengine has the following distinctive features:
+
+- Performance of data writing and querying far exceeds that of OpenTSDB.
+- Efficient compression of time-series data: on disk, the compressed data occupies less than 1/5 of the original storage space.
+- Installation and deployment are very simple: a single installation package completes the installation and deployment, no third-party software is needed, and the entire process takes only seconds.
+- The built-in functions cover all of OpenTSDB's supported query functions, and additionally support more time-series query functions, scalar functions, and aggregation functions, plus advanced features such as multiple time-window aggregation, join queries, expression operations, multiple group aggregation, user-defined sorting, and user-defined functions (a brief SQL taste follows this list). With its SQL-like syntax, it is easy to learn and has essentially no learning cost.
+- Supports up to 128 tags with a total tag length of up to 16 KB.
+- In addition to HTTP, it also provides interfaces to Java, Python, C, Rust, Go, and other languages, and supports a variety of enterprise-class standard connector protocols such as JDBC.
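+As the taste promised above, here is a hedged sketch of a time-window aggregation with interpolation; the table and column names are hypothetical, not taken from any schema in this document:
+
+```sql
+-- 5-minute averages over the last day, filling empty windows with the previous value
+select avg(val) from memory
+  where ts >= now - 1d
+  interval(5m) fill(prev);
+```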
+
+Migrating applications originally running on OpenTSDB to TDengine not only effectively reduces the consumption of computing and storage resources and the number of deployed servers, but also greatly reduces operation and maintenance costs, making management simpler and significantly lowering the total cost of ownership. Like OpenTSDB, TDengine is open source; the difference is that, beyond the stand-alone version, TDengine has also open-sourced its cluster version, so the concern of vendor lock-in is swept away.
+
+In the following section we explain, using the most typical and widely used DevOps scenario, how to migrate OpenTSDB applications to TDengine quickly, securely, and reliably without writing any code. Subsequent chapters provide more in-depth coverage to facilitate migration in non-DevOps scenarios.
+
+## Rapid migration of DevOps applications
+
+### 1. Typical Application Scenarios
+
+The overall system architecture of a typical DevOps application scenario is shown in the figure below (Figure 1).
+
+
+Figure 1. Typical architecture in a DevOps scenario
+
+In this application scenario, there are Agent tools deployed in the application environment to collect machine metrics, network metrics, and application metrics; data collectors that aggregate the information collected by the agents; a system for persistent data storage and management; and tools for visualizing the monitoring data (e.g., Grafana).
+
+Among them, the Agents deployed on application nodes provide operational metrics from different sources to collectd/StatsD; collectd/StatsD pushes the aggregated data to the OpenTSDB cluster system; the data is then visualized with Grafana dashboards.
+
+### 2. Migration Service
+
+- **TDengine installation and deployment**
+
+First of all, TDengine should be installed. Download the latest stable version of TDengine from the official website, unzip it and run install.sh to install it. For help on using various installation packages, please refer to the blog ["Installation and uninstallation of various TDengine installation packages"](https://www.taosdata.com/blog/2019/08/09/566.html).
+
+Note that after the installation, do not start the taosd service immediately, but start it after the parameters are correctly configured.
+
+- **Adjusting the data collector configuration**
+
+In TDengine version 2.3, the HTTP service taosAdapter is enabled automatically after the backend service taosd starts. taosAdapter is compatible with InfluxDB's line protocol and OpenTSDB's telnet and JSON write protocols, allowing data collected by collectd and StatsD to be pushed directly to TDengine.
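+For reference, the OpenTSDB telnet write protocol accepted on these ports (and emitted by collectd's write_tsdb plugin) is one plain-text line per data point, in the form `put <metric> <timestamp> <value> <tagk=tagv> ...`; the metric and tag values below are illustrative only:
+
+```
+put memory 1632979445 3.0656 host=vm130 memory_type=memory source=collectd
+```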
+
+If you use collectd, modify its configuration file at the default location /etc/collectd/collectd.conf to point to the IP address and port of the node where taosAdapter is deployed. Assuming the taosAdapter IP address is 192.168.1.130 and the port is 6046, configure it as follows:
+
+```
+LoadPlugin write_tsdb
+<Plugin write_tsdb>
+    <Node>
+        Host "192.168.1.130"
+        Port "6046"
+        HostTags "status=production"
+        StoreRates false
+        AlwaysAppendDS false
+    </Node>
+</Plugin>
+```
+
+With this, collectd pushes its data to taosAdapter using the OpenTSDB output plugin, and taosAdapter calls the API to write the data into taosd, completing the write path. If you are using StatsD, adjust its configuration file accordingly (a sketch follows).
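+As a minimal sketch for StatsD, the bundled repeater backend can forward aggregated data to taosAdapter. Note that the taosAdapter StatsD port below (6044) is an assumption, not confirmed by this document; check your taosAdapter configuration for the actual listening port:
+
+```
+{
+  port: 8125,
+  backends: ["./backends/repeater"],
+  // host/port of the node running taosAdapter; 6044 is assumed here
+  repeater: [{ host: "192.168.1.130", port: 6044 }]
+}
+```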
+
+- **Adjusting the Dashboard system**
+
+After the data is written to TDengine properly, you can adapt Grafana to visualize it. The TDengine installation directory contains a Grafana plugin under connector/grafanaplugin; using it is simple.
+
+First copy the entire dist directory under the grafanaplugin directory to Grafana's plugins directory (the default path is /var/lib/grafana/plugins/), then restart Grafana; the TDengine data source will appear under the Add Data Source menu.
+
+```shell
+sudo cp -r ./dist /var/lib/grafana/plugins/tdengine  # run from the grafanaplugin directory
+sudo chown grafana:grafana -R /var/lib/grafana/plugins/tdengine
+echo -e "[plugins]\nallow_loading_unsigned_plugins = taosdata-tdengine-datasource\n" | sudo tee -a /etc/grafana/grafana.ini
+
+# restart grafana service
+sudo service grafana-server restart
+# or with systemd
+sudo systemctl restart grafana-server
+```
+
+
+
+In addition, TDengine provides two default dashboard templates so users can quickly view the information saved in TDengine. Simply import the templates from the Grafana directory into Grafana to activate them.
+
+
+
+Figure 2. Importing Grafana Templates
+
+With the above steps, you have completed the migration from OpenTSDB to TDengine. As you can see, the whole process is very simple: no code needs to be written, and only a few configuration files need to be adjusted.
+
+### 3. Post-migration architecture
+
+After the migration is completed, the overall system architecture is shown in the figure below (Figure 3). The collection side, the data writing side, and the monitoring presentation side all remain stable throughout the process; apart from a handful of configuration adjustments, no component undergoes any important change. The only real change is swapping OpenTSDB for TDengine, which brings more powerful processing capacity and query performance.
+
+In most DevOps scenarios, if you have a small OpenTSDB cluster (3 nodes or fewer) providing the data storage and query functions of the system's persistence layer, you can safely replace it with TDengine and save considerable compute and storage resources. With the same computing resources, a single TDengine node can match the service capacity of 3 to 5 OpenTSDB nodes. If the scale is larger, a TDengine cluster is required.
+
+If your application is particularly complex, or your domain is not DevOps, read on for a more comprehensive, in-depth look at the advanced topics of migrating OpenTSDB applications to TDengine.
+
+
+
+Figure 3. System architecture after the migration is complete
+
+## Migration evaluation and strategy for other scenarios
+
+### 1. Differences between TDengine and OpenTSDB
+
+This chapter describes in detail the differences between OpenTSDB and TDengine at the system functionality level. After reading this chapter, you can thoroughly evaluate whether you can migrate certain complex OpenTSDB-based applications to TDengine, and what you should pay attention to after the migration.
+
+TDengine currently only supports Grafana for dashboard rendering, so if your application uses a dashboard front end other than Grafana (e.g., [TSDash](https://github.com/facebook/tsdash), [Status Wolf](https://github.com/box/StatusWolf), etc.), the front end cannot be migrated directly to TDengine and must be re-adapted to Grafana before it can function properly.
+
+As of version 2.3.0.x, TDengine supports only collectd and StatsD as data collection and aggregation software; support for more aggregators will follow. If you use a different aggregator on the collection side, it needs to be adapted to these two systems before data can be written properly. Beyond these two aggregation protocols, TDengine also supports writing data directly via InfluxDB's line protocol and via OpenTSDB's write protocols (telnet lines and JSON format), so you can also rewrite the logic on the data push side to use the line protocols TDengine supports.
+
+In addition, if you use the following features of OpenTSDB in your application, you need to understand the following considerations before migrating your application to TDengine.
+
+1. `/api/stats`: TDengine provides a new mechanism for handling cluster state monitoring to meet your application's monitoring and maintenance needs.
+2. `/api/tree`: TDengine uses a database -> supertable -> sub-table hierarchy to organize and maintain timelines; all timelines belonging to the same supertable sit at the same level in the system. However, a logical multi-level structure can be simulated by constructing the tag values appropriately.
+3. `Rollup And PreAggregates`: With rollups and pre-aggregates, the application has to decide where to access the rollup results and, in some scenarios, the original results; the opacity of this structure makes application processing logic extremely complex and completely non-portable. TDengine does not yet support automatic downsampling of multiple timelines or pre-aggregation over time ranges, but thanks to its high-performance query processing it delivers high performance even without rollups and pre-aggregates.
+4. `Rate`: TDengine provides two functions for calculating the rate of change of values: Derivative (whose results are consistent with InfluxDB's Derivative behavior) and IRate (whose results are consistent with Prometheus's IRate function). Their results differ slightly from Rate, but they are more powerful overall (a hedged example follows this list). In addition, **all the calculation functions provided by OpenTSDB have corresponding TDengine query functions, and TDengine's query functions far exceed those supported by OpenTSDB,** which can greatly simplify your application's processing logic.
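+As the hedged illustration of point 4, a Rate-style OpenTSDB query could be translated as follows; the table name, column name, and time range below are hypothetical:
+
+```sql
+-- per-second rate of change over the last hour (names are illustrative)
+select derivative(val, 1s, 0) from `cpu.usage_user` where ts >= now - 1h;
+-- instantaneous rate, consistent with Prometheus's IRate
+select irate(val) from `cpu.usage_user` where ts >= now - 1h;
+```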
+
+With the above introduction, you should be able to understand the changes the migration brings, judge whether moving your application to TDengine is acceptable, and then experience the powerful time-series data processing capability and convenient user experience TDengine provides.
+
+### 2. Migration strategy
+
+First, migrating an OpenTSDB-based system involves data schema design, system scale estimation, data write-side transformation, data streaming, and application adaptation. After that, the two systems run in parallel for a period of time, and the historical data is then migrated to TDengine. Of course, if your application strongly depends on some of the OpenTSDB features above and you do not want to migrate off them, you can also keep the original OpenTSDB system running while starting TDengine to provide the main services.
+
+## Data model design
+
+On the one hand, TDengine requires a strict schema definition for incoming data. On the other hand, TDengine's data model is richer than OpenTSDB's: the multi-value model is compatible with all single-value modeling requirements.
+
+Now let's assume a DevOps scenario where we use collectd to collect basic device metrics, including memory, swap, and disk. The schema in OpenTSDB is as follows:
+
+| No. | metric | value | type | tag1 | tag2 | tag3 | tag4 | tag5 |
+| ---- | -------------- | ------ | ------ | ---- | ----------- | -------------------- | --------- | ------ |
+| 1 | memory | value | double | host | memory_type | memory_type_instance | source | |
+| 2 | swap | value | double | host | swap_type | swap_type_instance | source | |
+| 3 | disk | value | double | host | disk_point | disk_instance | disk_type | source |
+
+
+
+TDengine requires stored data to have a schema, i.e., you must create a supertable and specify its schema before writing data. There are two ways to set up the schema: (1) take full advantage of TDengine's native support for OpenTSDB-format data by calling the API provided by TDengine to write the data (in text-line or JSON format), with single-value-model supertables and sub-tables created automatically. This approach requires no major adjustment to the data writing application and no conversion of the written data format.
+
+At the C level, TDengine provides taos_insert_lines to write data in OpenTSDB format directly (in version 2.3.x this function corresponds to taos_schemaless_insert). For reference code, see the sample schemaless.c in the installation package directory.
+
+(2) Based on a full understanding of TDengine's data model, manually establish a mapping between OpenTSDB's data model and TDengine's. Considering that OpenTSDB uses a single-value model, the single-value model is recommended in TDengine, although TDengine supports both multi-value and single-value models.
+
+- **Single-value model**
+
+The steps are as follows: use the metric name as the name of the TDengine supertable, which is built with two basic data columns, timestamp and value; the supertable's tags are equivalent to the metric's tags, and their number equals the number of the metric's tags. Sub-tables are named with a fixed rule: `metric + '_' + tag1_value + '_' + tag2_value + '_' + tag3_value ...`.
+
+Create 3 supertables in TDengine:
+
+```sql
+create stable memory(ts timestamp, val double) tags(host binary(12), memory_type binary(20), memory_type_instance binary(20), source binary(20));
+create stable swap(ts timestamp, val double) tags(host binary(12), swap_type binary(20), swap_type_instance binary(20), source binary(20));
+create stable disk(ts timestamp, val double) tags(host binary(12), disk_point binary(20), disk_instance binary(20), disk_type binary(20), source binary(20));
+```
+
+
+
+For sub-tables use dynamic table creation as shown below:
+
+```sql
+insert into memory_vm130_memory_buffer_collectd using memory tags('vm130', 'memory', 'buffer', 'collectd') values(1632979445000, 3.0656);
+```
+
+Eventually, about 340 sub-tables and 3 supertables will be created in the system. Note that if concatenated tag values make a sub-table name exceed the system limit (191 bytes), some encoding (e.g., MD5) needs to be applied to bring it down to an acceptable length.
+
+- **Multi-value model**
+
+To take advantage of TDengine's multi-value model, the collected metrics must satisfy two requirements: they are collected at the same frequency, and they reach the **data writing side simultaneously via a message queue**, so that several metrics can be written in a single SQL statement. The metric name is used as the supertable name, and a multi-column data model is built for the metrics that share a collection frequency and arrive together; sub-tables are again named with a fixed rule. Each metric above contains only one measurement value, so it cannot be transformed into a multi-value model; a hypothetical sketch is shown below.
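+As the promised sketch (the sensor supertable, its columns, and its values are hypothetical and not part of the collectd schema above), a multi-value model combines metrics that share a timestamp into one row:
+
+```sql
+-- hypothetical supertable carrying two metrics that arrive together
+create stable sensor(ts timestamp, temperature double, humidity double) tags(host binary(12), source binary(20));
+-- a single insert writes both metrics at once
+insert into sensor_vm130_collectd using sensor tags('vm130', 'collectd') values(now, 23.5, 41.2);
+```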
+
+
+
+## Data streaming and application adaptation
+
+Subscribe to the data from the message queue and start an adapted writer to write the data.
+
+After writing has continued for a while, you can check with SQL whether the amount of data written meets the expected volume. The following SQL statement counts the rows:
+
+```sql
+select count(*) from memory
+```
+
+If the written data matches expectations after the query completes, and the writing program itself reports no abnormal errors, you can confirm that the written data is complete and valid.
+
+TDengine does not support OpenTSDB's query syntax, but it provides equivalent support for each type of OpenTSDB query. See Appendix 1 for the corresponding query adaptations and usage, or refer to the TDengine user manual for a full picture of the query types TDengine supports.
+
+TDengine supports the standard JDBC 3.0 interface for manipulating databases, and you can also use other types of high-level language connectors to query and read data. See the user manual for specific operation and usage help.
+
+## Historical data migration
+
+### 1. Use the tool to migrate data automatically
+
+To facilitate the migration of historical data, we provide a plug-in for the data synchronization tool DataX that can write data to TDengine automatically. Note that DataX's automated migration only supports the single-value model.
+
+For details on how to use DataX, and how to use it to write data to TDengine, please refer to its help manual at [github.com/taosdata/datax](http://github.com/taosdata/datax).
+
+### 2. Migrate data manually
+
+If you need to write data with the multi-value model, you have to develop your own tool to export data from OpenTSDB, confirm which timelines can be merged into the same timeline, and then write the rows that share a timestamp into the database via SQL statements.
+
+Manual data migration requires attention to two issues:
+
+1) When the exported data is stored on disk, the disk needs enough space to hold the exported data files. To avoid straining disk storage after a full export, a partial-import mode can be adopted: first export the timelines belonging to one supertable, import those data files into TDengine, and repeat.
+
+2) Under the system's full-load operation, if there are enough spare computing and IO resources, a multi-threaded import mechanism can be established to maximize migration efficiency. Given the heavy CPU load of data parsing, the maximum number of parallel tasks needs to be controlled to avoid overloading the system while importing historical data.
+
+Thanks to TDengine's ease of operation, no index maintenance or data format conversion is needed throughout the process; the whole procedure only has to be executed sequentially.
+
+Once the historical data is fully imported into TDengine and the two systems run simultaneously, query requests can be switched to TDengine, achieving a seamless application switchover.
+
+## Appendix 1: Correspondence table of OpenTSDB query functions
+
+**Avg**
+
+Equivalent function: avg
+
+Example:
+
+```sql
+SELECT avg(val) FROM (SELECT first(val) FROM super_table WHERE ts >= startTime and ts <= endTime INTERVAL(20s) Fill(linear)) INTERVAL(20s)
+```
+
+Notes:
+
+1. The value within the inner Interval needs to be the same as the interval value of the outer query.
+2. Since OpenTSDB interpolates values linearly, use fill(linear) to declare the interpolation type in the interpolation clause. The functions below that have the same interpolation requirement are handled in this way.
+3. The 20s parameter in Interval means that the inner query generates results in 20-second windows. In a real query, it needs to be adjusted to the time interval between records so that the interpolated results are equivalent to the original data.
+4. Due to OpenTSDB's special interpolation strategy and mechanism, interpolating before computation in aggregate queries makes it impossible for the result to be identical to TDengine's. In the case of downsampling, however, TDengine and OpenTSDB obtain the same result (since OpenTSDB uses a completely different interpolation strategy for aggregated and downsampled queries).
+
+
+**Count**
+
+Equivalent function: count
+
+Example:
+
+```sql
+select count(*) from super_table_name;
+```
+
+
+
+**Dev**
+
+Equivalent function: stddev
+
+Example:
+
+```sql
+Select stddev(val) from table_name
+```
+
+
+
+**Estimated percentiles**
+
+Equivalent function: apercentile
+
+Example:
+
+```sql
+Select apercentile(col1, 50, "t-digest") from table_name
+```
+
+Note:
+
+1. In approximate query processing, OpenTSDB uses the t-digest algorithm by default, so to obtain the same calculation result you need to specify the algorithm used in the apercentile function. TDengine supports two different approximation algorithms, declared with "default" and "t-digest" respectively.
+
+
+
+**First**
+
+Equivalent function: first
+
+Example:
+
+```sql
+Select first(col1) from table_name
+```
+
+
+
+**Last**
+
+Equivalent function: last
+
+Example:
+
+```sql
+Select last(col1) from table_name
+```
+
+
+
+**Max**
+
+Equivalent function: max
+
+Example:
+
+```sql
+Select max(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
+```
+
+Note: The Max function requires interpolation, for the reasons given above.
+
+
+
+**Min**
+
+Equivalent function: min
+
+Example:
+
+```sql
+Select min(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s);
+```
+
+
+
+**MimMax**
+
+Equivalent function: max
+
+Example:
+
+```sql
+Select max(val) from table_name
+```
+
+Note: This function does not require interpolation, so it can be calculated directly.
+
+
+
+**MimMin**
+
+Equivalent function: min
+
+Example:
+
+```sql
+Select min(val) from table_name
+```
+
+Note: This function does not require interpolation, so it can be calculated directly.
+
+
+
+**Percentile**
+
+Equivalent function: percentile
+
+Example:
+
+```sql
+Select percentile(val, 50) from table_name
+```
+
+
+
+**Sum**
+
+Equivalent function: sum
+
+Example:
+
+```sql
+Select sum(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
+```
+
+Note: The Sum function requires interpolation, for the reasons given above.
+
+
+
+**Zimsum**
+
+Equivalent function: sum
+
+Example:
+
+```sql
+Select sum(val) from table_name
+```
+
+Note: This function does not require interpolation, so it can be calculated directly.
+
+
+
+Complete example:
+
+```json
+// OpenTSDB query JSON
+query = {
+    "start": 1510560000,
+    "end": 1515000009,
+    "queries": [{
+        "aggregator": "count",
+        "metric": "cpu.usage_user"
+    }]
+}
+
+// Equivalent SQL:
+SELECT count(*)
+FROM `cpu.usage_user`
+WHERE ts>=1510560000 AND ts<=1515000009
+```
+
+
+
+## Appendix 2: Resource Estimation Methodology
+
+### Data generation environment
+
+We use a hypothetical environment with 3 measurements. The data writing rate for temperature and humidity is one record every 5 seconds, with 100,000 timelines; air quality is written at one record every 10 seconds, with 10,000 timelines; and the query request frequency is 500 QPS.
+
+### Storage resource estimation
+
+Assuming that the number of sensor devices generating data and requiring storage is `n`, the frequency of data generation is `t` records/second, and the length of each record is `L` bytes, the size of raw data generated per day is `86400×n×t×L` bytes; assuming a compression ratio of `C`, the size actually stored per day is `86400×(n×t×L)/C` bytes. Storage resources should be estimated to accommodate 1.5 years of data. Under production conditions, the compression ratio `C` of TDengine is generally between 5 and 7; adding 20% redundancy to the final result, the required storage resources can be calculated as:
+
+```matlab
+86400×(n×t×L)×(365×1.5)×(1+20%)/C
+```
+
+Plugging the parameters into the above formula, the size of the raw data generated per year, without considering tag information, is 11.8 TB. Note that since tag information is associated with each timeline in TDengine rather than with every record, the amount of data actually recorded is somewhat smaller than the raw data generated, and this tag data as a whole can be neglected. Assuming a compression ratio of 5, the retained data finally amounts to 2.56 TB.
+
+### Storage device selection considerations
+
+The hard disk should be a device with good random read performance; if SSDs are available, use them where possible. Good random read performance is a great help in improving the system's query performance and its overall query responsiveness. To obtain good query performance, the single-threaded random read IOPS of the disk device should not be lower than 1,000, and 5,000 IOPS or more is preferable. To evaluate the random read IO performance of the device at hand, it is recommended to use fio to measure it and confirm that it meets the requirements for large-file random reads.
+
+Hard disk write performance has little impact on TDengine. TDengine writes in append mode, so as long as sequential write performance is good, both SAS hard disks and SSDs can generally meet TDengine's requirements for disk write performance.
+
+### Computational resource estimation
+
+Due to the particularity of IoT data, once the frequency of data generation is fixed, TDengine's write process consumes a relatively fixed amount of resources (computation and storage). As described in [TDengine Operation and Maintenance](https://www.taosdata.com/cn/documentation/administrator), 22,000 writes per second in this system consumes less than 1 CPU core.
+
+To estimate the CPU resources required for queries: assuming the application requires 10,000 QPS from the database and each query consumes about 1 ms of CPU time, each core provides 1,000 QPS, so at least 10 cores are needed to satisfy 10,000 QPS of query requests. To keep the overall CPU load of the system below 50%, the cluster needs twice that many, i.e., 20 cores (summarized below).
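+The same estimate, in the formula style used above:
+
+```matlab
+10000 QPS ÷ (1000 QPS/core) = 10 cores    % minimum to satisfy the query load
+10 cores × 2 = 20 cores                   % keeps overall CPU load below 50%
+```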
+
+### Memory resource estimation
+
+By default the database allocates 16 MB × 3 buffers of memory for each Vnode. With 22 CPU cores in the cluster, 22 virtual nodes (Vnodes) are created by default; at 1,000 tables per Vnode, this accommodates all of the tables. Filling a full block then takes about an hour and a half before a flush to disk is triggered, which needs no adjustment. The 22 Vnodes require about 1 GB of memory cache in total. Considering the memory required for queries, and assuming about 50 MB of overhead per query, 500 concurrent queries require about 25 GB of memory (summarized below).
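+Summarized in the same formula style, using the figures from the text above:
+
+```matlab
+22 Vnodes × (16 MB × 3) ≈ 1 GB    % write buffer memory
+500 queries × 50 MB ≈ 25 GB       % concurrent query memory
+```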
+
+In summary, a single 16-core 32GB machine can be used, or a cluster of two 8-core 16GB machines can be used.
+
+## Appendix 3: Cluster Deployment and Startup
+
+TDengine provides a wealth of help documentation on cluster installation and deployment; the relevant documents are indexed here for your reference.
+
+### Cluster Deployment
+
+The first step is to install TDengine. Download the latest stable version of TDengine from the official website, unzip it and run install.sh to install it. Please refer to the blog ["Installing and uninstalling TDengine packages"](https://www.taosdata.com/blog/2019/08/09/566.html) for help on using the various installation packages.
+
+Be careful not to start the taosd service immediately after the installation is complete, but only after the parameters are properly configured.
+
+### Set the running parameters and start the service
+
+To ensure that the system can obtain the necessary information to run properly, set the following key parameters correctly on the server side:
+
+FQDN, firstEp, secondEp, dataDir, logDir, tmpDir, serverPort. For the specific meaning of each parameter and the requirements for setting them, see the documentation "[TDengine Cluster Installation, Management](https://www.taosdata.com/cn/documentation/cluster)".
+
+Follow the same steps on each node that needs to run: set the parameters, start the taosd service, and then add the dnode to the cluster.
+
+Finally, start taos and execute the command `show dnodes`. If you can see all the nodes that have joined the cluster, the cluster was built successfully. For the specific procedure and notes, refer to the document "[TDengine Cluster Installation, Management](https://www.taosdata.com/cn/documentation/cluster)".
+
+## Appendix 4: Super table names
+
+OpenTSDB metric names may contain a dot ("."), but the dot has a special meaning in TDengine as the separator between database and table names. TDengine therefore provides escape characters so that users can use keywords or special separators (e.g., the dot) in (super) table names. To use a special character, enclose the table name in the escape character; for example, `` `cpu.usage_user` `` is then a legal (super) table name.
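+As a brief illustration (the column definitions are hypothetical), the escaped name can be used anywhere a table name is expected:
+
+```sql
+-- backquotes keep the dot from being parsed as a database/table separator
+create stable `cpu.usage_user`(ts timestamp, val double) tags(host binary(12));
+select count(*) from `cpu.usage_user`;
+```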
+
+## Appendix 5: Reference Articles
+
+1. [Quickly build an IT Ops monitoring system using TDengine + collectd/StatsD + Grafana](https://www.taosdata.com/cn/documentation20/devops/collectd) (Chinese)
+2. [Writing collection data directly to TDengine via collectd](https://www.taosdata.com/cn/documentation20/insert#collectd) (Chinese)
+
diff --git a/documentation20/en/images/IT-DevOps-Solutions-Collectd-StatsD.png b/documentation20/en/images/IT-DevOps-Solutions-Collectd-StatsD.png
new file mode 100644
index 0000000000000000000000000000000000000000..b34aec45bdbe30bebbce532d6150c40f80399c25
Binary files /dev/null and b/documentation20/en/images/IT-DevOps-Solutions-Collectd-StatsD.png differ
diff --git a/documentation20/en/images/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.jpg b/documentation20/en/images/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..d3de5fb7a10a1cb22693468029bc26ad63a96d71
Binary files /dev/null and b/documentation20/en/images/IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.jpg differ
diff --git a/documentation20/en/images/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.jpg b/documentation20/en/images/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..be3704cb72d6c2614614852bfef17147ce49d061
Binary files /dev/null and b/documentation20/en/images/IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.jpg differ
diff --git a/documentation20/en/images/IT-DevOps-Solutions-Immigrate-TDengine-Arch.jpg b/documentation20/en/images/IT-DevOps-Solutions-Immigrate-TDengine-Arch.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..fd406a140beea43fbfe2c417c85b872cfd6a2219
Binary files /dev/null and b/documentation20/en/images/IT-DevOps-Solutions-Immigrate-TDengine-Arch.jpg differ
diff --git a/documentation20/en/images/IT-DevOps-Solutions-Telegraf.png b/documentation20/en/images/IT-DevOps-Solutions-Telegraf.png
new file mode 100644
index 0000000000000000000000000000000000000000..e1334bb937febd395eca0b0c44c8a2f315910606
Binary files /dev/null and b/documentation20/en/images/IT-DevOps-Solutions-Telegraf.png differ
diff --git a/documentation20/en/images/IT-DevOps-Solutions-collectd-dashboard.png b/documentation20/en/images/IT-DevOps-Solutions-collectd-dashboard.png
new file mode 100644
index 0000000000000000000000000000000000000000..17d0fd31b9424b071783696668d5706b90274867
Binary files /dev/null and b/documentation20/en/images/IT-DevOps-Solutions-collectd-dashboard.png differ
diff --git a/documentation20/en/images/IT-DevOps-Solutions-statsd-dashboard.png b/documentation20/en/images/IT-DevOps-Solutions-statsd-dashboard.png
new file mode 100644
index 0000000000000000000000000000000000000000..f122cbc5dc0bb5b7faccdbc7c4c8bcca59b6c9ed
Binary files /dev/null and b/documentation20/en/images/IT-DevOps-Solutions-statsd-dashboard.png differ
diff --git a/documentation20/en/images/IT-DevOps-Solutions-telegraf-dashboard.png b/documentation20/en/images/IT-DevOps-Solutions-telegraf-dashboard.png
new file mode 100644
index 0000000000000000000000000000000000000000..d695a3af30154d2fc2217996f3ff4878abab097c
Binary files /dev/null and b/documentation20/en/images/IT-DevOps-Solutions-telegraf-dashboard.png differ
diff --git a/packaging/tools/install.sh b/packaging/tools/install.sh
index 284926625f869d7ea82a800f4c470a83e2840404..95bf3e7b74f6e0f782a8cd5caefd196510358f87 100755
--- a/packaging/tools/install.sh
+++ b/packaging/tools/install.sh
@@ -846,7 +846,7 @@ vercomp () {
function is_version_compatible() {
- curr_version=`ls ${script_dir}/driver/libtaos.so* |cut -d '.' -f 3-6`
+ curr_version=`ls ${script_dir}/driver/libtaos.so* | awk -F 'libtaos.so.' '{print $2}'`
if [ -f ${script_dir}/driver/vercomp.txt ]; then
min_compatible_version=`cat ${script_dir}/driver/vercomp.txt`
diff --git a/src/client/src/tscSQLParser.c b/src/client/src/tscSQLParser.c
index 0a4b7e9f787dbd01685c2913e513250e46136b4a..bb21b43e677c50abf49223ff86bce1d1ac95b5d7 100644
--- a/src/client/src/tscSQLParser.c
+++ b/src/client/src/tscSQLParser.c
@@ -2505,6 +2505,7 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
const char* msg13 = "parameter list required";
const char* msg14 = "third parameter algorithm must be 'default' or 't-digest'";
const char* msg15 = "parameter is out of range [1, 1000]";
+ const char* msg16 = "elapsed duration should be greater than or equal to database precision";
switch (functionId) {
case TSDB_FUNC_COUNT: {
@@ -2599,19 +2600,21 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
case TSDB_FUNC_FLOOR:
case TSDB_FUNC_ROUND:
case TSDB_FUNC_STDDEV:
- case TSDB_FUNC_LEASTSQR: {
+ case TSDB_FUNC_LEASTSQR:
+ case TSDB_FUNC_ELAPSED: {
// 1. valid the number of parameters
int32_t numOfParams = (pItem->pNode->Expr.paramList == NULL)? 0: (int32_t) taosArrayGetSize(pItem->pNode->Expr.paramList);
// no parameters or more than one parameter for function
if (pItem->pNode->Expr.paramList == NULL ||
- (functionId != TSDB_FUNC_LEASTSQR && functionId != TSDB_FUNC_DERIVATIVE && numOfParams != 1) ||
- ((functionId == TSDB_FUNC_LEASTSQR || functionId == TSDB_FUNC_DERIVATIVE) && numOfParams != 3)) {
+ (functionId != TSDB_FUNC_LEASTSQR && functionId != TSDB_FUNC_DERIVATIVE && functionId != TSDB_FUNC_ELAPSED && numOfParams != 1) ||
+ ((functionId == TSDB_FUNC_LEASTSQR || functionId == TSDB_FUNC_DERIVATIVE) && numOfParams != 3) ||
+ (functionId == TSDB_FUNC_ELAPSED && numOfParams > 2)) {
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg2);
}
tSqlExprItem* pParamElem = taosArrayGet(pItem->pNode->Expr.paramList, 0);
- if (pParamElem->pNode->tokenId != TK_ALL && pParamElem->pNode->tokenId != TK_ID) {
+ if ((pParamElem->pNode->tokenId != TK_ALL && pParamElem->pNode->tokenId != TK_ID) || 0 == pParamElem->pNode->columnName.n) {
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg2);
}
@@ -2620,6 +2623,11 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg3);
}
+ // elapsed only can be applied to primary key
+ if (functionId == TSDB_FUNC_ELAPSED && index.columnIndex != PRIMARYKEY_TIMESTAMP_COL_INDEX) {
+ return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), "elapsed only can be applied to primary key");
+ }
+
pTableMetaInfo = tscGetMetaInfo(pQueryInfo, index.tableIndex);
STableComInfo info = tscGetTableInfo(pTableMetaInfo->pTableMeta);
@@ -2631,7 +2639,7 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
// 2. check if sql function can be applied on this column data type
SSchema* pSchema = tscGetTableColumnSchema(pTableMetaInfo->pTableMeta, index.columnIndex);
- if (!IS_NUMERIC_TYPE(pSchema->type)) {
+ if (!IS_NUMERIC_TYPE(pSchema->type) && (functionId != TSDB_FUNC_ELAPSED)) {
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1);
} else if (IS_UNSIGNED_NUMERIC_TYPE(pSchema->type) && (functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE)) {
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg9);
@@ -2676,11 +2684,11 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
} else if (functionId == TSDB_FUNC_IRATE) {
int64_t prec = info.precision;
tscExprAddParams(&pExpr->base, (char*)&prec, TSDB_DATA_TYPE_BIGINT, LONG_BYTES);
- } else if (functionId == TSDB_FUNC_DERIVATIVE) {
+ } else if (functionId == TSDB_FUNC_DERIVATIVE || (functionId == TSDB_FUNC_ELAPSED && 2 == numOfParams)) {
char val[8] = {0};
int64_t tickPerSec = 0;
- if (tVariantDump(&pParamElem[1].pNode->value, (char*) &tickPerSec, TSDB_DATA_TYPE_BIGINT, true) < 0) {
+ if ((TSDB_DATA_TYPE_NULL == pParamElem[1].pNode->value.nType) || tVariantDump(&pParamElem[1].pNode->value, (char*) &tickPerSec, TSDB_DATA_TYPE_BIGINT, true) < 0) {
return TSDB_CODE_TSC_INVALID_OPERATION;
}
@@ -2690,23 +2698,27 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
tickPerSec /= TSDB_TICK_PER_SECOND(TSDB_TIME_PRECISION_MILLI);
}
- if (tickPerSec <= 0 || tickPerSec < TSDB_TICK_PER_SECOND(info.precision)) {
+ if ((tickPerSec < TSDB_TICK_PER_SECOND(info.precision)) && (functionId == TSDB_FUNC_DERIVATIVE)) {
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg10);
- }
+ } else if (tickPerSec <= 0) {
+ return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg16);
+ }
tscExprAddParams(&pExpr->base, (char*) &tickPerSec, TSDB_DATA_TYPE_BIGINT, LONG_BYTES);
- memset(val, 0, tListLen(val));
+ if (functionId == TSDB_FUNC_DERIVATIVE) {
+ memset(val, 0, tListLen(val));
- if (tVariantDump(&pParamElem[2].pNode->value, val, TSDB_DATA_TYPE_BIGINT, true) < 0) {
- return TSDB_CODE_TSC_INVALID_OPERATION;
- }
+ if (tVariantDump(&pParamElem[2].pNode->value, val, TSDB_DATA_TYPE_BIGINT, true) < 0) {
+ return TSDB_CODE_TSC_INVALID_OPERATION;
+ }
- int64_t v = *(int64_t*) val;
- if (v != 0 && v != 1) {
- return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg11);
- }
+ int64_t v = *(int64_t*) val;
+ if (v != 0 && v != 1) {
+ return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg11);
+ }
- tscExprAddParams(&pExpr->base, val, TSDB_DATA_TYPE_BIGINT, LONG_BYTES);
+ tscExprAddParams(&pExpr->base, val, TSDB_DATA_TYPE_BIGINT, LONG_BYTES);
+ }
}
SColumnList ids = createColumnList(1, index.tableIndex, index.columnIndex);
@@ -3125,7 +3137,6 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
return TSDB_CODE_SUCCESS;
}
-
default: {
pUdfInfo = isValidUdf(pQueryInfo->pUdfInfo, pItem->pNode->Expr.operand.z, pItem->pNode->Expr.operand.n);
if (pUdfInfo == NULL) {
@@ -3496,7 +3507,7 @@ int32_t tscTansformFuncForSTableQuery(SQueryInfo* pQueryInfo) {
if ((functionId >= TSDB_FUNC_SUM && functionId <= TSDB_FUNC_TWA) ||
(functionId >= TSDB_FUNC_FIRST_DST && functionId <= TSDB_FUNC_STDDEV_DST) ||
(functionId >= TSDB_FUNC_RATE && functionId <= TSDB_FUNC_IRATE) ||
- (functionId == TSDB_FUNC_SAMPLE)) {
+ (functionId == TSDB_FUNC_SAMPLE) || (functionId == TSDB_FUNC_ELAPSED)) {
if (getResultDataInfo(pSrcSchema->type, pSrcSchema->bytes, functionId, (int32_t)pExpr->base.param[0].i64, &type, &bytes,
&interBytes, 0, true, NULL) != TSDB_CODE_SUCCESS) {
return TSDB_CODE_TSC_INVALID_OPERATION;
@@ -3551,8 +3562,8 @@ void tscRestoreFuncForSTableQuery(SQueryInfo* pQueryInfo) {
}
bool hasUnsupportFunctionsForSTableQuery(SSqlCmd* pCmd, SQueryInfo* pQueryInfo) {
- const char* msg1 = "TWA/Diff/Derivative/Irate/CSUM/MAVG/SAMPLE/INTERP are not allowed to apply to super table directly";
- const char* msg2 = "TWA/Diff/Derivative/Irate/CSUM/MAVG/SAMPLE/INTERP only support group by tbname for super table query";
+ const char* msg1 = "TWA/Diff/Derivative/Irate/CSUM/MAVG/SAMPLE/INTERP/Elapsed are not allowed to apply to super table directly";
+ const char* msg2 = "TWA/Diff/Derivative/Irate/CSUM/MAVG/SAMPLE/INTERP/Elapsed only support group by tbname for super table query";
const char* msg3 = "functions not support for super table query";
// filter sql function not supported by metric query yet.
@@ -3570,7 +3581,7 @@ bool hasUnsupportFunctionsForSTableQuery(SSqlCmd* pCmd, SQueryInfo* pQueryInfo)
}
if (tscIsTWAQuery(pQueryInfo) || tscIsDiffDerivLikeQuery(pQueryInfo) || tscIsIrateQuery(pQueryInfo) ||
- tscQueryContainsFunction(pQueryInfo, TSDB_FUNC_SAMPLE) || tscGetPointInterpQuery(pQueryInfo)) {
+ tscQueryContainsFunction(pQueryInfo, TSDB_FUNC_SAMPLE) || tscGetPointInterpQuery(pQueryInfo) || tscQueryContainsFunction(pQueryInfo, TSDB_FUNC_ELAPSED)) {
if (pQueryInfo->groupbyExpr.numOfGroupCols == 0) {
invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1);
return true;
@@ -7474,7 +7485,7 @@ int32_t doFunctionsCompatibleCheck(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, char*
const char* msg3 = "group by/session/state_window not allowed on projection query";
const char* msg4 = "retrieve tags not compatible with group by or interval query";
const char* msg5 = "functions can not be mixed up";
- const char* msg6 = "TWA/Diff/Derivative/Irate/CSum/MAvg only support group by tbname";
+ const char* msg6 = "TWA/Diff/Derivative/Irate/CSum/MAvg/Elapsed only support group by tbname";
// only retrieve tags, group by is not supportted
if (tscQueryTags(pQueryInfo)) {
@@ -7536,7 +7547,7 @@ int32_t doFunctionsCompatibleCheck(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, char*
}
if ((!pQueryInfo->stateWindow) && (f == TSDB_FUNC_DIFF || f == TSDB_FUNC_DERIVATIVE || f == TSDB_FUNC_TWA ||
- f == TSDB_FUNC_IRATE || f == TSDB_FUNC_CSUM || f == TSDB_FUNC_MAVG)) {
+ f == TSDB_FUNC_IRATE || f == TSDB_FUNC_CSUM || f == TSDB_FUNC_MAVG || f == TSDB_FUNC_ELAPSED)) {
for (int32_t j = 0; j < pQueryInfo->groupbyExpr.numOfGroupCols; ++j) {
SColIndex* pColIndex = taosArrayGet(pQueryInfo->groupbyExpr.columnInfo, j);
if (j == 0) {
@@ -7585,7 +7596,7 @@ int32_t doFunctionsCompatibleCheck(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, char*
int32_t validateFunctionFromUpstream(SQueryInfo* pQueryInfo, char* msg) {
- const char* msg1 = "TWA/Diff/Derivative/Irate are not allowed to apply to super table without group by tbname";
+ const char* msg1 = "TWA/Diff/Derivative/Irate/elapsed are not allowed to apply to super table without group by tbname";
const char* msg2 = "group by not supported in nested interp query";
const char* msg3 = "order by not supported in nested interp query";
const char* msg4 = "first column should be timestamp for interp query";
@@ -7598,7 +7609,7 @@ int32_t validateFunctionFromUpstream(SQueryInfo* pQueryInfo, char* msg) {
SExprInfo* pExpr = tscExprGet(pQueryInfo, i);
int32_t f = pExpr->base.functionId;
- if (f == TSDB_FUNC_DERIVATIVE || f == TSDB_FUNC_TWA || f == TSDB_FUNC_IRATE || f == TSDB_FUNC_DIFF) {
+ if (f == TSDB_FUNC_DERIVATIVE || f == TSDB_FUNC_TWA || f == TSDB_FUNC_IRATE || f == TSDB_FUNC_DIFF || f == TSDB_FUNC_ELAPSED) {
for (int32_t j = 0; j < upNum; ++j) {
SQueryInfo* pUp = taosArrayGetP(pQueryInfo->pUpstream, j);
STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pUp, 0);
diff --git a/src/client/src/tscServer.c b/src/client/src/tscServer.c
index 98693c94f1d68c194946fbf8b4c00e92c410c9ea..361f73945533b03017b5e156fff975fa1106925f 100644
--- a/src/client/src/tscServer.c
+++ b/src/client/src/tscServer.c
@@ -943,6 +943,7 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
pQueryMsg->tsCompQuery = query.tsCompQuery;
pQueryMsg->simpleAgg = query.simpleAgg;
pQueryMsg->pointInterpQuery = query.pointInterpQuery;
+ pQueryMsg->needTableSeqScan = query.needTableSeqScan;
pQueryMsg->needReverseScan = query.needReverseScan;
pQueryMsg->stateWindow = query.stateWindow;
pQueryMsg->numOfTags = htonl(numOfTags);
diff --git a/src/client/src/tscUtil.c b/src/client/src/tscUtil.c
index 94b4b45eda919e704f2551624b120081d903f50b..835b32eaaa198445945aff5ddd72cedc444f8318 100644
--- a/src/client/src/tscUtil.c
+++ b/src/client/src/tscUtil.c
@@ -375,6 +375,10 @@ bool tscIsPointInterpQuery(SQueryInfo* pQueryInfo) {
return true;
}
+bool tscNeedTableSeqScan(SQueryInfo* pQueryInfo) {
+ return pQueryInfo->stableQuery && (tscQueryContainsFunction(pQueryInfo, TSDB_FUNC_TWA) || tscQueryContainsFunction(pQueryInfo, TSDB_FUNC_ELAPSED));
+}
+
bool tscGetPointInterpQuery(SQueryInfo* pQueryInfo) {
size_t size = tscNumOfExprs(pQueryInfo);
for (int32_t i = 0; i < size; ++i) {
@@ -391,7 +395,6 @@ bool tscGetPointInterpQuery(SQueryInfo* pQueryInfo) {
return false;
}
-
bool tsIsArithmeticQueryOnAggResult(SQueryInfo* pQueryInfo) {
if (tscIsProjectionQuery(pQueryInfo)) {
return false;
@@ -524,7 +527,7 @@ bool timeWindowInterpoRequired(SQueryInfo *pQueryInfo) {
}
int32_t functionId = pExpr->base.functionId;
- if (functionId == TSDB_FUNC_TWA || functionId == TSDB_FUNC_INTERP) {
+ if (functionId == TSDB_FUNC_TWA || functionId == TSDB_FUNC_INTERP || functionId == TSDB_FUNC_ELAPSED) {
return true;
}
}
@@ -5054,6 +5057,7 @@ int32_t tscCreateQueryFromQueryInfo(SQueryInfo* pQueryInfo, SQueryAttr* pQueryAt
pQueryAttr->groupbyColumn = (!pQueryInfo->stateWindow) && tscGroupbyColumn(pQueryInfo);
pQueryAttr->queryBlockDist = isBlockDistQuery(pQueryInfo);
pQueryAttr->pointInterpQuery = tscIsPointInterpQuery(pQueryInfo);
+ pQueryAttr->needTableSeqScan = tscNeedTableSeqScan(pQueryInfo);
pQueryAttr->timeWindowInterpo = timeWindowInterpoRequired(pQueryInfo);
pQueryAttr->distinct = pQueryInfo->distinct;
pQueryAttr->sw = pQueryInfo->sessionWindow;
diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/AuthenticationTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/AuthenticationTest.java
index 5b38f9b0640bb6eec6d1c9749db0abf0388c04ce..d2f5b915ee1b39146ccc91131fae801c291d08cc 100644
--- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/AuthenticationTest.java
+++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/AuthenticationTest.java
@@ -2,7 +2,6 @@ package com.taosdata.jdbc.cases;
import com.taosdata.jdbc.TSDBErrorNumbers;
import org.junit.Assert;
-import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
@@ -59,38 +58,31 @@ public class AuthenticationTest {
@Test
public void test() throws SQLException {
// change password
- String url = "jdbc:TAOS-RS://" + host + ":6041/restful_test?user=" + user + "&password=taosdata";
- try (Connection conn = DriverManager.getConnection(url);
- Statement stmt = conn.createStatement();) {
- stmt.execute("alter user " + user + " pass '" + password + "'");
- }
+ conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/?user=" + user + "&password=taosdata");
+ Statement stmt = conn.createStatement();
+ stmt.execute("alter user " + user + " pass '" + password + "'");
+ stmt.close();
+ conn.close();
// use new to login and execute query
- url = "jdbc:TAOS-RS://" + host + ":6041/restful_test?user=" + user + "&password=" + password;
- try (Connection conn = DriverManager.getConnection(url);
- Statement stmt = conn.createStatement()) {
- stmt.execute("show databases");
- ResultSet rs = stmt.getResultSet();
- ResultSetMetaData meta = rs.getMetaData();
- while (rs.next()) {
+ conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/?user=" + user + "&password=" + password);
+ stmt = conn.createStatement();
+ stmt.execute("show databases");
+ ResultSet rs = stmt.getResultSet();
+ ResultSetMetaData meta = rs.getMetaData();
+ while (rs.next()) {
+ for (int i = 1; i <= meta.getColumnCount(); i++) {
+ System.out.print(meta.getColumnLabel(i) + ":" + rs.getString(i) + "\t");
}
+ System.out.println();
}
// change password back
- url = "jdbc:TAOS-RS://" + host + ":6041/restful_test?user=" + user + "&password=" + password;
- try (Connection conn = DriverManager.getConnection(url);
- Statement stmt = conn.createStatement()) {
- stmt.execute("alter user " + user + " pass 'taosdata'");
- }
- }
-
- @Before
- public void before() {
- try {
- Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
- } catch (ClassNotFoundException e) {
- e.printStackTrace();
- }
+ conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/?user=" + user + "&password=" + password);
+ stmt = conn.createStatement();
+ stmt.execute("alter user " + user + " pass 'taosdata'");
+ stmt.close();
+ conn.close();
}
}
diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/InsertDbwithoutUseDbTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/InsertDbwithoutUseDbTest.java
index beea990456ec98c2ab51fc2086034e0b31b570b6..05c7b0feca21f3f5b9062f9cbc26921aa607732a 100644
--- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/InsertDbwithoutUseDbTest.java
+++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/InsertDbwithoutUseDbTest.java
@@ -18,9 +18,8 @@ public class InsertDbwithoutUseDbTest {
private static final Random random = new Random(System.currentTimeMillis());
@Test
- public void case001() throws ClassNotFoundException, SQLException {
+ public void case001() throws SQLException {
// prepare schema
- Class.forName("com.taosdata.jdbc.TSDBDriver");
String url = "jdbc:TAOS://127.0.0.1:6030/?user=root&password=taosdata";
Connection conn = DriverManager.getConnection(url, properties);
try (Statement stmt = conn.createStatement()) {
@@ -51,9 +50,8 @@ public class InsertDbwithoutUseDbTest {
}
@Test
- public void case002() throws ClassNotFoundException, SQLException {
+ public void case002() throws SQLException {
// prepare the schema
- Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
final String url = "jdbc:TAOS-RS://" + host + ":6041/inWithoutDb?user=root&password=taosdata";
Connection conn = DriverManager.getConnection(url, properties);
try (Statement stmt = conn.createStatement()) {
diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/TimestampPrecisonInNanoRestTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/TimestampPrecisonInNanoRestTest.java
index 2ae03b4e5cd92056ce0ea995c8edcd21e51e24bb..cfd6a066acc2c2abd94e525fb69d4027a317134c 100644
--- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/TimestampPrecisonInNanoRestTest.java
+++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/TimestampPrecisonInNanoRestTest.java
@@ -25,7 +25,7 @@ public class TimestampPrecisonInNanoRestTest {
private static final String date4 = format.format(new Date(timestamp1 + 10L));
private static final String date2 = date1 + "123455";
private static final String date3 = date4 + "123456";
-
+
private static Connection conn;
@@ -43,7 +43,7 @@ public class TimestampPrecisonInNanoRestTest {
stmt.execute("drop database if exists " + ns_timestamp_db);
stmt.execute("create database if not exists " + ns_timestamp_db + " precision 'ns'");
stmt.execute("create table " + ns_timestamp_db + ".weather(ts timestamp, ts2 timestamp, f1 int)");
- stmt.executeUpdate("insert into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(\"" + date3 + "\", \"" + date3 + "\", 128)");
+ stmt.executeUpdate("insert into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(\"" + date3 + "\", \"" + date3 + "\", 128)");
stmt.executeUpdate("insert into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(" + timestamp2 + "," + timestamp2 + ", 127)");
stmt.close();
}
@@ -54,7 +54,7 @@ public class TimestampPrecisonInNanoRestTest {
stmt.execute("drop database if exists " + ns_timestamp_db);
stmt.execute("create database if not exists " + ns_timestamp_db + " precision 'ns'");
stmt.execute("create table " + ns_timestamp_db + ".weather(ts timestamp, ts2 timestamp, f1 int)");
- stmt.executeUpdate("insert into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(\"" + date3 + "\", \"" + date3 + "\", 128)");
+ stmt.executeUpdate("insert into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(\"" + date3 + "\", \"" + date3 + "\", 128)");
stmt.executeUpdate("insert into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(" + timestamp2 + "," + timestamp2 + ", 127)");
stmt.close();
}
@@ -105,7 +105,7 @@ public class TimestampPrecisonInNanoRestTest {
@Test
public void canImportTimestampAndQueryByEqualToInDateTypeInBothFirstAndSecondCol() {
try (Statement stmt = conn.createStatement()) {
- stmt.executeUpdate("import into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(\"" + date1 + "123123\", \"" + date1 + "123123\", 127)");
+ stmt.executeUpdate("import into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(\"" + date1 + "123123\", \"" + date1 + "123123\", 127)");
ResultSet rs = stmt.executeQuery("select count(*) from " + ns_timestamp_db + ".weather where ts = '" + date1 + "123123'");
checkCount(1l, rs);
rs = stmt.executeQuery("select ts from " + ns_timestamp_db + ".weather where ts = '" + date1 + "123123'");
@@ -139,7 +139,7 @@ public class TimestampPrecisonInNanoRestTest {
public void canImportTimestampAndQueryByEqualToInNumberTypeInBothFirstAndSecondCol() {
try (Statement stmt = conn.createStatement()) {
long timestamp4 = timestamp1 * 1000_000 + 123123;
- stmt.executeUpdate("import into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(" + timestamp4 + ", " + timestamp4 + ", 127)");
+ stmt.executeUpdate("import into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(" + timestamp4 + ", " + timestamp4 + ", 127)");
ResultSet rs = stmt.executeQuery("select count(*) from " + ns_timestamp_db + ".weather where ts = '" + timestamp4 + "'");
checkCount(1l, rs);
rs = stmt.executeQuery("select ts from " + ns_timestamp_db + ".weather where ts = '" + timestamp4 + "'");
@@ -215,7 +215,7 @@ public class TimestampPrecisonInNanoRestTest {
} catch (SQLException e) {
e.printStackTrace();
}
- }
+ }
@Test
public void canQueryLargerThanInNumberTypeForFirstCol() {
@@ -279,7 +279,7 @@ public class TimestampPrecisonInNanoRestTest {
} catch (SQLException e) {
e.printStackTrace();
}
- }
+ }
@Test
public void canQueryLessThanInDateTypeForFirstCol() {
@@ -347,7 +347,7 @@ public class TimestampPrecisonInNanoRestTest {
} catch (SQLException e) {
e.printStackTrace();
}
- }
+ }
@Test
public void canQueryLessThanOrEqualToInNumberTypeForFirstCol() {
@@ -466,7 +466,7 @@ public class TimestampPrecisonInNanoRestTest {
}
@Test
- public void canInsertTimestampWithNowAndNsOffsetInBothFirstAndSecondCol(){
+ public void canInsertTimestampWithNowAndNsOffsetInBothFirstAndSecondCol() {
try (Statement stmt = conn.createStatement()) {
stmt.executeUpdate("insert into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(now + 1000b, now - 1000b, 128)");
ResultSet rs = stmt.executeQuery("select count(*) from " + ns_timestamp_db + ".weather");
@@ -477,7 +477,7 @@ public class TimestampPrecisonInNanoRestTest {
}
@Test
- public void canIntervalAndSlidingAcceptNsUnitForFirstCol(){
+ public void canIntervalAndSlidingAcceptNsUnitForFirstCol() {
try (Statement stmt = conn.createStatement()) {
ResultSet rs = stmt.executeQuery("select sum(f1) from " + ns_timestamp_db + ".weather where ts >= '" + date2 + "' and ts <= '" + date3 + "' interval(10000000b) sliding(10000000b)");
rs.next();
@@ -492,7 +492,7 @@ public class TimestampPrecisonInNanoRestTest {
}
@Test
- public void canIntervalAndSlidingAcceptNsUnitForSecondCol(){
+ public void canIntervalAndSlidingAcceptNsUnitForSecondCol() {
try (Statement stmt = conn.createStatement()) {
ResultSet rs = stmt.executeQuery("select sum(f1) from " + ns_timestamp_db + ".weather where ts2 >= '" + date2 + "' and ts <= '" + date3 + "' interval(10000000b) sliding(10000000b)");
rs.next();
@@ -506,21 +506,17 @@ public class TimestampPrecisonInNanoRestTest {
}
}
- @Test
- public void testDataOutOfRangeExceptionForFirstCol() {
+ @Test(expected = SQLException.class)
+ public void testDataOutOfRangeExceptionForFirstCol() throws SQLException {
try (Statement stmt = conn.createStatement()) {
stmt.executeUpdate("insert into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(123456789012345678, 1234567890123456789, 127)");
- } catch (SQLException e) {
- Assert.assertEquals("TDengine ERROR (60b): Timestamp data out of range", e.getMessage());
}
}
- @Test
- public void testDataOutOfRangeExceptionForSecondCol() {
+ @Test(expected = SQLException.class)
+ public void testDataOutOfRangeExceptionForSecondCol() throws SQLException {
try (Statement stmt = conn.createStatement()) {
stmt.executeUpdate("insert into " + ns_timestamp_db + ".weather(ts, ts2, f1) values(1234567890123456789, 123456789012345678, 127)");
- } catch (SQLException e) {
- Assert.assertEquals("TDengine ERROR (60b): Timestamp data out of range", e.getMessage());
}
}
diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulConnectionTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulConnectionTest.java
index b08f8ff227dc16e1b413391e58a9de8fd0182c42..e7ce1d76f123a043d49eb64931c0d537d09664df 100644
--- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulConnectionTest.java
+++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulConnectionTest.java
@@ -373,11 +373,12 @@ public class RestfulConnectionTest {
properties.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
properties.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
properties.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
- conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/log?user=root&password=taosdata", properties);
+ conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata", properties);
// create test database for test cases
try (Statement stmt = conn.createStatement()) {
stmt.execute("create database if not exists test");
}
}
@AfterClass
diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulJDBCTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulJDBCTest.java
index 858f7b32f0d8a72be5b6cfa68aa120b08909df6c..5de1655ee48776b6798619814fe2729625282764 100644
--- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulJDBCTest.java
+++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulJDBCTest.java
@@ -9,9 +9,10 @@ import java.util.Random;
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class RestfulJDBCTest {
    private static final String host = "127.0.0.1";
-    private final Random random = new Random(System.currentTimeMillis());
-    private Connection connection;
+    private static final Random random = new Random(System.currentTimeMillis());
+    private static Connection connection;
@Test
public void testCase001() throws SQLException {
@@ -129,15 +130,23 @@ public class RestfulJDBCTest {
}
}
- @Before
- public void before() throws SQLException {
- connection = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/restful_test?user=root&password=taosdata&httpKeepAlive=false");
+    @BeforeClass
+    public static void beforeClass() throws SQLException {
+        connection = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata");
+    }
- @After
- public void after() throws SQLException {
- if (connection != null)
+ @AfterClass
+ public static void afterClass() throws SQLException {
+ if (connection != null) {
+ Statement stmt = connection.createStatement();
+ stmt.execute("drop database if exists restful_test");
+ stmt.close();
connection.close();
+ }
}
}
diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulResultSetMetaDataTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulResultSetMetaDataTest.java
index c7fc81297264f3cf38795d9d5a3b7eccc51574c9..f3011af799c987ed399920875ae512fd8533ec77 100644
--- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulResultSetMetaDataTest.java
+++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulResultSetMetaDataTest.java
@@ -186,22 +186,17 @@ public class RestfulResultSetMetaDataTest {
}
@BeforeClass
- public static void beforeClass() {
- try {
- Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
- conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/restful_test?user=root&password=taosdata");
- stmt = conn.createStatement();
- stmt.execute("create database if not exists restful_test");
- stmt.execute("use restful_test");
- stmt.execute("drop table if exists weather");
- stmt.execute("create table if not exists weather(f1 timestamp, f2 int, f3 bigint, f4 float, f5 double, f6 binary(64), f7 smallint, f8 tinyint, f9 bool, f10 nchar(64))");
- stmt.execute("insert into restful_test.weather values('2021-01-01 00:00:00.000', 1, 100, 3.1415, 3.1415926, 'abc', 10, 10, true, '涛思数据')");
- rs = stmt.executeQuery("select * from restful_test.weather");
- rs.next();
- meta = rs.getMetaData();
- } catch (ClassNotFoundException | SQLException e) {
- e.printStackTrace();
- }
+ public static void beforeClass() throws SQLException {
+ conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata");
+ stmt = conn.createStatement();
+ stmt.execute("create database if not exists restful_test");
+ stmt.execute("use restful_test");
+ stmt.execute("drop table if exists weather");
+ stmt.execute("create table if not exists weather(f1 timestamp, f2 int, f3 bigint, f4 float, f5 double, f6 binary(64), f7 smallint, f8 tinyint, f9 bool, f10 nchar(64))");
+ stmt.execute("insert into restful_test.weather values('2021-01-01 00:00:00.000', 1, 100, 3.1415, 3.1415926, 'abc', 10, 10, true, '涛思数据')");
+ rs = stmt.executeQuery("select * from restful_test.weather");
+ rs.next();
+ meta = rs.getMetaData();
}
@AfterClass
diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulResultSetTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulResultSetTest.java
index 86b0f1be9e7ee99f50201dc98f197c07f5bb9aef..4058dd8b550b6e9ac5553144de92d908d804dce1 100644
--- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulResultSetTest.java
+++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulResultSetTest.java
@@ -17,7 +17,8 @@ import java.text.SimpleDateFormat;
public class RestfulResultSetTest {
    private static final String host = "127.0.0.1";
private static Connection conn;
private static Statement stmt;
private static ResultSet rs;
@@ -658,35 +659,29 @@ public class RestfulResultSetTest {
}
@BeforeClass
- public static void beforeClass() {
- try {
- conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/restful_test?user=root&password=taosdata");
- stmt = conn.createStatement();
- stmt.execute("create database if not exists restful_test");
- stmt.execute("use restful_test");
- stmt.execute("drop table if exists weather");
- stmt.execute("create table if not exists weather(f1 timestamp, f2 int, f3 bigint, f4 float, f5 double, f6 binary(64), f7 smallint, f8 tinyint, f9 bool, f10 nchar(64))");
- stmt.execute("insert into restful_test.weather values('2021-01-01 00:00:00.000', 1, 100, 3.1415, 3.1415926, 'abc', 10, 10, true, '涛思数据')");
- rs = stmt.executeQuery("select * from restful_test.weather");
- rs.next();
- } catch (SQLException e) {
- e.printStackTrace();
- }
-
+ public static void beforeClass() throws SQLException {
+ conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata");
+ stmt = conn.createStatement();
+ stmt.execute("drop database if exists restful_test");
+ stmt.execute("create database if not exists restful_test");
+ stmt.execute("use restful_test");
+ stmt.execute("drop table if exists weather");
+ stmt.execute("create table if not exists weather(f1 timestamp, f2 int, f3 bigint, f4 float, f5 double, f6 binary(64), f7 smallint, f8 tinyint, f9 bool, f10 nchar(64))");
+ stmt.execute("insert into restful_test.weather values('2021-01-01 00:00:00.000', 1, 100, 3.1415, 3.1415926, 'abc', 10, 10, true, '涛思数据')");
+ rs = stmt.executeQuery("select * from restful_test.weather");
+ rs.next();
}
@AfterClass
- public static void afterClass() {
- try {
- if (rs != null)
- rs.close();
- if (stmt != null)
- stmt.close();
- if (conn != null)
- conn.close();
- } catch (SQLException e) {
- e.printStackTrace();
+ public static void afterClass() throws SQLException {
+ if (rs != null)
+ rs.close();
+ if (stmt != null) {
+ stmt.execute("drop database if exists restful_test");
+ stmt.close();
}
+ if (conn != null)
+ conn.close();
}
}
\ No newline at end of file
diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/SQLTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/SQLTest.java
index a28bdbe2e5f6e0d545241a80071d85b0964a4102..4893e6062f8719152539d80a6da21730d47dfa92 100644
--- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/SQLTest.java
+++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/SQLTest.java
@@ -572,11 +572,14 @@ public class SQLTest {
@BeforeClass
public static void before() throws SQLException {
- connection = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/restful_test?user=root&password=taosdata");
+ connection = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata");
}
@AfterClass
public static void after() throws SQLException {
+ Statement stmt = connection.createStatement();
+ stmt.execute("drop database if exists restful_test");
+ stmt.close();
connection.close();
}
diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/WasNullTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/WasNullTest.java
index a78284b7a2ecf1b43b96180fa9d819e89ecdc595..f0cd200e04bc66bb0571534c99a348c3a823fcb3 100644
--- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/WasNullTest.java
+++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/WasNullTest.java
@@ -1,6 +1,9 @@
package com.taosdata.jdbc.rs;
-import org.junit.*;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
import java.sql.*;
@@ -9,9 +12,8 @@ public class WasNullTest {
private static final String host = "127.0.0.1";
private Connection conn;
-
@Test
- public void testGetTimestamp() {
+ public void testGetTimestamp() throws SQLException {
try (Statement stmt = conn.createStatement()) {
stmt.execute("drop table if exists weather");
stmt.execute("create table if not exists weather(f1 timestamp, f2 timestamp, f3 int)");
@@ -34,14 +36,11 @@ public class WasNullTest {
}
}
}
-
- } catch (SQLException e) {
- e.printStackTrace();
}
}
@Test
- public void testGetObject() {
+ public void testGetObject() throws SQLException {
try (Statement stmt = conn.createStatement()) {
stmt.execute("drop table if exists weather");
stmt.execute("create table if not exists weather(f1 timestamp, f2 int, f3 bigint, f4 float, f5 double, f6 binary(64), f7 smallint, f8 tinyint, f9 bool, f10 nchar(64))");
@@ -63,32 +62,25 @@ public class WasNullTest {
}
}
- } catch (SQLException e) {
- e.printStackTrace();
}
}
@Before
- public void before() {
- try {
- conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/restful_test?user=root&password=taosdata");
- Statement stmt = conn.createStatement();
+ public void before() throws SQLException {
+ conn = DriverManager.getConnection("jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata");
+ try (Statement stmt = conn.createStatement()) {
stmt.execute("drop database if exists restful_test");
stmt.execute("create database if not exists restful_test");
- } catch (SQLException e) {
- e.printStackTrace();
+ stmt.execute("use restful_test");
}
}
@After
- public void after() {
- try {
+ public void after() throws SQLException {
+ if (conn != null) {
Statement statement = conn.createStatement();
statement.execute("drop database if exists restful_test");
- if (conn != null)
- conn.close();
- } catch (SQLException e) {
- e.printStackTrace();
+ conn.close();
}
}
}
diff --git a/src/inc/taosmsg.h b/src/inc/taosmsg.h
index 0f291936f5519b1db7f98b098e5f9f82303cd0f5..84491e0a438fdb3b5dd2905acffaf32c76b23c9b 100644
--- a/src/inc/taosmsg.h
+++ b/src/inc/taosmsg.h
@@ -475,6 +475,7 @@ typedef struct {
bool tsCompQuery; // is tscomp query
bool simpleAgg;
bool pointInterpQuery; // point interpolation query
+  bool      needTableSeqScan; // need to scan tables one by one
bool needReverseScan; // need reverse scan
bool stateWindow; // state window flag
diff --git a/src/kit/shell/src/shellWindows.c b/src/kit/shell/src/shellWindows.c
index 0babd88333c846c1f0b5dbe4baede4a6d38cbcdd..b1c85d951bf1f8cf801286f51b84d47d9c893b5c 100644
--- a/src/kit/shell/src/shellWindows.c
+++ b/src/kit/shell/src/shellWindows.c
@@ -17,7 +17,7 @@
#include "taos.h"
#include "shellCommand.h"
-#define SHELL_INPUT_MAX_COMMAND_SIZE 500000
+#define SHELL_INPUT_MAX_COMMAND_SIZE 10000
extern char configDir[];
diff --git a/src/plugins/CMakeLists.txt b/src/plugins/CMakeLists.txt
index 9e0de204d78cb54bea240a734f2373b709b6c6f9..c7221a6d301ae09e47bd68c76a90599fd85dff2a 100644
--- a/src/plugins/CMakeLists.txt
+++ b/src/plugins/CMakeLists.txt
@@ -43,7 +43,7 @@ ELSE ()
COMMAND git clean -f -d
BUILD_COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../inc CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -ldflags "-s -w -X github.com/taosdata/taosadapter/version.CommitID=${taosadapter_commit_sha1}"
INSTALL_COMMAND
- COMMAND curl -sL https://github.com/upx/upx/releases/download/v3.96/upx-3.96-amd64_linux.tar.xz -o upx.tar.xz && tar xvJf upx.tar.xz --strip-components 1 > /dev/null && ./upx taosadapter || :
+ COMMAND curl -sL https://github.com/upx/upx/releases/download/v3.96/upx-3.96-amd64_linux.tar.xz -o upx.tar.xz && tar -xvJf upx.tar.xz -C ${CMAKE_BINARY_DIR} --strip-components 1 > /dev/null && ${CMAKE_BINARY_DIR}/upx taosadapter || :
COMMAND cmake -E copy taosadapter ${CMAKE_BINARY_DIR}/build/bin
COMMAND cmake -E make_directory ${CMAKE_BINARY_DIR}/test/cfg/
COMMAND cmake -E copy ./example/config/taosadapter.toml ${CMAKE_BINARY_DIR}/test/cfg/
diff --git a/src/plugins/taosadapter b/src/plugins/taosadapter
index 6397bf5963f62f0aa5c4b9b961b16ed5c62579f1..88346a2e4e2e9282d2ec8b8c5264ca1ec23698a1 160000
--- a/src/plugins/taosadapter
+++ b/src/plugins/taosadapter
@@ -1 +1 @@
-Subproject commit 6397bf5963f62f0aa5c4b9b961b16ed5c62579f1
+Subproject commit 88346a2e4e2e9282d2ec8b8c5264ca1ec23698a1
diff --git a/src/query/inc/qAggMain.h b/src/query/inc/qAggMain.h
index c9a022d7a1210b31b81bf3895a9b804a03bd30ae..be0f6aee59de760088c8f10b9d1a5dca79882edd 100644
--- a/src/query/inc/qAggMain.h
+++ b/src/query/inc/qAggMain.h
@@ -79,6 +79,8 @@ extern "C" {
#define TSDB_FUNC_BLKINFO 39
+#define TSDB_FUNC_ELAPSED 40
+
///////////////////////////////////////////
// the following functions is not implemented.
// after implementation, move them before TSDB_FUNC_BLKINFO. also make TSDB_FUNC_BLKINFO the maximum function index
diff --git a/src/query/inc/qExecutor.h b/src/query/inc/qExecutor.h
index fe4fb6c950d4f3e0186668d957900934ba243e5d..0a52a44ed1f7019abc7542fab75cfd098302dbc1 100644
--- a/src/query/inc/qExecutor.h
+++ b/src/query/inc/qExecutor.h
@@ -230,6 +230,7 @@ typedef struct SQueryAttr {
bool diffQuery; // is diff query
bool simpleAgg;
bool pointInterpQuery; // point interpolation query
+  bool      needTableSeqScan; // need to scan tables one by one
bool needReverseScan; // need reverse scan
bool distinct; // distinct query or not
bool stateWindow; // window State on sub/normal table
diff --git a/src/query/src/qAggMain.c b/src/query/src/qAggMain.c
index f26b3cda1a56df698db0db1465bd6116726ca0ae..62f0edc8b13c97955fb35c6ae74a9c9abb3b59ea 100644
--- a/src/query/src/qAggMain.c
+++ b/src/query/src/qAggMain.c
@@ -196,6 +196,12 @@ typedef struct {
char *taglists;
} SSampleFuncInfo;
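+
+// Intermediate state of the elapsed() aggregate: the smallest and largest
+// primary timestamps seen so far. The final result is (max - min), optionally
+// scaled by a time-unit argument, e.g. elapsed(ts, 1m); see elapsedFinalizer.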
+typedef struct SElapsedInfo {
+ int8_t hasResult;
+ TSKEY min;
+ TSKEY max;
+} SElapsedInfo;
+
typedef struct {
bool valueAssigned;
union {
@@ -371,6 +377,11 @@ int32_t getResultDataInfo(int32_t dataType, int32_t dataBytes, int32_t functionI
*bytes = sizeof(STwaInfo);
*interBytes = *bytes;
return TSDB_CODE_SUCCESS;
+ } else if (functionId == TSDB_FUNC_ELAPSED) {
+ *type = TSDB_DATA_TYPE_BINARY;
+ *bytes = sizeof(SElapsedInfo);
+ *interBytes = *bytes;
+ return TSDB_CODE_SUCCESS;
}
}
@@ -471,6 +482,10 @@ int32_t getResultDataInfo(int32_t dataType, int32_t dataBytes, int32_t functionI
*bytes = sizeof(SStddevdstInfo);
*interBytes = (*bytes);
+ } else if (functionId == TSDB_FUNC_ELAPSED) {
+ *type = TSDB_DATA_TYPE_DOUBLE;
+ *bytes = tDataTypes[*type].bytes;
+ *interBytes = sizeof(SElapsedInfo);
} else {
return TSDB_CODE_TSC_INVALID_OPERATION;
}
@@ -480,7 +495,7 @@ int32_t getResultDataInfo(int32_t dataType, int32_t dataBytes, int32_t functionI
// TODO use hash table
int32_t isValidFunction(const char* name, int32_t len) {
- for(int32_t i = 0; i <= TSDB_FUNC_BLKINFO; ++i) {
+ for(int32_t i = 0; i <= TSDB_FUNC_ELAPSED; ++i) {
int32_t nameLen = (int32_t) strlen(aAggs[i].name);
if (len != nameLen) {
continue;
@@ -3449,7 +3464,7 @@ static void spread_function(SQLFunctionCtx *pCtx) {
SSpreadInfo *pInfo = GET_ROWCELL_INTERBUF(pResInfo);
int32_t numOfElems = 0;
-
+
// todo : opt with pre-calculated result
// column missing cause the hasNull to be true
if (pCtx->preAggVals.isSet) {
@@ -3552,7 +3567,7 @@ void spread_function_finalizer(SQLFunctionCtx *pCtx) {
* the type of intermediate data is binary
*/
SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx);
-
+
if (pCtx->currentStage == MERGE_STAGE) {
assert(pCtx->inputType == TSDB_DATA_TYPE_BINARY);
@@ -4922,6 +4937,120 @@ static void sample_func_finalizer(SQLFunctionCtx *pCtx) {
doFinalizer(pCtx);
}
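+
+// During the per-table stage of a super-table query the intermediate
+// SElapsedInfo travels in the function's output buffer; in every other case it
+// lives in the result row's intermediate buffer.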
+static SElapsedInfo * getSElapsedInfo(SQLFunctionCtx *pCtx) {
+ if (pCtx->stableQuery && pCtx->currentStage != MERGE_STAGE) {
+ return (SElapsedInfo *)pCtx->pOutput;
+ } else {
+ return GET_ROWCELL_INTERBUF(GET_RES_INFO(pCtx));
+ }
+}
+
+static bool elapsedSetup(SQLFunctionCtx *pCtx, SResultRowCellInfo* pResInfo) {
+ if (!function_setup(pCtx, pResInfo)) {
+ return false;
+ }
+
+ SElapsedInfo *pInfo = getSElapsedInfo(pCtx);
+ pInfo->min = MAX_TS_KEY;
+ pInfo->max = 0;
+ pInfo->hasResult = 0;
+
+ return true;
+}
+
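+// elapsed() never needs the column data itself: the per-block timestamp range
+// is enough (see setBlockStatisInfo), so the scanner can skip loading blocks.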
+static int32_t elapsedRequired(SQLFunctionCtx *pCtx, STimeWindow* w, int32_t colId) {
+ return BLK_DATA_NO_NEEDED;
+}
+
+static void elapsedFunction(SQLFunctionCtx *pCtx) {
+ SElapsedInfo *pInfo = getSElapsedInfo(pCtx);
+ if (pCtx->preAggVals.isSet) {
+ if (pInfo->min == MAX_TS_KEY) {
+ pInfo->min = pCtx->preAggVals.statis.min;
+ pInfo->max = pCtx->preAggVals.statis.max;
+ } else {
+ if (pCtx->order == TSDB_ORDER_ASC) {
+ pInfo->max = pCtx->preAggVals.statis.max;
+ } else {
+ pInfo->min = pCtx->preAggVals.statis.min;
+ }
+ }
+ } else {
+    // 0 == pCtx->size means this call only carries the end interpolation of the current window.
+ if (0 == pCtx->size) {
+ if (pCtx->order == TSDB_ORDER_DESC) {
+ if (pCtx->end.key != INT64_MIN) {
+ pInfo->min = pCtx->end.key;
+ }
+ } else {
+ if (pCtx->end.key != INT64_MIN) {
+ pInfo->max = pCtx->end.key + 1;
+ }
+ }
+ goto elapsedOver;
+ }
+
+ int64_t *ptsList = (int64_t *)GET_INPUT_DATA_LIST(pCtx);
+    // pCtx->start.key == INT64_MIN means there is no interpolated start point: this is
+    // either the first window or the current window starts on an actual data point.
+    // pCtx->end.key == INT64_MIN means there is no interpolated end point: the current
+    // window either does not end in this data block or ends on an actual data point.
+ if (pCtx->order == TSDB_ORDER_DESC) {
+ if (pCtx->start.key == INT64_MIN) {
+ pInfo->max = (pInfo->max < ptsList[pCtx->size - 1]) ? ptsList[pCtx->size - 1] : pInfo->max;
+ } else {
+ pInfo->max = pCtx->start.key + 1;
+ }
+
+ if (pCtx->end.key != INT64_MIN) {
+ pInfo->min = pCtx->end.key;
+ } else {
+ pInfo->min = ptsList[0];
+ }
+ } else {
+ if (pCtx->start.key == INT64_MIN) {
+ pInfo->min = (pInfo->min > ptsList[0]) ? ptsList[0] : pInfo->min;
+ } else {
+ pInfo->min = pCtx->start.key;
+ }
+
+ if (pCtx->end.key != INT64_MIN) {
+ pInfo->max = pCtx->end.key + 1;
+ } else {
+ pInfo->max = ptsList[pCtx->size - 1];
+ }
+ }
+ }
+
+elapsedOver:
+ SET_VAL(pCtx, pCtx->size, 1);
+
+ if (pCtx->size > 0) {
+ GET_RES_INFO(pCtx)->hasResult = DATA_SET_FLAG;
+ pInfo->hasResult = DATA_SET_FLAG;
+ }
+}
+
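+// Merge stage: adopt the serialized SElapsedInfo produced by the previous
+// stage as this result row's intermediate state.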
+static void elapsedMerge(SQLFunctionCtx *pCtx) {
+ SElapsedInfo *pInfo = getSElapsedInfo(pCtx);
+ memcpy(pInfo, pCtx->pInput, (size_t)pCtx->inputBytes);
+ GET_RES_INFO(pCtx)->hasResult = pInfo->hasResult;
+}
+
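+// Final result: (max - min) as a double; when a time unit is given as the
+// second argument, e.g. elapsed(ts, 1m), the difference is divided by it.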
+static void elapsedFinalizer(SQLFunctionCtx *pCtx) {
+ if (GET_RES_INFO(pCtx)->hasResult != DATA_SET_FLAG) {
+ setNull(pCtx->pOutput, pCtx->outputType, pCtx->outputBytes);
+ return;
+ }
+
+ SElapsedInfo *pInfo = GET_ROWCELL_INTERBUF(GET_RES_INFO(pCtx));
+ *(double *)pCtx->pOutput = (double)pInfo->max - (double)pInfo->min;
+ if (pCtx->numOfParams > 0 && pCtx->param[0].i64 > 0) {
+ *(double *)pCtx->pOutput = *(double *)pCtx->pOutput / pCtx->param[0].i64;
+ }
+ GET_RES_INFO(pCtx)->numOfRes = 1;
+
+ doFinalizer(pCtx);
+}
+
/////////////////////////////////////////////////////////////////////////////////////////////
/*
* function compatible list.
@@ -4942,8 +5071,8 @@ int32_t functionCompatList[] = {
1, 1, 1, 1, -1, 1, 1, 1, 5, 1, 1,
// tid_tag, deriv, ceil, floor, round, csum, mavg, sample,
6, 8, 1, 1, 1, -1, -1, -1,
- // block_info
- 7
+ // block_info, elapsed
+ 7, 1
};
SAggFunctionInfo aAggs[] = {{
@@ -5426,4 +5555,16 @@ SAggFunctionInfo aAggs[] = {{
block_func_merge,
dataBlockRequired,
},
+ {
+ // 40
+ "elapsed",
+ TSDB_FUNC_ELAPSED,
+ TSDB_FUNC_ELAPSED,
+ TSDB_BASE_FUNC_SO,
+ elapsedSetup,
+ elapsedFunction,
+ elapsedFinalizer,
+ elapsedMerge,
+ elapsedRequired,
+ }
};
diff --git a/src/query/src/qExecutor.c b/src/query/src/qExecutor.c
index 7e89b3a766f0c9124417c65b83dfe55853b0f094..251e210600198de0ba9aec34d322de6839a621b2 100644
--- a/src/query/src/qExecutor.c
+++ b/src/query/src/qExecutor.c
@@ -933,9 +933,10 @@ void doInvokeUdf(SUdfInfo* pUdfInfo, SQLFunctionCtx *pCtx, int32_t idx, int32_t
static void doApplyFunctions(SQueryRuntimeEnv* pRuntimeEnv, SQLFunctionCtx* pCtx, STimeWindow* pWin, int32_t offset,
int32_t forwardStep, TSKEY* tsCol, int32_t numOfTotal, int32_t numOfOutput) {
SQueryAttr *pQueryAttr = pRuntimeEnv->pQueryAttr;
- bool hasAggregates = pCtx[0].preAggVals.isSet;
for (int32_t k = 0; k < numOfOutput; ++k) {
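+    // pre-aggregation info can now differ per output column (elapsed() sets it
+    // on the primary timestamp column only), so check the flag for each column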
+ bool hasAggregates = pCtx[k].preAggVals.isSet;
+
pCtx[k].size = forwardStep;
pCtx[k].startTs = pWin->skey;
@@ -1258,7 +1259,7 @@ void doTimeWindowInterpolation(SOperatorInfo* pOperator, SOptrBasicInfo* pInfo,
for (int32_t k = 0; k < pOperator->numOfOutput; ++k) {
int32_t functionId = pCtx[k].functionId;
- if (functionId != TSDB_FUNC_TWA && functionId != TSDB_FUNC_INTERP) {
+ if (functionId != TSDB_FUNC_TWA && functionId != TSDB_FUNC_INTERP && functionId != TSDB_FUNC_ELAPSED) {
pCtx[k].start.key = INT64_MIN;
continue;
}
@@ -1301,7 +1302,7 @@ void doTimeWindowInterpolation(SOperatorInfo* pOperator, SOptrBasicInfo* pInfo,
pCtx[k].end.ptr = (char *)pColInfo->pData + curRowIndex * pColInfo->info.bytes;
}
}
- } else if (functionId == TSDB_FUNC_TWA) {
+ } else if (functionId == TSDB_FUNC_TWA || functionId == TSDB_FUNC_ELAPSED) {
assert(curTs != windowKey);
if (prevRowIndex == -1) {
@@ -1468,7 +1469,6 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
STimeWindow win = getActiveTimeWindow(pResultRowInfo, ts, pQueryAttr);
bool masterScan = IS_MASTER_SCAN(pRuntimeEnv);
-
SResultRow* pResult = NULL;
int32_t ret = setResultOutputBufByKey(pRuntimeEnv, pResultRowInfo, pSDataBlock->info.tid, &win, masterScan, &pResult, tableGroupId, pInfo->pCtx,
numOfOutput, pInfo->rowCellInfoOffset);
@@ -1491,23 +1491,22 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
continue;
}
- STimeWindow w = pRes->win;
- ret = setResultOutputBufByKey(pRuntimeEnv, pResultRowInfo, pSDataBlock->info.tid, &w, masterScan, &pResult,
- tableGroupId, pInfo->pCtx, numOfOutput, pInfo->rowCellInfoOffset);
- if (ret != TSDB_CODE_SUCCESS) {
- longjmp(pRuntimeEnv->env, TSDB_CODE_QRY_OUT_OF_MEMORY);
- }
-
- assert(!resultRowInterpolated(pResult, RESULT_ROW_END_INTERP));
+ STimeWindow w = pRes->win;
+ ret = setResultOutputBufByKey(pRuntimeEnv, pResultRowInfo, pSDataBlock->info.tid, &w, masterScan, &pResult,
+ tableGroupId, pInfo->pCtx, numOfOutput, pInfo->rowCellInfoOffset);
+ if (ret != TSDB_CODE_SUCCESS) {
+ longjmp(pRuntimeEnv->env, TSDB_CODE_QRY_OUT_OF_MEMORY);
+ }
- doTimeWindowInterpolation(pOperatorInfo, pInfo, pSDataBlock->pDataBlock, *(TSKEY*)pRuntimeEnv->prevRow[0], -1,
- tsCols[startPos], startPos, w.ekey, RESULT_ROW_END_INTERP);
+ assert(!resultRowInterpolated(pResult, RESULT_ROW_END_INTERP));
- setResultRowInterpo(pResult, RESULT_ROW_END_INTERP);
- setNotInterpoWindowKey(pInfo->pCtx, pQueryAttr->numOfOutput, RESULT_ROW_START_INTERP);
+ doTimeWindowInterpolation(pOperatorInfo, pInfo, pSDataBlock->pDataBlock, *(TSKEY*)pRuntimeEnv->prevRow[0], -1,
+ tsCols[startPos], startPos, QUERY_IS_ASC_QUERY(pQueryAttr) ? w.ekey : w.skey, RESULT_ROW_END_INTERP);
- doApplyFunctions(pRuntimeEnv, pInfo->pCtx, &w, startPos, 0, tsCols, pSDataBlock->info.rows, numOfOutput);
- }
+ setResultRowInterpo(pResult, RESULT_ROW_END_INTERP);
+ setNotInterpoWindowKey(pInfo->pCtx, pQueryAttr->numOfOutput, RESULT_ROW_START_INTERP);
+ doApplyFunctions(pRuntimeEnv, pInfo->pCtx, &w, startPos, 0, tsCols, pSDataBlock->info.rows, numOfOutput);
+ }
// restore current time window
ret = setResultOutputBufByKey(pRuntimeEnv, pResultRowInfo, pSDataBlock->info.tid, &win, masterScan, &pResult, tableGroupId, pInfo->pCtx,
@@ -1821,7 +1820,7 @@ void setBlockStatisInfo(SQLFunctionCtx *pCtx, SSDataBlock* pSDataBlock, SColInde
pCtx->hasNull = hasNull(pColIndex, pStatis);
// set the statistics data for primary time stamp column
- if (pCtx->functionId == TSDB_FUNC_SPREAD && pColIndex->colId == PRIMARYKEY_TIMESTAMP_COL_INDEX) {
+ if ((pCtx->functionId == TSDB_FUNC_SPREAD || pCtx->functionId == TSDB_FUNC_ELAPSED) && pColIndex->colId == PRIMARYKEY_TIMESTAMP_COL_INDEX) {
pCtx->preAggVals.isSet = true;
pCtx->preAggVals.statis.min = pSDataBlock->info.window.skey;
pCtx->preAggVals.statis.max = pSDataBlock->info.window.ekey;
@@ -6203,7 +6202,17 @@ group_finished_exit:
return true;
}
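+
+// Window-boundary interpolation state is kept per table; it must be cleared
+// before a sequential scan moves on to the next table (see doSTableIntervalAgg).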
+static void resetInterpolation(SQLFunctionCtx *pCtx, SQueryRuntimeEnv* pRuntimeEnv, int32_t numOfOutput) {
+ if (!pRuntimeEnv->pQueryAttr->timeWindowInterpo) {
+ return;
+ }
+ for (int32_t i = 0; i < numOfOutput; ++i) {
+ pCtx[i].start.key = INT64_MIN;
+ pCtx[i].end.key = INT64_MIN;
+ }
+ *(TSKEY *)pRuntimeEnv->prevRow[0] = INT64_MIN;
+}
static void doTimeEveryImpl(SOperatorInfo* pOperator, SQLFunctionCtx *pCtx, SSDataBlock* pBlock, bool newgroup) {
STimeEveryOperatorInfo* pEveryInfo = (STimeEveryOperatorInfo*) pOperator->info;
@@ -6431,6 +6440,7 @@ static SSDataBlock* doSTableIntervalAgg(void* param, bool* newgroup) {
SOperatorInfo* upstream = pOperator->upstream[0];
+ STableId prevId = {0, 0};
while(1) {
publishOperatorProfEvent(upstream, QUERY_PROF_BEFORE_OPERATOR_EXEC);
SSDataBlock* pBlock = upstream->exec(upstream, newgroup);
@@ -6440,6 +6450,12 @@ static SSDataBlock* doSTableIntervalAgg(void* param, bool* newgroup) {
break;
}
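+    // a new table has begun in the sequential scan: reset the per-table
+    // interpolation state before applying the window functions to its blocks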
+ if (prevId.tid != pBlock->info.tid || prevId.uid != pBlock->info.uid) {
+ resetInterpolation(pIntervalInfo->pCtx, pRuntimeEnv, pOperator->numOfOutput);
+ prevId.uid = pBlock->info.uid;
+ prevId.tid = pBlock->info.tid;
+ }
+
// the pDataBlock are always the same one, no need to call this again
STableQueryInfo* pTableQueryInfo = pRuntimeEnv->current;
@@ -8785,6 +8801,7 @@ SQInfo* createQInfoImpl(SQueryTableMsg* pQueryMsg, SGroupbyExpr* pGroupbyExpr, S
pQueryAttr->tsCompQuery = pQueryMsg->tsCompQuery;
pQueryAttr->simpleAgg = pQueryMsg->simpleAgg;
pQueryAttr->pointInterpQuery = pQueryMsg->pointInterpQuery;
+ pQueryAttr->needTableSeqScan = pQueryMsg->needTableSeqScan;
pQueryAttr->needReverseScan = pQueryMsg->needReverseScan;
pQueryAttr->stateWindow = pQueryMsg->stateWindow;
pQueryAttr->vgId = vgId;
diff --git a/src/query/src/qPlan.c b/src/query/src/qPlan.c
index 27a22f70832dc9669aa473b03820d84d4736b497..eb3a3f36207d27d610e29bd890a56b2ef411157c 100644
--- a/src/query/src/qPlan.c
+++ b/src/query/src/qPlan.c
@@ -538,7 +538,7 @@ SArray* createTableScanPlan(SQueryAttr* pQueryAttr) {
} else {
if (pQueryAttr->queryBlockDist) {
op = OP_TableBlockInfoScan;
- } else if (pQueryAttr->tsCompQuery || pQueryAttr->diffQuery) {
+ } else if (pQueryAttr->tsCompQuery || pQueryAttr->diffQuery || pQueryAttr->needTableSeqScan) {
op = OP_TableSeqScan;
} else if (pQueryAttr->needReverseScan || pQueryAttr->pointInterpQuery) {
op = OP_DataBlocksOptScan;
diff --git a/tests/pytest/fulltest.sh b/tests/pytest/fulltest.sh
index 8206510357bc2c1e5856a8cb897e625f6a26b1cb..0e259090699c6930b532fc55cdb9b850b7199669 100755
--- a/tests/pytest/fulltest.sh
+++ b/tests/pytest/fulltest.sh
@@ -371,6 +371,7 @@ python3 ./test.py -f functions/function_irate.py
python3 ./test.py -f functions/function_ceil.py
python3 ./test.py -f functions/function_floor.py
python3 ./test.py -f functions/function_round.py
+python3 ./test.py -f functions/function_elapsed.py
python3 ./test.py -f functions/function_mavg.py
python3 ./test.py -f functions/function_csum.py
diff --git a/tests/pytest/functions/function_elapsed.py b/tests/pytest/functions/function_elapsed.py
new file mode 100644
index 0000000000000000000000000000000000000000..6bc54bfc1c7fc173bf9447da1a9b0aa4aba3e525
--- /dev/null
+++ b/tests/pytest/functions/function_elapsed.py
@@ -0,0 +1,97 @@
+###################################################################
+# Copyright (c) 2020 by TAOS Technologies, Inc.
+# All rights reserved.
+#
+# This file is proprietary and confidential to TAOS Technologies.
+# No part of this file may be reproduced, stored, transmitted,
+# disclosed or used in any form or by any means other than as
+# expressly provided by the written permission from Jianhui Tao
+#
+###################################################################
+
+# -*- coding: utf-8 -*-
+
+import sys
+import taos
+from util.log import *
+from util.cases import *
+from util.sql import *
+from functions.function_elapsed_case import *
+
+class TDTestCase:
+ def init(self, conn, logSql):
+ tdLog.debug("start to execute %s" % __file__)
+ tdSql.init(conn.cursor())
+
+    def genTime(self, no):
+        # map a minute index to zero-padded (hour, minute) strings
+        hs = "%02d" % (no // 60)
+        ms = "%02d" % (no % 60)
+        return hs, ms
+
+ def general(self):
+ # normal table
+ tdSql.execute("create database wxy_db minrows 10 maxrows 200")
+ tdSql.execute("use wxy_db")
+ tdSql.execute("create table t1(ts timestamp, i int, b bigint, f float, d double, bin binary(10), s smallint, t tinyint, bl bool, n nchar(10), ts1 timestamp)")
+ for i in range(1, 1001):
+ hs, ms = self.genTime(i)
+ if i < 500:
+ ret = tdSql.execute("insert into t1(ts, i, b) values (\"2021-11-22 %s:%s:00\", %d, 1)" % (hs, ms, i))
+ else:
+ ret = tdSql.execute("insert into t1(ts, i, b) values (\"2021-11-22 %s:%s:00\", %d, 0)" % (hs, ms, i))
+ tdSql.query("select count(*) from t1")
+ tdSql.checkEqual(int(tdSql.getData(0, 0)), 1000)
+
+ # empty normal table
+ tdSql.execute("create table t2(ts timestamp, i int, b bigint, f float, d double, bin binary(10), s smallint, t tinyint, bl bool, n nchar(10), ts1 timestamp)")
+
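+        # a nanosecond-precision database for the elapsed(ts, 1b) / elapsed(ts, 1u) cases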
+ tdSql.execute("create database wxy_db_ns precision \"ns\"")
+ tdSql.execute("use wxy_db_ns")
+ tdSql.execute("create table t1 (ts timestamp, f float)")
+ tdSql.execute("insert into t1 values('2021-11-18 00:00:00.000000100', 1)"
+ "('2021-11-18 00:00:00.000000200', 2)"
+ "('2021-11-18 00:00:00.000000300', 3)"
+ "('2021-11-18 00:00:00.000000500', 4)")
+
+ # super table
+ tdSql.execute("use wxy_db")
+ tdSql.execute("create stable st1(ts timestamp, i int, b bigint, f float, d double, bin binary(10), s smallint, t tinyint, bl bool, n nchar(10), ts1 timestamp) tags(id int)")
+ tdSql.execute("create table st1s1 using st1 tags(1)")
+ tdSql.execute("create table st1s2 using st1 tags(2)")
+ for i in range(1, 1001):
+ hs, ms = self.genTime(i)
+ if 0 == i % 2:
+ ret = tdSql.execute("insert into st1s1(ts, i) values (\"2021-11-22 %s:%s:00\", %d)" % (hs, ms, i))
+ else:
+ ret = tdSql.execute("insert into st1s2(ts, i) values (\"2021-11-22 %s:%s:00\", %d)" % (hs, ms, i))
+ tdSql.query("select count(*) from st1s1")
+ tdSql.checkEqual(int(tdSql.getData(0, 0)), 500)
+ tdSql.query("select count(*) from st1s2")
+ tdSql.checkEqual(int(tdSql.getData(0, 0)), 500)
+ # empty super table
+ tdSql.execute("create stable st2(ts timestamp, i int, b bigint, f float, d double, bin binary(10), s smallint, t tinyint, bl bool, n nchar(10), ts1 timestamp) tags(id int)")
+        tdSql.execute("create table st2s1 using st2 tags(1)")
+        tdSql.execute("create table st2s2 using st2 tags(2)")
+
+ tdSql.execute("create stable st3(ts timestamp, i int, b bigint, f float, d double, bin binary(10), s smallint, t tinyint, bl bool, n nchar(10), ts1 timestamp) tags(id int)")
+
+ def run(self):
+ tdSql.prepare()
+ self.general()
+ ElapsedCase().run()
+
+
+ def stop(self):
+ tdSql.close()
+ tdLog.success("%s successfully executed" % __file__)
+
+tdCases.addWindows(__file__, TDTestCase())
+tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/pytest/functions/function_elapsed_case.py b/tests/pytest/functions/function_elapsed_case.py
new file mode 100644
index 0000000000000000000000000000000000000000..56610a9347c3ab90a9addc64dd62a6ed60758abf
--- /dev/null
+++ b/tests/pytest/functions/function_elapsed_case.py
@@ -0,0 +1,374 @@
+###################################################################
+# Copyright (c) 2020 by TAOS Technologies, Inc.
+# All rights reserved.
+#
+# This file is proprietary and confidential to TAOS Technologies.
+# No part of this file may be reproduced, stored, transmitted,
+# disclosed or used in any form or by any means other than as
+# expressly provided by the written permission from Jianhui Tao
+#
+###################################################################
+
+# -*- coding: utf-8 -*-
+
+import sys
+import taos
+from util.log import *
+from util.cases import *
+from util.sql import *
+
+class ElapsedCase:
+ def __init__(self, restart = False):
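+        # restart == True means the case is re-run against a database kept from a
+        # previous run, so continuousQueryTest() first drops the CQ tables it created.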
+ self.restart = restart
+
+ def selectTest(self):
+ tdSql.execute("use wxy_db")
+
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.checkRows(1)
+ tdSql.checkCols(1)
+
+ tdSql.query("select elapsed(ts, 1m) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.checkEqual(int(tdSql.getData(0, 0)), 999)
+
+ tdSql.query("select elapsed(ts), elapsed(ts, 1m), elapsed(ts, 10m) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.checkEqual(int(tdSql.getData(0, 1)), 999)
+ tdSql.checkEqual(int(tdSql.getData(0, 2)), 99)
+
+ tdSql.query("select elapsed(ts), count(*), avg(f), twa(f), irate(f), sum(f), stddev(f), leastsquares(f, 1, 1), "
+ "min(f), max(f), first(f), last(f), percentile(i, 20), apercentile(i, 30), last_row(i), spread(i) "
+ "from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.checkRows(1)
+ tdSql.checkCols(16)
+ tdSql.checkEqual(int(tdSql.getData(0, 1)), 1000)
+
+ tdSql.query("select elapsed(ts) + 10, elapsed(ts) - 20, elapsed(ts) * 0, elapsed(ts) / 10, elapsed(ts) / elapsed(ts, 1m) from t1 "
+ "where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.checkRows(1)
+ tdSql.checkCols(5)
+ tdSql.checkEqual(int(tdSql.getData(0, 2)), 0)
+
+ tdSql.query("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by tbname")
+ tdSql.checkRows(2)
+ tdSql.checkCols(2) # append tbname
+
+ tdSql.query("select elapsed(ts, 10m) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by tbname")
+ tdSql.checkEqual(int(tdSql.getData(0, 0)), 99)
+ tdSql.checkEqual(int(tdSql.getData(1, 0)), 99)
+
+ tdSql.query("select elapsed(ts), elapsed(ts, 10m), elapsed(ts, 100m) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by tbname")
+ tdSql.checkEqual(int(tdSql.getData(0, 1)), 99)
+ tdSql.checkEqual(int(tdSql.getData(0, 2)), 9)
+        # stddev(f), leastsquares() and percentile() are omitted: they are not supported in this super table query.
+ tdSql.query("select elapsed(ts), count(*), avg(f), twa(f), irate(f), sum(f), min(f), max(f), first(f), last(f), apercentile(i, 30), last_row(i), spread(i) "
+ "from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by tbname")
+ tdSql.checkRows(2)
+ tdSql.checkCols(14) # append tbname
+ tdSql.checkEqual(int(tdSql.getData(0, 1)), 500)
+
+ tdSql.query("select elapsed(ts) + 10, elapsed(ts) - 20, elapsed(ts) * 0, elapsed(ts) / 10, elapsed(ts) / elapsed(ts, 1m) "
+ "from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by tbname")
+ tdSql.checkRows(2)
+ tdSql.checkCols(6) # append tbname
+ tdSql.checkEqual(int(tdSql.getData(0, 2)), 0)
+
+ tdSql.query("select elapsed(ts), tbname from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by tbname")
+ tdSql.checkRows(2)
+        tdSql.checkCols(3) # group by tbname appends tbname once more
+
+ tdSql.execute("use wxy_db_ns")
+ tdSql.query("select elapsed(ts, 1b), elapsed(ts, 1u) from t1")
+ tdSql.checkRows(1)
+ tdSql.checkCols(2)
+
+ self.selectIllegalTest()
+
+    # This has little to do with the elapsed function itself, so only a simple test is done here.
+ def whereTest(self):
+ tdSql.execute("use wxy_db")
+
+ tdSql.query("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' and id = 1 group by tbname")
+ tdSql.checkRows(1)
+ tdSql.checkCols(2) # append tbname
+
+    # This has little to do with the elapsed function itself, so only a simple test is done here.
+ def sessionTest(self):
+ tdSql.execute("use wxy_db")
+
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' session(ts, 10s)")
+ tdSql.checkRows(1000)
+
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' session(ts, 70s)")
+ tdSql.checkRows(1)
+
+    # This has little to do with the elapsed function itself, so only a simple test is done here.
+ def stateWindowTest(self):
+ tdSql.execute("use wxy_db")
+
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' state_window(i)")
+ tdSql.checkRows(1000)
+
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' state_window(b)")
+ tdSql.checkRows(2)
+
+ def intervalTest(self):
+ tdSql.execute("use wxy_db")
+
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(1m)")
+ tdSql.checkRows(1000)
+
+ # The first window has 9 records, and the last window has 1 record.
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(10m)")
+ tdSql.checkRows(101)
+ tdSql.checkEqual(int(tdSql.getData(0, 1)), 9 * 60 * 1000)
+ tdSql.checkEqual(int(tdSql.getData(100, 1)), 0)
+
+ # Skip windows without data.
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(35s)")
+ tdSql.checkRows(1000)
+
+ tdSql.query("select elapsed(ts), count(*), avg(f), twa(f), irate(f), sum(f), stddev(f), leastsquares(f, 1, 1), "
+ "min(f), max(f), first(f), last(f), percentile(i, 20), apercentile(i, 30), last_row(i), spread(i) "
+ "from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(20m)")
+        tdSql.checkRows(51) # 1000/20 + 1 (the last point opens a new window); windows are half-open intervals.
+        tdSql.checkCols(17) # the window start timestamp is prepended
+
+ tdSql.query("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(40s) group by tbname")
+ tdSql.checkRows(1000)
+
+ tdSql.query("select elapsed(ts) + 10, elapsed(ts) - 20, elapsed(ts) * 0, elapsed(ts) / 10, elapsed(ts) / elapsed(ts, 1m) "
+ "from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(30m) group by tbname")
+        tdSql.checkRows(68) # 2 subtables * ceil(1000/30)
+        tdSql.checkCols(7) # the window start timestamp is prepended and tbname is appended
+
+    # This has little to do with the elapsed function itself, so only a simple test is done here.
+ def fillTest(self):
+ tdSql.execute("use wxy_db")
+
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(30s) fill(value, 1000)")
+ tdSql.checkRows(2880) # The range of window conditions is 24 hours.
+ tdSql.checkEqual(int(tdSql.getData(0, 1)), 1000)
+
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(30s) fill(prev)")
+ tdSql.checkRows(2880) # The range of window conditions is 24 hours.
+ tdSql.checkData(0, 1, None)
+
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(30s) fill(null)")
+ tdSql.checkRows(2880) # The range of window conditions is 24 hours.
+ tdSql.checkData(0, 1, None)
+
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(30s) fill(linear)")
+ tdSql.checkRows(2880) # The range of window conditions is 24 hours.
+
+ tdSql.query("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(30s) fill(next)")
+ tdSql.checkRows(2880) # The range of window conditions is 24 hours.
+
+    # elapsed() only supports group by tbname; those cases are already covered in selectTest().
+ def groupbyTest(self):
+ tdSql.execute("use wxy_db")
+
+ tdSql.error("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by i")
+ tdSql.error("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by i")
+
+    def orderbyCheck(self, sql, elapsedCol):
+        resultAsc = tdSql.getResult(sql)
+        resultDesc = tdSql.getResult(sql + " order by ts desc")
+        resultRows = len(resultAsc)
+        for i in range(resultRows):
+            tdSql.checkEqual(resultAsc[i][elapsedCol], resultDesc[resultRows - i - 1][elapsedCol])
+
+ def splitStableResult(self, sql, elapsedCol, tbnameCol):
+ subtable = {}
+ result = tdSql.getResult(sql)
+        # group the elapsed values by subtable name
+        for row in result:
+            subtable.setdefault(row[tbnameCol], []).append(row[elapsedCol])
+ return subtable
+
+    def doOrderbyCheck(self, resultAsc, resultDesc):
+        resultRows = len(resultAsc)
+        for i in range(resultRows):
+            tdSql.checkEqual(resultAsc[i], resultDesc[resultRows - i - 1])
+
+ def orderbyForStableCheck(self, sql, elapsedCol, tbnameCol):
+ subtableAsc = self.splitStableResult(sql, elapsedCol, tbnameCol)
+ subtableDesc = self.splitStableResult(sql + " order by ts desc", elapsedCol, tbnameCol)
+        for tbname, ascValues in subtableAsc.items():
+            descValues = subtableDesc.get(tbname)
+            if descValues is None:
+                tdLog.exit("%s failed: subtable %s does not exist" % (sql, tbname))
+            else:
+                self.doOrderbyCheck(ascValues, descValues)
+
+    # The order by clause only changes the output order and has no effect on the calculated results.
+ def orderbyTest(self):
+ tdSql.execute("use wxy_db")
+
+ self.orderbyCheck("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'", 0)
+ self.orderbyCheck("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(40s)", 1)
+ self.orderbyCheck("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(1m)", 1)
+ self.orderbyCheck("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(10m)", 1)
+ self.orderbyCheck("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(150m)", 1)
+ self.orderbyCheck("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(222m)", 1)
+ self.orderbyCheck("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(1000m)", 1)
+
+ self.orderbyForStableCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by tbname", 0, 1)
+ self.orderbyForStableCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(40s) group by tbname", 1, 2)
+ self.orderbyForStableCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(1m) group by tbname", 1, 2)
+ self.orderbyForStableCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(10m) group by tbname", 1, 2)
+ self.orderbyForStableCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(150m) group by tbname", 1, 2)
+ self.orderbyForStableCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(222m) group by tbname", 1, 2)
+ self.orderbyForStableCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(1000m) group by tbname", 1, 2)
+
+ def slimitCheck(self, sql):
+ tdSql.checkEqual(tdSql.query(sql + " slimit 0"), 0)
+ tdSql.checkEqual(tdSql.query(sql + " slimit 1 soffset 0"), tdSql.query(sql + " slimit 0, 1"))
+ tdSql.checkEqual(tdSql.query(sql + " slimit 1, 1"), tdSql.query(sql) / 2)
+ tdSql.checkEqual(tdSql.query(sql + " slimit 10"), tdSql.query(sql))
+
+    # This has little to do with the elapsed function itself, so only a simple test is done here.
+ def slimitTest(self):
+ tdSql.execute("use wxy_db")
+
+ self.slimitCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by tbname")
+ self.slimitCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(40s) group by tbname")
+
+ def limitCheck(self, sql, groupby = 0):
+ rows = tdSql.query(sql)
+ if rows > 0:
+ tdSql.checkEqual(tdSql.query(sql + " limit 0"), 0)
+ if 1 == groupby:
+ tdSql.checkEqual(tdSql.query(sql + " limit 1"), 2)
+ tdSql.checkEqual(tdSql.query(sql + " limit %d offset %d" % (rows / 2, rows / 3)), tdSql.query(sql + " limit %d, %d" % (rows / 3, rows / 2)))
+ tdSql.checkEqual(tdSql.query(sql + " limit %d" % (rows / 2)), rows)
+ else:
+ tdSql.checkEqual(tdSql.query(sql + " limit 1"), 1)
+ tdSql.checkEqual(tdSql.query(sql + " limit %d offset %d" % (rows / 2, rows / 3)), tdSql.query(sql + " limit %d, %d" % (rows / 3, rows / 2)))
+ tdSql.checkEqual(tdSql.query(sql + " limit %d" % (rows + 1)), rows)
+
+    # This has little to do with the elapsed function itself, so only a simple test is done here.
+ def limitTest(self):
+ tdSql.execute("use wxy_db")
+
+ self.limitCheck("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ self.limitCheck("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(40s)")
+
+ self.limitCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by tbname", 1)
+ self.limitCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(40s) group by tbname", 1)
+
+ def fromCheck(self, sqlTemplate, table):
+ tdSql.checkEqual(tdSql.getResult(sqlTemplate % table), tdSql.getResult(sqlTemplate % ("(select * from %s)" % table)))
+ tdSql.query(sqlTemplate % ("(select last(ts) from %s interval(10s))" % table))
+ tdSql.query(sqlTemplate % ("(select elapsed(ts) from %s interval(10s))" % table))
+
+    # This has little to do with the elapsed function itself, so only a simple test is done here.
+ def fromTest(self):
+ tdSql.execute("use wxy_db")
+
+ self.fromCheck("select elapsed(ts) from %s where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'", "t1")
+ self.fromCheck("select elapsed(ts) from %s where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(40s)", "t1")
+ tdSql.query("select * from (select elapsed(ts) from t1 interval(10s)) where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.query("select * from (select elapsed(ts) from t1)")
+ # empty table test
+ tdSql.checkEqual(tdSql.query("select elapsed(ts) from t2"), 0)
+ tdSql.checkEqual(tdSql.query("select elapsed(ts) from st2 group by tbname"), 0)
+ tdSql.checkEqual(tdSql.query("select elapsed(ts) from st3 group by tbname"), 0)
+        # Tags are not allowed in a plain table query, so there is no need to test super tables here.
+ tdSql.error("select elapsed(ts) from (select * from st1)")
+
+ def joinCheck(self, sqlTemplate, rtable):
+ tdSql.checkEqual(tdSql.getResult(sqlTemplate % (rtable, "")), tdSql.getResult(sqlTemplate % ("t1, %s t2" % rtable, "t1.ts = t2.ts and ")))
+
+    # This has little to do with the elapsed function itself, so only a simple test is done here.
+ def joinTest(self):
+ tdSql.execute("use wxy_db")
+
+ # st1s1 is a subset of t1.
+ self.joinCheck("select elapsed(ts) from %s where %s ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'", "st1s1")
+ self.joinCheck("select elapsed(ts) from %s where %s ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(150m)", "st1s1")
+        # A join query does not support group by, so there is no need to test super tables.
+
+ def unionAllCheck(self, sql1, sql2):
+ rows1 = tdSql.query(sql1)
+ rows2 = tdSql.query(sql2)
+ tdSql.checkEqual(tdSql.query(sql1 + " union all " + sql2), rows1 + rows2)
+
+    # This has little to do with the elapsed function itself, so only a simple test is done here.
+ def unionAllTest(self):
+ tdSql.execute("use wxy_db")
+
+ self.unionAllCheck("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'",
+ "select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-22 01:00:00'")
+ self.unionAllCheck("select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(40s)",
+ "select elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(150m)")
+ self.unionAllCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by tbname",
+ "select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-22 02:00:00' group by tbname")
+ self.unionAllCheck("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(1m) group by tbname",
+ "select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' interval(222m) group by tbname")
+
+    # This has little to do with the elapsed function itself, so only a simple test is done here.
+ def continuousQueryTest(self):
+ tdSql.execute("use wxy_db")
+
+ if (self.restart):
+ tdSql.execute("drop table elapsed_t")
+ tdSql.execute("drop table elapsed_st")
+ tdSql.execute("create table elapsed_t as select elapsed(ts) from t1 interval(1m) sliding(30s)")
+ tdSql.execute("create table elapsed_st as select elapsed(ts) from st1 interval(1m) sliding(30s) group by tbname")
+
+ def selectIllegalTest(self):
+ tdSql.execute("use wxy_db")
+ tdSql.error("select elapsed(1) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed('2021-11-18 00:00:10') from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(now) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(i) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(b) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(f) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(d) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(bin) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(s) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(t) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(bl) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(n) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts1) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(*) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts, '1s') from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts, i) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ #tdSql.error("select elapsed(ts, now) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts, ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts + 1) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts, 1b) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts, 1u) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(max(ts)) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select distinct elapsed(ts) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select distinct elapsed(ts) from st1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00' group by tbname")
+ tdSql.error("select elapsed(ts), i from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts), ts from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts), _c0 from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts), top(i, 1) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts), bottom(i, 1) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+        tdSql.error("select elapsed(ts), interp(i) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts), diff(i) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts), derivative(i, 1s, 0) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts), ceil(i) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts), floor(i) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+ tdSql.error("select elapsed(ts), round(i) from t1 where ts > '2021-11-22 00:00:00' and ts < '2021-11-23 00:00:00'")
+
+ def run(self):
+ self.selectTest()
+ self.whereTest()
+ self.sessionTest()
+ self.stateWindowTest()
+ self.intervalTest()
+ self.fillTest()
+ self.groupbyTest()
+ self.orderbyTest()
+ self.slimitTest()
+ self.limitTest()
+ self.fromTest()
+ self.joinTest()
+ self.unionAllTest()
+ self.continuousQueryTest()
diff --git a/tests/pytest/functions/function_elapsed_restart.py b/tests/pytest/functions/function_elapsed_restart.py
new file mode 100644
index 0000000000000000000000000000000000000000..8b492267abdd8ea2d2b2fc27ee2e957e1038f48d
--- /dev/null
+++ b/tests/pytest/functions/function_elapsed_restart.py
@@ -0,0 +1,35 @@
+###################################################################
+# Copyright (c) 2020 by TAOS Technologies, Inc.
+# All rights reserved.
+#
+# This file is proprietary and confidential to TAOS Technologies.
+# No part of this file may be reproduced, stored, transmitted,
+# disclosed or used in any form or by any means other than as
+# expressly provided by the written permission from Jianhui Tao
+#
+###################################################################
+
+# -*- coding: utf-8 -*-
+
+import sys
+import taos
+from util.log import *
+from util.cases import *
+from util.sql import *
+from functions.function_elapsed_case import *
+
+class TDTestCase:
+ def init(self, conn, logSql):
+ tdLog.debug("start to execute %s" % __file__)
+ tdSql.init(conn.cursor())
+
+ def run(self):
+ tdSql.prepare()
+ ElapsedCase(True).run()
+
+ def stop(self):
+ tdSql.close()
+ tdLog.success("%s successfully executed" % __file__)
+
+tdCases.addWindows(__file__, TDTestCase())
+tdCases.addLinux(__file__, TDTestCase())