@@ -44,7 +44,7 @@ For more details on features, please read through the entire documentation.
## Competitive Advantages
By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine differentiates itself from other [time series databases](https://tdengine.com/tsdb/), with the following advantages.
- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database that solves the high-cardinality issue, supporting billions of data collection points while outperforming other time-series databases in data ingestion, querying, and data compression.
...
@@ -123,13 +123,12 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
## Comparison with other databases
- [TDengine vs. InfluxDB](https://tdengine.com/tsdb-comparison-influxdb-vs-tdengine/)
- [TDengine vs. TimescaleDB](https://tdengine.com/tsdb-comparison-timescaledb-vs-tdengine/)
- [TDengine vs. OpenTSDB](https://tdengine.com/performance-tdengine-vs-opentsdb/)
- [TDengine vs. Cassandra](https://tdengine.com/performance-tdengine-vs-cassandra/)
## More readings
- [Introduction to Time-Series Database](https://tdengine.com/tsdb/)
- [Introduction to TDengine competitive advantages](https://tdengine.com/tdengine/)
@@ -6,7 +6,7 @@ description: This document describes how to install TDengine in a Docker contain
This document describes how to install TDengine in a Docker container and perform queries and inserts.
- The easiest way to explore TDengine is through [TDengine Cloud](https://cloud.tdengine.com).
- To get started with TDengine in a non-containerized environment, see [Quick Install from Package](../../get-started/package).
- If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
@@ -10,7 +10,7 @@ import PkgListV3 from "/components/PkgListV3";
This document describes how to install TDengine on Linux/Windows/macOS and perform queries and inserts.
- The easiest way to explore TDengine is through [TDengine Cloud](https://cloud.tdengine.com).
- To get started with TDengine on Docker, see [Quick Install on Docker](../../get-started/docker).
- If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
@@ -288,6 +288,6 @@ Prior to establishing connection, please make sure TDengine is already running a
</Tabs>
:::tip
If the connection fails, in most cases it is caused by an incorrect FQDN or firewall configuration. Please refer to the section "Unable to establish connection" in the [FAQ](../../train-faq/faq).
@@ -23,7 +23,7 @@ By subscribing to a topic, a consumer can obtain the latest data in that topic i
To implement these features, TDengine indexes its write-ahead log (WAL) file for fast random access and provides configurable methods for replacing and retaining this file. You can define a retention period and size for this file. For information, see the CREATE DATABASE statement. In this way, the WAL file is transformed into a persistent storage engine that remembers the order in which events occur. However, note that configuring an overly long retention period for your WAL files makes database compression inefficient. TDengine then uses the WAL file instead of the time-series database as its storage engine for queries in the form of topics. TDengine reads the data from the WAL file; uses a unified query engine instance to perform filtering, transformations, and other operations; and finally pushes the data to consumers.
Note: Data subscription consumes data from the WAL. If WAL files are deleted according to the WAL retention policy, the deleted data can no longer be consumed. Therefore, set reasonable values for the `WAL_RETENTION_PERIOD` or `WAL_RETENTION_SIZE` parameters when creating the database, and make sure your application consumes the data in a timely manner so that no data is lost. This behavior is similar to Kafka and other widely used message queue products.
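For illustration, here is a minimal sketch of setting a WAL retention period when creating a database through JDBC; the database name, connection URL, and retention value are assumptions for this example, not recommended settings:
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class WalRetentionDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical REST connection; adjust host, port, and credentials to your deployment.
        String url = "jdbc:TAOS-RS://localhost:6041/?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // Keep WAL files for 24 hours (86400 seconds); see the CREATE DATABASE
            // statement for the full list of retention options and their units.
            stmt.executeUpdate("CREATE DATABASE IF NOT EXISTS power WAL_RETENTION_PERIOD 86400");
        }
    }
}
```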
## Data Schema and API
...
@@ -294,7 +294,6 @@ You configure the following parameters when creating a consumer:
| `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none` (default) |
| `enable.auto.commit` | boolean | Commit automatically; true: the user application does not need to commit explicitly; false: the user application must handle commits itself | Default value is true |
| `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds | |
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages | Default value: false |
The method of specifying these parameters depends on the language used; a Java sketch follows the tables below:
| `auto.commit.interval.ms` | string | Interval for automatic commits, in milliseconds | |
| `auto.offset.reset` | string | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none` (default) |
| `enable.heartbeat.background` | string | Background heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | Specify `true` or `false` |
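As an illustration for Java, the parameters above can be passed through a `Properties` object; the sketch below assumes the `com.taosdata.jdbc.tmq.TaosConsumer` API from taos-jdbcdriver, and the server address and topic name are hypothetical:
```java
import com.taosdata.jdbc.tmq.ConsumerRecords;
import com.taosdata.jdbc.tmq.TaosConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class SubscribeDemo {
    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        config.setProperty("bootstrap.servers", "localhost:6030"); // hypothetical server address
        config.setProperty("group.id", "group1");
        config.setProperty("auto.offset.reset", "earliest");   // start from the oldest data retained in the WAL
        config.setProperty("enable.auto.commit", "true");      // commit offsets automatically
        config.setProperty("auto.commit.interval.ms", "1000");
        config.setProperty("msg.with.table.name", "true");
        // Depending on the connector version, a value deserializer (e.g. a
        // ReferenceDeserializer subclass) may also need to be configured.

        TaosConsumer<Object> consumer = new TaosConsumer<>(config);
        consumer.subscribe(Collections.singletonList("topic_meters")); // hypothetical topic
        ConsumerRecords<Object> records = consumer.poll(Duration.ofMillis(500));
        // process records here, then poll again in a loop ...
        consumer.close();
    }
}
```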
The preceding SQL statement shows all supertables in the current TDengine database.
- The output time range of `INTERP` is specified by the `RANGE(timestamp1, timestamp2)` parameter, with timestamp1 <= timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified.
- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY(time_unit)`. Starting from timestamp1, one interpolation is performed for every time interval specified by the `time_unit` parameter. The parameter `time_unit` must be an integer, with no quotes, with a time unit of: a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). For example, `EVERY(500a)` will interpolate every 500 milliseconds.
- Interpolation is performed based on the `FILL` parameter. For more information about the FILL clause, see [FILL Clause](../distinguished/#fill-clause).
- `INTERP` can be applied to a supertable, in which case it interpolates over the primary key sorted data of all its child tables. It can also be used with `partition by tbname` on a supertable to generate an interpolation for each child table's timeline; see the sketch after this list.
- Pseudocolumn `_irowts` can be used along with `INTERP` to return the timestamps associated with interpolation points (supported in version 3.0.2.0 and later).
- Pseudocolumn `_isfilled` can be used along with `INTERP` to indicate whether the results are original records or data points generated by the interpolation algorithm (supported in version 3.0.3.0 and later).
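Below is a minimal sketch of running such an interpolation query through JDBC; the database name `test`, the supertable `stb`, and the time range are illustrative assumptions:
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InterpDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:TAOS-RS://localhost:6041/?user=root&password=taosdata"; // hypothetical
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // Interpolate column c0 once per second over the given range,
             // producing one timeline per child table of the supertable.
             ResultSet rs = stmt.executeQuery(
                     "select tbname, _irowts, _isfilled, interp(c0) from test.stb "
                             + "partition by tbname "
                             + "range('2020-02-01 00:00:00', '2020-02-01 00:00:18') "
                             + "every(1s) fill(prev)")) {
            while (rs.next()) {
                System.out.printf("%s %s filled=%s c0=%s%n",
                        rs.getString(1), rs.getTimestamp(2), rs.getBoolean(3), rs.getObject(4));
            }
        }
    }
}
```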
| 0x2301 | connection already closed | The connection has been closed, check the connection status, or recreate the connection to execute the relevant instructions. |
| 0x2302 | this operation is NOT supported currently! | This operation is not supported by the current connection type; use another connection mode. |
| 0x2303 | invalid variables | The parameter is invalid. Check the interface specification and adjust the parameter type and size. |
| 0x2304 | statement is closed | The statement is closed. Check whether the statement is closed and used again, or whether the connection is normal. |
| 0x2305 | resultSet is closed | The result set has been released; check whether it is used again after being released. |
| 0x2306 | Batch is empty! | Add parameters to the prepared statement before executing the batch. |
| 0x2307 | Can not issue data manipulation statements with executeQuery() | Update operations should use executeUpdate(), not executeQuery(). |
| 0x2308 | Can not issue SELECT via executeUpdate() | Query operations should use executeQuery(), not executeUpdate(). |
| 0x230d | parameter index out of range | The parameter is out of bounds. Check the proper range of the parameter. |
| 0x230e | connection already closed | The connection has been closed. Please check whether the connection is closed and used again, or whether the connection is normal. |
| 0x230f | unknown sql type in tdengine | Check the data type supported by TDengine. |
| 0x2310 | can't register JDBC-JNI driver | The native driver cannot be registered. Please check whether the url is correct. |
| 0x2312 | url is not set | Check whether the REST connection url is correct. |
| 0x2314 | numeric value out of range | Check that the correct interface is used for the numeric types in the obtained result set. |
| 0x2315 | unknown taos type in tdengine | Check that the correct TDengine data type is specified when converting a TDengine data type to a JDBC data type. |
| 0x2317 | | A wrong request type was used in the REST connection. |
| 0x2318 | | A data transmission exception occurred during the REST connection. Please check the network status and try again. |
| 0x2319 | user is required | The user name information is missing when creating the connection |
| 0x231a | password is required | Password information is missing when creating a connection |
| 0x231c | httpEntity is null, sql: | Execution exception occurred during the REST connection |
| 0x2350 | unknown error | Unknown exception; please report it to the developers on GitHub. |
| 0x2352 | Unsupported encoding | An unsupported character encoding set is specified under the native Connection. |
| 0x2353 | internal error of database, please see taoslog for more details | An error occurs when the prepare statement is executed on the native connection. Check the taos log to locate the fault. |
| 0x2354 | JNI connection is NULL | When the command is executed, the native Connection is closed. Check the connection to TDengine. |
| 0x2355 | JNI result set is NULL | The result set is abnormal. Please check the connection status and try again. |
| 0x2356 | invalid num of fields | The meta information of the result set obtained by the native connection does not match. |
| 0x2357 | empty sql string | Fill in the correct SQL for execution. |
| 0x2359 | JNI alloc memory failed, please see taoslog for more details | Memory allocation for the native connection failed. Check the taos log to locate the problem. |
| 0x2371 | consumer properties must not be null! | The parameter is empty when you create a subscription. Please fill in the correct parameter. |
| 0x2372 | configs contain empty key, failed to set consumer property | The parameter key contains a null value. Please enter the correct parameter. |
| 0x2373 | failed to set consumer property, | The parameter value contains a null value. Please enter the correct parameter. |
| 0x2375 | topic reference has been destroyed | The topic reference is released during the creation of the data subscription. Check the connection to TDengine. |
| 0x2376 | failed to set consumer topic, topic name is empty | During data subscription creation, the subscription topic name is empty. Check that the specified topic name is correct. |
| 0x2377 | consumer reference has been destroyed | The subscription data transfer channel has been closed. Please check the connection to TDengine. |
| 0x2378 | consumer create error | Failed to create a data subscription. Check the taos log according to the error message to locate the fault. |
| - | can't create connection with server within | Increase the connection timeout via the httpConnectTimeout parameter, or check the connection to taosAdapter. |
| - | failed to complete the task within the specified time | Increase the execution timeout via the messageWaitTimeout parameter, or check the connection to taosAdapter. |
TDengine currently supports the timestamp, numeric, character, and Boolean data types; the corresponding type conversions to Java are as follows:
...
@@ -82,7 +169,7 @@ Add following dependency in the `pom.xml` file of your Maven project:
<dependency>
    <groupId>com.taosdata.jdbc</groupId>
    <artifactId>taos-jdbcdriver</artifactId>
    <version>3.2.1</version>
</dependency>
```
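Once the dependency is added, a connection can be opened through standard JDBC; a minimal sketch, assuming a locally running TDengine with default credentials (URL and credentials are illustrative):
```java
import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectDemo {
    public static void main(String[] args) throws Exception {
        // Native connection on port 6030; use jdbc:TAOS-RS://host:6041 for a REST connection.
        String url = "jdbc:TAOS://localhost:6030/?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}
```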
...
@@ -97,7 +184,7 @@ cd taos-connector-jdbc
mvn clean install -Dmaven.test.skip=true
```
After you have compiled taos-jdbcdriver, the `taos-jdbcdriver-3.2.*-dist.jar` file is created in the target directory. The compiled JAR file is automatically stored in your local Maven repository.
</TabItem>
</Tabs>
...
@@ -333,35 +420,6 @@ while(resultSet.next()){
> The query is consistent with operating a relational database. When using subscripts to get the contents of the returned fields, you have to start from 1. However, we recommend using the field names to get the values of the fields in the result set.
### Handling exceptions
After an error is reported, the error message and error code can be obtained through SQLException.
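For example, a minimal sketch of reading the error code and message from a `SQLException`; the failing query is hypothetical:
```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ErrorHandlingDemo {
    static void queryWithErrorHandling(Connection conn) {
        try (Statement stmt = conn.createStatement()) {
            stmt.executeQuery("SELECT * FROM table_that_does_not_exist"); // hypothetical failing query
        } catch (SQLException e) {
            // Both the vendor error code and the message are available on the exception.
            System.err.println("Error code: 0x" + Integer.toHexString(e.getErrorCode()));
            System.err.println("Message: " + e.getMessage());
        }
    }
}
```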
TDengine has significantly improved the bind APIs to support data writing (INSERT) scenarios. Writing data in this way avoids the resource consumption of SQL syntax parsing, resulting in significant write performance improvements in many cases.
...
@@ -369,9 +427,12 @@ TDengine has significantly improved the bind APIs to support data writing (INSER
**Note:**
- JDBC REST connections do not currently support bind interface
- The following sample code is based on taos-jdbcdriver-3.2.1
- The setString method should be called for binary type data, and the setNString method should be called for nchar type data
- both setString and setNString require the user to declare the width of the corresponding column in the size parameter of the table definition
- Do not use `db.?` in prepareStatement when specifying the database together with the table name; use `?` directly and specify the database in setTableName, for example: `prepareStatement.setTableName("db.t1")`.
@@ -599,21 +660,7 @@ public class ParameterBindingDemo {
}
```
**Note**: both setString and setNString require the user to declare the width of the corresponding column in the size parameter of the table definition
The methods to set VALUES columns:
...
@@ -630,17 +677,203 @@ public void setString(int columnIndex, ArrayList<String> list, int size) throws
public void setNString(int columnIndex, ArrayList<String> list, int size) throws SQLException
```
</TabItem>
<TabItem value="ws" label="WebSocket connection">
```java
public class ParameterBindingDemo {
    private static final String host = "127.0.0.1";
    private static final Random random = new Random(System.currentTimeMillis());
    private static final int BINARY_COLUMN_SIZE = 30;
    private static final String[] schemaList = {
            "create table stable1(ts timestamp, f1 tinyint, f2 smallint, f3 int, f4 bigint) tags(t1 tinyint, t2 smallint, t3 int, t4 bigint)",
            // ...
                    pstmt.setTimestamp(0, new Timestamp(current + j));
                    pstmt.setNString(1, "California.SanFrancisco");
                    pstmt.addBatch();
                }
                pstmt.executeBatch();
            }
        }
    }
}
```
</TabItem>
</Tabs>
The methods to set TAGS values:
```java
public void setTagNull(int index, int type)
public void setTagBoolean(int index, boolean value)
public void setTagInt(int index, int value)
public void setTagByte(int index, byte value)
public void setTagShort(int index, short value)
public void setTagLong(int index, long value)
public void setTagTimestamp(int index, long value)
public void setTagFloat(int index, float value)
public void setTagDouble(int index, double value)
public void setTagString(int index, String value)
public void setTagNString(int index, String value)
```
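As a hypothetical usage sketch of these tag setters, assuming an open native connection and the `unwrap()` pattern used elsewhere in this document; the supertable, subtable name, and tag values are illustrative:
```java
import com.taosdata.jdbc.TSDBPreparedStatement;

import java.sql.Connection;
import java.sql.SQLException;

public class TagBindSketch {
    static void bindTags(Connection conn) throws SQLException {
        String sql = "INSERT INTO ? USING stable1 TAGS(?,?,?,?) VALUES(?,?,?,?,?)";
        try (TSDBPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSDBPreparedStatement.class)) {
            pstmt.setTableName("subtable_1"); // hypothetical subtable name
            // Tag indexes are zero-based and follow the TAGS(...) order above.
            pstmt.setTagByte(0, (byte) 1);
            pstmt.setTagShort(1, (short) 2);
            pstmt.setTagInt(2, 3);
            pstmt.setTagLong(3, 4L);
            // ... then bind the VALUES columns in batches as shown in the demos above.
        }
    }
}
```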
### Schemaless Writing
TDengine supports schemaless writing. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. For more information, see [Schemaless Writing](../../schemaless).
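A minimal sketch, assuming the connector's `SchemalessWriter` class; the InfluxDB line protocol record, database name, and connection URL below are illustrative:
```java
import com.taosdata.jdbc.SchemalessWriter;
import com.taosdata.jdbc.enums.SchemalessProtocolType;
import com.taosdata.jdbc.enums.SchemalessTimestampType;

import java.sql.Connection;
import java.sql.DriverManager;

public class SchemalessDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:TAOS://localhost:6030/test?user=root&password=taosdata"; // hypothetical
        String line = "meters,location=California.LosAngeles,groupid=2 "
                + "current=11.8,voltage=221,phase=0.28 1648432611249000000";
        try (Connection conn = DriverManager.getConnection(url)) {
            SchemalessWriter writer = new SchemalessWriter(conn);
            // Write one InfluxDB-line-protocol record with nanosecond timestamps.
            writer.write(line, SchemalessProtocolType.LINE, SchemalessTimestampType.NANO_SECONDS);
        }
    }
}
```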
| 3.0.1 - 3.0.4 | Fix occasional incorrect parsing of resultSet data. 3.0.1 is compiled on JDK 11; you are advised to use another version in JDK 8 environments |
| 3.0.0 | Support for TDengine 3.0 |
| 2.0.42 | Fix wasNull interface return value in WebSocket connection |
| 2.0.41 | Fix decode method of username and password in REST connection |
@@ -12,8 +12,8 @@ After TDengine starts, it automatically writes many metrics in specific interval
To deploy TDinsight, we need
- a single-node TDengine server or a multi-node TDengine cluster, and a [Grafana] server. This dashboard requires TDengine 3.0.1.0 or later, with the monitoring feature enabled. For detailed configuration, please refer to [TDengine monitoring configuration](../config/#monitoring-parameters).
- taosAdapter has been installed and running, please refer to [taosAdapter](../taosadapter).
- taosKeeper has been installed and running, please refer to [taosKeeper](../taosKeeper).
Please record
- The endpoint of taosAdapter REST service, for example `http://tdengine.local:6041`
# /home/TDinternal/community/source/libs/scalar/src/sclvector.c:1109:66: runtime error: signed integer overflow: 9223372034707292160 + 1676867897049 cannot be represented in type 'long int'
#0 0x7f2d64f5a808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144
#1 0x7f2d63fcf459 in strerror /build/glibc-SzIz7B/glibc-2.31/string/strerror.c:38
runtime_error=`cat ${LOG_DIR}/*.asan | grep "runtime error" | grep -v "trees.c:873" | grep -v "sclfunc.c.*outside the range of representable values of type" | grep -v "signed integer overflow" | grep -v "strerror.c" | grep -v "asan_malloc_linux.cc" | wc -l`
tdSql.query(f"select tbname, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(prev)")
tdSql.checkRows(48)
foriinrange(0,18):
tdSql.checkData(i,0,'ctb1')
foriinrange(18,34):
tdSql.checkData(i,0,'ctb2')
foriinrange(34,48):
tdSql.checkData(i,0,'ctb3')
tdSql.checkData(0,1,'2020-02-01 00:00:01.000')
tdSql.checkData(17,1,'2020-02-01 00:00:18.000')
tdSql.checkData(18,1,'2020-02-01 00:00:03.000')
tdSql.checkData(33,1,'2020-02-01 00:00:18.000')
tdSql.checkData(34,1,'2020-02-01 00:00:05.000')
tdSql.checkData(47,1,'2020-02-01 00:00:18.000')
foriinrange(0,6):
tdSql.checkData(i,3,1)
foriinrange(6,12):
tdSql.checkData(i,3,7)
foriinrange(12,18):
tdSql.checkData(i,3,13)
foriinrange(18,24):
tdSql.checkData(i,3,3)
foriinrange(24,30):
tdSql.checkData(i,3,9)
foriinrange(30,34):
tdSql.checkData(i,3,15)
foriinrange(34,40):
tdSql.checkData(i,3,5)
foriinrange(40,46):
tdSql.checkData(i,3,11)
foriinrange(46,48):
tdSql.checkData(i,3,17)
tdSql.query(f"select tbname, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(next)")
tdSql.checkRows(48)
foriinrange(0,14):
tdSql.checkData(i,0,'ctb1')
foriinrange(14,30):
tdSql.checkData(i,0,'ctb2')
foriinrange(30,48):
tdSql.checkData(i,0,'ctb3')
tdSql.checkData(0,1,'2020-02-01 00:00:00.000')
tdSql.checkData(13,1,'2020-02-01 00:00:13.000')
tdSql.checkData(14,1,'2020-02-01 00:00:00.000')
tdSql.checkData(29,1,'2020-02-01 00:00:15.000')
tdSql.checkData(30,1,'2020-02-01 00:00:00.000')
tdSql.checkData(47,1,'2020-02-01 00:00:17.000')
foriinrange(0,2):
tdSql.checkData(i,3,1)
foriinrange(2,8):
tdSql.checkData(i,3,7)
foriinrange(8,14):
tdSql.checkData(i,3,13)
foriinrange(14,18):
tdSql.checkData(i,3,3)
foriinrange(18,24):
tdSql.checkData(i,3,9)
foriinrange(24,30):
tdSql.checkData(i,3,15)
foriinrange(30,36):
tdSql.checkData(i,3,5)
foriinrange(36,42):
tdSql.checkData(i,3,11)
foriinrange(42,48):
tdSql.checkData(i,3,17)
tdSql.query(f"select tbname, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
tdSql.checkRows(39)
foriinrange(0,13):
tdSql.checkData(i,0,'ctb1')
foriinrange(13,26):
tdSql.checkData(i,0,'ctb2')
foriinrange(26,39):
tdSql.checkData(i,0,'ctb3')
tdSql.checkData(0,1,'2020-02-01 00:00:01.000')
tdSql.checkData(12,1,'2020-02-01 00:00:13.000')
tdSql.checkData(13,1,'2020-02-01 00:00:03.000')
tdSql.checkData(25,1,'2020-02-01 00:00:15.000')
tdSql.checkData(26,1,'2020-02-01 00:00:05.000')
tdSql.checkData(38,1,'2020-02-01 00:00:17.000')
foriinrange(0,13):
tdSql.checkData(i,3,i+1)
foriinrange(13,26):
tdSql.checkData(i,3,i-10)
foriinrange(26,39):
tdSql.checkData(i,3,i-21)
# select interp from supertable partition by column
tdSql.query(f"select c0, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by c0 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(null)")
tdSql.checkRows(171)
tdSql.query(f"select c0, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by c0 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(value, 0)")
tdSql.checkRows(171)
tdSql.query(f"select c0, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by c0 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(prev)")
tdSql.checkRows(90)
tdSql.error(f"select interp(c0) from {dbname}.{stbname} range('2020-02-01 00:00:04', '2020-02-01 00:00:16') every(1s) fill(null)")
tdSql.query(f"select c0, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by c0 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(next)")
#tdSql.checkRows(13)
tdSql.checkRows(90)
#tdSql.query(f"select interp(c0) from {dbname}.{ctbname1} range('2020-02-01 00:00:04', '2020-02-01 00:00:16') every(1s) fill(null)")
tdSql.query(f"select c0, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by c0 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
#tdSql.checkRows(13)
tdSql.checkRows(9)
# select interp from supertable partition by tag
tdSql.query(f"select t1, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by t1 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(null)")
tdSql.checkRows(57)
tdSql.query(f"select t1, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by t1 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(value, 0)")
tdSql.checkRows(57)
tdSql.query(f"select t1, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by t1 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(prev)")
tdSql.checkRows(48)
tdSql.query(f"select t1, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by t1 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(next)")
tdSql.checkRows(48)
tdSql.query(f"select t1, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by t1 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
tdSql.checkRows(39)
# select interp from supertable filter
tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where ts between '2020-02-01 00:00:01.000' and '2020-02-01 00:00:13.000' range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where ts between '2020-02-01 00:00:01.000' and '2020-02-01 00:00:13.000' partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 40")
tdSql.checkRows(39)
tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where ts between '2020-02-01 00:00:01.000' and '2020-02-01 00:00:13.000' partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 10")