diff --git a/README-CN.md b/README-CN.md
index 7df2733a2e76f602363f219d61cc1f877f48f12e..6bfab379fe89c4cec91b48c65d514e97039634ee 100644
--- a/README-CN.md
+++ b/README-CN.md
@@ -21,17 +21,17 @@

TDengine 是一款开源、高性能、云原生的时序数据库 (Time-Series Database, TSDB)。TDengine 能被广泛运用于物联网、工业互联网、车联网、IT 运维、金融等领域。除核心的时序数据库功能外,TDengine 还提供缓存、数据订阅、流式计算等功能,是一极简的时序数据处理平台,最大程度的减小系统设计的复杂度,降低研发和运营成本。与其他时序数据库相比,TDengine 的主要优势如下:

-- 高性能:通过创新的存储引擎设计,无论是数据写入还是查询,TDengine 的性能比通用数据库快 10 倍以上,也远超其他时序数据库,存储空间不及通用数据库的1/10。
+- **高性能**:通过创新的存储引擎设计,无论是数据写入还是查询,TDengine 的性能比通用数据库快 10 倍以上,也远超其他时序数据库,存储空间不及通用数据库的1/10。

-- 云原生:通过原生分布式的设计,充分利用云平台的优势,TDengine 提供了水平扩展能力,具备弹性、韧性和可观测性,支持k8s部署,可运行在公有云、私有云和混合云上。
+- **云原生**:通过原生分布式的设计,充分利用云平台的优势,TDengine 提供了水平扩展能力,具备弹性、韧性和可观测性,支持k8s部署,可运行在公有云、私有云和混合云上。

-- 极简时序数据平台:TDengine 内建消息队列、缓存、流式计算等功能,应用无需再集成 Kafka/Redis/HBase/Spark 等软件,大幅降低系统的复杂度,降低应用开发和运营成本。
+- **极简时序数据平台**:TDengine 内建消息队列、缓存、流式计算等功能,应用无需再集成 Kafka/Redis/HBase/Spark 等软件,大幅降低系统的复杂度,降低应用开发和运营成本。

-- 分析能力:支持 SQL,同时为时序数据特有的分析提供SQL扩展。通过超级表、存储计算分离、分区分片、预计算、自定义函数等技术,TDengine 具备强大的分析能力。
+- **分析能力**:支持 SQL,同时为时序数据特有的分析提供SQL扩展。通过超级表、存储计算分离、分区分片、预计算、自定义函数等技术,TDengine 具备强大的分析能力。

-- 简单易用:无任何依赖,安装、集群几秒搞定;提供REST以及各种语言连接器,与众多第三方工具无缝集成;提供命令行程序,便于管理和即席查询;提供各种运维工具。
+- **简单易用**:无任何依赖,安装、集群几秒搞定;提供REST以及各种语言连接器,与众多第三方工具无缝集成;提供命令行程序,便于管理和即席查询;提供各种运维工具。

-- 核心开源:TDengine 的核心代码包括集群功能全部开源,截止到2022年8月1日,全球超过 135.9k 个运行实例,GitHub Star 18.7k,Fork 4.4k,社区活跃。
+- **核心开源**:TDengine 的核心代码包括集群功能全部开源,截止到2022年8月1日,全球超过 135.9k 个运行实例,GitHub Star 18.7k,Fork 4.4k,社区活跃。

# 文档

diff --git a/README.md b/README.md
index c915fe3aef8d46389af223708146a6a47dc8af0a..6baabed7be32fff97c4809f76666f0becf62040b 100644
--- a/README.md
+++ b/README.md
@@ -20,23 +20,19 @@ English | [简体中文](README-CN.md) | We are hiring, check [here](https://tde

# What is TDengine?

+TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:

-TDengine is an open source, high performance , cloud native time-series database (Time-Series Database, TSDB).
+- **High-Performance**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.

-TDengine can be optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT, IT operation and maintenance, finance and other fields. In addition to the core time series database functions, TDengine also provides functions such as caching, data subscription, and streaming computing. It is a minimalist time series data processing platform that minimizes the complexity of system design and reduces R&D and operating costs. Compared with other time series databases, the main advantages of TDengine are as follows:
+- **Simplified Solution**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.

+- **Cloud Native**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for Kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds.

-- High-Performance: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while out performing other time-series databases for data ingestion, querying and data compression.
+- **Ease of Use**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.

-- Simplified Solution: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
+- **Easy Data Analytics**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.

-- Cloud Native: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds.
-
-- Ease of Use: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.
-
-- Easy Data Analytics: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
-
-- Open Source: TDengine’s core modules, including cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub, an active developer community, and over 137k running instances worldwide.
+- **Open Source**: TDengine’s core modules, including the cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.

# Documentation

@@ -44,14 +40,9 @@ For user manual, system design and architecture, please refer to [TDengine Docum

# Building

- At the moment, TDengine server supports running on Linux, Windows systems.Any OS application can also choose the RESTful interface of taosAdapter to connect the taosd service . TDengine supports X64/ARM64 CPU , and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
-
-
-You can choose to install through source code according to your needs, [container](https://docs.taosdata.com/get-started/docker/), [installation package](https://docs.taosdata.com/get-started/package/) or [Kubenetes](https://docs.taosdata.com/deployment/k8s/) to install. This quick guide only applies to installing from source.
-
-
+You can choose to install TDengine through source code, [container](https://docs.taosdata.com/get-started/docker/), [installation package](https://docs.taosdata.com/get-started/package/) or [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide only applies to installing from source.

TDengine provide a few useful tools such as taosBenchmark (was named taosdemo) and taosdump. They were part of TDengine. By default, TDengine compiling does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to make them be compiled with TDengine.

diff --git a/docs/en/07-develop/_sub_java.mdx b/docs/en/07-develop/_sub_java.mdx
index e7de158cc8d2b0b686b25bbe96e7a092c2a68e51..d14b5fd6095dd90f89dd2c2e828858585cfddff9 100644
--- a/docs/en/07-develop/_sub_java.mdx
+++ b/docs/en/07-develop/_sub_java.mdx
@@ -1,5 +1,7 @@
 ```java
 {{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
+{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
+{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
 ```
 ```java
 {{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}

diff --git a/docs/en/14-reference/03-connector/java.mdx b/docs/en/14-reference/03-connector/java.mdx
index cbf7daa879ae58e6c4f08c23330d943d50f7f4bc..1ea033eab4c7f9afb2cd4bf1eb82f5fd5b31d31a 100644
--- a/docs/en/14-reference/03-connector/java.mdx
+++ b/docs/en/14-reference/03-connector/java.mdx
@@ -130,7 +130,7 @@ The configuration parameters in the URL are as follows:
- charset: The character set used by the client, the default value is the system character set.
- locale: Client locale, by default, use the system's current locale.
- timezone: The time zone used by the client, the default value is the system's current time zone.
-- batchfetch: true: pulls result sets in batches when executing queries; false: pulls result sets row by row. The default value is: false. Enabling batch pulling and obtaining a batch of data can improve query performance when the query data volume is large.
+- batchfetch: true: pulls result sets in batches when executing queries; false: pulls result sets row by row. The default value is: true. Enabling batch pulling and obtaining a batch of data can improve query performance when the query data volume is large.
- batchErrorIgnore:true: When executing statement executeBatch, if there is a SQL execution failure in the middle, the following SQL will continue to be executed. false: No more statements after the failed SQL are executed. The default value is: false.

For more information about JDBC native connections, see [Video Tutorial](https://www.taosdata.com/blog/2020/11/11/1955.html).

diff --git a/docs/zh/05-get-started/03-package.md b/docs/zh/05-get-started/03-package.md
index b7b9d41d2dff74aa0239dff0bd136ea0816f3fb8..9cd2446ba918ca14856c52c02986b2e7a8462847 100644
--- a/docs/zh/05-get-started/03-package.md
+++ b/docs/zh/05-get-started/03-package.md
@@ -7,43 +7,51 @@

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import PkgListV3 from "/components/PkgListV3";

-在 Linux 系统上,TDengine 开源版本提供 deb 和 rpm 格式安装包,用户可以根据自己的运行环境选择合适的安装包。其中 deb 支持 Debian/Ubuntu 及衍生系统,rpm 支持 CentOS/RHEL/SUSE 及衍生系统。同时我们也为企业用户提供 tar.gz 格式安装包,也支持通过 `apt-get` 工具从线上进行安装。TDengine 也提供 Windows x64 平台的安装包。您也可以[用Docker立即体验](../../get-started/docker/)。如果您希望对 TDengine 贡献代码或对内部实现感兴趣,请参考我们的 [TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装.
+TDengine 完整的软件包包括服务端(taosd)、用于与第三方系统对接并提供 RESTful 接口的 taosAdapter、应用驱动(taosc)、命令行程序 (CLI,taos) 和一些工具软件。目前 taosAdapter 仅在 Linux 系统上安装和运行,后续将支持 Windows、macOS 等系统。TDengine 除了提供多种语言的连接器之外,还通过 [taosAdapter](../../reference/taosadapter/) 提供 [RESTful 接口](../../reference/rest-api/)。
+
+为方便使用,标准的服务端安装包包含了 taos、taosd、taosAdapter、taosdump、taosBenchmark、TDinsight 安装脚本和示例代码;如果您只需要用到服务端程序和客户端连接的 C/C++ 语言支持,也可以仅下载 lite 版本的安装包。
+
+在 Linux 系统上,TDengine 开源版本提供 deb 和 rpm 格式安装包,用户可以根据自己的运行环境选择合适的安装包。其中 deb 支持 Debian/Ubuntu 及衍生系统,rpm 支持 CentOS/RHEL/SUSE 及衍生系统。同时我们也为企业用户提供 tar.gz 格式安装包,也支持通过 `apt-get` 工具从线上进行安装。TDengine 也提供 Windows x64 平台的安装包。您也可以[用 Docker 立即体验](../../get-started/docker/)。需要注意的是,rpm 和 deb 包不含 taosdump 和 TDinsight 安装脚本,这些工具需要通过安装 taosTools 包获得。如果您希望对 TDengine 贡献代码或对内部实现感兴趣,请参考我们的 [TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装。
+

## 安装

-1. 从列表中下载获得 deb 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.deb;
+1. 从列表中下载获得 deb 安装包;

-2. 进入到 TDengine-server-3.0.0.0-Linux-x64.deb 安装包所在目录,执行如下的安装命令:
+2. 进入到安装包所在目录,执行如下的安装命令:

```bash
-sudo dpkg -i TDengine-server-3.0.0.0-Linux-x64.deb
+# 替换为下载的安装包版本
+sudo dpkg -i TDengine-server-<version>-Linux-x64.deb
```

-1. 从列表中下载获得 rpm 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.rpm;
+1. 从列表中下载获得 rpm 安装包;

-2. 进入到 TDengine-server-3.0.0.0-Linux-x64.rpm 安装包所在目录,执行如下的安装命令:
+2. 进入到安装包所在目录,执行如下的安装命令:

```bash
-sudo rpm -ivh TDengine-server-3.0.0.0-Linux-x64.rpm
+# 替换为下载的安装包版本
+sudo rpm -ivh TDengine-server-<version>-Linux-x64.rpm
```

-1. 从列表中下载获得 tar.gz 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.tar.gz;
+1. 从列表中下载获得 tar.gz 安装包;

-2. 进入到 TDengine-server-3.0.0.0-Linux-x64.tar.gz 安装包所在目录,先解压文件后,进入子目录,执行其中的 install.sh 安装脚本:
+2. 进入到安装包所在目录,先解压文件后,进入子目录,执行其中的 install.sh 安装脚本:

```bash
-tar -zxvf TDengine-server-3.0.0.0-Linux-x64.tar.gz
+# 替换为下载的安装包版本
+tar -zxvf TDengine-server-<version>-Linux-x64.tar.gz
```

解压后进入相应路径,执行

@@ -60,9 +68,9 @@ install.sh 安装脚本在执行过程中,会通过命令行交互界面询问

-1. 从列表中下载获得 exe 安装程序,例如 TDengine-server-3.0.0.0-Windows-x64.exe;
+1. 从列表中下载获得 exe 安装程序;

-2. 运行 TDengine-server-3.0.0.0-Windows-x64.exe 来安装 TDengine。
+2. 运行可执行程序来安装 TDengine。

@@ -197,7 +205,7 @@ Query OK, 2 row(s) in set (0.003128s)

## 使用 taosBenchmark 体验写入速度

-启动 TDengine 的服务,在 Linux 终端执行 `taosBenchmark` (曾命名为 `taosdemo`):
+启动 TDengine 的服务,在 Linux 或 Windows 终端执行 `taosBenchmark` (曾命名为 `taosdemo`):

```bash
taosBenchmark
```

diff --git a/docs/zh/07-develop/07-tmq.mdx b/docs/zh/07-develop/07-tmq.mdx
index c9ac17808175841cd048d06a9460a8e1bb44bd51..c487835e2d93668c1848584bb3974785787ceee0 100644
--- a/docs/zh/07-develop/07-tmq.mdx
+++ b/docs/zh/07-develop/07-tmq.mdx
@@ -132,6 +132,58 @@ func (c *Consumer) Unsubscribe() error

+
+
+```rust
+impl TBuilder for TmqBuilder
+    fn from_dsn<D: IntoDsn>(dsn: D) -> Result<Self, Self::Error>
+    fn build(&self) -> Result<Self::Target, Self::Error>
+
+impl AsAsyncConsumer for Consumer
+    async fn subscribe<T: Into<String>, I: IntoIterator<Item = T> + Send>(
+        &mut self,
+        topics: I,
+    ) -> Result<(), Self::Error>;
+    fn stream(
+        &self,
+    ) -> Pin<
+        Box<
+            dyn '_
+                + Send
+                + futures::Stream<
+                    Item = Result<(Self::Offset, MessageSet<Self::Meta, Self::Data>), Self::Error>,
+                >,
+        >,
+    >;
+    async fn commit(&self, offset: Self::Offset) -> Result<(), Self::Error>;
+
+    async fn unsubscribe(self);
+```
+
+可在 <https://docs.rs/taos> 上查看详细 API 说明。
+
+
+
+
+```js
+function TMQConsumer(config)
+
+function subscribe(topic)
+
+function consume(timeout)
+
+function subscription()
+
+function unsubscribe()
+
+function commit(msg)
+
+function close()
+```
+
+

```csharp
@@ -157,27 +209,6 @@ void Close()
```

-
-
-
-```node
-function TMQConsumer(config)
-
-function subscribe(topic)
-
-function consume(timeout)
-
-function subscription()
-
-function unsubscribe()
-
-function commit(msg)
-
-function close()
-```
-
-
-

## 写入数据

@@ -321,28 +352,6 @@ public class MetersDeserializer extends ReferenceDeserializer {

-
-
-Python 使用以下配置项创建一个 Consumer 实例。
-
-| 参数名称 | 类型 | 参数说明 | 备注 |
-| :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- |
-| `td_connect_ip` | string | 用于创建连接,同 `taos_connect` | |
-| `td_connect_user` | string | 用于创建连接,同 `taos_connect` | |
-| `td_connect_pass` | string | 用于创建连接,同 `taos_connect` | |
-| `td_connect_port` | string | 用于创建连接,同 `taos_connect` | |
-| `group_id` | string | 消费组 ID,同一消费组共享消费进度 | **必填项**。最大长度:192。 |
-| `client_id` | string | 客户端 ID | 最大长度:192。 |
-| `auto_offset_reset` | string | 消费组订阅的初始位置 | 可选:`earliest`, `latest`, `none`(default) |
-| `enable_auto_commit` | string | 启用自动提交 | 合法值:`true`, `false`。 |
-| `auto_commit_interval_ms` | string | 以毫秒为单位的自动提交时间间隔 | |
-| `enable_heartbeat_background` | string | 启用后台心跳,启用后即使长时间不 poll 消息也不会造成离线 | 合法值:`true`, `false` |
-| `experimental_snapshot_enable` | string | 从 WAL 开始消费,还是从 TSBS 开始消费 | 合法值:`true`, `false` |
-| `msg_with_table_name` | string | 是否允许从消息中解析表名 | 合法值:`true`, `false` |
-| `timeout` | int | 消费者拉去的超时时间 | |
-
-

```go
@@ -394,35 +403,46 @@ if err != nil {
```

-
+

-```csharp
-using TDengineTMQ;
+```rust
+let mut dsn: Dsn = "taos://".parse()?;
+dsn.set("group.id", "group1");
+dsn.set("client.id", "test");
+dsn.set("auto.offset.reset", "earliest");

-// 根据需要,设置消费组 (GourpId)、自动提交 (EnableAutoCommit)、
-// 自动提交时间间隔 (AutoCommitIntervalMs)、用户名 (TDConnectUser)、密码 (TDConnectPasswd) 等参数
-var cfg = new ConsumerConfig
- {
-    EnableAutoCommit = "true"
-    AutoCommitIntervalMs = "1000"
-    GourpId = "TDengine-TMQ-C#",
-    TDConnectUser = "root",
-    TDConnectPasswd = "taosdata",
-    AutoOffsetReset = "earliest"
-    MsgWithTableName = "true",
-    TDConnectIp = "127.0.0.1",
-    TDConnectPort = "6030"
- };
-
-var consumer = new ConsumerBuilder(cfg).Build();
+
+let tmq = TmqBuilder::from_dsn(dsn)?;
+let mut consumer = tmq.build()?;
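+
+// 注:group.id 为必填项,同一消费组共享消费进度;
+// auto.offset.reset 可取 earliest / latest / none,各参数含义可参考下文的消费参数表。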
```

+
+
+Python 使用以下配置项创建一个 Consumer 实例。
+
+| 参数名称 | 类型 | 参数说明 | 备注 |
+| :----------------------------: | :----: | -------------------------------------------------------- | ------------------------------------------- |
+| `td_connect_ip` | string | 用于创建连接,同 `taos_connect` | |
+| `td_connect_user` | string | 用于创建连接,同 `taos_connect` | |
+| `td_connect_pass` | string | 用于创建连接,同 `taos_connect` | |
+| `td_connect_port` | string | 用于创建连接,同 `taos_connect` | |
+| `group_id` | string | 消费组 ID,同一消费组共享消费进度 | **必填项**。最大长度:192。 |
+| `client_id` | string | 客户端 ID | 最大长度:192。 |
+| `auto_offset_reset` | string | 消费组订阅的初始位置 | 可选:`earliest`, `latest`, `none`(default) |
+| `enable_auto_commit` | string | 启用自动提交 | 合法值:`true`, `false`。 |
+| `auto_commit_interval_ms` | string | 以毫秒为单位的自动提交时间间隔 | |
+| `enable_heartbeat_background` | string | 启用后台心跳,启用后即使长时间不 poll 消息也不会造成离线 | 合法值:`true`, `false` |
+| `experimental_snapshot_enable` | string | 从 WAL 开始消费,还是从 TSDB 开始消费 | 合法值:`true`, `false` |
+| `msg_with_table_name` | string | 是否允许从消息中解析表名 | 合法值:`true`, `false` |
+| `timeout` | int | 消费者拉取的超时时间 | |
+
+
+

-``` node
+```js
// 根据需要,设置消费组 (group.id)、自动提交 (enable.auto.commit)、
// 自动提交时间间隔 (auto.commit.interval.ms)、用户名 (td.connect.user)、密码 (td.connect.pass) 等参数

let consumer = taos.consumer({
  'group.id': 'tg2',
  'td.connect.user': 'root',
  'td.connect.pass': 'taosdata',
  'auto.offset.reset','earliest',
  'msg.with.table.name': 'true',
  'td.connect.ip','127.0.0.1',
  'td.connect.port','6030'
  });
+```
+
+
+
+
+```csharp
+using TDengineTMQ;
+
+// 根据需要,设置消费组 (GourpId)、自动提交 (EnableAutoCommit)、
+// 自动提交时间间隔 (AutoCommitIntervalMs)、用户名 (TDConnectUser)、密码 (TDConnectPasswd) 等参数
+var cfg = new ConsumerConfig
+ {
+    EnableAutoCommit = "true",
+    AutoCommitIntervalMs = "1000",
+    GourpId = "TDengine-TMQ-C#",
+    TDConnectUser = "root",
+    TDConnectPasswd = "taosdata",
+    AutoOffsetReset = "earliest",
+    MsgWithTableName = "true",
+    TDConnectIp = "127.0.0.1",
+    TDConnectPort = "6030"
+ };
+
+var consumer = new ConsumerBuilder(cfg).Build();
```

@@ -487,28 +532,25 @@ if err != nil {
```

+

-
-
-```csharp
-// 创建订阅 topics 列表
-List<string> topics = new List<string>();
-topics.add("tmq_topic");
-// 启动订阅
-consumer.Subscribe(topics);
+```rust
+consumer.subscribe(["tmq_meters"]).await?;
```

+

```python
consumer = TaosConsumer('topic_ctb_column', group_id='vg2')
```

+

-```node
+```js
// 创建订阅 topics 列表
let topics = ['topic_test']
@@ -518,6 +560,18 @@
consumer.subscribe(topics);

+
+
+```csharp
+// 创建订阅 topics 列表
+List<string> topics = new List<string>();
+topics.add("tmq_topic");
+// 启动订阅
+consumer.Subscribe(topics);
+```
+
+

## 消费

@@ -551,14 +605,6 @@
while(running){

-
-```python
-for msg in consumer:
-    for row in msg:
-        print(row)
-```
-

```go
@@ -575,6 +621,64 @@
for {

+
+
+```rust
+{
+    let mut stream = consumer.stream();
+
+    while let Some((offset, message)) = stream.try_next().await? {
+        // get information from offset
+
+        // the topic
+        let topic = offset.topic();
+        // the vgroup id, like partition id in kafka.
+        let vgroup_id = offset.vgroup_id();
+        println!("* in vgroup id {vgroup_id} of topic {topic}\n");
+
+        if let Some(data) = message.into_data() {
+            while let Some(block) = data.fetch_raw_block().await? {
+                // one block for one table, get table name if needed
+                let name = block.table_name();
+                let records: Vec<Record> = block.deserialize().try_collect()?;
+                println!(
+                    "** table: {}, got {} records: {:#?}\n",
+                    name.unwrap(),
+                    records.len(),
+                    records
+                );
+            }
+        }
+        consumer.commit(offset).await?;
+    }
+}
+```
+
+
+
+
+```python
+for msg in consumer:
+    for row in msg:
+        print(row)
+```
+
+
+
+
+
+```js
+while(true){
+    msg = consumer.consume(200);
+    // process message(consumeResult)
+    console.log(msg.topicPartition);
+    console.log(msg.block);
+    console.log(msg.fields)
+}
+```
+
+
+

```csharp
@@ -590,20 +694,6 @@ while (true)
```

-
-
-```node
-while(true){
-    msg = consumer.consume(200);
-    // process message(consumeResult)
-    console.log(msg.topicPartition);
-    console.log(msg.block);
-    console.log(msg.fields)
- }
-```
-
-
-

## 结束消费

@@ -634,16 +724,6 @@

consumer.close();

-
-
-```python
-/* 取消订阅 */
-consumer.unsubscribe();
-
-/* 关闭消费 */
-consumer.close();
-

@@ -652,26 +732,45 @@ consumer.Close()
```

-
-```csharp
-// 取消订阅
-consumer.Unsubscribe();
+

-// 关闭消费
-consumer.Close();
+```rust
+consumer.unsubscribe().await;
```

+
+
+
+```py
+# 取消订阅
+consumer.unsubscribe()
+# 关闭消费
+consumer.close()
+```
+
+

-```node
+```js
consumer.unsubscribe();
consumer.close();
```

+
+
+```csharp
+// 取消订阅
+consumer.Unsubscribe();
+
+// 关闭消费
+consumer.Close();
+```
+
+

## 删除 *topic*

diff --git a/docs/zh/07-develop/_sub_java.mdx b/docs/zh/07-develop/_sub_java.mdx
index e7de158cc8d2b0b686b25bbe96e7a092c2a68e51..d14b5fd6095dd90f89dd2c2e828858585cfddff9 100644
--- a/docs/zh/07-develop/_sub_java.mdx
+++ b/docs/zh/07-develop/_sub_java.mdx
@@ -1,5 +1,7 @@
 ```java
 {{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
+{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
+{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
 ```
 ```java
 {{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}

diff --git a/docs/zh/14-reference/03-connector/java.mdx b/docs/zh/14-reference/03-connector/java.mdx
index 6a78902b1edb19bb140ef1176b88f8280de54c93..183994313e205bbaf13f30d534fa151a23216708 100644
--- a/docs/zh/14-reference/03-connector/java.mdx
+++ b/docs/zh/14-reference/03-connector/java.mdx
@@ -131,7 +131,7 @@ url 中的配置参数如下:
- charset:客户端使用的字符集,默认值为系统字符集。
- locale:客户端语言环境,默认值系统当前 locale。
- timezone:客户端使用的时区,默认值为系统当前时区。
-- batchfetch: true:在执行查询时批量拉取结果集;false:逐行拉取结果集。默认值为:false。开启批量拉取同时获取一批数据在查询数据量较大时批量拉取可以有效的提升查询性能。
+- batchfetch: true:在执行查询时批量拉取结果集;false:逐行拉取结果集。默认值为:true。开启批量拉取同时获取一批数据,在查询数据量较大时批量拉取可以有效地提升查询性能。
- batchErrorIgnore:true:在执行 Statement 的 executeBatch 时,如果中间有一条 SQL 执行失败将继续执行下面的 SQL。false:不再执行失败 SQL 后的任何语句。默认值为:false。

JDBC 原生连接的使用请参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1955.html)。

diff --git a/docs/zh/14-reference/15-taosKeeper.md b/docs/zh/14-reference/15-taosKeeper.md
new file mode 100644
index 0000000000000000000000000000000000000000..d3f96bc5a9e857538698e9af5c634e1b594496f4
--- /dev/null
+++ b/docs/zh/14-reference/15-taosKeeper.md
@@ -0,0 +1,134 @@
+---
+sidebar_label: taosKeeper
+title: taosKeeper
+description: TDengine taosKeeper 使用说明
+---
+
+## 简介
+
+taosKeeper 是 TDengine 3.0 版本监控指标的导出工具,通过简单的几项配置即可获取 TDengine 的运行状态。taosKeeper 使用 TDengine RESTful 接口,所以不需要安装 TDengine 客户端即可使用。
+
+## 安装
+
+
+taosKeeper 安装方式:
+
+
+
+
+- 单独编译 taosKeeper 并安装,详情请参考 [taosKeeper](https://github.com/taosdata/taoskeeper) 仓库。
+
+## 运行
+
+### 配置和运行方式
+
+
+taosKeeper 需要在操作系统终端执行,该工具支持 [配置文件启动](#配置文件启动)。
+
+**在运行 taosKeeper 之前要确保 TDengine 集群与 taosAdapter 已经在正确运行。**
+
+
+### 配置文件启动
+
+执行以下命令即可快速体验 taosKeeper。当不指定 taosKeeper 配置文件时,优先使用 `/etc/taos/keeper.toml` 配置,否则将使用默认配置。
+
+```shell
+taoskeeper -c <keeper.toml>
+```
+
+**下面是配置文件的示例:**
+```toml
+# gin 框架是否启用 debug
+debug = false
+
+# 服务监听端口, 默认为 6043
+port = 6043
+
+# 日志级别,包含 panic、error、info、debug、trace 等
+loglevel = "info"
+
+# 程序中使用协程池的大小
+gopoolsize = 50000
+
+# 查询 TDengine 监控数据轮询间隔
+RotationInterval = "15s"
+
+[tdengine]
+host = "127.0.0.1"
+port = 6041
+username = "root"
+password = "taosdata"
+
+# 需要被监控的 taosAdapter
+[taosAdapter]
+address = ["127.0.0.1:6041","192.168.1.95:6041"]
+
+[metrics]
+# 监控指标前缀
+prefix = "taos"
+
+# 集群数据的标识符
+cluster = "production"
+
+# 存放监控数据的数据库
+database = "log"
+
+# 指定需要监控的普通表
+tables = ["normal_table"]
+```
+
+### 获取监控指标
+
+taosKeeper 作为 TDengine 监控指标的导出工具,可以将 TDengine 产生的监控数据记录在指定数据库中,并提供导出接口。
+
+#### 查看监控结果集
+
+```shell
+$ taos
+#
+> use log;
+> select * from cluster_info limit 1;
+```
+
+结果示例:
+
+```shell
+ ts | first_ep | first_ep_dnode_id | version | master_uptime | monitor_interval | dbs_total | tbs_total | stbs_total | dnodes_total | dnodes_alive | mnodes_total | mnodes_alive | vgroups_total | vgroups_alive | vnodes_total | vnodes_alive | connections_total | protocol | cluster_id |
+===============================================================================================================================================================================================================================================================================================================================================================================
+ 2022-08-16 17:37:01.629 | hlb:6030 | 1 | 3.0.0.0 | 0.27250 | 15 | 2 | 27 | 38 | 1 | 1 | 1 | 1 | 4 | 4 | 4 | 4 | 14 | 1 | 5981392874047724755 |
+Query OK, 1 rows in database (0.036162s)
+```
+
+#### 导出监控指标
+
+```shell
+curl http://127.0.0.1:6043/metrics
+```
+
+部分结果集:
+
+```shell
+# HELP taos_cluster_info_connections_total
+# TYPE taos_cluster_info_connections_total counter
+taos_cluster_info_connections_total{cluster_id="5981392874047724755"} 16
+# HELP taos_cluster_info_dbs_total
+# TYPE taos_cluster_info_dbs_total counter
+taos_cluster_info_dbs_total{cluster_id="5981392874047724755"} 2
+# HELP taos_cluster_info_dnodes_alive
+# TYPE taos_cluster_info_dnodes_alive counter
+taos_cluster_info_dnodes_alive{cluster_id="5981392874047724755"} 1
+# HELP taos_cluster_info_dnodes_total
+# TYPE taos_cluster_info_dnodes_total counter
+taos_cluster_info_dnodes_total{cluster_id="5981392874047724755"} 1
+# HELP taos_cluster_info_first_ep
+# TYPE taos_cluster_info_first_ep gauge
+taos_cluster_info_first_ep{cluster_id="5981392874047724755",value="hlb:6030"} 1
+```
\ No newline at end of file

diff --git a/examples/JDBC/connectionPools/pom.xml b/examples/JDBC/connectionPools/pom.xml
index 34518900ed30f48effd47a8786233080f3e5291f..99a7892a250bd656479b0901682d6a86c2b27d14 100644
--- a/examples/JDBC/connectionPools/pom.xml
+++ b/examples/JDBC/connectionPools/pom.xml
@@ -53,7 +53,7 @@
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
-           <version>2.14.1</version>
+           <version>2.17.1</version>

diff --git a/examples/JDBC/taosdemo/pom.xml b/examples/JDBC/taosdemo/pom.xml
index 91b976c2ae6c76a5ae2d7b76c3b90d05e4dae57f..07fd4a3576243b8950ccd25515f2512226e313d6 100644
--- a/examples/JDBC/taosdemo/pom.xml
+++ b/examples/JDBC/taosdemo/pom.xml
@@ -10,7 +10,7 @@
    <description>Demo project for TDengine</description>

-       <spring.version>5.3.2</spring.version>
+       <spring.version>5.3.20</spring.version>

@@ -75,20 +75,20 @@
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
-           <version>1.2.75</version>
+           <version>1.2.83</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
-           <version>8.0.16</version>
+           <version>8.0.28</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
-           <version>2.14.1</version>
+           <version>2.17.1</version>
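
上文 taosKeeper 一节中的 `/metrics` 接口输出的是标准的 Prometheus 文本格式。下面补充一个最小的抓取示例(示意代码,非本补丁内容;假设 taosKeeper 以默认端口 6043 运行在本机,为避免引入第三方 HTTP 客户端,直接用标准库手写一个 HTTP/1.0 请求):

```rust
use std::io::{Read, Write};
use std::net::TcpStream;

// 示意:从 taosKeeper 默认的 6043 端口拉取 /metrics 并过滤集群指标行
fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("127.0.0.1:6043")?;
    // HTTP/1.0 请求:服务端响应后即关闭连接,read_to_string 读到 EOF 即结束
    stream.write_all(b"GET /metrics HTTP/1.0\r\nHost: 127.0.0.1\r\n\r\n")?;

    let mut body = String::new();
    stream.read_to_string(&mut body)?;

    // 只打印上文示例中出现的 taos_cluster_info_* 指标行
    for line in body.lines().filter(|l| l.starts_with("taos_cluster_info")) {
        println!("{line}");
    }
    Ok(())
}
```

diff --git 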
a/include/common/tglobal.h b/include/common/tglobal.h index 9111728e1ad15d7cfc105a5a65ee8364f7ab2f95..cd74ffd47764fab78f224c2f373e0c93e8117d12 100644 --- a/include/common/tglobal.h +++ b/include/common/tglobal.h @@ -139,7 +139,6 @@ int32_t taosInitCfg(const char *cfgDir, const char **envCmd, const char *envFile bool tsc); void taosCleanupCfg(); void taosCfgDynamicOptions(const char *option, const char *value); -void taosAddDataDir(int32_t index, char *v1, int32_t level, int32_t primary); struct SConfig *taosGetCfg(); diff --git a/source/common/src/tglobal.c b/source/common/src/tglobal.c index 59f7ec02f7bd7089a3a692906b39b6ed9ab6b3e6..c763bbed9c470d9527877a7cfb2312efdc8d612a 100644 --- a/source/common/src/tglobal.c +++ b/source/common/src/tglobal.c @@ -166,7 +166,24 @@ int32_t tsTtlPushInterval = 86400; int32_t tsGrantHBInterval = 60; #ifndef _STORAGE -int32_t taosSetTfsCfg(SConfig *pCfg) { return 0; } +int32_t taosSetTfsCfg(SConfig *pCfg) { + SConfigItem *pItem = cfgGetItem(pCfg, "dataDir"); + memset(tsDataDir, 0, PATH_MAX); + + int32_t size = taosArrayGetSize(pItem->array); + tsDiskCfgNum = 1; + tstrncpy(tsDiskCfg[0].dir, pItem->str, TSDB_FILENAME_LEN); + tsDiskCfg[0].level = 0; + tsDiskCfg[0].primary = 1; + tstrncpy(tsDataDir, pItem->str, PATH_MAX); + if (taosMulMkDir(tsDataDir) != 0) { + uError("failed to create dataDir:%s", tsDataDir); + return -1; + } + return 0; +} +#else +int32_t taosSetTfsCfg(SConfig *pCfg); #endif struct SConfig *taosGetCfg() { diff --git a/source/dnode/mnode/impl/src/mndSubscribe.c b/source/dnode/mnode/impl/src/mndSubscribe.c index 3f310ee9c09753d143e5c44e33506651c2765881..10e520d9ec49a53e5fcedcf668a40732480aa75b 100644 --- a/source/dnode/mnode/impl/src/mndSubscribe.c +++ b/source/dnode/mnode/impl/src/mndSubscribe.c @@ -356,31 +356,44 @@ static int32_t mndDoRebalance(SMnode *pMnode, const SMqRebInputObj *pInput, SMqR taosArrayPush(pConsumerEp->vgs, &pRebVg->pVgEp); pRebVg->newConsumerId = pConsumerEp->consumerId; taosArrayPush(pOutput->rebVgs, pRebVg); - mInfo("mq rebalance: add vgId:%d to consumer:%" PRId64 ",(second scan)", pRebVg->pVgEp->vgId, + mInfo("mq rebalance: add vgId:%d to consumer:%" PRId64 " (second scan) (not enough)", pRebVg->pVgEp->vgId, pConsumerEp->consumerId); } } + ASSERT(pIter == NULL); // 7. 
handle unassigned vg
  if (taosHashGetSize(pOutput->pSub->consumerHash) != 0) {
    // if has consumer, assign all left vg
    while (1) {
+     SMqConsumerEp *pConsumerEp = NULL;
      pRemovedIter = taosHashIterate(pHash, pRemovedIter);
-     if (pRemovedIter == NULL) break;
-     pIter = taosHashIterate(pOutput->pSub->consumerHash, pIter);
-     ASSERT(pIter);
+     if (pRemovedIter == NULL) {
+       if (pIter != NULL) {
+         taosHashCancelIterate(pOutput->pSub->consumerHash, pIter);
+         pIter = NULL;
+       }
+       break;
+     }
+     while (1) {
+       pIter = taosHashIterate(pOutput->pSub->consumerHash, pIter);
+       ASSERT(pIter);
+       pConsumerEp = (SMqConsumerEp *)pIter;
+       ASSERT(pConsumerEp->consumerId > 0);
+       if (taosArrayGetSize(pConsumerEp->vgs) == minVgCnt) {
+         break;
+       }
+     }
      pRebVg = (SMqRebOutputVg *)pRemovedIter;
-     SMqConsumerEp *pConsumerEp = (SMqConsumerEp *)pIter;
-     ASSERT(pConsumerEp->consumerId > 0);
      taosArrayPush(pConsumerEp->vgs, &pRebVg->pVgEp);
      pRebVg->newConsumerId = pConsumerEp->consumerId;
      if (pRebVg->newConsumerId == pRebVg->oldConsumerId) {
-       mInfo("mq rebalance: skip vg %d for same consumer:%" PRId64 ",(second scan)", pRebVg->pVgEp->vgId,
+       mInfo("mq rebalance: skip vg %d for same consumer:%" PRId64 " (second scan)", pRebVg->pVgEp->vgId,
              pConsumerEp->consumerId);
        continue;
      }
      taosArrayPush(pOutput->rebVgs, pRebVg);
-     mInfo("mq rebalance: add vgId:%d to consumer:%" PRId64 ",(second scan)", pRebVg->pVgEp->vgId,
+     mInfo("mq rebalance: add vgId:%d to consumer:%" PRId64 " (second scan) (unassigned)", pRebVg->pVgEp->vgId,
            pConsumerEp->consumerId);
    }
  } else {
@@ -571,7 +584,7 @@ static int32_t mndProcessRebalanceReq(SRpcMsg *pMsg) {
    SMqTopicObj *pTopic = mndAcquireTopic(pMnode, topic);
    /*ASSERT(pTopic);*/
    if (pTopic == NULL) {
-     mError("rebalance %s failed since topic %s was dropped, abort", pRebInfo->key, topic);
+     mError("mq rebalance %s failed since topic %s does not exist, abort", pRebInfo->key, topic);
      continue;
    }
    taosRLockLatch(&pTopic->lock);
@@ -601,7 +614,7 @@

    // TODO replace assert with error check
    if (mndPersistRebResult(pMnode, pMsg, &rebOutput) < 0) {
-     mError("persist rebalance output error, possibly vnode splitted or dropped");
+     mError("mq rebalance: persist rebalance output error, possibly vnode split or dropped");
    }
    taosArrayDestroy(pRebInfo->lostConsumers);
    taosArrayDestroy(pRebInfo->newConsumers);

diff --git a/source/dnode/vnode/CMakeLists.txt b/source/dnode/vnode/CMakeLists.txt
index b218d982e9e37f0315bde21ff4c29fdee9154cc3..a3e17f53774c82ea9fca1ff0a88943c8e7971725 100644
--- a/source/dnode/vnode/CMakeLists.txt
+++ b/source/dnode/vnode/CMakeLists.txt
@@ -24,6 +24,7 @@ target_sources(
        "src/meta/metaCommit.c"
        "src/meta/metaEntry.c"
        "src/meta/metaSnapshot.c"
+       "src/meta/metaCache.c"

        # sma
        "src/sma/smaEnv.c"

diff --git a/source/dnode/vnode/src/inc/meta.h b/source/dnode/vnode/src/inc/meta.h
index a72546fe86026288109f692315f850d7f5852997..2efc33a8ee1350d3c2ee05536eced70b9edc3463 100644
--- a/source/dnode/vnode/src/inc/meta.h
+++ b/source/dnode/vnode/src/inc/meta.h
@@ -23,8 +23,9 @@
extern "C" {
#endif

-typedef struct SMetaIdx SMetaIdx;
-typedef struct SMetaDB  SMetaDB;
+typedef struct SMetaIdx   SMetaIdx;
+typedef struct SMetaDB    SMetaDB;
+typedef struct SMetaCache SMetaCache;

// metaDebug ==================
// clang-format off
@@ -60,6 +61,13 @@ static FORCE_INLINE tb_uid_t metaGenerateUid(SMeta* pMeta) { return tGenIdPI64()

// metaTable ==================
int metaHandleEntry(SMeta* pMeta, const SMetaEntry* pME);

+// metaCache ==================
+int32_t metaCacheOpen(SMeta* pMeta);
+void    metaCacheClose(SMeta* pMeta);
+int32_t metaCacheUpsert(SMeta* pMeta, SMetaInfo* pInfo);
+int32_t metaCacheDrop(SMeta* pMeta, int64_t uid);
+int32_t metaCacheGet(SMeta* pMeta, int64_t uid, SMetaInfo* pInfo);
+

struct SMeta {
  TdThreadRwlock lock;
@@ -84,6 +92,8 @@
  TTB* pStreamDb;

  SMetaIdx* pIdx;
+
+  SMetaCache* pCache;
};

typedef struct {
@@ -92,6 +102,12 @@
} STbDbKey;

#pragma pack(push, 1)
+typedef struct {
+  tb_uid_t suid;
+  int64_t  version;
+  int32_t  skmVer;
+} SUidIdxVal;
+
typedef struct {
  tb_uid_t uid;
  int32_t  sver;

diff --git a/source/dnode/vnode/src/inc/vnodeInt.h b/source/dnode/vnode/src/inc/vnodeInt.h
index 35c26eac44d51e72eaabe490f6237d9d51e90f81..47e18732e020149634cd7534a9178930f884c909 100644
--- a/source/dnode/vnode/src/inc/vnodeInt.h
+++ b/source/dnode/vnode/src/inc/vnodeInt.h
@@ -130,6 +130,14 @@
int     metaTtlSmaller(SMeta* pMeta, uint64_t time, SArray* uidList);
int32_t metaCreateTSma(SMeta* pMeta, int64_t version, SSmaCfg* pCfg);
int32_t metaDropTSma(SMeta* pMeta, int64_t indexUid);

+typedef struct SMetaInfo {
+  int64_t uid;
+  int64_t suid;
+  int64_t version;
+  int32_t skmVer;
+} SMetaInfo;
+int32_t metaGetInfo(SMeta* pMeta, int64_t uid, SMetaInfo* pInfo);
+
// tsdb
int  tsdbOpen(SVnode* pVnode, STsdb** ppTsdb, const char* dir, STsdbKeepCfg* pKeepCfg);
int  tsdbClose(STsdb** pTsdb);

diff --git a/source/dnode/vnode/src/meta/metaCache.c b/source/dnode/vnode/src/meta/metaCache.c
new file mode 100644
index 0000000000000000000000000000000000000000..b8cc9f0df2f10d6bd42f6d68f801d2a749479626
--- /dev/null
+++ b/source/dnode/vnode/src/meta/metaCache.c
@@ -0,0 +1,206 @@
+/*
+ * Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
+ *
+ * This program is free software: you can use, redistribute, and/or modify
+ * it under the terms of the GNU Affero General Public License, version 3
+ * or later ("AGPL"), as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * You should have received a copy of the GNU Affero General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */ +#include "meta.h" + +#define META_CACHE_BASE_BUCKET 1024 + +// (uid , suid) : child table +// (uid, 0) : normal table +// (suid, suid) : super table +typedef struct SMetaCacheEntry SMetaCacheEntry; +struct SMetaCacheEntry { + SMetaCacheEntry* next; + SMetaInfo info; +}; + +struct SMetaCache { + int32_t nEntry; + int32_t nBucket; + SMetaCacheEntry** aBucket; +}; + +int32_t metaCacheOpen(SMeta* pMeta) { + int32_t code = 0; + SMetaCache* pCache = NULL; + + pCache = (SMetaCache*)taosMemoryMalloc(sizeof(SMetaCache)); + if (pCache == NULL) { + code = TSDB_CODE_OUT_OF_MEMORY; + goto _err; + } + + pCache->nEntry = 0; + pCache->nBucket = META_CACHE_BASE_BUCKET; + pCache->aBucket = (SMetaCacheEntry**)taosMemoryCalloc(pCache->nBucket, sizeof(SMetaCacheEntry*)); + if (pCache->aBucket == NULL) { + code = TSDB_CODE_OUT_OF_MEMORY; + taosMemoryFree(pCache); + goto _err; + } + + pMeta->pCache = pCache; + +_exit: + return code; + +_err: + metaError("vgId:%d meta open cache failed since %s", TD_VID(pMeta->pVnode), tstrerror(code)); + return code; +} + +void metaCacheClose(SMeta* pMeta) { + if (pMeta->pCache) { + for (int32_t iBucket = 0; iBucket < pMeta->pCache->nBucket; iBucket++) { + SMetaCacheEntry* pEntry = pMeta->pCache->aBucket[iBucket]; + while (pEntry) { + SMetaCacheEntry* tEntry = pEntry->next; + taosMemoryFree(pEntry); + pEntry = tEntry; + } + } + taosMemoryFree(pMeta->pCache->aBucket); + taosMemoryFree(pMeta->pCache); + pMeta->pCache = NULL; + } +} + +static int32_t metaRehashCache(SMetaCache* pCache, int8_t expand) { + int32_t code = 0; + int32_t nBucket; + + if (expand) { + nBucket = pCache->nBucket * 2; + } else { + nBucket = pCache->nBucket / 2; + } + + SMetaCacheEntry** aBucket = (SMetaCacheEntry**)taosMemoryCalloc(nBucket, sizeof(SMetaCacheEntry*)); + if (aBucket == NULL) { + code = TSDB_CODE_OUT_OF_MEMORY; + goto _exit; + } + + // rehash + for (int32_t iBucket = 0; iBucket < pCache->nBucket; iBucket++) { + SMetaCacheEntry* pEntry = pCache->aBucket[iBucket]; + + while (pEntry) { + SMetaCacheEntry* pTEntry = pEntry->next; + + pEntry->next = aBucket[TABS(pEntry->info.uid) % nBucket]; + aBucket[TABS(pEntry->info.uid) % nBucket] = pEntry; + + pEntry = pTEntry; + } + } + + // final set + taosMemoryFree(pCache->aBucket); + pCache->nBucket = nBucket; + pCache->aBucket = aBucket; + +_exit: + return code; +} + +int32_t metaCacheUpsert(SMeta* pMeta, SMetaInfo* pInfo) { + int32_t code = 0; + + // ASSERT(metaIsWLocked(pMeta)); + + // search + SMetaCache* pCache = pMeta->pCache; + int32_t iBucket = TABS(pInfo->uid) % pCache->nBucket; + SMetaCacheEntry** ppEntry = &pCache->aBucket[iBucket]; + while (*ppEntry && (*ppEntry)->info.uid != pInfo->uid) { + ppEntry = &(*ppEntry)->next; + } + + if (*ppEntry) { // update + ASSERT(pInfo->suid == (*ppEntry)->info.suid); + if (pInfo->version > (*ppEntry)->info.version) { + (*ppEntry)->info.version = pInfo->version; + (*ppEntry)->info.skmVer = pInfo->skmVer; + } + } else { // insert + if (pCache->nEntry >= pCache->nBucket) { + code = metaRehashCache(pCache, 1); + if (code) goto _exit; + + iBucket = TABS(pInfo->uid) % pCache->nBucket; + } + + SMetaCacheEntry* pEntryNew = (SMetaCacheEntry*)taosMemoryMalloc(sizeof(*pEntryNew)); + if (pEntryNew == NULL) { + code = TSDB_CODE_OUT_OF_MEMORY; + goto _exit; + } + + pEntryNew->info = *pInfo; + pEntryNew->next = pCache->aBucket[iBucket]; + pCache->aBucket[iBucket] = pEntryNew; + pCache->nEntry++; + } + +_exit: + return code; +} + +int32_t metaCacheDrop(SMeta* pMeta, int64_t uid) { + int32_t code = 0; + + SMetaCache* pCache 
= pMeta->pCache; + int32_t iBucket = TABS(uid) % pCache->nBucket; + SMetaCacheEntry** ppEntry = &pCache->aBucket[iBucket]; + while (*ppEntry && (*ppEntry)->info.uid != uid) { + ppEntry = &(*ppEntry)->next; + } + + SMetaCacheEntry* pEntry = *ppEntry; + if (pEntry) { + *ppEntry = pEntry->next; + taosMemoryFree(pEntry); + pCache->nEntry--; + if (pCache->nEntry < pCache->nBucket / 4 && pCache->nBucket > META_CACHE_BASE_BUCKET) { + code = metaRehashCache(pCache, 0); + if (code) goto _exit; + } + } else { + code = TSDB_CODE_NOT_FOUND; + } + +_exit: + return code; +} + +int32_t metaCacheGet(SMeta* pMeta, int64_t uid, SMetaInfo* pInfo) { + int32_t code = 0; + + SMetaCache* pCache = pMeta->pCache; + int32_t iBucket = TABS(uid) % pCache->nBucket; + SMetaCacheEntry* pEntry = pCache->aBucket[iBucket]; + + while (pEntry && pEntry->info.uid != uid) { + pEntry = pEntry->next; + } + + if (pEntry) { + *pInfo = pEntry->info; + } else { + code = TSDB_CODE_NOT_FOUND; + } + + return code; +} diff --git a/source/dnode/vnode/src/meta/metaOpen.c b/source/dnode/vnode/src/meta/metaOpen.c index 941d2c6d724db5b93c4d011988b83484a3e746ad..cf17459bc241672359b85a37270b8d98253225c7 100644 --- a/source/dnode/vnode/src/meta/metaOpen.c +++ b/source/dnode/vnode/src/meta/metaOpen.c @@ -73,7 +73,7 @@ int metaOpen(SVnode *pVnode, SMeta **ppMeta) { } // open pUidIdx - ret = tdbTbOpen("uid.idx", sizeof(tb_uid_t), sizeof(int64_t), uidIdxKeyCmpr, pMeta->pEnv, &pMeta->pUidIdx); + ret = tdbTbOpen("uid.idx", sizeof(tb_uid_t), sizeof(SUidIdxVal), uidIdxKeyCmpr, pMeta->pEnv, &pMeta->pUidIdx); if (ret < 0) { metaError("vgId:%d, failed to open meta uid idx since %s", TD_VID(pVnode), tstrerror(terrno)); goto _err; @@ -143,6 +143,13 @@ int metaOpen(SVnode *pVnode, SMeta **ppMeta) { goto _err; } + int32_t code = metaCacheOpen(pMeta); + if (code) { + terrno = code; + metaError("vgId:%d, failed to open meta cache since %s", TD_VID(pVnode), tstrerror(terrno)); + goto _err; + } + metaDebug("vgId:%d, meta is opened", TD_VID(pVnode)); *ppMeta = pMeta; @@ -169,6 +176,7 @@ _err: int metaClose(SMeta *pMeta) { if (pMeta) { + if (pMeta->pCache) metaCacheClose(pMeta); if (pMeta->pIdx) metaCloseIdx(pMeta); if (pMeta->pStreamDb) tdbTbClose(pMeta->pStreamDb); if (pMeta->pSmaIdx) tdbTbClose(pMeta->pSmaIdx); diff --git a/source/dnode/vnode/src/meta/metaQuery.c b/source/dnode/vnode/src/meta/metaQuery.c index eed0ae5e14761f25312c339f6fff9dae5059e144..6ff55d2f4ef30857a0b21f82cab6c02b12d3323b 100644 --- a/source/dnode/vnode/src/meta/metaQuery.c +++ b/source/dnode/vnode/src/meta/metaQuery.c @@ -63,7 +63,7 @@ int metaGetTableEntryByUid(SMetaReader *pReader, tb_uid_t uid) { return -1; } - version = *(int64_t *)pReader->pBuf; + version = ((SUidIdxVal *)pReader->pBuf)[0].version; return metaGetTableEntryByVersion(pReader, version, uid); } @@ -160,7 +160,7 @@ int metaTbCursorNext(SMTbCursor *pTbCur) { tDecoderClear(&pTbCur->mr.coder); - metaGetTableEntryByVersion(&pTbCur->mr, *(int64_t *)pTbCur->pVal, *(tb_uid_t *)pTbCur->pKey); + metaGetTableEntryByVersion(&pTbCur->mr, ((SUidIdxVal *)pTbCur->pVal)[0].version, *(tb_uid_t *)pTbCur->pKey); if (pTbCur->mr.me.type == TSDB_SUPER_TABLE) { continue; } @@ -185,7 +185,7 @@ _query: goto _err; } - version = *(int64_t *)pData; + version = ((SUidIdxVal *)pData)[0].version; tdbTbGet(pMeta->pTbDb, &(STbDbKey){.uid = uid, .version = version}, sizeof(STbDbKey), &pData, &nData); SMetaEntry me = {0}; @@ -888,3 +888,41 @@ END: return ret; } + +int32_t metaGetInfo(SMeta *pMeta, int64_t uid, SMetaInfo *pInfo) { + int32_t code = 0; + void 
*pData = NULL; + int nData = 0; + + metaRLock(pMeta); + + // search cache + if (metaCacheGet(pMeta, uid, pInfo) == 0) { + metaULock(pMeta); + goto _exit; + } + + // search TDB + if (tdbTbGet(pMeta->pUidIdx, &uid, sizeof(uid), &pData, &nData) < 0) { + // not found + metaULock(pMeta); + code = TSDB_CODE_NOT_FOUND; + goto _exit; + } + + metaULock(pMeta); + + pInfo->uid = uid; + pInfo->suid = ((SUidIdxVal *)pData)->suid; + pInfo->version = ((SUidIdxVal *)pData)->version; + pInfo->skmVer = ((SUidIdxVal *)pData)->skmVer; + + // upsert the cache + metaWLock(pMeta); + metaCacheUpsert(pMeta, pInfo); + metaULock(pMeta); + +_exit: + tdbFree(pData); + return code; +} diff --git a/source/dnode/vnode/src/meta/metaSma.c b/source/dnode/vnode/src/meta/metaSma.c index 1e5b699fce275f2333fc1b60bcc723ed3507a222..3ada7d1814b241081e52dc5e7ac8e104288ee3ad 100644 --- a/source/dnode/vnode/src/meta/metaSma.c +++ b/source/dnode/vnode/src/meta/metaSma.c @@ -28,9 +28,9 @@ int32_t metaCreateTSma(SMeta *pMeta, int64_t version, SSmaCfg *pCfg) { int vLen = 0; const void *pKey = NULL; const void *pVal = NULL; - void * pBuf = NULL; + void *pBuf = NULL; int32_t szBuf = 0; - void * p = NULL; + void *p = NULL; SMetaReader mr = {0}; // validate req @@ -83,8 +83,8 @@ int32_t metaDropTSma(SMeta *pMeta, int64_t indexUid) { static int metaSaveSmaToDB(SMeta *pMeta, const SMetaEntry *pME) { STbDbKey tbDbKey; - void * pKey = NULL; - void * pVal = NULL; + void *pKey = NULL; + void *pVal = NULL; int kLen = 0; int vLen = 0; SEncoder coder = {0}; @@ -130,7 +130,8 @@ _err: } static int metaUpdateUidIdx(SMeta *pMeta, const SMetaEntry *pME) { - return tdbTbInsert(pMeta->pUidIdx, &pME->uid, sizeof(tb_uid_t), &pME->version, sizeof(int64_t), &pMeta->txn); + SUidIdxVal uidIdxVal = {.suid = pME->smaEntry.tsma->indexUid, .version = pME->version, .skmVer = 0}; + return tdbTbInsert(pMeta->pUidIdx, &pME->uid, sizeof(tb_uid_t), &uidIdxVal, sizeof(uidIdxVal), &pMeta->txn); } static int metaUpdateNameIdx(SMeta *pMeta, const SMetaEntry *pME) { diff --git a/source/dnode/vnode/src/meta/metaTable.c b/source/dnode/vnode/src/meta/metaTable.c index e56b8ad9399be9cfa54b44066841b9dc8db0b262..424a0cd3b6e15919891696278489e1c06a50a93a 100644 --- a/source/dnode/vnode/src/meta/metaTable.c +++ b/source/dnode/vnode/src/meta/metaTable.c @@ -27,6 +27,23 @@ static int metaUpdateSuidIdx(SMeta *pMeta, const SMetaEntry *pME); static int metaUpdateTagIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry); static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type); +static void metaGetEntryInfo(const SMetaEntry *pEntry, SMetaInfo *pInfo) { + pInfo->uid = pEntry->uid; + pInfo->version = pEntry->version; + if (pEntry->type == TSDB_SUPER_TABLE) { + pInfo->suid = pEntry->uid; + pInfo->skmVer = pEntry->stbEntry.schemaRow.version; + } else if (pEntry->type == TSDB_CHILD_TABLE) { + pInfo->suid = pEntry->ctbEntry.suid; + pInfo->skmVer = 0; + } else if (pEntry->type == TSDB_NORMAL_TABLE) { + pInfo->suid = 0; + pInfo->skmVer = pEntry->ntbEntry.schemaRow.version; + } else { + ASSERT(0); + } +} + static int metaUpdateMetaRsp(tb_uid_t uid, char *tbName, SSchemaWrapper *pSchema, STableMetaRsp *pMetaRsp) { pMetaRsp->pSchemas = taosMemoryMalloc(pSchema->nCols * sizeof(SSchema)); if (NULL == pMetaRsp->pSchemas) { @@ -163,30 +180,12 @@ int metaDelJsonVarFromIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry, const SSche } int metaCreateSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) { - SMetaEntry me = {0}; - int kLen = 0; - int vLen = 0; - const void *pKey = NULL; - const void *pVal = 
NULL; - void *pBuf = NULL; - int32_t szBuf = 0; - void *p = NULL; - SMetaReader mr = {0}; + SMetaEntry me = {0}; // validate req - metaReaderInit(&mr, pMeta, 0); - if (metaGetTableEntryByName(&mr, pReq->name) == 0) { -// TODO: just for pass case -#if 0 - terrno = TSDB_CODE_TDB_STB_ALREADY_EXIST; - metaReaderClear(&mr); - return -1; -#else - metaReaderClear(&mr); + if (tdbTbGet(pMeta->pNameIdx, pReq->name, strlen(pReq->name), NULL, NULL) == 0) { return 0; -#endif } - metaReaderClear(&mr); // set structs me.version = version; @@ -265,8 +264,8 @@ int metaDropSTable(SMeta *pMeta, int64_t verison, SVDropStbReq *pReq, SArray *tb // drop super table _drop_super_table: tdbTbGet(pMeta->pUidIdx, &pReq->suid, sizeof(tb_uid_t), &pData, &nData); - tdbTbDelete(pMeta->pTbDb, &(STbDbKey){.version = *(int64_t *)pData, .uid = pReq->suid}, sizeof(STbDbKey), - &pMeta->txn); + tdbTbDelete(pMeta->pTbDb, &(STbDbKey){.version = ((SUidIdxVal *)pData)[0].version, .uid = pReq->suid}, + sizeof(STbDbKey), &pMeta->txn); tdbTbDelete(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, &pMeta->txn); tdbTbDelete(pMeta->pUidIdx, &pReq->suid, sizeof(tb_uid_t), &pMeta->txn); tdbTbDelete(pMeta->pSuidIdx, &pReq->suid, sizeof(tb_uid_t), &pMeta->txn); @@ -309,7 +308,7 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) { return -1; } - oversion = *(int64_t *)pData; + oversion = ((SUidIdxVal *)pData)[0].version; tdbTbcOpen(pMeta->pTbDb, &pTbDbc, &pMeta->txn); ret = tdbTbcMoveTo(pTbDbc, &((STbDbKey){.uid = pReq->suid, .version = oversion}), sizeof(STbDbKey), &c); @@ -336,15 +335,14 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) { metaSaveToSkmDb(pMeta, &nStbEntry); } - // if (oStbEntry.stbEntry.schemaTag.sver != pReq->schemaTag.sver) { - // // change tag schema - // } - // update table.db metaSaveToTbDb(pMeta, &nStbEntry); // update uid index - tdbTbcUpsert(pUidIdxc, &pReq->suid, sizeof(tb_uid_t), &version, sizeof(version), 0); + SMetaInfo info; + metaGetEntryInfo(&nStbEntry, &info); + tdbTbcUpsert(pUidIdxc, &pReq->suid, sizeof(tb_uid_t), + &(SUidIdxVal){.suid = info.suid, .version = info.version, .skmVer = info.skmVer}, sizeof(SUidIdxVal), 0); if (oStbEntry.pBuf) taosMemoryFree(oStbEntry.pBuf); metaULock(pMeta); @@ -503,7 +501,7 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type) { SDecoder dc = {0}; rc = tdbTbGet(pMeta->pUidIdx, &uid, sizeof(uid), &pData, &nData); - int64_t version = *(int64_t *)pData; + int64_t version = ((SUidIdxVal *)pData)[0].version; tdbTbGet(pMeta->pTbDb, &(STbDbKey){.version = version, .uid = uid}, sizeof(STbDbKey), &pData, &nData); @@ -517,7 +515,7 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type) { int tLen = 0; if (tdbTbGet(pMeta->pUidIdx, &e.ctbEntry.suid, sizeof(tb_uid_t), &tData, &tLen) == 0) { - version = *(int64_t *)tData; + version = ((SUidIdxVal *)tData)[0].version; STbDbKey tbDbKey = {.uid = e.ctbEntry.suid, .version = version}; if (tdbTbGet(pMeta->pTbDb, &tbDbKey, sizeof(tbDbKey), &tData, &tLen) == 0) { SDecoder tdc = {0}; @@ -556,6 +554,8 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type) { --pMeta->pVnode->config.vndStats.numOfSTables; } + metaCacheDrop(pMeta, uid); + tDecoderClear(&dc); tdbFree(pData); @@ -594,7 +594,7 @@ static int metaAlterTableColumn(SMeta *pMeta, int64_t version, SVAlterTbReq *pAl ASSERT(c == 0); tdbTbcGet(pUidIdxc, NULL, NULL, &pData, &nData); - oversion = *(int64_t *)pData; + oversion = ((SUidIdxVal *)pData)[0].version; // search table.db TBC *pTbDbc = 
NULL; @@ -708,7 +708,7 @@ static int metaAlterTableColumn(SMeta *pMeta, int64_t version, SVAlterTbReq *pAl // save to table db metaSaveToTbDb(pMeta, &entry); - tdbTbcUpsert(pUidIdxc, &entry.uid, sizeof(tb_uid_t), &version, sizeof(version), 0); + metaUpdateUidIdx(pMeta, &entry); metaSaveToSkmDb(pMeta, &entry); @@ -764,7 +764,7 @@ static int metaUpdateTableTagVal(SMeta *pMeta, int64_t version, SVAlterTbReq *pA ASSERT(c == 0); tdbTbcGet(pUidIdxc, NULL, NULL, &pData, &nData); - oversion = *(int64_t *)pData; + oversion = ((SUidIdxVal *)pData)[0].version; // search table.db TBC *pTbDbc = NULL; @@ -784,8 +784,8 @@ static int metaUpdateTableTagVal(SMeta *pMeta, int64_t version, SVAlterTbReq *pA /* get stbEntry*/ tdbTbGet(pMeta->pUidIdx, &ctbEntry.ctbEntry.suid, sizeof(tb_uid_t), &pVal, &nVal); - tdbTbGet(pMeta->pTbDb, &((STbDbKey){.uid = ctbEntry.ctbEntry.suid, .version = *(int64_t *)pVal}), sizeof(STbDbKey), - (void **)&stbEntry.pBuf, &nVal); + tdbTbGet(pMeta->pTbDb, &((STbDbKey){.uid = ctbEntry.ctbEntry.suid, .version = ((SUidIdxVal *)pVal)[0].version}), + sizeof(STbDbKey), (void **)&stbEntry.pBuf, &nVal); tdbFree(pVal); tDecoderInit(&dc2, stbEntry.pBuf, nVal); metaDecodeEntry(&dc2, &stbEntry); @@ -859,7 +859,7 @@ static int metaUpdateTableTagVal(SMeta *pMeta, int64_t version, SVAlterTbReq *pA metaSaveToTbDb(pMeta, &ctbEntry); // save to uid.idx - tdbTbUpsert(pMeta->pUidIdx, &ctbEntry.uid, sizeof(tb_uid_t), &version, sizeof(version), &pMeta->txn); + metaUpdateUidIdx(pMeta, &ctbEntry); if (iCol == 0) { metaUpdateTagIdx(pMeta, &ctbEntry); @@ -914,7 +914,7 @@ static int metaUpdateTableOptions(SMeta *pMeta, int64_t version, SVAlterTbReq *p ASSERT(c == 0); tdbTbcGet(pUidIdxc, NULL, NULL, &pData, &nData); - oversion = *(int64_t *)pData; + oversion = ((SUidIdxVal *)pData)[0].version; // search table.db TBC *pTbDbc = NULL; @@ -959,7 +959,7 @@ static int metaUpdateTableOptions(SMeta *pMeta, int64_t version, SVAlterTbReq *p // save to table db metaSaveToTbDb(pMeta, &entry); - tdbTbcUpsert(pUidIdxc, &entry.uid, sizeof(tb_uid_t), &version, sizeof(version), 0); + metaUpdateUidIdx(pMeta, &entry); metaULock(pMeta); tdbTbcClose(pTbDbc); @@ -1042,7 +1042,14 @@ _err: } static int metaUpdateUidIdx(SMeta *pMeta, const SMetaEntry *pME) { - return tdbTbInsert(pMeta->pUidIdx, &pME->uid, sizeof(tb_uid_t), &pME->version, sizeof(int64_t), &pMeta->txn); + // upsert cache + SMetaInfo info; + metaGetEntryInfo(pME, &info); + metaCacheUpsert(pMeta, &info); + + SUidIdxVal uidIdxVal = {.suid = info.suid, .version = info.version, .skmVer = info.skmVer}; + + return tdbTbUpsert(pMeta->pUidIdx, &pME->uid, sizeof(tb_uid_t), &uidIdxVal, sizeof(uidIdxVal), &pMeta->txn); } static int metaUpdateSuidIdx(SMeta *pMeta, const SMetaEntry *pME) { @@ -1118,7 +1125,7 @@ static int metaUpdateTagIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry) { return -1; } tbDbKey.uid = pCtbEntry->ctbEntry.suid; - tbDbKey.version = *(int64_t *)pData; + tbDbKey.version = ((SUidIdxVal *)pData)[0].version; tdbTbGet(pMeta->pTbDb, &tbDbKey, sizeof(tbDbKey), &pData, &nData); tDecoderInit(&dc, pData, nData); diff --git a/source/dnode/vnode/src/tsdb/tsdbMemTable.c b/source/dnode/vnode/src/tsdb/tsdbMemTable.c index 6fc66366230eeadc88b8667d4f2e336dbcab42a2..34b37ffe9b4ed8a15f85525c393b86c75133c696 100644 --- a/source/dnode/vnode/src/tsdb/tsdbMemTable.c +++ b/source/dnode/vnode/src/tsdb/tsdbMemTable.c @@ -108,29 +108,21 @@ int32_t tsdbInsertTableData(STsdb *pTsdb, int64_t version, SSubmitMsgIter *pMsgI STbData *pTbData = NULL; tb_uid_t suid = pMsgIter->suid; tb_uid_t uid = 
pMsgIter->uid; - int32_t sverNew; - - // check if table exists (todo: refact) - SMetaReader mr = {0}; - // SMetaEntry me = {0}; - metaReaderInit(&mr, pTsdb->pVnode->pMeta, 0); - if (metaGetTableEntryByUid(&mr, pMsgIter->uid) < 0) { - metaReaderClear(&mr); - code = TSDB_CODE_PAR_TABLE_NOT_EXIST; + + SMetaInfo info; + code = metaGetInfo(pTsdb->pVnode->pMeta, uid, &info); + if (code) { + code = TSDB_CODE_TDB_TABLE_NOT_EXIST; goto _err; } - if (pRsp->tblFName) strcat(pRsp->tblFName, mr.me.name); - - if (mr.me.type == TSDB_NORMAL_TABLE) { - sverNew = mr.me.ntbEntry.schemaRow.version; - } else { - tDecoderClear(&mr.coder); - - metaGetTableEntryByUid(&mr, mr.me.ctbEntry.suid); - sverNew = mr.me.stbEntry.schemaRow.version; + if (info.suid != suid) { + code = TSDB_CODE_INVALID_MSG; + goto _err; } - metaReaderClear(&mr); - pRsp->sver = sverNew; + if (info.suid) { + metaGetInfo(pTsdb->pVnode->pMeta, info.suid, &info); + } + pRsp->sver = info.skmVer; // create/get STbData to op code = tsdbGetOrCreateTbData(pMemTable, suid, uid, &pTbData); @@ -157,7 +149,17 @@ int32_t tsdbDeleteTableData(STsdb *pTsdb, int64_t version, tb_uid_t suid, tb_uid SVBufPool *pPool = pTsdb->pVnode->inUse; TSDBKEY lastKey = {.version = version, .ts = eKey}; - // check if table exists (todo) + // check if table exists + SMetaInfo info; + code = metaGetInfo(pTsdb->pVnode->pMeta, uid, &info); + if (code) { + code = TSDB_CODE_TDB_TABLE_NOT_EXIST; + goto _err; + } + if (info.suid != suid) { + code = TSDB_CODE_INVALID_MSG; + goto _err; + } code = tsdbGetOrCreateTbData(pMemTable, suid, uid, &pTbData); if (code) { diff --git a/source/dnode/vnode/src/vnd/vnodeSvr.c b/source/dnode/vnode/src/vnd/vnodeSvr.c index 43c9b4c09f28b5b8652f08d576a0d19bee96791b..ded357274694cf2e294bf928f7c26d47f96d3c30 100644 --- a/source/dnode/vnode/src/vnd/vnodeSvr.c +++ b/source/dnode/vnode/src/vnd/vnodeSvr.c @@ -869,7 +869,7 @@ static int32_t vnodeProcessSubmitReq(SVnode *pVnode, int64_t version, void *pReq submitBlkRsp.uid = createTbReq.uid; submitBlkRsp.tblFName = taosMemoryMalloc(strlen(pVnode->config.dbname) + strlen(createTbReq.name) + 2); - sprintf(submitBlkRsp.tblFName, "%s.", pVnode->config.dbname); + sprintf(submitBlkRsp.tblFName, "%s.%s", pVnode->config.dbname, createTbReq.name); msgIter.uid = createTbReq.uid; if (createTbReq.type == TSDB_CHILD_TABLE) { diff --git a/source/libs/executor/src/scanoperator.c b/source/libs/executor/src/scanoperator.c index 12df378a421e00caec177d3fdb4121c9bda06517..367fae66a6e326e7e28d9e56f527d9f7ae89a581 100644 --- a/source/libs/executor/src/scanoperator.c +++ b/source/libs/executor/src/scanoperator.c @@ -353,6 +353,7 @@ static int32_t loadDataBlock(SOperatorInfo* pOperator, STableScanInfo* pTableSca pBlockInfo->window.skey, pBlockInfo->window.ekey, pBlockInfo->rows); pCost->skipBlocks += 1; + *status = FUNC_DATA_REQUIRED_FILTEROUT; return TSDB_CODE_SUCCESS; } @@ -1174,19 +1175,19 @@ static void checkUpdateData(SStreamScanInfo* pInfo, bool invertible, SSDataBlock SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, pInfo->primaryTsIndex); ASSERT(pColDataInfo->info.type == TSDB_DATA_TYPE_TIMESTAMP); TSKEY* tsCol = (TSKEY*)pColDataInfo->pData; + bool tableInserted = updateInfoIsTableInserted(pInfo->pUpdateInfo, pBlock->info.uid); for (int32_t rowId = 0; rowId < pBlock->info.rows; rowId++) { SResultRowInfo dumyInfo; dumyInfo.cur.pageId = -1; bool isClosed = false; STimeWindow win = {.skey = INT64_MIN, .ekey = INT64_MAX}; - if (isOverdue(tsCol[rowId], &pInfo->twAggSup)) { + if (tableInserted && 
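/* tableInserted guard (added in this change): skip overdue-window bookkeeping for tables the update info has not seen yet */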
isOverdue(tsCol[rowId], &pInfo->twAggSup)) { win = getActiveTimeWindow(NULL, &dumyInfo, tsCol[rowId], &pInfo->interval, TSDB_ORDER_ASC); isClosed = isCloseWindow(&win, &pInfo->twAggSup); } - bool inserted = updateInfoIsTableInserted(pInfo->pUpdateInfo, pBlock->info.uid); // must check update info first. bool update = updateInfoIsUpdated(pInfo->pUpdateInfo, pBlock->info.uid, tsCol[rowId]); - bool closedWin = isClosed && inserted && isSignleIntervalWindow(pInfo) && + bool closedWin = isClosed && isSignleIntervalWindow(pInfo) && isDeletedWindow(&win, pBlock->info.groupId, pInfo->sessionSup.pIntervalAggSup); if ((update || closedWin) && out) { appendOneRow(pInfo->pUpdateDataRes, tsCol + rowId, tsCol + rowId, &pBlock->info.uid); diff --git a/tests/script/tsim/parser/columnValue_unsign.sim b/tests/script/tsim/parser/columnValue_unsign.sim index 758814bc2b662998f5074dc36dbf45cf67ae41d7..85ff490bf4e520cdbbc0ed0008499af4425b2b93 100644 --- a/tests/script/tsim/parser/columnValue_unsign.sim +++ b/tests/script/tsim/parser/columnValue_unsign.sim @@ -76,17 +76,16 @@ if $data03 != NULL then return -1 endi -sql insert into mt_unsigned_1 values(now, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL); -sql insert into mt_unsigned_1 values(now+1s, 1, 2, 3, 4, 5, 6, 7, 8, 9); - -sql_error insert into mt_unsigned_1 values(now, -1, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL); -sql_error insert into mt_unsigned_1 values(now, NULL, -1, NULL, NULL, NULL, NULL, NULL, NULL, NULL); -sql_error insert into mt_unsigned_1 values(now, NULL, NULL, -1, NULL, NULL, NULL, NULL, NULL, NULL); -sql_error insert into mt_unsigned_1 values(now, NULL, NULL, NULL, -1, NULL, NULL, NULL, NULL, NULL); -sql insert into mt_unsigned_1 values(now, 255, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL); -sql insert into mt_unsigned_1 values(now, NULL, 65535, NULL, NULL, NULL, NULL, NULL, NULL, NULL); -sql insert into mt_unsigned_1 values(now, NULL, NULL, 4294967295, NULL, NULL, NULL, NULL, NULL, NULL); -sql insert into mt_unsigned_1 values(now, NULL, NULL, NULL, 18446744073709551615, NULL, NULL, NULL, NULL, NULL); +sql insert into mt_unsigned_1 values(now+1s, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL); +sql insert into mt_unsigned_1 values(now+2s, 1, 2, 3, 4, 5, 6, 7, 8, 9); +sql_error insert into mt_unsigned_1 values(now+3s, -1, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL); +sql_error insert into mt_unsigned_1 values(now+4s, NULL, -1, NULL, NULL, NULL, NULL, NULL, NULL, NULL); +sql_error insert into mt_unsigned_1 values(now+5s, NULL, NULL, -1, NULL, NULL, NULL, NULL, NULL, NULL); +sql_error insert into mt_unsigned_1 values(now+6s, NULL, NULL, NULL, -1, NULL, NULL, NULL, NULL, NULL); +sql insert into mt_unsigned_1 values(now+7s, 255, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL); +sql insert into mt_unsigned_1 values(now+8s, NULL, 65535, NULL, NULL, NULL, NULL, NULL, NULL, NULL); +sql insert into mt_unsigned_1 values(now+9s, NULL, NULL, 4294967295, NULL, NULL, NULL, NULL, NULL, NULL); +sql insert into mt_unsigned_1 values(now+10s, NULL, NULL, NULL, 18446744073709551615, NULL, NULL, NULL, NULL, NULL); sql select count(a),count(b),count(c),count(d), count(e) from mt_unsigned_1 if $rows != 1 then diff --git a/tests/system-test/simpletest.bat b/tests/system-test/simpletest.bat index 656828aa1ea4b1b2681dab2edac2c94178b5176a..cc4ae1795534e943ed603e2db5202ce272f54ab1 100644 --- a/tests/system-test/simpletest.bat +++ b/tests/system-test/simpletest.bat @@ -4,9 +4,9 @@ python3 .\test.py -f 0-others\taosShellError.py python3 
.\test.py -f 0-others\taosShellNetChk.py python3 .\test.py -f 0-others\telemetry.py python3 .\test.py -f 0-others\taosdMonitor.py -python3 .\test.py -f 0-others\udfTest.py -python3 .\test.py -f 0-others\udf_create.py -python3 .\test.py -f 0-others\udf_restart_taosd.py +@REM python3 .\test.py -f 0-others\udfTest.py +@REM python3 .\test.py -f 0-others\udf_create.py +@REM python3 .\test.py -f 0-others\udf_restart_taosd.py @REM python3 .\test.py -f 0-others\cachelast.py @REM python3 .\test.py -f 0-others\user_control.py
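
A side note on the `columnValue_unsign.sim` change above: the boundary constants it inserts are exactly the maxima of the unsigned column types, and the negative inserts are expected to fail as out of range. A standalone sketch (plain Rust, not part of this patch) to double-check those constants:

```rust
// Standalone check of the unsigned boundary constants used in
// tests/script/tsim/parser/columnValue_unsign.sim (not TDengine code).
fn main() {
    assert_eq!(u8::MAX as u64, 255); // TINYINT UNSIGNED max
    assert_eq!(u16::MAX as u64, 65535); // SMALLINT UNSIGNED max
    assert_eq!(u32::MAX as u64, 4294967295); // INT UNSIGNED max
    assert_eq!(u64::MAX, 18446744073709551615); // BIGINT UNSIGNED max
    // -1 is below the valid range of every unsigned column, which is
    // why the sim script expects sql_error for those inserts.
    println!("all unsigned boundary constants check out");
}
```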