diff --git a/Jenkinsfile2 b/Jenkinsfile2
index 36652ff695d8a7fcdb13e6bfad8bfd797436f31f..9e13b15d25dabcb2ba563de315f761fce8d52adc 100644
--- a/Jenkinsfile2
+++ b/Jenkinsfile2
@@ -247,14 +247,14 @@ pipeline {
         }
       }
       parallel {
-        stage ('build worker06_arm64') {
-          agent {label " worker06_arm64 "}
+        stage ('build worker08_arm32') {
+          agent {label " worker08_arm32 "}
           steps {
             timeout(time: 20, unit: 'MINUTES') {
               pre_test()
               script {
                 sh '''
-                echo "worker06_arm64 build done"
+                echo "worker08_arm32 build done"
                 date
                 '''
               }
diff --git a/README-CN.md b/README-CN.md
index 4eaa6a338bba0f9047d271c87009844f12bd147d..270c09fd2385530e09a974b361f41a9d255fb10f 100644
--- a/README-CN.md
+++ b/README-CN.md
@@ -11,18 +11,29 @@
 
 # TDengine 简介
 
-TDengine是涛思数据专为物联网、车联网、工业互联网、IT运维等设计和优化的大数据平台。除核心的快10倍以上的时序数据库功能外,还提供缓存、数据订阅、流式计算等功能,最大程度减少研发和运维的复杂度,且核心代码,包括集群功能全部开源(开源协议,AGPL v3.0)。
+TDengine是一款高性能、分布式、支持SQL的时序数据库。除时序数据库功能外,它还提供缓存、数据订阅、流式计算等功能,最大程度减少研发和运维的复杂度,且核心代码,包括集群功能全部开源(开源协议,AGPL v3.0)。与其他时序数据库相比,TDengine有以下特点:
 
-- 10 倍以上性能提升。定义了创新的数据存储结构,单核每秒就能处理至少2万次请求,插入数百万个数据点,读出一千万以上数据点,比现有通用数据库快了十倍以上。
-- 硬件或云服务成本降至1/5。由于超强性能,计算资源不到通用大数据方案的1/5;通过列式存储和先进的压缩算法,存储空间不到通用数据库的1/10。
-- 全栈时序数据处理引擎。将数据库、消息队列、缓存、流式计算等功能融合一起,应用无需再集成Kafka/Redis/HBase/Spark等软件,大幅降低应用开发和维护成本。
-- 强大的分析功能。无论是十年前还是一秒钟前的数据,指定时间范围即可查询。数据可在时间轴上或多个设备上进行聚合。即席查询可通过Shell/Python/R/Matlab随时进行。
-- 与第三方工具无缝连接。不用一行代码,即可与Telegraf, Grafana, EMQ X, Prometheus, Matlab, R集成。后续还将支持MQTT, OPC, Hadoop,Spark等, BI工具也将无缝连接。
-- 零运维成本、零学习成本。安装、集群一秒搞定,无需分库分表,实时备份。标准SQL,支持JDBC,RESTful,支持Python/Java/C/C++/Go/Node.JS, 与MySQL相似,零学习成本。
+- **高性能**:通过创新的存储引擎设计,无论是数据写入还是查询,TDengine 的性能比通用数据库快10倍以上,也远超其他时序数据库,而且存储空间也大为节省。
+
+- **分布式**:通过原生分布式的设计,TDengine 提供了水平扩展的能力,只需要增加节点就能获得更强的数据处理能力,同时通过多副本机制保证了系统的高可用。
+
+- **支持SQL**:TDengine 采用 SQL 作为数据查询语言,减少学习和迁移成本,同时提供 SQL 扩展来处理时序数据特有的分析,而且支持方便灵活的 schemaless 数据写入。
+
+- **All in One**:将数据库、消息队列、缓存、流式计算等功能融合一起,应用无需再集成Kafka/Redis/HBase/Spark等软件,大幅降低应用开发和维护成本。
+
+- **零管理**:安装、集群几秒搞定,无任何依赖,不用分库分表,系统运行状态监测能与 Grafana 或其他运维工具无缝集成。
+
+- **零学习成本**:采用SQL查询语言,支持Python, Java, C/C++, Go, Rust, Node.js等多种编程语言,与MySQL相似,零学习成本。
+
+- **无缝集成**:不用一行代码,即可与 Telegraf, Grafana, EMQ X, Prometheus, StatsD, collectd, Matlab, R 等第三方工具无缝集成。
+
+- **互动Console**:通过命令行 Console,不用编程,执行 SQL 语句就能做即席查询、各种数据库的操作、管理以及集群的维护。
+
+TDengine可以广泛应用于物联网、工业互联网、车联网、IT运维、能源、金融等领域,让大量设备、数据采集器每天产生的高达TB甚至PB级的数据能得到高效实时的处理,对业务的运行状态进行实时的监测、预警,从大数据中挖掘出商业价值。
 
 # 文档
 
-TDengine是一个高效的存储、查询、分析时序大数据的平台,专为物联网、车联网、工业互联网、运维监测等优化而设计。您可以像使用关系型数据库MySQL一样来使用它,但建议您在使用前仔细阅读一遍下面的文档,特别是 [数据模型](https://www.taosdata.com/cn/documentation/architecture) 与 [数据建模](https://www.taosdata.com/cn/documentation/model)。除本文档之外,欢迎 [下载产品白皮书](https://www.taosdata.com/downloads/TDengine%20White%20Paper.pdf)。
+TDengine采用传统的关系数据库模型,您可以像使用关系型数据库MySQL一样来使用它。但由于引入了超级表,即一个采集点一张表的概念,建议您在使用前仔细阅读一遍下面的文档,特别是 [数据模型](https://www.taosdata.com/cn/documentation/architecture) 与 [数据建模](https://www.taosdata.com/cn/documentation/model)。除本文档之外,欢迎 [下载产品白皮书](https://www.taosdata.com/downloads/TDengine%20White%20Paper.pdf)。
 
 # 构建
diff --git a/README.md b/README.md
index 8f1eb8ddc925ca7f0fc35c17f7a19943bfe7c66c..eb4e71d4105aca46bf7d0ef4f4f8ae26b358d2f8 100644
--- a/README.md
+++ b/README.md
@@ -11,19 +11,25 @@ We are hiring, check [here](https://www.taosdata.com/en/careers/)
 
 # What is TDengine?
 
-TDengine is an open-sourced big data platform under [GNU AGPL v3.0](http://www.gnu.org/licenses/agpl-3.0.html), designed and optimized for the Internet of Things (IoT), Connected Cars, Industrial IoT, and IT Infrastructure and Application Monitoring. Besides the 10x faster time-series database, it provides caching, stream computing, message queuing and other functionalities to reduce the complexity and cost of development and operation.
+TDengine is a high-performance, scalable time-series database with SQL support. Its code, including the cluster feature, is open source under [GNU AGPL v3.0](http://www.gnu.org/licenses/agpl-3.0.html). Besides the database, it provides caching, stream processing, data subscription and other functionalities to reduce the complexity and cost of development and operation. TDengine differentiates itself from other TSDBs with the following advantages.
 
-- **10x Faster on Insert/Query Speeds**: Through the innovative design on storage, on a single-core machine, over 20K requests can be processed, millions of data points can be ingested, and over 10 million data points can be retrieved in a second. It is 10 times faster than other databases.
+- **High Performance**: TDengine outperforms other time-series databases in data ingestion and querying while significantly reducing storage and compute costs, with an innovatively designed and purpose-built storage engine.
 
-- **1/5 Hardware/Cloud Service Costs**: Compared with typical big data solutions, less than 1/5 of computing resources are required. Via column-based storage and tuned compression algorithms for different data types, less than 1/10 of storage space is needed.
+- **Scalable**: TDengine provides out-of-the-box scalability and high availability through its native distributed design. Nodes can be added through simple configuration to achieve greater data processing power. In addition, this feature is open source.
 
-- **Full Stack for Time-Series Data**: By integrating a database with message queuing, caching, and stream computing features together, it is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software. It makes the system architecture much simpler and more robust.
+- **SQL Support**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to handle time-series data better, and supporting convenient and flexible schemaless data ingestion.
 
-- **Powerful Data Analysis**: Whether it is 10 years or one minute ago, data can be queried just by specifying the time range. Data can be aggregated over time, multiple time streams or both. Ad Hoc queries or analyses can be executed via TDengine shell, Python, R or Matlab.
+- **All in One**: TDengine has built-in caching, stream processing and data subscription functions, so it is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software in some scenarios. It makes the system architecture much simpler and easier to maintain.
 
-- **Seamless Integration with Other Tools**: Telegraf, Grafana, Matlab, R, and other tools can be integrated with TDengine without a line of code. MQTT, OPC, Hadoop, Spark, and many others will be integrated soon.
+- **Seamless Integration**: Without a single line of code, TDengine provides seamless integration with third-party tools such as Telegraf, Grafana, EMQ X, Prometheus, StatsD, collectd, etc. More will be integrated.
+
+- **Zero Management**: Installation and cluster setup can be done in seconds. Data partitioning and sharding are executed automatically. TDengine’s running status can be monitored via Grafana or other DevOps tools.
 
-- **Zero Management, No Learning Curve**: It takes only seconds to download, install, and run it successfully; there are no other dependencies. Automatic partitioning on tables or DBs. Standard SQL is used, with C/C++, Python, JDBC, Go and RESTful connectors.
+- **Zero Learning Cost**: With SQL as the query language and connectors for popular languages such as Python, Java, C/C++, Go, Rust and Node.js, there is zero learning cost.
+
+- **Interactive Console**: TDengine provides convenient console access to the database to run ad hoc queries, maintain the database, or manage the cluster without any programming.
+
+TDengine can be widely applied to Internet of Things (IoT), Connected Vehicles, Industrial IoT, DevOps, energy, finance and many other scenarios.
 
 # Documentation
diff --git a/TDenginelogo.png b/TDenginelogo.png
index 19a92592d7e8871778f5f3a6edd6314260d62551..50a7afc1749ae9fef9cac5110700908ca1173432 100644
Binary files a/TDenginelogo.png and b/TDenginelogo.png differ
diff --git a/cmake/install.inc b/cmake/install.inc
index b1cf7b3f9dc3fd0e559a65fd4a04eeb780b164fb..8124929746f42462fd29865814cc2d661a3019f9 100755
--- a/cmake/install.inc
+++ b/cmake/install.inc
@@ -9,11 +9,12 @@ ELSEIF (TD_WINDOWS)
   INSTALL(DIRECTORY ${TD_COMMUNITY_DIR}/src/connector/nodejs DESTINATION connector)
   INSTALL(DIRECTORY ${TD_COMMUNITY_DIR}/src/connector/python DESTINATION connector)
   INSTALL(DIRECTORY ${TD_COMMUNITY_DIR}/src/connector/C\# DESTINATION connector)
-  INSTALL(DIRECTORY ${TD_COMMUNITY_DIR}/tests/examples DESTINATION .)
+  INSTALL(DIRECTORY ${TD_COMMUNITY_DIR}/examples DESTINATION .)
   INSTALL(FILES ${TD_COMMUNITY_DIR}/packaging/cfg/taos.cfg DESTINATION cfg)
   INSTALL(FILES ${TD_COMMUNITY_DIR}/src/inc/taos.h DESTINATION include)
   INSTALL(FILES ${TD_COMMUNITY_DIR}/src/inc/taoserror.h DESTINATION include)
   INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos.lib DESTINATION driver)
+  INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos_static.lib DESTINATION driver)
   INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos.exp DESTINATION driver)
   INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos.dll DESTINATION driver)
   INSTALL(FILES ${EXECUTABLE_OUTPUT_PATH}/taos.exe DESTINATION .)
diff --git a/documentation20/cn/00.index/docs.md b/documentation20/cn/00.index/docs.md
index 27730e42054f7421d7b2b55fea4ec9162d7bca9e..463e59d27fcfd944bfef751d427a85bdea8e5045 100644
--- a/documentation20/cn/00.index/docs.md
+++ b/documentation20/cn/00.index/docs.md
@@ -1,166 +1,166 @@
-# TDengine文档
+# TDengine 文档
 
-TDengine是一个高效的存储、查询、分析时序大数据的平台,专为物联网、车联网、工业互联网、运维监测等优化而设计。您可以像使用关系型数据库MySQL一样来使用它,但建议您在使用前仔细阅读一遍下面的文档,特别是 [数据模型](/architecture) 与 [数据建模](/model)。除本文档之外,欢迎 [下载产品白皮书](https://www.taosdata.com/downloads/TDengine%20White%20Paper.pdf)。
+TDengine 是一个高效的存储、查询、分析时序大数据的平台,专为物联网、车联网、工业互联网、运维监测等优化而设计。您可以像使用关系型数据库 MySQL 一样来使用它,但建议您在使用前仔细阅读一遍下面的文档,特别是 [数据模型](/architecture) 与 [数据建模](/model)。除本文档之外,欢迎 [下载产品白皮书](https://www.taosdata.com/downloads/TDengine%20White%20Paper.pdf)。
 
-## [TDengine介绍](/evaluation)
+## [TDengine 介绍](/evaluation)
 
-* [TDengine 简介及特色](/evaluation#intro)
-* [TDengine 适用场景](/evaluation#scenes)
-* [TDengine 性能指标介绍和验证方法](/evaluation#)
+- [TDengine 简介及特色](/evaluation#intro)
+- [TDengine 适用场景](/evaluation#scenes)
+- [TDengine 性能指标介绍和验证方法](/evaluation#)
 
 ## [立即开始](/getting-started)
 
-* [快捷安装](/getting-started#install):可通过源码、安装包或docker安装,三秒钟搞定
-* [轻松启动](/getting-started#start):使用systemctl 启停TDengine
-* [命令行程序TAOS](/getting-started#console):访问TDengine的简便方式
-* [极速体验](/getting-started#demo):运行示例程序,快速体验高效的数据插入、查询
-* [支持平台列表](/getting-started#platforms):TDengine服务器和客户端支持的平台列表
-* [Kubernetes部署](https://taosdata.github.io/TDengine-Operator/zh/index.html):TDengine在Kubernetes环境进行部署的详细说明
+- [快捷安装](/getting-started#install):可通过源码、安装包或 Docker 安装,三秒钟搞定
+- [轻松启动](/getting-started#start):使用 systemctl 启停 TDengine
+- [命令行程序 TAOS](/getting-started#console):访问 TDengine 的简便方式
+- [极速体验](/getting-started#demo):运行示例程序,快速体验高效的数据插入、查询
+- [支持平台列表](/getting-started#platforms):TDengine 服务器和客户端支持的平台列表
+- [Kubernetes 部署](https://taosdata.github.io/TDengine-Operator/zh/index.html):TDengine 在 Kubernetes 环境进行部署的详细说明
 
 ## [整体架构](/architecture)
 
-* [数据模型](/architecture#model):关系型数据库模型,但要求每个采集点单独建表
-* [集群与基本逻辑单元](/architecture#cluster):吸取NoSQL优点,支持水平扩展,支持高可靠
-* [存储模型与数据分区、分片](/architecture#sharding):标签数据与时序数据完全分离,按vnode和时间两个维度对数据切分
-* [数据写入与复制流程](/architecture#replication):先写入WAL、之后写入缓存,再给应用确认,支持多副本
-* [缓存与持久化](/architecture#persistence):最新数据缓存在内存中,但落盘时采用列式存储、超高压缩比
-* [数据查询](/architecture#query):支持各种函数、时间轴聚合、插值、多表聚合
+- [数据模型](/architecture#model):关系型数据库模型,但要求每个采集点单独建表
+- [集群与基本逻辑单元](/architecture#cluster):吸取 NoSQL 优点,支持水平扩展,支持高可靠
+- [存储模型与数据分区、分片](/architecture#sharding):标签数据与时序数据完全分离,按 VNode 和时间两个维度对数据切分
+- [数据写入与复制流程](/architecture#replication):先写入 WAL、之后写入缓存,再给应用确认,支持多副本
+- [缓存与持久化](/architecture#persistence):最新数据缓存在内存中,但落盘时采用列式存储、超高压缩比
+- [数据查询](/architecture#query):支持各种函数、时间轴聚合、插值、多表聚合
 
 ## [数据建模](/model)
 
-* [创建库](/model#create-db):为具有相似数据特征的数据采集点创建一个库
-* [创建超级表](/model#create-stable):为同一类型的数据采集点创建一个超级表
-* [创建表](/model#create-table):使用超级表做模板,为每一个具体的数据采集点单独建表
+- [创建库](/model#create-db):为具有相似数据特征的数据采集点创建一个库
+- [创建超级表](/model#create-stable):为同一类型的数据采集点创建一个超级表
+- [创建表](/model#create-table):使用超级表做模板,为每一个具体的数据采集点单独建表
 
 ## [TAOS SQL](/taos-sql)
 
-* [支持的数据类型](/taos-sql#data-type):支持时间戳、整型、浮点型、布尔型、字符型等多种数据类型
-* [数据库管理](/taos-sql#management):添加、删除、查看数据库
-* [表管理](/taos-sql#table):添加、删除、查看、修改表
-* [超级表管理](/taos-sql#super-table):添加、删除、查看、修改超级表
-* [标签管理](/taos-sql#tags):增加、删除、修改标签
-* [数据写入](/taos-sql#insert):支持单表单条、多条、多表多条写入,支持历史数据写入
-* [数据查询](/taos-sql#select):支持时间段、值过滤、排序、嵌套查询、UINON、JOIN、查询结果手动分页等
-* [SQL函数](/taos-sql#functions):支持各种聚合函数、选择函数、计算函数,如avg, min, diff等
-* [窗口切分聚合](/taos-sql#aggregation):将表中数据按照时间段等方式进行切割后聚合,降维处理
-* [边界限制](/taos-sql#limitation):库、表、SQL等边界限制条件
-* [UDF](/taos-sql/udf):用户定义函数的创建和管理方法
-* [错误码](/taos-sql/error-code):TDengine 2.0 错误码以及对应的十进制码
+- [支持的数据类型](/taos-sql#data-type):支持时间戳、整型、浮点型、布尔型、字符型等多种数据类型
+- [数据库管理](/taos-sql#management):添加、删除、查看数据库
+- [表管理](/taos-sql#table):添加、删除、查看、修改表
+- [超级表管理](/taos-sql#super-table):添加、删除、查看、修改超级表
+- [标签管理](/taos-sql#tags):增加、删除、修改标签
+- [数据写入](/taos-sql#insert):支持单表单条、多条、多表多条写入,支持历史数据写入
+- [数据查询](/taos-sql#select):支持时间段、值过滤、排序、嵌套查询、Union、Join、查询结果手动分页等
+- [SQL 函数](/taos-sql#functions):支持各种聚合函数、选择函数、计算函数,如 AVG, MIN, DIFF 等
+- [窗口切分聚合](/taos-sql#aggregation):将表中数据按照时间段等方式进行切割后聚合,降维处理
+- [边界限制](/taos-sql#limitation):库、表、SQL 等边界限制条件
+- [UDF](/taos-sql/udf):用户定义函数的创建和管理方法
+- [错误码](/taos-sql/error-code):TDengine 2.0 错误码以及对应的十进制码
 
 ## [高效写入数据](/insert)
 
-* [SQL 写入](/insert#sql):使用SQL insert命令向一张或多张表写入单条或多条记录
-* [Schemaless 写入](/insert#schemaless):免于预先建表,将数据直接写入时自动维护元数据结构
-* [Prometheus 写入](/insert#prometheus):配置Prometheus, 不用任何代码,将数据直接写入
-* [Telegraf 写入](/insert#telegraf):配置Telegraf, 不用任何代码,将采集数据直接写入
-* [collectd 直接写入](/insert#collectd):配置 collectd,不用任何代码,将采集数据直接写入
-* [StatsD 直接写入](/insert#statsd):配置 StatsD,不用任何代码,将采集数据直接写入
-* [EMQ X Broker](/insert#emq):配置EMQ X,不用任何代码,就可将MQTT数据直接写入
-* [HiveMQ Broker](/insert#hivemq):配置HiveMQ,不用任何代码,就可将MQTT数据直接写入
+- [SQL 写入](/insert#sql):使用 SQL INSERT 命令向一张或多张表写入单条或多条记录
+- [Schemaless 写入](/insert#schemaless):免于预先建表,将数据直接写入时自动维护元数据结构
+- [Prometheus 写入](/insert#prometheus):配置 Prometheus, 不用任何代码,将数据直接写入
+- [Telegraf 写入](/insert#telegraf):配置 Telegraf, 不用任何代码,将采集数据直接写入
+- [collectd 直接写入](/insert#collectd):配置 collectd,不用任何代码,将采集数据直接写入
+- [StatsD 直接写入](/insert#statsd):配置 StatsD,不用任何代码,将采集数据直接写入
+- [EMQX Broker](/insert#emq):配置 EMQX,不用任何代码,就可将 MQTT 数据直接写入
+- [HiveMQ Broker](/insert#hivemq):配置 HiveMQ,不用任何代码,就可将 MQTT 数据直接写入
 
 ## [高效查询数据](/queries)
 
-* [主要查询功能](/queries#queries):支持各种标准函数,设置过滤条件,时间段查询
-* [多表聚合查询](/queries#aggregation):使用超级表,设置标签过滤条件,进行高效聚合查询
-* [降采样查询值](/queries#sampling):按时间段分段聚合,支持插值
+- [主要查询功能](/queries#queries):支持各种标准函数,设置过滤条件,时间段查询
+- [多表聚合查询](/queries#aggregation):使用超级表,设置标签过滤条件,进行高效聚合查询
+- [降采样查询值](/queries#sampling):按时间段分段聚合,支持插值
 
 ## [高级功能](/advanced-features)
 
-* [连续查询(Continuous Query)](/advanced-features#continuous-query):基于滑动窗口,定时自动的对数据流进行查询计算
-* [数据订阅(Publisher/Subscriber)](/advanced-features#subscribe):类似典型的消息队列,应用可订阅接收到的最新数据
-* [缓存(Cache)](/advanced-features#cache):每个设备最新的数据都会缓存在内存中,可快速获取
+- [连续查询(Continuous Query)](/advanced-features#continuous-query):基于滑动窗口,定时自动的对数据流进行查询计算
+- [数据订阅(Publisher/Subscriber)](/advanced-features#subscribe):类似典型的消息队列,应用可订阅接收到的最新数据
+- [缓存(Cache)](/advanced-features#cache):每个设备最新的数据都会缓存在内存中,可快速获取
 
 ## [连接器](/connector)
 
-* [C/C++ Connector](/connector#c-cpp):通过libtaos客户端的库,连接TDengine服务器的主要方法
-* [Java Connector(JDBC)](/connector/java):通过标准的JDBC API,给Java应用提供到TDengine的连接
-* [Python Connector](/connector#python):给Python应用提供一个连接TDengine服务器的驱动
-* [RESTful Connector](/connector#restful):提供一最简单的连接TDengine服务器的方式
-* [Go Connector](/connector#go):给Go应用提供一个连接TDengine服务器的驱动
-* [Node.js Connector](/connector#nodejs):给node应用提供一个连接TDengine服务器的驱动
-* [C# Connector](/connector#csharp):给C#应用提供一个连接TDengine服务器的驱动
-* [Windows客户端](https://www.taosdata.com/blog/2019/07/26/514.html):自行编译windows客户端,Windows环境的各种连接器都需要它
-* [Rust Connector](/connector/rust): Rust语言下通过libtaos客户端或RESTful接口,连接TDengine服务器。
+- [C/C++ Connector](/connector#c-cpp):通过 libtaos 客户端的库,连接 TDengine 服务器的主要方法
+- [Java Connector(JDBC)](/connector/java):通过标准的 JDBC API,给 Java 应用提供到 TDengine 的连接
+- [Python Connector](/connector#python):给 Python 应用提供一个连接 TDengine 服务器的驱动
+- [RESTful Connector](/connector#restful):提供一种最简单的连接 TDengine 服务器的方式
+- [Go Connector](/connector#go):给 Go 应用提供一个连接 TDengine 服务器的驱动
+- [Node.js Connector](/connector#nodejs):给 Node.js 应用提供一个连接 TDengine 服务器的驱动
+- [C# Connector](/connector#csharp):给 C# 应用提供一个连接 TDengine 服务器的驱动
+- [Windows 客户端](https://www.taosdata.com/blog/2019/07/26/514.html):自行编译 Windows 客户端,Windows 环境的各种连接器都需要它
+- [Rust Connector](/connector/rust): Rust 语言下通过 libtaos 客户端或 RESTful 接口,连接 TDengine 服务器。
 
 ## TDengine 组件与工具
 
-* [taosAdapter](/tools/adapter): TDengine 集群和应用之间的 RESTful 接口适配服务。
-* [TDinsight](/tools/insight): 监控 TDengine 集群的 Grafana 面板集合。
-* [taosdump](/tools/taosdump): TDengine 数据备份工具。使用 taosdump 请安装 taosTools。
-* [taosBenchmark](/tools/taosbenchmark): TDengine 压力测试工具。
-* [taosTools](/tools/taos-tools): taosTools 是用于 TDengine 的辅助工具软件集合。。
+- [taosAdapter](/tools/adapter): TDengine 集群和应用之间的 RESTful 接口适配服务。
+- [TDinsight](/tools/insight): 监控 TDengine 集群的 Grafana 面板集合。
+- [taosTools](/tools/taos-tools): taosTools 是用于 TDengine 的辅助工具软件集合。
+- [taosdump](/tools/taosdump): TDengine 数据备份工具。使用 taosdump 请安装 taosTools。
+- [taosBenchmark](/tools/taosbenchmark): TDengine 压力测试工具。
 
 ## [与其他工具的连接](/connections)
 
-* [Grafana](/connections#grafana):获取并可视化保存在TDengine的数据
-* [IDEA Database](https://www.taosdata.com/blog/2020/08/27/1767.html):通过IDEA 数据库管理工具可视化使用 TDengine
-* [TDengineGUI](https://github.com/skye0207/TDengineGUI):基于Electron开发的跨平台TDengine图形化管理工具
-* [DataX](https://www.taosdata.com/blog/2021/10/26/3156.html):支持 TDeninge 和其他数据库之间进行数据迁移的工具
-
-## [TDengine集群的安装、管理](/cluster)
-
-* [准备工作](/cluster#prepare):部署环境前的几点注意事项
-* [创建第一个节点](/cluster#node-one):与快捷安装完全一样,非常简单
-* [创建后续节点](/cluster#node-other):配置新节点的taos.cfg, 在现有集群添加新的节点
-* [节点管理](/cluster#management):增加、删除、查看集群的节点
-* [Vnode 的高可用性](/cluster#high-availability):通过多副本的机制来提供 Vnode 的高可用性
-* [Mnode 的管理](/cluster#mnode):系统自动创建、无需任何人工干预
-* [负载均衡](/cluster#load-balancing):一旦节点个数或负载有变化,自动进行
-* [节点离线处理](/cluster#offline):节点离线超过一定时长,将从集群中剔除
-* [Arbitrator](/cluster#arbitrator):对于偶数个副本的情形,使用它可以防止split brain
-
-## [TDengine的运营和维护](/administrator)
-
-* [容量规划](/administrator#planning):根据场景,估算硬件资源
-* [容错和灾备](/administrator#tolerance):设置正确的WAL和数据副本数
-* [系统配置](/administrator#config):端口,缓存大小,文件块大小和其他系统配置
-* [用户管理](/administrator#user):添加、删除TDengine用户,修改用户密码
-* [数据导入](/administrator#import):可按脚本文件导入,也可按数据文件导入
-* [数据导出](/administrator#export):从shell按表导出,也可用taosdump工具做各种导出
-* [系统连接、任务查询管理](/administrator#status):检查系统现有的连接、查询、流式计算,日志和事件等
-* [系统监控](/administrator#monitoring):系统监控,使用TDinsight进行集群监控等
-* [性能优化](/administrator#optimize):对长期运行的系统进行维护优化,保障性能表现
-* [文件目录结构](/administrator#directories):TDengine数据文件、配置文件等所在目录
-* [参数限制与保留关键字](/administrator#keywords):TDengine的参数限制与保留关键字列表
-
-## TDengine的技术设计
-
-* [系统模块](/architecture/taosd):taosd的功能和模块划分
-* [数据复制](/architecture/replica):支持实时同步、异步复制,保证系统的High Availibility
-* [技术博客](https://www.taosdata.com/cn/blog/?categories=3):更多的技术分析和架构设计文章
+- [Grafana](/connections#grafana):获取并可视化保存在 TDengine 的数据
+- [IDEA Database](https://www.taosdata.com/blog/2020/08/27/1767.html):通过 IDEA 数据库管理工具可视化使用 TDengine
+- [TDengineGUI](https://github.com/skye0207/TDengineGUI):基于 Electron 开发的跨平台 TDengine 图形化管理工具
+- [DataX](https://www.taosdata.com/blog/2021/10/26/3156.html):支持 TDengine 和其他数据库之间进行数据迁移的工具
+
+## [TDengine 集群的安装、管理](/cluster)
+
+- [准备工作](/cluster#prepare):部署环境前的几点注意事项
+- [创建第一个节点](/cluster#node-one):与快捷安装完全一样,非常简单
+- [创建后续节点](/cluster#node-other):配置新节点的 taos.cfg, 在现有集群添加新的节点
+- [节点管理](/cluster#management):增加、删除、查看集群的节点
+- [VNode 的高可用性](/cluster#high-availability):通过多副本的机制来提供 VNode 的高可用性
+- [MNode 的管理](/cluster#mnode):系统自动创建、无需任何人工干预
+- [负载均衡](/cluster#load-balancing):一旦节点个数或负载有变化,自动进行
+- [节点离线处理](/cluster#offline):节点离线超过一定时长,将从集群中剔除
+- [Arbitrator](/cluster#arbitrator):对于偶数个副本的情形,使用它可以防止脑裂(Split-brain)问题
+
+## [TDengine 的运营和维护](/administrator)
+
+- [容量规划](/administrator#planning):根据场景,估算硬件资源
+- [容错和灾备](/administrator#tolerance):设置正确的 WAL 和数据副本数
+- [系统配置](/administrator#config):端口,缓存大小,文件块大小和其他系统配置
+- [用户管理](/administrator#user):添加、删除 TDengine 用户,修改用户密码
+- [数据导入](/administrator#import):可按脚本文件导入,也可按数据文件导入
+- [数据导出](/administrator#export):从 Shell 按表导出,也可用 taosdump 工具做各种导出
+- [系统连接、任务查询管理](/administrator#status):检查系统现有的连接、查询、流式计算,日志和事件等
+- [系统监控](/administrator#monitoring):系统监控,使用 TDinsight 进行集群监控等
+- [性能优化](/administrator#optimize):对长期运行的系统进行维护优化,保障性能表现
+- [文件目录结构](/administrator#directories):TDengine 数据文件、配置文件等所在目录
+- [参数限制与保留关键字](/administrator#keywords):TDengine 的参数限制与保留关键字列表
+
+## TDengine 的技术设计
+
+- [系统模块](/architecture/taosd):taosd 的功能和模块划分
+- [数据复制](/architecture/replica):支持实时同步、异步复制,保证系统的高可用性
+- [技术博客](https://www.taosdata.com/cn/blog/?categories=3):更多的技术分析和架构设计文章
 
 ## 应用 TDengine 快速搭建 IT 运维系统
 
-* [devops](/devops/telegraf):使用 TDengine + Telegraf + Grafana 快速搭建 IT 运维系统
-* [devops](/devops/collectd):使用 TDengine + collectd_statsd + Grafana 快速搭建 IT 运维系统
-* [最佳实践](/devops/immigrate):OpenTSDB 应用迁移到 TDengine 的最佳实践
+- [DevOps](/devops/telegraf):使用 TDengine + Telegraf + Grafana 快速搭建 IT 运维系统
+- [DevOps](/devops/collectd):使用 TDengine + collectd/StatsD + Grafana 快速搭建 IT 运维系统
+- [最佳实践](/devops/immigrate):OpenTSDB 应用迁移到 TDengine 的最佳实践
 
-## TDengine与其他数据库的对比测试
+## TDengine 与其他数据库的对比测试
 
-* [用InfluxDB开源的性能测试工具对比InfluxDB和TDengine](https://www.taosdata.com/blog/2020/01/13/1105.html)
-* [TDengine与OpenTSDB对比测试](https://www.taosdata.com/blog/2019/08/21/621.html)
-* [TDengine与Cassandra对比测试](https://www.taosdata.com/blog/2019/08/14/573.html)
-* [TDengine与InfluxDB对比测试](https://www.taosdata.com/blog/2019/07/19/419.html)
-* [TDengine与InfluxDB、OpenTSDB、Cassandra、MySQL、ClickHouse等数据库的对比测试报告](https://www.taosdata.com/downloads/TDengine_Testing_Report_cn.pdf)
+- [用 InfluxDB 开源的性能测试工具对比 InfluxDB 和 TDengine](https://www.taosdata.com/blog/2020/01/13/1105.html)
+- [TDengine 与 OpenTSDB 对比测试](https://www.taosdata.com/blog/2019/08/21/621.html)
+- [TDengine 与 Cassandra 对比测试](https://www.taosdata.com/blog/2019/08/14/573.html)
+- [TDengine 与 InfluxDB 对比测试](https://www.taosdata.com/blog/2019/07/19/419.html)
+- [TDengine 与 InfluxDB、OpenTSDB、Cassandra、MySQL、ClickHouse 等数据库的对比测试报告](https://www.taosdata.com/downloads/TDengine_Testing_Report_cn.pdf)
 
 ## 物联网大数据
 
-* [物联网、工业互联网大数据的特点](https://www.taosdata.com/blog/2019/07/09/105.html)
-* [物联网大数据平台应具备的功能和特点](https://www.taosdata.com/blog/2019/07/29/542.html)
-* [通用大数据架构为什么不适合处理物联网数据?](https://www.taosdata.com/blog/2019/07/09/107.html)
-* [物联网、车联网、工业互联网大数据平台,为什么推荐使用TDengine?](https://www.taosdata.com/blog/2019/07/09/109.html)
-
-## 培训和FAQ
-
-* [FAQ:常见问题与答案](/faq)
-* [技术公开课:开源、高效的物联网大数据平台,TDengine内核技术剖析](https://www.taosdata.com/blog/2020/12/25/2126.html)
-* [TDengine视频教程-快速上手](https://www.taosdata.com/blog/2020/11/11/1941.html)
-* [TDengine视频教程-数据建模](https://www.taosdata.com/blog/2020/11/11/1945.html)
-* [TDengine视频教程-集群搭建](https://www.taosdata.com/blog/2020/11/11/1961.html)
-* [TDengine视频教程-Go Connector](https://www.taosdata.com/blog/2020/11/11/1951.html)
-* [TDengine视频教程-JDBC Connector](https://www.taosdata.com/blog/2020/11/11/1955.html)
-* [TDengine视频教程-NodeJS Connector](https://www.taosdata.com/blog/2020/11/11/1957.html)
-* [TDengine视频教程-Python Connector](https://www.taosdata.com/blog/2020/11/11/1963.html)
-* [TDengine视频教程-RESTful Connector](https://www.taosdata.com/blog/2020/11/11/1965.html)
-* [TDengine视频教程-“零”代码运维监控](https://www.taosdata.com/blog/2020/11/11/1959.html)
-* [应用案例:一些使用实例来解释如何使用TDengine](https://www.taosdata.com/cn/blog/?categories=4)
+- [物联网、工业互联网大数据的特点](https://www.taosdata.com/blog/2019/07/09/105.html)
+- [物联网大数据平台应具备的功能和特点](https://www.taosdata.com/blog/2019/07/29/542.html)
+- [通用大数据架构为什么不适合处理物联网数据?](https://www.taosdata.com/blog/2019/07/09/107.html)
+- [物联网、车联网、工业互联网大数据平台,为什么推荐使用 TDengine?](https://www.taosdata.com/blog/2019/07/09/109.html)
+
+## 培训和 FAQ
+
+- [FAQ:常见问题与答案](/faq)
+- [技术公开课:开源、高效的物联网大数据平台,TDengine 内核技术剖析](https://www.taosdata.com/blog/2020/12/25/2126.html)
+- [TDengine 视频教程 - 快速上手](https://www.taosdata.com/blog/2020/11/11/1941.html)
+- [TDengine 视频教程 - 数据建模](https://www.taosdata.com/blog/2020/11/11/1945.html)
+- [TDengine 视频教程 - 集群搭建](https://www.taosdata.com/blog/2020/11/11/1961.html)
+- [TDengine 视频教程 - Go Connector](https://www.taosdata.com/blog/2020/11/11/1951.html)
+- [TDengine 视频教程 - JDBC Connector](https://www.taosdata.com/blog/2020/11/11/1955.html)
+- [TDengine 视频教程 - Node.js Connector](https://www.taosdata.com/blog/2020/11/11/1957.html)
+- [TDengine 视频教程 - Python Connector](https://www.taosdata.com/blog/2020/11/11/1963.html)
+- [TDengine 视频教程 - RESTful Connector](https://www.taosdata.com/blog/2020/11/11/1965.html)
+- [TDengine 视频教程 - “零”代码运维监控](https://www.taosdata.com/blog/2020/11/11/1959.html)
+- [应用案例:一些使用实例来解释如何使用 TDengine](https://www.taosdata.com/cn/blog/?categories=4)
diff --git a/documentation20/cn/02.getting-started/03.install/docs.md b/documentation20/cn/02.getting-started/03.install/docs.md
index 208271bf54f87c4fbb4d681653cbf53dd6e318c7..aebe2fd38b070069a423c8934a665b8f818e1072 100644
--- a/documentation20/cn/02.getting-started/03.install/docs.md
+++ b/documentation20/cn/02.getting-started/03.install/docs.md
@@ -11,24 +11,27 @@ TDengine 开源版本提供 deb 和 rpm 格式安装包,用户可以根据自
 
 2、进入到TDengine-server-2.0.0.0-Linux-x64.deb安装包所在目录,执行如下的安装命令:
 
 ```
-plum@ubuntu:~/git/taosv16$ sudo dpkg -i TDengine-server-2.0.0.0-Linux-x64.deb
-
-Selecting previously unselected package tdengine.
-(Reading database ... 233181 files and directories currently installed.)
-Preparing to unpack TDengine-server-2.0.0.0-Linux-x64.deb ...
-Failed to stop taosd.service: Unit taosd.service not loaded.
-Stop taosd service success!
-Unpacking tdengine (2.0.0.0) ...
-Setting up tdengine (2.0.0.0) ...
-Start to install TDEngine...
-Synchronizing state of taosd.service with SysV init with /lib/systemd/systemd-sysv-install...
-Executing /lib/systemd/systemd-sysv-install enable taosd
-insserv: warning: current start runlevel(s) (empty) of script `taosd' overrides LSB defaults (2 3 4 5).
-insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `taosd' overrides LSB defaults (0 1 6).
-Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :
+$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb
+(Reading database ... 137504 files and directories currently installed.)
+Preparing to unpack TDengine-server-2.4.0.7-Linux-x64.deb ...
+TDengine is removed successfully!
+Unpacking tdengine (2.4.0.7) over (2.4.0.7) ...
+Setting up tdengine (2.4.0.7) ...
+Start to install TDengine...
+ +System hostname is: shuduo-1804 + +Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join +OR leave it blank to build one: + +Enter your email address for priority support or enter empty to skip: +Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service. + To configure TDengine : edit /etc/taos/taos.cfg To start TDengine : sudo systemctl start taosd -To access TDengine : use taos in shell +To access TDengine : taos -h shuduo-1804 to login into TDengine server + + TDengine is installed successfully! ``` @@ -41,10 +44,10 @@ TDengine is installed successfully! 卸载命令如下: ``` - plum@ubuntu:~/git/tdengine/debs$ sudo dpkg -r tdengine - (Reading database ... 233482 files and directories currently installed.) - Removing tdengine (2.0.0.0) ... - TDEngine is removed successfully! +$ sudo dpkg -r tdengine +(Reading database ... 137504 files and directories currently installed.) +Removing tdengine (2.4.0.7) ... +TDengine is removed successfully! ``` ## rpm包的安装和卸载 @@ -55,16 +58,27 @@ TDengine is installed successfully! 2、进入到TDengine-server-2.0.0.0-Linux-x64.rpm安装包所在目录,执行如下的安装命令: ``` - [root@bogon x86_64]# rpm -iv TDengine-server-2.0.0.0-Linux-x64.rpm - Preparing packages... - TDengine-2.0.0.0-3.x86_64 - Start to install TDEngine... - Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service. - Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one : - To configure TDengine : edit /etc/taos/taos.cfg - To start TDengine : sudo systemctl start taosd - To access TDengine : use taos in shell - TDengine is installed successfully! +$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm +Preparing... ################################# [100%] +Updating / installing... + 1:tdengine-2.4.0.7-3 ################################# [100%] +Start to install TDengine... 
+ +System hostname is: centos7 + +Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join +OR leave it blank to build one: + +Enter your email address for priority support or enter empty to skip: + +Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service. + +To configure TDengine : edit /etc/taos/taos.cfg +To start TDengine : sudo systemctl start taosd +To access TDengine : taos -h centos7 to login into TDengine server + + +TDengine is installed successfully! ``` ### 卸载 rpm @@ -72,8 +86,8 @@ TDengine is installed successfully! 卸载命令如下: ``` - [root@bogon x86_64]# rpm -e tdengine - TDEngine is removed successfully! +$ sudo rpm -e tdengine +TDengine is removed successfully! ``` ## tar.gz 格式安装包的安装和卸载 @@ -84,37 +98,47 @@ TDengine is installed successfully! 2、进入到TDengine-server-2.0.0.0-Linux-x64.tar.gz安装包所在目录,先解压文件后,进入子目录,执行其中的install.sh安装脚本: ``` - plum@ubuntu:~/git/tdengine/release$ sudo tar -xzvf TDengine-server-2.0.0.0-Linux-x64.tar.gz - plum@ubuntu:~/git/tdengine/release$ ll - total 3796 - drwxr-xr-x 3 root root 4096 Aug 9 14:20 ./ - drwxrwxr-x 11 plum plum 4096 Aug 8 11:03 ../ - drwxr-xr-x 5 root root 4096 Aug 8 11:03 TDengine-server/ - -rw-r--r-- 1 root root 3871844 Aug 8 11:03 TDengine-server-2.0.0.0-Linux-x64.tar.gz - plum@ubuntu:~/git/tdengine/release$ cd TDengine-server/ - plum@ubuntu:~/git/tdengine/release/TDengine-server$ ll - total 2640 - drwxr-xr-x 5 root root 4096 Aug 8 11:03 ./ - drwxr-xr-x 3 root root 4096 Aug 9 14:20 ../ - drwxr-xr-x 5 root root 4096 Aug 8 11:03 connector/ - drwxr-xr-x 2 root root 4096 Aug 8 11:03 driver/ - drwxr-xr-x 8 root root 4096 Aug 8 11:03 examples/ - -rwxr-xr-x 1 root root 13095 Aug 8 11:03 install.sh* - -rw-r--r-- 1 root root 2651954 Aug 8 11:03 taos.tar.gz - plum@ubuntu:~/git/tdengine/release/TDengine-server$ sudo ./install.sh - This is ubuntu system - verType=server interactiveFqdn=yes - Start to install TDengine... 
- Synchronizing state of taosd.service with SysV init with /lib/systemd/systemd-sysv-install... - Executing /lib/systemd/systemd-sysv-install enable taosd - insserv: warning: current start runlevel(s) (empty) of script `taosd' overrides LSB defaults (2 3 4 5). - insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `taosd' overrides LSB defaults (0 1 6). - Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :hostname.taosdata.com:7030 - To configure TDengine : edit /etc/taos/taos.cfg - To start TDengine : sudo systemctl start taosd - To access TDengine : use taos in shell - Please run: taos -h hostname.taosdata.com:7030 to login into cluster, then execute : create dnode 'newDnodeFQDN:port'; in TAOS shell to add this new node into the clsuter - TDengine is installed successfully! +$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz +TDengine-enterprise-server-2.4.0.7/ +TDengine-enterprise-server-2.4.0.7/driver/ +TDengine-enterprise-server-2.4.0.7/driver/vercomp.txt +TDengine-enterprise-server-2.4.0.7/driver/libtaos.so.2.4.0.7 +TDengine-enterprise-server-2.4.0.7/install.sh +TDengine-enterprise-server-2.4.0.7/examples/ +... + +$ ll +total 43816 +drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./ +drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../ +drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-2.4.0.7/ +-rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz + +$ cd TDengine-enterprise-server-2.4.0.7/ + + $ ll +total 40784 +drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 ./ +drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ../ +drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 22 09:30 driver/ +drwxrwxr-x 10 ubuntu ubuntu 4096 Feb 22 09:30 examples/ +-rwxrwxr-x 1 ubuntu ubuntu 33294 Feb 22 09:30 install.sh* +-rw-rw-r-- 1 ubuntu ubuntu 41704288 Feb 22 09:30 taos.tar.gz + +$ sudo ./install.sh + +Start to update TDengine... 
+Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service. +Nginx for TDengine is updated successfully! + +To configure TDengine : edit /etc/taos/taos.cfg +To configure Taos Adapter (if has) : edit /etc/taos/taosadapter.toml +To start TDengine : sudo systemctl start taosd +To access TDengine : use taos -h shuduo-1804 in shell OR from http://127.0.0.1:6060 + +TDengine is updated successfully! +Install taoskeeper as a standalone service +taoskeeper is installed, enable it by `systemctl enable taoskeeper` ``` 说明:install.sh 安装脚本在执行过程中,会通过命令行交互界面询问一些配置信息。如果希望采取无交互安装方式,那么可以用 -e no 参数来执行 install.sh 脚本。运行 ./install.sh -h 指令可以查看所有参数的详细说明信息。 @@ -124,8 +148,11 @@ TDengine is installed successfully! 卸载命令如下: ``` - plum@ubuntu:~/git/tdengine/release/TDengine-server$ rmtaos - TDEngine is removed successfully! +$ rmtaos +Nginx for TDengine is running, stopping it... +TDengine is removed successfully! + +taosKeeper is removed successfully! ``` ## 安装目录说明 @@ -133,19 +160,19 @@ TDengine is installed successfully! 
TDengine成功安装后,主安装目录是/usr/local/taos,目录内容如下:
 
 ```
- plum@ubuntu:/usr/local/taos$ cd /usr/local/taos
- plum@ubuntu:/usr/local/taos$ ll
- total 36
- drwxr-xr-x 9 root root 4096 7月 30 19:20 ./
- drwxr-xr-x 13 root root 4096 7月 30 19:20 ../
- drwxr-xr-x 2 root root 4096 7月 30 19:20 bin/
- drwxr-xr-x 2 root root 4096 7月 30 19:20 cfg/
- lrwxrwxrwx 1 root root 13 7月 30 19:20 data -> /var/lib/taos/
- drwxr-xr-x 2 root root 4096 7月 30 19:20 driver/
- drwxr-xr-x 8 root root 4096 7月 30 19:20 examples/
- drwxr-xr-x 2 root root 4096 7月 30 19:20 include/
- drwxr-xr-x 2 root root 4096 7月 30 19:20 init.d/
- lrwxrwxrwx 1 root root 13 7月 30 19:20 log -> /var/log/taos/
+$ cd /usr/local/taos
+$ ll
+total 28
+drwxr-xr-x 7 root root 4096 Feb 22 09:34 ./
+drwxr-xr-x 12 root root 4096 Feb 22 09:34 ../
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 bin/
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 cfg/
+lrwxrwxrwx 1 root root 13 Feb 22 09:34 data -> /var/lib/taos/
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 driver/
+drwxr-xr-x 10 root root 4096 Feb 22 09:34 examples/
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 include/
+lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
 ```
 
 - 自动生成配置文件目录、数据库目录、日志目录。
@@ -169,7 +196,7 @@ TDengine成功安装后,主安装目录是/usr/local/taos,目录内容如下
 
 - 对于deb包安装后,如果安装目录被手工误删了部分,出现卸载、或重新安装不能成功。此时,需要清除 tdengine 包的安装信息,执行如下命令:
 
 ```
- plum@ubuntu:~/git/tdengine/$ sudo rm -f /var/lib/dpkg/info/tdengine*
+$ sudo rm -f /var/lib/dpkg/info/tdengine*
 ```
 
 然后再重新进行安装就可以了。
@@ -177,7 +204,7 @@ TDengine成功安装后,主安装目录是/usr/local/taos,目录内容如下
 
 - 对于rpm包安装后,如果安装目录被手工误删了部分,出现卸载、或重新安装不能成功。此时,需要清除tdengine包的安装信息,执行如下命令:
 
 ```
- [root@bogon x86_64]# rpm -e --noscripts tdengine
+$ sudo rpm -e --noscripts tdengine
 ```
 
 然后再重新进行安装就可以了。
diff --git a/documentation20/cn/02.getting-started/docs.md b/documentation20/cn/02.getting-started/docs.md
index 6cf41c65eba732f64a40156a9917875bcb28bfd7..e9f25c9662638a3e16e8ad1eedbb3bd7ea712ad0 100644
--- a/documentation20/cn/02.getting-started/docs.md
+++ 
b/documentation20/cn/02.getting-started/docs.md @@ -2,7 +2,8 @@ ## 快捷安装 -TDengine 包括服务端、客户端和周边生态工具软件,目前 2.0 版服务端仅在 Linux 系统上安装和运行,后续将支持 Windows、Mac OS 等系统。客户端可以在 Windows 或 Linux 上安装和运行。在任何操作系统上的应用都可以使用 RESTful 接口连接服务端程序 taosd,其中 2.4 之后版本默认使用单独运行的独立组件 taosAdapter 提供 http 服务和更多数据写入方式。taosAdapter 需要手动启动。而之前版本 TDengine 使用内置 http 服务。 +TDengine 包括服务端、客户端和周边生态工具软件,目前 2.0 版服务端仅在 Linux 系统上安装和运行,后续将支持 Windows、Mac OS 等系统。客户端可以在 Windows 或 Linux 上安装和运行。在任何操作系统上的应用都可以使用 RESTful 接口连接服务端程序 taosd,其中 2.4 之后版本默认使用单独运行的独立组件 taosAdapter 提供 http 服务和更多数据写入方式。taosAdapter 需要手动启动。 +之前版本 TDengine 服务端,以及所有服务端lite版,均使用内置 http 服务。 TDengine 支持 X64/ARM64/MIPS64/Alpha64 硬件平台,后续将支持 ARM32、RISC-V 等 CPU 架构。 @@ -16,9 +17,18 @@ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengin 注:暂时不建议生产环境采用 Docker 来部署 TDengine 的客户端或服务端,但在开发环境下或初次尝试时,使用 Docker 方式部署是十分方便的。特别是,利用 Docker,可以方便地在 Mac OS X 和 Windows 环境下尝试 TDengine。 +从 2.4.0.10 开始,除taosd以外,docker镜像还包含:taos、taosAdapter、taosdump、taosBenchmark、TDinsight安装脚本和示例代码。启动docker容器时,将同时启动taosAdapter和taosd,实现对restful的支持。 + + ### 通过安装包安装 -TDengine 的安装非常简单,从下载到安装成功仅仅只要几秒钟。为方便使用,标准的服务端安装包包含了客户端程序和示例代码;如果您只需要用到服务端程序和客户端连接的 C/C++ 语言支持,也可以仅下载 lite 版本的安装包。在安装包格式上,我们提供 rpm 和 deb 格式,也为企业客户提供 tar.gz 格式安装包,以方便在特定操作系统上使用。发布版本包括稳定版和 Beta 版,Beta版含有更多新功能。正式上线或测试建议安装稳定版。您可以根据需要选择下载: +TDengine 的安装非常简单,从下载到安装成功仅仅只要几秒钟。 + +为方便使用,从 2.4.0.10 开始,标准的服务端安装包包含了taos、taosd、taosAdapter、taosdump、taosBenchmark、TDinsight安装脚本和示例代码;如果您只需要用到服务端程序和客户端连接的 C/C++ 语言支持,也可以仅下载 lite 版本的安装包。 + +在安装包格式上,我们提供tar.gz, rpm 和 deb 格式,为企业客户提供 tar.gz 格式安装包,以方便在特定操作系统上使用。需要注意的是,rpm和deb包不含taosdump、taosBenchmark和TDinsight安装脚本,这些工具需要通过安装taosTool包获得。 + +发布版本包括稳定版和 Beta 版,Beta版含有更多新功能。正式上线或测试建议安装稳定版。您可以根据需要选择下载: diff --git a/documentation20/cn/05.insert/docs.md b/documentation20/cn/05.insert/docs.md index ff22c1ae0dede9af739ced37ff7bb6dada6cf81e..537a578946b4e86fcf757688658441f03be8a81d 100644 --- a/documentation20/cn/05.insert/docs.md +++ b/documentation20/cn/05.insert/docs.md @@ -315,7 +315,7 @@ 
taosAdapter 相关配置参数请参考 taosadapter --help 命令输出以及相
 
 ## EMQ Broker 直接写入
 
-MQTT是流行的物联网数据传输协议,[EMQ](https://github.com/emqx/emqx)是一开源的MQTT Broker软件,无需任何代码,只需要在EMQ Dashboard里使用“规则”做简单配置,即可将MQTT的数据直接写入TDengine。EMQ X 支持通过 发送到 Web 服务的方式保存数据到 TDEngine,也在企业版上提供原生的 TDEngine 驱动实现直接保存。详细使用方法请参考 [EMQ 官方文档](https://docs.emqx.io/broker/latest/cn/rule/rule-example.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine)。
+MQTT是流行的物联网数据传输协议,[EMQ](https://github.com/emqx/emqx)是一款开源的 MQTT Broker 软件,无需任何代码,只需要在EMQ Dashboard里使用“规则”做简单配置,即可将MQTT的数据直接写入TDengine。EMQ X 支持通过发送到 Web 服务的方式保存数据到 TDengine,也在企业版上提供原生的 TDengine 驱动实现直接保存。详细使用方法请参考 [EMQ 官方文档](https://docs.emqx.com/zh/enterprise/v4.4/rule/backend_tdengine.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine)。
 
 ## HiveMQ Broker 直接写入
 
diff --git a/documentation20/cn/07.advanced-features/docs.md b/documentation20/cn/07.advanced-features/docs.md
index 36516cf31d969152178137c65777526c5027ce10..ef7ca882a4a0b2783eb80a4c2afc9ea54379a522 100644
--- a/documentation20/cn/07.advanced-features/docs.md
+++ b/documentation20/cn/07.advanced-features/docs.md
@@ -2,19 +2,19 @@
 
 ## 连续查询(Continuous Query)
 
-连续查询是TDengine定期自动执行的查询,采用滑动窗口的方式进行计算,是一种简化的时间驱动的流式计算。针对库中的表或超级表,TDengine可提供定期自动执行的连续查询,用户可让TDengine推送查询的结果,也可以将结果再写回到TDengine中。每次执行的查询是一个时间窗口,时间窗口随着时间流动向前滑动。在定义连续查询的时候需要指定时间窗口(time window, 参数interval)大小和每次前向增量时间(forward sliding times, 参数sliding)。
+连续查询是 TDengine 定期自动执行的查询,采用滑动窗口的方式进行计算,是一种简化的时间驱动的流式计算。针对库中的表或超级表,TDengine 可提供定期自动执行的连续查询,用户可让 TDengine 推送查询的结果,也可以将结果再写回到 TDengine 中。每次执行的查询是一个时间窗口,时间窗口随着时间流动向前滑动。在定义连续查询的时候需要指定时间窗口(time window, 参数interval)大小和每次前向增量时间(forward sliding times, 参数sliding)。
 
-TDengine的连续查询采用时间驱动模式,可以直接使用TAOS SQL进行定义,不需要额外的操作。使用连续查询,可以方便快捷地按照时间窗口生成结果,从而对原始采集数据进行降采样(down sampling)。用户通过TAOS SQL定义连续查询以后,TDengine自动在最后的一个完整的时间周期末端拉起查询,并将计算获得的结果推送给用户或者写回TDengine。
+TDengine 的连续查询采用时间驱动模式,可以直接使用 TAOS SQL 进行定义,不需要额外的操作。使用连续查询,可以方便快捷地按照时间窗口生成结果,从而对原始采集数据进行降采样(down sampling)。用户通过 TAOS SQL 定义连续查询以后,TDengine 
自动在最后的一个完整的时间周期末端拉起查询,并将计算获得的结果推送给用户或者写回 TDengine。 -TDengine提供的连续查询与普通流计算中的时间窗口计算具有以下区别: +TDengine 提供的连续查询与普通流计算中的时间窗口计算具有以下区别: -- 不同于流计算的实时反馈计算结果,连续查询只在时间窗口关闭以后才开始计算。例如时间周期是1天,那么当天的结果只会在23:59:59以后才会生成。 -- 如果有历史记录写入到已经计算完成的时间区间,连续查询并不会重新进行计算,也不会重新将结果推送给用户。对于写回TDengine的模式,也不会更新已经存在的计算结果。 -- 使用连续查询推送结果的模式,服务端并不缓存客户端计算状态,也不提供Exactly-Once的语意保证。如果用户的应用端崩溃,再次拉起的连续查询将只会从再次拉起的时间开始重新计算最近的一个完整的时间窗口。如果使用写回模式,TDengine可确保数据写回的有效性和连续性。 +- 不同于流计算的实时反馈计算结果,连续查询只在时间窗口关闭以后才开始计算。例如时间周期是 1 天,那么当天的结果只会在 23:59:59 以后才会生成。 +- 如果有历史记录写入到已经计算完成的时间区间,连续查询并不会重新进行计算,也不会重新将结果推送给用户。对于写回 TDengine 的模式,也不会更新已经存在的计算结果。 +- 使用连续查询推送结果的模式,服务端并不缓存客户端计算状态,也不提供 Exactly-Once 的语意保证。如果用户的应用端崩溃,再次拉起的连续查询将只会从再次拉起的时间开始重新计算最近的一个完整的时间窗口。如果使用写回模式,TDengine 可确保数据写回的有效性和连续性。 ### 使用连续查询 -下面以智能电表场景为例介绍连续查询的具体使用方法。假设我们通过下列SQL语句创建了超级表和子表: +下面以智能电表场景为例介绍连续查询的具体使用方法。假设我们通过下列 SQL 语句创建了超级表和子表: ```sql create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int); @@ -23,25 +23,25 @@ create table D1002 using meters tags ("Beijing.Haidian", 2); ... 
``` -我们已经知道,可以通过下面这条SQL语句以一分钟为时间窗口、30秒为前向增量统计这些电表的平均电压。 +我们已经知道,可以通过下面这条 SQL 语句以一分钟为时间窗口、30秒为前向增量统计这些电表的平均电压。 ```sql select avg(voltage) from meters interval(1m) sliding(30s); ``` -每次执行这条语句,都会重新计算所有数据。 如果需要每隔30秒执行一次来增量计算最近一分钟的数据,可以把上面的语句改进成下面的样子,每次使用不同的 `startTime` 并定期执行: +每次执行这条语句,都会重新计算所有数据。 如果需要每隔 30 秒执行一次来增量计算最近一分钟的数据,可以把上面的语句改进成下面的样子,每次使用不同的 `startTime` 并定期执行: ```sql select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s); ``` -这样做没有问题,但TDengine提供了更简单的方法,只要在最初的查询语句前面加上 `create table {tableName} as ` 就可以了,例如: +这样做没有问题,但 TDengine 提供了更简单的方法,只要在最初的查询语句前面加上 `create table {tableName} as` 就可以了,例如: ```sql create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s); ``` -会自动创建一个名为 `avg_vol` 的新表,然后每隔30秒,TDengine会增量执行 `as` 后面的 SQL 语句,并将查询结果写入这个表中,用户程序后续只要从 `avg_vol` 中查询数据即可。例如: +会自动创建一个名为 `avg_vol` 的新表,然后每隔 30 秒,TDengine 会增量执行 `as` 后面的 SQL 语句,并将查询结果写入这个表中,用户程序后续只要从 `avg_vol` 中查询数据即可。例如: ```mysql taos> select * from avg_vol; @@ -55,29 +55,27 @@ taos> select * from avg_vol; 需要注意,查询时间窗口的最小值是10毫秒,没有时间窗口范围的上限。 -此外,TDengine还支持用户指定连续查询的起止时间。如果不输入开始时间,连续查询将从第一条原始数据所在的时间窗口开始;如果没有输入结束时间,连续查询将永久运行;如果用户指定了结束时间,连续查询在系统时间达到指定的时间以后停止运行。比如使用下面的SQL创建的连续查询将运行一小时,之后会自动停止。 +此外,TDengine 还支持用户指定连续查询的起止时间。如果不输入开始时间,连续查询将从第一条原始数据所在的时间窗口开始;如果没有输入结束时间,连续查询将永久运行;如果用户指定了结束时间,连续查询在系统时间达到指定的时间以后停止运行。比如使用下面的SQL创建的连续查询将运行一小时,之后会自动停止。 ```mysql create table avg_vol as select avg(voltage) from meters where ts > now and ts <= now + 1h interval(1m) sliding(30s); ``` -需要说明的是,上面例子中的 `now` 是指创建连续查询的时间,而不是查询执行的时间,否则,查询就无法自动停止了。另外,为了尽量避免原始数据延迟写入导致的问题,TDengine中连续查询的计算有一定的延迟。也就是说,一个时间窗口过去后,TDengine并不会立即计算这个窗口的数据,所以要稍等一会(一般不会超过1分钟)才能查到计算结果。 - +需要说明的是,上面例子中的 `now` 是指创建连续查询的时间,而不是查询执行的时间,否则,查询就无法自动停止了。另外,为了尽量避免原始数据延迟写入导致的问题,TDengine 中连续查询的计算有一定的延迟。也就是说,一个时间窗口过去后,TDengine 并不会立即计算这个窗口的数据,所以要稍等一会(一般不会超过 1 分钟)才能查到计算结果。 ### 管理连续查询 用户可在控制台中通过 `show streams` 命令来查看系统中全部运行的连续查询,并可以通过 `kill stream` 命令杀掉对应的连续查询。后续版本会提供更细粒度和便捷的连续查询管理命令。 - ## 数据订阅(Publisher/Subscriber) 
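在进入数据订阅的介绍之前,先补充一个纯示意的例子:上文连续查询中 `interval(1m) sliding(30s)` 的窗口划分与求平均的语义,大致可以用下面这段与 TDengine 实现无关的 Python 代码来理解(窗口起点从首条数据的时间对齐只是演示用的假设,实际对齐规则以 TAOS SQL 文档为准):

```python
from datetime import datetime, timedelta

def sliding_avg(rows, interval, sliding):
    """rows 为按时间戳升序排列的 (ts, value) 列表;
    按 interval 大小、sliding 步长划分滑动窗口,
    输出每个窗口的 (窗口起点, 平均值)。"""
    if not rows:
        return []
    win_start, last_ts = rows[0][0], rows[-1][0]
    results = []
    while win_start <= last_ts:
        # 取落入 [win_start, win_start + interval) 的所有值
        vals = [v for ts, v in rows if win_start <= ts < win_start + interval]
        if vals:  # 没有数据的窗口不产生结果
            results.append((win_start, sum(vals) / len(vals)))
        win_start += sliding  # 窗口每次前移 sliding
    return results

# 四条电压采样,间隔 30 秒
t0 = datetime(2020, 8, 15, 12, 0, 0)
rows = [(t0 + timedelta(seconds=30 * i), 220.0 + i) for i in range(4)]
for win, avg in sliding_avg(rows, timedelta(minutes=1), timedelta(seconds=30)):
    print(win.time(), round(avg, 1))
```

与真实的连续查询一样,相邻窗口会部分重叠(interval 大于 sliding 时),同一条数据可能参与多个窗口的计算;TDengine 实际执行时还会考虑写入延迟等因素。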
-基于数据天然的时间序列特性,TDengine的数据写入(insert)与消息系统的数据发布(pub)逻辑上一致,均可视为系统中插入一条带时间戳的新记录。同时,TDengine在内部严格按照数据时间序列单调递增的方式保存数据。本质上来说,TDengine中里每一张表均可视为一个标准的消息队列。
+基于数据天然的时间序列特性,TDengine 的数据写入(insert)与消息系统的数据发布(pub)逻辑上一致,均可视为系统中插入一条带时间戳的新记录。同时,TDengine 在内部严格按照数据时间序列单调递增的方式保存数据。本质上来说,TDengine 中每一张表均可视为一个标准的消息队列。
 
-TDengine内嵌支持轻量级的消息订阅与推送服务。使用系统提供的API,用户可使用普通查询语句订阅数据库中的一张或多张表。订阅的逻辑和操作状态的维护均是由客户端完成,客户端定时轮询服务器是否有新的记录到达,有新的记录到达就会将结果反馈到客户。
+TDengine 内嵌支持轻量级的消息订阅与推送服务。使用系统提供的 API,用户可使用普通查询语句订阅数据库中的一张或多张表。订阅的逻辑和操作状态的维护均是由客户端完成,客户端定时轮询服务器是否有新的记录到达,有新的记录到达就会将结果反馈到客户端。
 
-TDengine的订阅与推送服务的状态是客户端维持,TDengine服务器并不维持。因此如果应用重启,从哪个时间点开始获取最新数据,由应用决定。
+TDengine 的订阅与推送服务的状态是客户端维持,TDengine 服务器并不维持。因此如果应用重启,从哪个时间点开始获取最新数据,由应用决定。
 
-TDengine的API中,与订阅相关的主要有以下三个:
+TDengine 的 API 中,与订阅相关的主要有以下三个:
 
 ```c
 taos_subscribe
@@ -85,9 +83,9 @@ taos_consume
 taos_unsubscribe
 ```
 
-这些API的文档请见 [C/C++ Connector](https://www.taosdata.com/cn/documentation/connector#c-cpp),下面仍以智能电表场景为例介绍一下它们的具体用法(超级表和子表结构请参考上一节“连续查询”),完整的示例代码可以在 [这里](https://github.com/taosdata/TDengine/blob/master/tests/examples/c/subscribe.c) 找到。
+这些API的文档请见 [C/C++ Connector](https://www.taosdata.com/cn/documentation/connector#c-cpp),下面仍以智能电表场景为例介绍一下它们的具体用法(超级表和子表结构请参考上一节“连续查询”),完整的示例代码可以在 [这里](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c) 找到。
 
-如果我们希望当某个电表的电流超过一定限制(比如10A)后能得到通知并进行一些处理, 有两种方法:一是分别对每张子表进行查询,每次查询后记录最后一条数据的时间戳,后续只查询这个时间戳之后的数据:
+如果我们希望当某个电表的电流超过一定限制(比如 10A)后能得到通知并进行一些处理, 有两种方法:一是分别对每张子表进行查询,每次查询后记录最后一条数据的时间戳,后续只查询这个时间戳之后的数据:
 
 ```sql
 select * from D1001 where ts > {last_timestamp1} and current > 10;
@@ -103,11 +101,11 @@ select * from D1002 where ts > {last_timestamp2} and current > 10;
 select * from meters where ts > {last_timestamp} and current > 10;
 ```
 
-但是,如何选择 `last_timestamp` 就成了一个新的问题。因为,一方面数据的产生时间(也就是数据时间戳)和数据入库的时间一般并不相同,有时偏差还很大;另一方面,不同电表的数据到达TDengine的时间也会有差异。所以,如果我们在查询中使用最慢的那台电表的数据的时间戳作为 `last_timestamp`,就可能重复读入其它电表的数据;如果使用最快的电表的时间戳,其它电表的数据就可能被漏掉。
+但是,如何选择 `last_timestamp` 
就成了一个新的问题。因为,一方面数据的产生时间(也就是数据时间戳)和数据入库的时间一般并不相同,有时偏差还很大;另一方面,不同电表的数据到达 TDengine 的时间也会有差异。所以,如果我们在查询中使用最慢的那台电表的数据的时间戳作为 `last_timestamp`,就可能重复读入其它电表的数据;如果使用最快的电表的时间戳,其它电表的数据就可能被漏掉。 -TDengine的订阅功能为上面这个问题提供了一个彻底的解决方案。 +TDengine 的订阅功能为上面这个问题提供了一个彻底的解决方案。 -首先是使用`taos_subscribe`创建订阅: +首先是使用 `taos_subscribe` 创建订阅: ```c TAOS_SUB* tsub = NULL; @@ -120,11 +118,11 @@ if (async) { } ``` -TDengine中的订阅既可以是同步的,也可以是异步的,上面的代码会根据从命令行获取的参数`async`的值来决定使用哪种方式。这里,同步的意思是用户程序要直接调用`taos_consume`来拉取数据,而异步则由API在内部的另一个线程中调用`taos_consume`,然后把拉取到的数据交给回调函数`subscribe_callback`去处理。(注意,`subscribe_callback` 中不宜做较为耗时的操作,否则有可能导致客户端阻塞等不可控的问题。) +TDengine 中的订阅既可以是同步的,也可以是异步的,上面的代码会根据从命令行获取的参数 `async` 的值来决定使用哪种方式。这里,同步的意思是用户程序要直接调用 `taos_consume` 来拉取数据,而异步则由 API 在内部的另一个线程中调用 `taos_consume`,然后把拉取到的数据交给回调函数 `subscribe_callback`去处理。(注意,`subscribe_callback` 中不宜做较为耗时的操作,否则有可能导致客户端阻塞等不可控的问题。) -参数`taos`是一个已经建立好的数据库连接,在同步模式下无特殊要求。但在异步模式下,需要注意它不会被其它线程使用,否则可能导致不可预计的错误,因为回调函数在API的内部线程中被调用,而TDengine的部分API不是线程安全的。 +参数 `taos` 是一个已经建立好的数据库连接,在同步模式下无特殊要求。但在异步模式下,需要注意它不会被其它线程使用,否则可能导致不可预计的错误,因为回调函数在API的内部线程中被调用,而 TDengine 的部分 API 不是线程安全的。 -参数`sql`是查询语句,可以在其中使用where子句指定过滤条件。在我们的例子中,如果只想订阅电流超过10A时的数据,可以这样写: +参数 `sql` 是查询语句,可以在其中使用where子句指定过滤条件。在我们的例子中,如果只想订阅电流超过 10A 时的数据,可以这样写: ```sql select * from meters where current > 10; @@ -136,13 +134,13 @@ select * from meters where current > 10; select * from meters where ts > now - 1d and current > 10; ``` -订阅的`topic`实际上是它的名字,因为订阅功能是在客户端API中实现的,所以没必要保证它全局唯一,但需要它在一台客户端机器上唯一。 +订阅的 `topic` 实际上是它的名字,因为订阅功能是在客户端API中实现的,所以没必要保证它全局唯一,但需要它在一台客户端机器上唯一。 -如果名为`topic`的订阅不存在,参数`restart`没有意义;但如果用户程序创建这个订阅后退出,当它再次启动并重新使用这个`topic`时,`restart`就会被用于决定是从头开始读取数据,还是接续上次的位置进行读取。本例中,如果`restart`是 **true**(非零值),用户程序肯定会读到所有数据。但如果这个订阅之前就存在了,并且已经读取了一部分数据,且`restart`是 **false**(**0**),用户程序就不会读到之前已经读取的数据了。 +如果名为 `topic` 的订阅不存在,参数 `restart` 没有意义;但如果用户程序创建这个订阅后退出,当它再次启动并重新使用这个 `topic` 时,`restart` 就会被用于决定是从头开始读取数据,还是接续上次的位置进行读取。本例中,如果 `restart` 是 **true**(非零值),用户程序肯定会读到所有数据。但如果这个订阅之前就存在了,并且已经读取了一部分数据,且 `restart` 是 
**false**(**0**),用户程序就不会读到之前已经读取的数据了。 -`taos_subscribe`的最后一个参数是以毫秒为单位的轮询周期。在同步模式下,如果前后两次调用`taos_consume`的时间间隔小于此时间,`taos_consume`会阻塞,直到间隔超过此时间。异步模式下,这个时间是两次调用回调函数的最小时间间隔。 +`taos_subscribe`的最后一个参数是以毫秒为单位的轮询周期。在同步模式下,如果前后两次调用 `taos_consume` 的时间间隔小于此时间,`taos_consume` 会阻塞,直到间隔超过此时间。异步模式下,这个时间是两次调用回调函数的最小时间间隔。 -`taos_subscribe`的倒数第二个参数用于用户程序向回调函数传递附加参数,订阅API不对其做任何处理,只原样传递给回调函数。此参数在同步模式下无意义。 +`taos_subscribe` 的倒数第二个参数用于用户程序向回调函数传递附加参数,订阅 API 不对其做任何处理,只原样传递给回调函数。此参数在同步模式下无意义。 订阅创建以后,就可以消费其数据了,同步模式下,示例代码是下面的 else 部分: @@ -161,7 +159,7 @@ if (async) { } ``` -这里是一个 **while** 循环,用户每按一次回车键就调用一次`taos_consume`,而`taos_consume`的返回值是查询到的结果集,与`taos_use_result`完全相同,例子中使用这个结果集的代码是函数`print_result`: +这里是一个 **while** 循环,用户每按一次回车键就调用一次 `taos_consume`,而 `taos_consume` 的返回值是查询到的结果集,与 `taos_use_result` 完全相同,例子中使用这个结果集的代码是函数 `print_result`: ```c void print_result(TAOS_RES* res, int blockFetch) { @@ -196,13 +194,13 @@ void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) { } ``` -当要结束一次数据订阅时,需要调用`taos_unsubscribe`: +当要结束一次数据订阅时,需要调用 `taos_unsubscribe`: ```c taos_unsubscribe(tsub, keep); ``` -其第二个参数,用于决定是否在客户端保留订阅的进度信息。如果这个参数是**false**(**0**),那无论下次调用`taos_subscribe`时的`restart`参数是什么,订阅都只能重新开始。另外,进度信息的保存位置是 *{DataDir}/subscribe/* 这个目录下,每个订阅有一个与其`topic`同名的文件,删掉某个文件,同样会导致下次创建其对应的订阅时只能重新开始。 +其第二个参数,用于决定是否在客户端保留订阅的进度信息。如果这个参数是**false**(**0**),那无论下次调用 `taos_subscribe` 时的 `restart` 参数是什么,订阅都只能重新开始。另外,进度信息的保存位置是 *{DataDir}/subscribe/* 这个目录下,每个订阅有一个与其 `topic` 同名的文件,删掉某个文件,同样会导致下次创建其对应的订阅时只能重新开始。 代码介绍完毕,我们来看一下实际的运行效果。假设: @@ -213,8 +211,8 @@ taos_unsubscribe(tsub, keep); 则可以在示例代码所在目录执行以下命令来编译并启动示例程序: ```bash -$ make -$ ./subscribe -sql='select * from meters where current > 10;' +make +./subscribe -sql='select * from meters where current > 10;' ``` 示例程序启动后,打开另一个终端窗口,启动 TDengine 的 shell 向 **D1001** 插入一条电流为 12A 的数据: @@ -225,13 +223,13 @@ $ taos > insert into D1001 values(now, 12, 220, 1); ``` -这时,因为电流超过了10A,您应该可以看到示例程序将它输出到了屏幕上。您可以继续插入一些数据观察示例程序的输出。 +这时,因为电流超过了 
10A,您应该可以看到示例程序将它输出到了屏幕上。您可以继续插入一些数据观察示例程序的输出。 ### Java 使用数据订阅功能 订阅功能也提供了 Java 开发接口,相关说明请见 [Java Connector](https://www.taosdata.com/cn/documentation/connector/java#subscribe)。需要注意的是,目前 Java 接口没有提供异步订阅模式,但用户程序可以通过创建 `TimerTask` 等方式达到同样的效果。 -下面以一个示例程序介绍其具体使用方法。它所完成的功能与前面介绍的 C 语言示例基本相同,也是订阅数据库中所有电流超过 10A 的记录。 +下面以一个示例程序介绍其具体使用方法。它所完成的功能与前面介绍的 C 语言示例基本相同,也是订阅数据库中所有电流超过 10A 的记录。 #### 准备数据 @@ -302,8 +300,8 @@ public class SubscribeDemo { try { if (null != subscribe) subscribe.close(true); // 关闭订阅 - if (connection != null) - connection.close(); + if (connection != null) + connection.close(); } catch (SQLException throwables) { throwables.printStackTrace(); } @@ -315,7 +313,7 @@ public class SubscribeDemo { 运行示例程序,首先,它会消费符合查询条件的所有历史数据: ```bash -# java -jar subscribe.jar +# java -jar subscribe.jar ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: Beijing.Chaoyang groupid : 2 ts: 1597464600000 current: 12.3 voltage: 220 phase: 2 location: Beijing.Chaoyang groupid : 2 @@ -338,21 +336,20 @@ taos> insert into d1001 values("2020-08-15 12:40:00.000", 12.4, 220, 1); ts: 1597466400000 current: 12.4 voltage: 220 phase: 1 location: Beijing.Chaoyang groupid: 2 ``` - ## 缓存(Cache) -TDengine采用时间驱动缓存管理策略(First-In-First-Out,FIFO),又称为写驱动的缓存管理机制。这种策略有别于读驱动的数据缓存模式(Least-Recent-Used,LRU),直接将最近写入的数据保存在系统的缓存中。当缓存达到临界值的时候,将最早的数据批量写入磁盘。一般意义上来说,对于物联网数据的使用,用户最为关心最近产生的数据,即当前状态。TDengine充分利用了这一特性,将最近到达的(当前状态)数据保存在缓存中。 +TDengine 采用时间驱动缓存管理策略(First-In-First-Out,FIFO),又称为写驱动的缓存管理机制。这种策略有别于读驱动的数据缓存模式(Least-Recent-Used,LRU),直接将最近写入的数据保存在系统的缓存中。当缓存达到临界值的时候,将最早的数据批量写入磁盘。一般意义上来说,对于物联网数据的使用,用户最为关心最近产生的数据,即当前状态。TDengine 充分利用了这一特性,将最近到达的(当前状态)数据保存在缓存中。 -TDengine通过查询函数向用户提供毫秒级的数据获取能力。直接将最近到达的数据保存在缓存中,可以更加快速地响应用户针对最近一条或一批数据的查询分析,整体上提供更快的数据库查询响应能力。从这个意义上来说,可通过设置合适的配置参数将TDengine作为数据缓存来使用,而不需要再部署额外的缓存系统,可有效地简化系统架构,降低运维的成本。需要注意的是,TDengine重启以后系统的缓存将被清空,之前缓存的数据均会被批量写入磁盘,缓存的数据将不会像专门的key-value缓存系统再将之前缓存的数据重新加载到缓存中。 +TDengine 
通过查询函数向用户提供毫秒级的数据获取能力。直接将最近到达的数据保存在缓存中,可以更加快速地响应用户针对最近一条或一批数据的查询分析,整体上提供更快的数据库查询响应能力。从这个意义上来说,可通过设置合适的配置参数将 TDengine 作为数据缓存来使用,而不需要再部署额外的缓存系统,可有效地简化系统架构,降低运维的成本。需要注意的是,TDengine 重启以后系统的缓存将被清空,之前缓存的数据均会被批量写入磁盘,缓存的数据将不会像专门的 key-value 缓存系统再将之前缓存的数据重新加载到缓存中。 -TDengine分配固定大小的内存空间作为缓存空间,缓存空间可根据应用的需求和硬件资源配置。通过适当的设置缓存空间,TDengine可以提供极高性能的写入和查询的支持。TDengine中每个虚拟节点(virtual node)创建时分配独立的缓存池。每个虚拟节点管理自己的缓存池,不同虚拟节点间不共享缓存池。每个虚拟节点内部所属的全部表共享该虚拟节点的缓存池。 +TDengine 分配固定大小的内存空间作为缓存空间,缓存空间可根据应用的需求和硬件资源配置。通过适当的设置缓存空间,TDengine 可以提供极高性能的写入和查询的支持。TDengine 中每个虚拟节点(virtual node)创建时分配独立的缓存池。每个虚拟节点管理自己的缓存池,不同虚拟节点间不共享缓存池。每个虚拟节点内部所属的全部表共享该虚拟节点的缓存池。 -TDengine将内存池按块划分进行管理,数据在内存块里是以行(row)的形式存储。一个vnode的内存池是在vnode创建时按块分配好,而且每个内存块按照先进先出的原则进行管理。在创建内存池时,块的大小由系统配置参数cache决定;每个vnode中内存块的数目则由配置参数blocks决定。因此对于一个vnode,总的内存大小为:`cache * blocks`。一个cache block需要保证每张表能存储至少几十条以上记录,才会有效率。 +TDengine 将内存池按块划分进行管理,数据在内存块里是以行(row)的形式存储。一个 vnode 的内存池是在 vnode 创建时按块分配好,而且每个内存块按照先进先出的原则进行管理。在创建内存池时,块的大小由系统配置参数 cache 决定;每个 vnode 中内存块的数目则由配置参数blocks决定。因此对于一个 vnode,总的内存大小为:`cache * blocks`。一个 cache block 需要保证每张表能存储至少几十条以上记录,才会有效率。 -你可以通过函数last_row快速获取一张表或一张超级表的最后一条记录,这样很便于在大屏显示各设备的实时状态或采集值。例如: +你可以通过函数 last_row() 快速获取一张表或一张超级表的最后一条记录,这样很便于在大屏显示各设备的实时状态或采集值。例如: ```mysql select last_row(voltage) from meters where location='Beijing.Chaoyang'; ``` -该SQL语句将获取所有位于北京朝阳区的电表最后记录的电压值。 +该 SQL 语句将获取所有位于北京朝阳区的电表最后记录的电压值。 diff --git a/documentation20/cn/08.connector/01.java/docs.md b/documentation20/cn/08.connector/01.java/docs.md index dd5166c4b98046e5a7576adbbbad12d7945e7444..4749475432c03fd2c626ee3297ac621adf3e4bff 100644 --- a/documentation20/cn/08.connector/01.java/docs.md +++ b/documentation20/cn/08.connector/01.java/docs.md @@ -56,15 +56,15 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES( ## TAOS-JDBCDriver 版本以及支持的 TDengine 版本和 JDK 版本 | taos-jdbcdriver 版本 | TDengine 2.0.x.x 版本 | TDengine 2.2.x.x 版本 | TDengine 2.4.x.x 版本 | JDK 版本 | -|---------------------| ----------------------| ----------------------| 
----------------------| -------- | -| 2.0.37 | X | X | 2.4.0.6 以上 | 1.8.x | -| 2.0.36 | X | 2.2.2.11 以上 | 2.4.0.0 - 2.4.0.5 | 1.8.x | -| 2.0.35 | X | 2.2.2.11 以上 | 2.3.0.0 - 2.4.0.5 | 1.8.x | -| 2.0.33 - 2.0.34 | 2.0.3.0 以上 | 2.2.0.0 以上 | 2.4.0.0 - 2.4.0.5 | 1.8.x | -| 2.0.31 - 2.0.32 | 2.1.3.0 - 2.1.7.7 | X | X | 1.8.x | -| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.1 | X | X | 1.8.x | -| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.4 | X | X | 1.8.x | -| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.3 | X | X | 1.8.x | +| -------------------- | --------------------- | --------------------- | --------------------- | -------- | +| 2.0.37 | X | X | 2.4.0.6 以上 | 1.8.x | +| 2.0.36 | X | 2.2.2.11 以上 | 2.4.0.0 - 2.4.0.5 | 1.8.x | +| 2.0.35 | X | 2.2.2.11 以上 | 2.3.0.0 - 2.4.0.5 | 1.8.x | +| 2.0.33 - 2.0.34 | 2.0.3.0 以上 | 2.2.0.0 以上 | 2.4.0.0 - 2.4.0.5 | 1.8.x | +| 2.0.31 - 2.0.32 | 2.1.3.0 - 2.1.7.7 | X | X | 1.8.x | +| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.1 | X | X | 1.8.x | +| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.4 | X | X | 1.8.x | +| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.3 | X | X | 1.8.x | ## TDengine DataType 和 Java DataType @@ -72,18 +72,18 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES( TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对应类型转换如下: | TDengine DataType | JDBCType (driver 版本 < 2.0.24) | JDBCType (driver 版本 >= 2.0.24) | -|-------------------|-------------------------------| ------------------ | -| TIMESTAMP | java.lang.Long | java.sql.Timestamp | -| INT | java.lang.Integer | java.lang.Integer | -| BIGINT | java.lang.Long | java.lang.Long | -| FLOAT | java.lang.Float | java.lang.Float | -| DOUBLE | java.lang.Double | java.lang.Double | -| SMALLINT | java.lang.Short | java.lang.Short | -| TINYINT | java.lang.Byte | java.lang.Byte | -| BOOL | java.lang.Boolean | java.lang.Boolean | -| BINARY | java.lang.String | byte array | -| NCHAR | java.lang.String | java.lang.String | -| JSON | - | java.lang.String | +| ----------------- | --------------------------------- | 
---------------------------------- | +| TIMESTAMP | java.lang.Long | java.sql.Timestamp | +| INT | java.lang.Integer | java.lang.Integer | +| BIGINT | java.lang.Long | java.lang.Long | +| FLOAT | java.lang.Float | java.lang.Float | +| DOUBLE | java.lang.Double | java.lang.Double | +| SMALLINT | java.lang.Short | java.lang.Short | +| TINYINT | java.lang.Byte | java.lang.Byte | +| BOOL | java.lang.Boolean | java.lang.Boolean | +| BINARY | java.lang.String | byte array | +| NCHAR | java.lang.String | java.lang.String | +| JSON | - | java.lang.String | 注意:JSON类型仅在tag中支持。 @@ -177,7 +177,7 @@ url中的配置参数如下: * timezone:客户端使用的时区,默认值为系统当前时区。 * batchfetch: 仅在使用JDBC-JNI时生效。true:在执行查询时批量拉取结果集;false:逐行拉取结果集。默认值为:false。 * timestampFormat: 仅在使用JDBC-RESTful时生效. 'TIMESTAMP':结果集中timestamp类型的字段为一个long值; 'UTC':结果集中timestamp类型的字段为一个UTC时间格式的字符串; 'STRING':结果集中timestamp类型的字段为一个本地时间格式的字符串。默认值为'STRING'。 -* batchErrorIgnore:true:在执行Statement的executeBatch时,如果中间有一条sql执行失败,继续执行下面的sq了。false:不再执行失败sql后的任何语句。默认值为:false。 +* batchErrorIgnore:true:在执行Statement的executeBatch时,如果中间有一条sql执行失败,继续执行下面的sql了。false:不再执行失败sql后的任何语句。默认值为:false。 #### 指定URL和Properties获取连接 @@ -345,6 +345,7 @@ JDBC连接器可能报错的错误码包括3种:JDBC driver本身的报错( * setString 和 setNString 都要求用户在 size 参数里声明表定义中对应列的列宽 示例代码: + ```java public class ParameterBindingDemo { @@ -572,6 +573,7 @@ public class ParameterBindingDemo { ``` 用于设定 TAGS 取值的方法总共有: + ```java public void setTagNull(int index, int type) public void setTagBoolean(int index, boolean value) @@ -587,6 +589,7 @@ public void setTagNString(int index, String value) ``` 用于设定 VALUES 数据列的取值的方法总共有: + ```java public void setInt(int columnIndex, ArrayList list) throws SQLException public void setFloat(int columnIndex, ArrayList list) throws SQLException @@ -600,14 +603,56 @@ public void setString(int columnIndex, ArrayList list, int size) throws public void setNString(int columnIndex, ArrayList list, int size) throws SQLException ``` +### 无模式写入 + +从 2.2.0.0 版本开始,TDengine 增加了对无模式写入功能。无模式写入兼容 InfluxDB 的 
行协议(Line Protocol)、OpenTSDB 的 telnet 行协议和 OpenTSDB 的 JSON 格式协议。详情请参见[无模式写入](https://www.taosdata.com/docs/cn/v2.0/insert#schemaless)。 + +注意: +* JDBC-RESTful 实现并不提供无模式写入这种使用方式 +* 以下示例代码基于taos-jdbcdriver-2.0.36 + +示例代码: + +```java +public class SchemalessInsertTest { + private static final String host = "127.0.0.1"; + private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000"; + private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0"; + private static final String jsonDemo = "{\"metric\": \"meter_current\",\"timestamp\": 1346846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"Beijing\", \"id\": \"d1001\"}}"; + + public static void main(String[] args) throws SQLException { + final String url = "jdbc:TAOS://" + host + ":6030/?user=root&password=taosdata"; + try (Connection connection = DriverManager.getConnection(url)) { + init(connection); + + SchemalessWriter writer = new SchemalessWriter(connection); + writer.write(lineDemo, SchemalessProtocolType.LINE, SchemalessTimestampType.NANO_SECONDS); + writer.write(telnetDemo, SchemalessProtocolType.TELNET, SchemalessTimestampType.MILLI_SECONDS); + writer.write(jsonDemo, SchemalessProtocolType.JSON, SchemalessTimestampType.NOT_CONFIGURED); + } + } + + private static void init(Connection connection) throws SQLException { + try (Statement stmt = connection.createStatement()) { + stmt.executeUpdate("drop database if exists test_schemaless"); + stmt.executeUpdate("create database if not exists test_schemaless"); + stmt.executeUpdate("use test_schemaless"); + } + } +} +``` + ### 设置客户端参数 + 从TDengine-2.3.5.0版本开始,jdbc driver支持在应用的第一次连接中,设置TDengine的客户端参数。Driver支持JDBC-JNI方式中,通过jdbcUrl和properties两种方式设置client parameter。 + 注意: * JDBC-RESTful不支持设置client parameter的功能。 * 应用中设置的client parameter为进程级别的,即如果要更新client的参数,需要重启应用。这是因为client parameter是全局参数,仅在应用程序的第一次设置生效。 * 以下示例代码基于taos-jdbcdriver-2.0.36。 示例代码: + ```java public 
class ClientParameterSetting {
    private static final String host = "127.0.0.1";
diff --git a/documentation20/cn/08.connector/docs.md b/documentation20/cn/08.connector/docs.md
index ea85096e076a4e315914798437c548ac2d04a52a..82aee012beaa2611af9fb2915032e1b4d4289729 100644
--- a/documentation20/cn/08.connector/docs.md
+++ b/documentation20/cn/08.connector/docs.md
@@ -1,10 +1,10 @@
 # 连接器
 
-TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、Python、Go、Node.js、C# 、RESTful 等,便于用户快速开发应用。
+TDengine 提供了丰富的应用程序开发接口,其中包括 C/C++、Java、Python、Go、Node.js、C# 、RESTful 等,便于用户快速开发应用。
 
 ![image-connecotr](../images/connector.png)
 
-目前TDengine的连接器可支持的平台广泛,包括:X64/X86/ARM64/ARM32/MIPS/Alpha等硬件平台,以及Linux/Win64/Win32等开发环境。对照矩阵如下:
+目前 TDengine 的连接器可支持的平台广泛,包括:X64/X86/ARM64/ARM32/MIPS/Alpha 等硬件平台,以及 Linux/Win64/Win32 等开发环境。对照矩阵如下:
 
 | **CPU** | **X64 64bit** | **X64 64bit** | **X64 64bit** | **X86 32bit** | **ARM64** | **ARM32** | **MIPS 龙芯** | **Alpha 申威** | **X64 海光** |
 | ----------- | --------------- | --------------- | --------------- | --------------- | --------- | --------- | --------------- | ---------------- | -------------- |
@@ -21,24 +21,24 @@ TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、
 
 注意:
 
-* 在没有安装TDengine服务端软件的系统中使用连接器(除RESTful外)访问 TDengine 数据库,需要安装相应版本的客户端安装包来使应用驱动(Linux系统中文件名为libtaos.so,Windows系统中为taos.dll)被安装在系统中,否则会产生无法找到相应库文件的错误。
-* 所有执行 SQL 语句的 API,例如 C/C++ Connector 中的 `tao_query`、`taos_query_a`、`taos_subscribe` 等,以及其它语言中与它们对应的API,每次都只能执行一条 SQL 语句,如果实际参数中包含了多条语句,它们的行为是未定义的。
+* 在没有安装 TDengine 服务端软件的系统中使用连接器(除 RESTful 外)访问 TDengine 数据库,需要安装相应版本的客户端安装包来使应用驱动(Linux 系统中文件名为 libtaos.so,Windows 系统中为 taos.dll)被安装在系统中,否则会产生无法找到相应库文件的错误。
+* 所有执行 SQL 语句的 API,例如 C/C++ Connector 中的 `taos_query`、`taos_query_a`、`taos_subscribe` 等,以及其它语言中与它们对应的 API,每次都只能执行一条 SQL 语句,如果实际参数中包含了多条语句,它们的行为是未定义的。
 * 升级 TDengine 到 2.0.8.0 版本的用户,必须更新 JDBC。连接 TDengine 必须升级 taos-jdbcdriver 到 2.0.12 及以上。详细的版本依赖关系请参见 [taos-jdbcdriver 文档](https://www.taosdata.com/cn/documentation/connector/java#version)。
 * 无论选用何种编程语言的连接器,2.0 及以上版本的 TDengine 
推荐数据库应用的每个线程都建立一个独立的连接,或基于线程建立连接池,以避免连接内的“USE statement”状态量在线程之间相互干扰(但连接的查询和写入操作都是线程安全的)。 ## 安装连接器驱动步骤 -服务器应该已经安装TDengine服务端安装包。连接器驱动安装步骤如下: +服务器应该已经安装 TDengine 服务端安装包。连接器驱动安装步骤如下: **Linux** **1. 从[涛思官网](https://www.taosdata.com/cn/all-downloads/)下载:** -* X64硬件环境:TDengine-client-2.x.x.x-Linux-x64.tar.gz +* X64 硬件环境:TDengine-client-2.x.x.x-Linux-x64.tar.gz -* ARM64硬件环境:TDengine-client-2.x.x.x-Linux-aarch64.tar.gz +* ARM64 硬件环境:TDengine-client-2.x.x.x-Linux-aarch64.tar.gz -* ARM32硬件环境:TDengine-client-2.x.x.x-Linux-aarch32.tar.gz +* ARM32 硬件环境:TDengine-client-2.x.x.x-Linux-aarch32.tar.gz **2. 解压缩软件包** @@ -62,11 +62,12 @@ TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、 **4. 配置taos.cfg** -编辑taos.cfg文件(默认路径/etc/taos/taos.cfg),将firstEP修改为TDengine服务器的End Point,例如:h1.taos.com:6030 +编辑 taos.cfg 文件(默认路径/etc/taos/taos.cfg),将 firstEP 修改为 TDengine 服务器的 End Point,例如:h1.taos.com:6030 **提示:** -1. **如本机没有部署TDengine服务,仅安装了应用驱动,则taos.cfg中仅需配置firstEP,无需配置FQDN。** -2. **为防止与服务器端连接时出现“unable to resolve FQDN”错误,建议确认客户端的hosts文件已经配置正确的FQDN值。** + +1. **如本机没有部署 TDengine 服务,仅安装了应用驱动,则 taos.cfg 中仅需配置 firstEP,无需配置 FQDN。** +2. **为防止与服务器端连接时出现 “unable to resolve FQDN” 错误,建议确认客户端的 hosts 文件已经配置正确的 FQDN 值。** **Windows x64/x86** @@ -93,20 +94,20 @@ TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、 **4. 配置taos.cfg** -编辑taos.cfg文件(默认路径C:\TDengine\cfg\taos.cfg),将firstEP修改为TDengine服务器的End Point,例如:h1.taos.com:6030 +编辑 taos.cfg 文件(默认路径C:\TDengine\cfg\taos.cfg),将 firstEP 修改为 TDengine 服务器的 End Point,例如:h1.taos.com:6030 **提示:** -1. **如利用FQDN连接服务器,必须确认本机网络环境DNS已配置好,或在hosts文件中添加FQDN寻址记录,如编辑C:\Windows\system32\drivers\etc\hosts,添加如下的记录:`192.168.1.99 h1.taos.com` ** +1. **如利用 FQDN 连接服务器,必须确认本机网络环境DNS已配置好,或在 hosts 文件中添加 FQDN 寻址记录,如编辑C:\Windows\system32\drivers\etc\hosts,添加如下的记录:`192.168.1.99 h1.taos.com` ** 2. 
**卸载:运行unins000.exe可卸载TDengine应用驱动。**
 
 ### 安装验证
 
-以上安装和配置完成后,并确认TDengine服务已经正常启动运行,此时可以执行taos客户端进行登录。
+以上安装和配置完成后,并确认 TDengine 服务已经正常启动运行,此时可以执行 taos 客户端进行登录。
 
 **Linux环境:**
 
-在Linux shell下直接执行 taos,应该就能正常连接到TDegine服务,进入到taos shell界面,示例如下:
+在 Linux shell 下直接执行 taos,应该就能正常连接到 TDengine 服务,进入到 taos shell 界面,示例如下:
 
 ```mysql
 $ taos
@@ -123,7 +124,7 @@ taos>
 
 **Windows(x64/x86)环境:**
 
-在cmd下进入到c:\TDengine目录下直接执行 taos.exe,应该就能正常链接到tdegine服务,进入到taos shell界面,示例如下:
+在 cmd 下进入到 C:\TDengine 目录下直接执行 taos.exe,应该就能正常连接到 TDengine 服务,进入到 taos shell 界面,示例如下:
 
 ```mysql
 C:\TDengine>taos
@@ -147,7 +148,7 @@ taos>
 
 | **OS类型** | Linux | Win64 | Win32 | Linux | Linux |
 | **支持与否** | **支持** | **支持** | **支持** | **支持** | **支持** |
 
-C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine头文件 *taos.h*,里面列出了提供的API的函数原型。安装后,taos.h位于:
+C/C++ 的 API 类似于 MySQL 的 C API。应用程序使用时,需要包含 TDengine 头文件 *taos.h*,里面列出了提供的 API 的函数原型。安装后,taos.h 位于:
 
 - Linux:`/usr/local/taos/include`
 - Windows:`C:\TDengine\include`
@@ -158,40 +159,42 @@ C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine
 
 注意:
 
-* 在编译时需要链接TDengine动态库。Linux 为 *libtaos.so* ,安装后,位于 _/usr/local/taos/driver_。Windows为 taos.dll,安装后位于 *C:\TDengine*。
+* 在编译时需要链接 TDengine 动态库。Linux 为 *libtaos.so* ,安装后,位于 _/usr/local/taos/driver_。Windows 为 taos.dll,安装后位于 *C:\TDengine*。
 * 如未特别说明,当API的返回值是整数时,_0_ 代表成功,其它是代表失败原因的错误码,当返回值是指针时, _NULL_ 表示失败。
-* 在 taoserror.h中有所有的错误码,以及对应的原因描述。
+* 在 taoserror.h 中有所有的错误码,以及对应的原因描述。`tstrerror(errno)` 可以获取错误码对应的错误信息。
 
 ### 示例程序
 
-使用C/C++连接器的示例代码请参见 https://github.com/taosdata/TDengine/tree/develop/examples/c 。
+使用 C/C++ 连接器的示例代码请参见 `https://github.com/taosdata/TDengine/tree/develop/examples/c`。
 
-示例程序源码也可以在安装目录下的 examples/c 路径下找到:
+示例程序源码也可以在安装目录下的 `examples/c` 路径下找到:
 
 **apitest.c、asyncdemo.c、demo.c、prepare.c、stream.c、subscribe.c**
 
-该目录下有makefile,在Linux环境下,直接执行make就可以编译得到执行文件。
+该目录下有 makefile,在 Linux 环境下,直接执行 `make` 就可以编译得到执行文件。
 
-在一台机器上启动TDengine服务,执行这些示例程序,按照提示输入TDengine服务的FQDN,就可以正常运行,并打印出信息。
+在一台机器上启动 TDengine 服务,执行这些示例程序,按照提示输入 TDengine 服务的 FQDN,就可以正常运行,并打印出信息。 
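作为一个假设性的示意(头文件与动态库取上文所述的 Linux 默认安装路径,`demo.c` 代指任一示例源文件,`-lpthread` 是否需要视具体示例而定),不借助 makefile 时,单个示例也可以直接用 gcc 编译并链接 libtaos:

```shell
# 路径为默认安装位置,实际以本机安装为准
gcc demo.c -o demo \
    -I/usr/local/taos/include \
    -L/usr/local/taos/driver -ltaos -lpthread
# 运行时按提示输入 TDengine 服务的 FQDN
./demo
```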
-**提示:**在ARM环境下编译时,请将makefile中的-msse4.2打开,这个选项只有在x64/x86硬件平台上才能支持。 +**提示:**在 ARM 环境下编译时,请将 makefile 中的 `-msse4.2` 打开,这个选项只有在 x64/x86 硬件平台上才能支持。 ### 基础API -基础API用于完成创建数据库连接等工作,为其它API的执行提供运行时环境。 +基础 API 用于完成创建数据库连接等工作,为其它 API 的执行提供运行时环境。 - `void taos_init()` - 初始化运行环境。如果应用没有主动调用该API,那么应用在调用`taos_connect`时将自动调用,故应用程序一般无需手动调用该API。 + 初始化运行环境。如果应用没有主动调用该 API,那么应用在调用 `taos_connect()` 时将自动调用,故应用程序一般无需手动调用该 API。 - `void taos_cleanup()` - 清理运行环境,应用退出前应调用此API。 + 清理运行环境,应用退出前应调用此 API。 - `int taos_options(TSDB_OPTION option, const void * arg, ...)` 设置客户端选项,目前支持区域设置(`TSDB_OPTION_LOCALE`)、字符集设置(`TSDB_OPTION_CHARSET`)、时区设置(`TSDB_OPTION_TIMEZONE`)、配置文件路径设置(`TSDB_OPTION_CONFIGDIR`)。区域设置、字符集、时区默认为操作系统当前设置。 + 返回值为 `0` 表示成功,`-1` 表示失败。 + - `char *taos_get_client_info()` 获取客户端版本信息。 @@ -200,15 +203,15 @@ C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine 创建数据库连接,初始化连接上下文。其中需要用户提供的参数包含: - - host:TDengine管理主节点的FQDN + - host:TDengine 管理主节点的 FQDN - user:用户名 - pass:密码 - db:数据库名字,如果用户没有提供,也可以正常连接,用户可以通过该连接创建新的数据库,如果用户提供了数据库名字,则说明该数据库用户已经创建好,缺省使用该数据库 - - port:TDengine管理主节点的端口号 + - port:TDengine 管理主节点的端口号 - 返回值为空表示失败。应用程序需要保存返回的参数,以便后续API调用。 + 返回值为空表示失败。应用程序需要保存返回的参数,以便后续 API 调用。 - **提示:** 同一进程可以根据不同的host/port 连接多个taosd 集群 + **提示:** 同一进程可以根据不同的 host/port 连接多个 taosd 集群 - `char *taos_get_server_info(TAOS *taos)` @@ -218,17 +221,19 @@ C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine 将当前的缺省数据库设置为`db`。 + 返回值为错误码。 + - `void taos_close(TAOS *taos)` 关闭连接,其中`taos`是`taos_connect`函数返回的指针。 ### 同步查询API -传统的数据库操作API,都属于同步操作。应用调用API后,一直处于阻塞状态,直到服务器返回结果。TDengine支持如下API: +传统的数据库操作 API,都属于同步操作。应用调用 API 后,一直处于阻塞状态,直到服务器返回结果。TDengine 支持如下 API: - `TAOS_RES* taos_query(TAOS *taos, const char *sql)` - 该API用来执行SQL语句,可以是DQL、DML或DDL语句。 其中的`taos`参数是通过`taos_connect`获得的指针。不能通过返回值是否是 NULL 来判断执行结果是否失败,而是需要用`taos_errno`函数解析结果集中的错误代码来进行判断。 + 该API用来执行 SQL 语句,可以是 DQL、DML 或 DDL 语句。 其中的 `taos` 参数是通过 `taos_connect` 获得的指针。不能通过返回值是否是 NULL 来判断执行结果是否失败,而是需要用 `taos_errno` 函数解析结果集中的错误代码来进行判断。 - `int taos_result_precision(TAOS_RES *res)` @@ -256,7 
+261,7 @@ C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine - `TAOS_FIELD *taos_fetch_fields(TAOS_RES *res)` - 获取查询结果集每列数据的属性(列的名称、列的数据类型、列的长度),与taos_num_fileds配合使用,可用来解析`taos_fetch_row`返回的一个元组(一行)的数据。 `TAOS_FIELD` 的结构如下: + 获取查询结果集每列数据的属性(列的名称、列的数据类型、列的长度),与 taos_num_fields 配合使用,可用来解析 `taos_fetch_row` 返回的一个元组(一行)的数据。 `TAOS_FIELD` 的结构如下: ```c typedef struct taosField { @@ -272,7 +277,7 @@ typedef struct taosField { - `void taos_free_result(TAOS_RES *res)` - 释放查询结果集以及相关的资源。查询完成后,务必调用该API释放资源,否则可能导致应用内存泄露。但也需注意,释放资源后,如果再调用`taos_consume`等获取查询结果的函数,将导致应用Crash。 + 释放查询结果集以及相关的资源。查询完成后,务必调用该 API 释放资源,否则可能导致应用内存泄露。但也需注意,释放资源后,如果再调用 `taos_consume` 等获取查询结果的函数,将导致应用 Crash。 - `char *taos_errstr(TAOS_RES *res)` @@ -286,11 +291,11 @@ typedef struct taosField { ### 异步查询API -同步API之外,TDengine还提供性能更高的异步调用API处理数据插入、查询操作。在软硬件环境相同的情况下,异步API处理数据插入的速度比同步API快2~4倍。异步API采用非阻塞式的调用方式,在系统真正完成某个具体数据库操作前,立即返回。调用的线程可以去处理其他工作,从而可以提升整个应用的性能。异步API在网络延迟严重的情况下,优点尤为突出。 +同步 API 之外,TDengine 还提供性能更高的异步调用 API 处理数据插入、查询操作。在软硬件环境相同的情况下,异步 API 处理数据插入的速度比同步 API 快 2~4 倍。异步 API 采用非阻塞式的调用方式,在系统真正完成某个具体数据库操作前,立即返回。调用的线程可以去处理其他工作,从而可以提升整个应用的性能。异步 API 在网络延迟严重的情况下,优点尤为突出。 -异步API都需要应用提供相应的回调函数,回调函数参数设置如下:前两个参数都是一致的,第三个参数依不同的API而定。第一个参数param是应用调用异步API时提供给系统的,用于回调时,应用能够找回具体操作的上下文,依具体实现而定。第二个参数是SQL操作的结果集,如果为空,比如insert操作,表示没有记录返回,如果不为空,比如select操作,表示有记录返回。 +异步 API 都需要应用提供相应的回调函数,回调函数参数设置如下:前两个参数都是一致的,第三个参数依不同的 API 而定。第一个参数 `param` 是应用调用异步 API 时提供给系统的,用于回调时,应用能够找回具体操作的上下文,依具体实现而定。第二个参数是 SQL 操作的结果集,如果为空,比如 insert 操作,表示没有记录返回,如果不为空,比如 select 操作,表示有记录返回。 -异步API对于使用者的要求相对较高,用户可根据具体应用场景选择性使用。下面是两个重要的异步API: +异步 API 对于使用者的要求相对较高,用户可根据具体应用场景选择性使用。下面是两个重要的异步 API: - `void taos_query_a(TAOS *taos, const char *sql, void (*fp)(void *param, TAOS_RES *, int code), void *param);` @@ -308,7 +313,7 @@ typedef struct taosField { * res:`taos_query_a`回调时返回的结果集 * fp:回调函数。其参数`param`是用户可定义的传递给回调函数的参数结构体;`numOfRows`是获取到的数据的行数(不是整个查询结果集的行数)。
在回调函数中,应用可以通过调用`taos_fetch_row`前向迭代获取批量记录中每一行记录。读完一块内的所有记录后,应用需要在回调函数中继续调用`taos_fetch_rows_a`获取下一批记录进行处理,直到返回的记录数(numOfRows)为零(结果返回完成)或记录数为负值(查询出错)。 -TDengine的异步API均采用非阻塞调用模式。应用程序可以用多线程同时打开多张表,并可以同时对每张打开的表进行查询或者插入操作。需要指出的是,**客户端应用必须确保对同一张表的操作完全串行化**,即对同一个表的插入或查询操作未完成时(未返回时),不能够执行第二个插入或查询操作。 +TDengine 的异步 API 均采用非阻塞调用模式。应用程序可以用多线程同时打开多张表,并可以同时对每张打开的表进行查询或者插入操作。需要指出的是,**客户端应用必须确保对同一张表的操作完全串行化**,即对同一个表的插入或查询操作未完成时(未返回时),不能够执行第二个插入或查询操作。 ### 参数绑定 API @@ -316,6 +321,7 @@ TDengine的异步API均采用非阻塞调用模式。应用程序可以用多线 除了直接调用 `taos_query` 进行查询,TDengine 也提供了支持参数绑定的 Prepare API,与 MySQL 一样,这些 API 目前也仅支持用问号 `?` 来代表待绑定的参数。文档中有时也会把此功能称为“原生接口写入”。 从 2.1.1.0 和 2.1.2.0 版本开始,TDengine 大幅改进了参数绑定接口对数据写入(INSERT)场景的支持。这样在通过参数绑定接口写入数据时,就避免了 SQL 语法解析的资源消耗,从而在绝大多数情况下显著提升写入性能。此时的典型操作步骤如下: + 1. 调用 `taos_stmt_init` 创建参数绑定对象; 2. 调用 `taos_stmt_prepare` 解析 INSERT 语句; 3. 如果 INSERT 语句中预留了表名但没有预留 TAGS,那么调用 `taos_stmt_set_tbname` 来设置表名; @@ -412,10 +418,10 @@ typedef struct TAOS_MULTI_BIND { - `TAOS_RES* taos_schemaless_insert(TAOS* taos, const char* lines[], int numLines, int protocol, int precision)` **功能说明** - 该接口将行协议的文本数据写入到TDengine中。 + 该接口将行协议的文本数据写入到 TDengine 中。 **参数说明** - taos: 数据库连接,通过taos_connect 函数建立的数据库连接。 + taos: 数据库连接,通过 taos_connect 函数建立的数据库连接。 lines:文本数据。满足解析格式要求的无模式文本字符串。 numLines:文本数据的行数,不能为 0 。 protocol: 行协议类型,用于标识文本数据格式。 @@ -446,7 +452,7 @@ typedef struct TAOS_MULTI_BIND { **支持版本** 该功能接口从2.3.0.0版本开始支持。 - + ```c #include #include @@ -482,11 +488,11 @@ int main() { ### 连续查询接口 -TDengine提供时间驱动的实时流式计算API。可以每隔一指定的时间段,对一张或多张数据库的表(数据流)进行各种实时聚合计算操作。操作简单,仅有打开、关闭流的API。具体如下: +TDengine 提供时间驱动的实时流式计算 API。可以每隔一指定的时间段,对一张或多张数据库的表(数据流)进行各种实时聚合计算操作。操作简单,仅有打开、关闭流的 API。具体如下: - `TAOS_STREAM *taos_open_stream(TAOS *taos, const char *sql, void (*fp)(void *param, TAOS_RES *, TAOS_ROW row), int64_t stime, void *param, void (*callback)(void *))` - 该API用来创建数据流,其中: + 该 API 用来创建数据流,其中: * taos:已经建立好的数据库连接。 * sql:SQL查询语句(仅能使用查询语句)。 * 
fp:用户定义的回调函数指针,每次流式计算完成后,TDengine将查询的结果(TAOS_ROW)、查询状态(TAOS_RES)、用户定义参数(PARAM)传递给回调函数,在回调函数内,用户可以使用taos_num_fields获取结果集列数,taos_fetch_fields获取结果集每列数据的类型。 @@ -494,15 +500,15 @@ TDengine提供时间驱动的实时流式计算API。可以每隔一指定的时 * param:是应用提供的用于回调的一个参数,回调时,提供给应用。 * callback: 第二个回调函数,会在连续查询自动停止时被调用。 - 返回值为NULL,表示创建失败;返回值不为空,表示成功。 + 返回值为 NULL,表示创建失败;返回值不为空,表示成功。 - `void taos_close_stream (TAOS_STREAM *tstr)` - 关闭数据流,其中提供的参数是taos_open_stream的返回值。用户停止流式计算的时候,务必关闭该数据流。 + 关闭数据流,其中提供的参数是 taos_open_stream 的返回值。用户停止流式计算的时候,务必关闭该数据流。 ### 数据订阅接口 -订阅API目前支持订阅一张或多张表,并通过定期轮询的方式不断获取写入表中的最新数据。 +订阅 API 目前支持订阅一张或多张表,并通过定期轮询的方式不断获取写入表中的最新数据。 * `TAOS_SUB *taos_subscribe(TAOS* taos, int restart, const char* topic, const char *sql, TAOS_SUBSCRIBE_CALLBACK fp, void *param, int interval)` @@ -513,46 +519,46 @@ TDengine提供时间驱动的实时流式计算API。可以每隔一指定的时 * sql:订阅的查询语句,此语句只能是 `select` 语句,只应查询原始数据,只能按时间正序查询数据 * fp:收到查询结果时的回调函数(稍后介绍函数原型),只在异步调用时使用,同步调用时此参数应该传 `NULL` * param:调用回调函数时的附加参数,系统API将其原样传递到回调函数,不进行任何处理 - * interval:轮询周期,单位为毫秒。异步调用时,将根据此参数周期性的调用回调函数,为避免对系统性能造成影响,不建议将此参数设置的过小;同步调用时,如两次调用`taos_consume`的间隔小于此周期,API将会阻塞,直到时间间隔超过此周期。 + * interval:轮询周期,单位为毫秒。异步调用时,将根据此参数周期性的调用回调函数,为避免对系统性能造成影响,不建议将此参数设置的过小;同步调用时,如两次调用 `taos_consume` 的间隔小于此周期,API将会阻塞,直到时间间隔超过此周期。 * `typedef void (*TAOS_SUBSCRIBE_CALLBACK)(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code)` 异步模式下,回调函数的原型,其参数为: * tsub:订阅对象 * res:查询结果集,注意结果集中可能没有记录 - * param:调用 `taos_subscribe`时客户程序提供的附加参数 + * param:调用 `taos_subscribe` 时客户程序提供的附加参数 * code:错误码 **注意**:在这个回调函数里不可以做耗时过长的处理,尤其是对于返回的结果集中数据较多的情况,否则有可能导致客户端阻塞等异常状态。如果必须进行复杂计算,则建议在另外的线程中进行处理。 * `TAOS_RES *taos_consume(TAOS_SUB *tsub)` - 同步模式下,该函数用来获取订阅的结果。 用户应用程序将其置于一个循环之中。 如两次调用`taos_consume`的间隔小于订阅的轮询周期,API将会阻塞,直到时间间隔超过此周期。 如果数据库有新记录到达,该API将返回该最新的记录,否则返回一个没有记录的空结果集。 如果返回值为 `NULL`,说明系统出错。 异步模式下,用户程序不应调用此API。 + 同步模式下,该函数用来获取订阅的结果。 用户应用程序将其置于一个循环之中。 如两次调用 `taos_consume()` 的间隔小于订阅的轮询周期,API将会阻塞,直到时间间隔超过此周期。 如果数据库有新记录到达,该API将返回该最新的记录,否则返回一个没有记录的空结果集。 如果返回值为 `NULL`,说明系统出错。 异步模式下,用户程序不应调用此 API。 **注意**:在调用 
`taos_consume()` 之后,用户应用应确保尽快调用 `taos_fetch_row()` 或 `taos_fetch_block()` 来处理订阅结果,否则服务端会持续缓存查询结果数据等待客户端读取,极端情况下会导致服务端内存消耗殆尽,影响服务稳定性。 * `void taos_unsubscribe(TAOS_SUB *tsub, int keepProgress)` - 取消订阅。 如参数 `keepProgress` 不为0,API会保留订阅的进度信息,后续调用 `taos_subscribe` 时可以基于此进度继续;否则将删除进度信息,后续只能重新开始读取数据。 + 取消订阅。 如参数 `keepProgress` 不为0,API会保留订阅的进度信息,后续调用 `taos_subscribe()` 时可以基于此进度继续;否则将删除进度信息,后续只能重新开始读取数据。 ## Python Connector -Python连接器的使用参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1963.html) +Python 连接器的使用参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1963.html) * **安装**:参见下面具体步骤 -* **示例程序**:位于install_directory/examples/python +* **示例程序**:位于 install_directory/examples/python ### 安装 -Python连接器支持的系统有:Linux 64/Windows x64 +Python 连接器支持的系统有:Linux 64/Windows x64 安装前准备: -- 已安装好TDengine应用驱动,请参考[安装连接器驱动步骤](https://www.taosdata.com/cn/documentation/connector#driver) -- 已安装python 2.7 or >= 3.4 -- 已安装pip +- 已安装好 TDengine 应用驱动,请参考[安装连接器驱动步骤](https://www.taosdata.com/cn/documentation/connector#driver) +- 已安装 Python 2.7 or >= 3.4 +- 已安装 pip ### Python连接器安装 @@ -579,19 +585,19 @@ Python 命令行依赖 taos 动态库 `libtaos.so` 或 `taos.dll`, 对于 Window * **read_example.py** Python示例源程序 -用户可以参考`read_example.py`这个程序来设计用户自己的写入、查询程序。 +用户可以参考 `read_example.py` 这个程序来设计用户自己的写入、查询程序。 -在安装了对应的应用驱动后,通过`import taos`引入taos类。主要步骤如下: +在安装了对应的应用驱动后,通过 `import taos` 引入 taos 类。主要步骤如下: -- 通过taos.connect获取TaosConnection对象,这个对象可以一个程序只申请一个,在多线程中共享。 +- 通过 taos.connect 获取 TaosConnection对象,这个对象可以一个程序只申请一个,在多线程中共享。 -- 通过TaosConnection对象的 `.cursor()` 方法获取一个新的游标对象,这个游标对象必须保证每个线程独享。 +- 通过 TaosConnection 对象的 `.cursor()` 方法获取一个新的游标对象,这个游标对象必须保证每个线程独享。 -- 通过游标对象的execute()方法,执行写入或查询的SQL语句。 +- 通过游标对象的 execute()方法,执行写入或查询的 SQL 语句。 -- 如果执行的是写入语句,execute返回的是成功写入的行数信息affected rows。 +- 如果执行的是写入语句,execute 返回的是成功写入的行数信息 affected rows。 -- 如果执行的是查询语句,则execute执行成功后,需要通过fetchall方法去拉取结果集。 具体方法可以参考示例代码。 +- 如果执行的是查询语句,则 execute 执行成功后,需要通过 fetchall 方法去拉取结果集。 具体方法可以参考示例代码。 ### 安装验证 @@ -604,11 +610,11 @@ python3 PythonChecker.py -host 
验证通过将打印出成功信息。 -### Python连接器的使用 +### Python 连接器的使用 -#### PEP-249 兼容API +#### PEP-249 兼容 API -您可以像其他数据库一样,使用类似 [PEP-249](https://www.python.org/dev/peps/pep-0249/) 数据库API规范风格的API: +您可以像其他数据库一样,使用类似 [PEP-249](https://www.python.org/dev/peps/pep-0249/) 数据库 API 规范风格的 API: ```python import taos @@ -624,7 +630,7 @@ for row in results: ##### 代码示例 -1. 导入TDengine客户端模块 +1. 导入 TDengine 客户端模块 ```python import taos @@ -637,7 +643,7 @@ for row in results: c1 = conn.cursor() ``` - *host* 是TDengine 服务端所在IP, *config* 为客户端配置文件所在目录。 + *host* 是 TDengine 服务端所在IP, *config* 为客户端配置文件所在目录。 3. 写入数据 @@ -681,7 +687,7 @@ for row in results: #### Query API -从v2.1.0版本开始, 我们提供另外一种方法:`connection.query` 来操作数据库。 +从 v2.1.0 版本开始, 我们提供另外一种方法:`connection.query` 来操作数据库。 ```python import taos @@ -751,7 +757,7 @@ conn.execute("drop database pytest") #### JSON 类型 -从 `taospy` `v2.2.0` 开始,Python连接器开始支持 JSON 数据类型的标签(TDengine版本要求 Beta 版 2.3.5+, 稳定版 2.4.0+)。 +从 `taospy` `v2.2.0` 开始,Python 连接器开始支持 JSON 数据类型的标签(TDengine版本要求 Beta 版 2.3.5+, 稳定版 2.4.0+)。 创建一个使用JSON类型标签的超级表及其子表: @@ -790,33 +796,32 @@ k1 = conn.query("select info->'k1' as k1 from s1").fetch_all_into_dict() """ ``` -更多JSON类型的操作方式请参考 [JSON 类型使用说明](https://www.taosdata.com/cn/documentation/taos-sql)。 +更多 JSON 类型的操作方式请参考 [JSON 类型使用说明](https://www.taosdata.com/cn/documentation/taos-sql)。 #### 关于纳秒 (nanosecond) 在 Python 连接器中的说明 由于目前 Python 对 nanosecond 支持的不完善(参见链接 1. 2. ),目前的实现方式是在 nanosecond 精度时返回整数,而不是 ms 和 us 返回的 datetime 类型,应用开发者需要自行处理,建议使用 pandas 的 to_datetime()。未来如果 Python 正式完整支持了纳秒,涛思数据可能会修改相关接口。 -1. https://stackoverflow.com/questions/10611328/parsing-datetime-strings-containing-nanoseconds -2. https://www.python.org/dev/peps/pep-0564/ +1. `https://stackoverflow.com/questions/10611328/parsing-datetime-strings-containing-nanoseconds` +2. 
`https://www.python.org/dev/peps/pep-0564/` #### 帮助信息 -用户可通过python的帮助信息直接查看模块的使用信息,或者参考tests/examples/python中的示例程序。以下为部分常用类和方法: +用户可通过 Python 的帮助信息直接查看模块的使用信息,或者参考 `examples/python` 目录中的示例程序。以下为部分常用类和方法: - _TaosConnection_ 类 - 参考python中help(taos.TaosConnection)。 - 这个类对应客户端和TDengine建立的一个连接。在客户端多线程的场景下,推荐每个线程申请一个独立的连接实例,而不建议多线程共享一个连接。 + 参考 Python 中 help(taos.TaosConnection)。 + 这个类对应客户端和 TDengine 建立的一个连接。在客户端多线程的场景下,推荐每个线程申请一个独立的连接实例,而不建议多线程共享一个连接。 - _TaosCursor_ 类 - 参考python中help(taos.TaosCursor)。 + 参考 Python 中 help(taos.TaosCursor)。 这个类对应客户端进行的写入、查询操作。在客户端多线程的场景下,这个游标实例必须保持线程独享,不能跨线程共享使用,否则会导致返回结果出现错误。 - _connect_ 方法 - 用于生成taos.TaosConnection的实例。 - + 用于生成 taos.TaosConnection 的实例。 ## RESTful Connector @@ -835,11 +840,13 @@ RESTful 接口不依赖于任何 TDengine 的库,因此客户端不需要安 下面以 Ubuntu 环境中使用 curl 工具(确认已经安装)来验证 RESTful 接口的正常。 下面示例是列出所有的数据库,请把 h1.taosdata.com 和 6041(缺省值)替换为实际运行的 TDengine 服务 fqdn 和端口号: + ```html curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' h1.taosdata.com:6041/rest/sql ``` 返回值结果如下表示验证通过: + ```json { "status": "succ", @@ -865,7 +872,7 @@ http://:/rest/sql/[db_name] - port: 配置文件中 httpPort 配置项,缺省为 6041 - db_name: 可选参数,指定本次所执行的 SQL 语句的默认数据库库名。(从 2.2.0.0 版本开始支持) -例如:http://h1.taos.com:6041/rest/sql/test 是指向地址为 h1.taos.com:6041 的 url,并将默认使用的数据库库名设置为 test。 +例如:`http://h1.taos.com:6041/rest/sql/test` 是指向地址为 `h1.taos.com:6041` 的 url,并将默认使用的数据库库名设置为 test。 HTTP 请求的 Header 里需带有身份认证信息,TDengine 支持 Basic 认证与自定义认证两种机制,后续版本将提供标准安全的数字签名机制来做身份验证。 @@ -923,6 +930,7 @@ curl -u username:password -d '' :/rest/sql/[db_name] - rows: 表明总共多少行数据。 column_meta 中的列类型说明: + * 1:BOOL * 2:TINYINT * 3:SMALLINT @@ -973,6 +981,7 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata ```bash curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sql ``` + 返回值: ```json @@ -995,6 +1004,7 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'create database demo' 19 ``` 返回值: + ```json { "status": "succ", @@ -1033,6 +1043,7 @@ 
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001 #### 结果集采用 UTC 时间字符串 HTTP 请求 URL 采用 `sqlutc` 时,返回结果集的时间戳将采用 UTC 时间字符串表示,例如 + ```bash curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.t1' 192.168.0.1:6041/rest/sqlutc ``` @@ -1065,28 +1076,31 @@ HTTP 请求 URL 采用 `sqlutc` 时,返回结果集的时间戳将采用 UTC ## CSharp Connector -* C#连接器支持的系统有:Linux 64/Windows x64/Windows x86 +* C# 连接器支持的系统有:Linux 64/Windows x64/Windows x86 -* C#连接器现在也支持从[Nuget下载引用](https://www.nuget.org/packages/TDengine.Connector/) +* C# 连接器现在也支持从[Nuget下载引用](https://www.nuget.org/packages/TDengine.Connector/) + +* 在 Windows 系统上,C# 应用程序可以使用 TDengine 的原生 C 接口来执行所有数据库操作,后续版本将提供 ORM(Dapper)框架驱动。 -* 在Windows系统上,C#应用程序可以使用TDengine的原生C接口来执行所有数据库操作,后续版本将提供ORM(Dapper)框架驱动。 ### 安装准备 * 应用驱动安装请参考[安装连接器驱动步骤](https://www.taosdata.com/cn/documentation/connector#driver)。 -* 接口文件TDengineDrivercs.cs和参考程序示例TDengineTest.cs均位于Windows客户端install_directory/examples/C#目录下。 +* 接口文件 TDengineDrivercs.cs 和参考程序示例 TDengineTest.cs 均位于 Windows 客户端 install_directory/examples/C#目录下。 * 安装[.NET SDK](https://dotnet.microsoft.com/download) ### 示例程序 示例程序源码位于 + * {client_install_directory}/examples/C# * [github C# example source code](https://github.com/taosdata/TDengine/tree/develop/examples/C%2523) -**注意:** TDengineTest.cs C#示例源程序,包含了数据库连接参数,以及如何执行数据插入、查询等操作。 +**注意:** TDengineTest.cs C# 示例源程序,包含了数据库连接参数,以及如何执行数据插入、查询等操作。 ### 安装验证 需要先安装 .Net SDK + ```cmd cd {client_install_directory}/examples/C#/C#Checker //运行测试 @@ -1095,36 +1109,39 @@ dotnet run -- -h . // 此步骤会先build,然后再运行。 ### C#连接器的使用 -在Windows系统上,C#应用程序可以使用TDengine的C#连接器接口来执行所有数据库的操作。使用的具体步骤如下所示: +在 Windows 系统上,C# 应用程序可以使用 TDengine 的 C# 连接器接口来执行所有数据库的操作。使用的具体步骤如下所示: + +* 创建一个 C# project(需要 .NET SDK). -需要 .NET SDK -* 创建一个c# project. 
``` cmd mkdir test cd test dotnet new console ``` -* 通过Nuget引用TDengineDriver包 + +* 通过 Nuget 引用 TDengineDriver 包 + ``` cmd dotnet add package TDengine.Connector ``` -* 在项目中需要用到TDengineConnector的地方引用TDengineDriver namespace。 -```c# + +* 在项目中需要用到 TDengineConnector 的地方引用 TDengineDriver namespace。 + +```C# using TDengineDriver; ``` -* 用户可以参考[TDengineTest.cs](https://github.com/taosdata/TDengine/tree/develop/examples/C%2523/TDengineTest)来定义数据库连接参数,以及如何执行数据插入、查询等操作。 +* 用户可以参考[TDengineTest.cs](https://github.com/taosdata/TDengine/tree/develop/examples/C%2523/TDengineTest)来定义数据库连接参数,以及如何执行数据插入、查询等操作。 **注意:** -* TDengine V2.0.3.0之后同时支持32位和64位Windows系统,所以C#项目在生成.exe文件时,“解决方案”/“项目”的“平台”请选择对应的“X86” 或“x64”。 -* 此接口目前已经在Visual Studio 2015/2017中验证过,其它VS版本尚待验证。 -* 此连接器需要用到taos.dll文件,所以在未安装客户端时需要在执行应用程序前,拷贝Windows{client_install_directory}/driver目录中的taos.dll文件到项目最后生成.exe可执行文件所在的文件夹。之后运行exe文件,即可访问TDengine数据库并做插入、查询等操作。 - +* TDengine V2.0.3.0 之后同时支持 32 位和 64 位 Windows 系统,所以 C# 项目在生成 .exe 文件时,“解决方案”/“项目”的“平台”请选择对应的 x86 或 x64。 +* 此接口目前已经在 Visual Studio 2015/2017 中验证过,其它 Visual Studio 版本尚待验证。 +* 此连接器需要用到 taos.dll 文件,所以在未安装客户端时需要在执行应用程序前,拷贝 Windows{client_install_directory}/driver 目录中的 taos.dll 文件到项目最后生成 .exe 可执行文件所在的文件夹。之后运行 exe 文件,即可访问 TDengine 数据库并做插入、查询等操作。 ### 第三方驱动 -Maikebing.Data.Taos是一个TDengine的ADO.Net提供器,支持linux,windows。该开发包由热心贡献者`麦壳饼@@maikebing`提供,具体请参考 +Maikebing.Data.Taos 是一个 TDengine 的 ADO.Net 提供器,支持 linux,windows。该开发包由热心贡献者`麦壳饼@@maikebing`提供,具体请参考 ``` //接口下载 @@ -1137,7 +1154,7 @@ https://www.taosdata.com/blog/2020/11/02/1901.html ### 安装准备 -Go连接器支持的系统有: +Go 连接器支持的系统有: | **CPU类型** | x64(64bit) | | | aarch64 | aarch32 | | --------------- | ------------ | -------- | -------- | -------- | ---------- | @@ -1146,20 +1163,23 @@ Go连接器支持的系统有: 安装前准备: -- 已安装好TDengine应用驱动,参考[安装连接器驱动步骤](https://www.taosdata.com/cn/documentation/connector#driver)。 +- 已安装好 TDengine 应用驱动,参考[安装连接器驱动步骤](https://www.taosdata.com/cn/documentation/connector#driver)。 ### 示例程序 -使用 Go 连接器的示例代码请参考 
https://github.com/taosdata/TDengine/tree/develop/examples/go 以及[视频教程](https://www.taosdata.com/blog/2020/11/11/1951.html)。 +使用 Go 连接器的示例代码请参考 `https://github.com/taosdata/TDengine/tree/develop/examples/go` 以及[视频教程](https://www.taosdata.com/blog/2020/11/11/1951.html)。 示例程序源码也位于安装目录下的 examples/go/taosdemo.go 文件中。 -**提示:建议Go版本是1.13及以上,并开启模块支持:** +**提示:建议 Go 版本是 1.13 及以上,并开启模块支持:** + ```sh go env -w GO111MODULE=on go env -w GOPROXY=https://goproxy.io,direct ``` -在taosdemo.go所在目录下进行编译和执行: + +在 taosdemo.go 所在目录下进行编译和执行: + ```sh go mod init taosdemo go get github.com/taosdata/driver-go/taosSql @@ -1169,9 +1189,10 @@ go build ./taosdemo -h fqdn -p serverPort ``` -### Go连接器的使用 +### Go 连接器的使用 + +TDengine 提供了GO驱动程序包`taosSql`。`taosSql` 实现了 GO 语言的内置接口 `database/sql/driver`。用户只需按如下方式引入包就可以在应用程序中访问 TDengine。 -TDengine提供了GO驱动程序包`taosSql`。`taosSql`实现了GO语言的内置接口`database/sql/driver`。用户只需按如下方式引入包就可以在应用程序中访问TDengine。 ```go import ( "database/sql" @@ -1181,54 +1202,169 @@ import ( **提示**:下划线与双引号之间必须有一个空格。 -`taosSql` 的 v2 版本进行了重构,分离出内置数据库操作接口 `database/sql/driver` 到目录 `taosSql`;订阅、stmt等其他功能放到目录 `af`。 +`taosSql` 的 v2 版本进行了重构,分离出内置数据库操作接口 `database/sql/driver` 到目录 `taosSql`;订阅、 stmt 等其他功能放到目录 `af`。 -### 常用API +### 常用 API - `sql.Open(DRIVER_NAME string, dataSourceName string) *DB` - 该API用来打开DB,返回一个类型为\*DB的对象,一般情况下,DRIVER_NAME设置为字符串`taosSql`,dataSourceName设置为字符串`user:password@/tcp(host:port)/dbname`,如果客户想要用多个goroutine并发访问TDengine, 那么需要在各个goroutine中分别创建一个sql.Open对象并用之访问TDengine + 该 API 用来打开 DB,返回一个类型为\*DB的对象,一般情况下,DRIVER_NAME设置为字符串 `taosSql`,dataSourceName 设置为字符串 `user:password@/tcp(host:port)/dbname`,如果客户想要用多个 goroutine 并发访问 TDengine, 那么需要在各个 goroutine 中分别创建一个 sql.Open 对象并用之访问 TDengine。 - **注意**: 该API成功创建的时候,并没有做权限等检查,只有在真正执行Query或者Exec的时候才能真正的去创建连接,并同时检查user/password/host/port是不是合法。另外,由于整个驱动程序大部分实现都下沉到taosSql所依赖的libtaos动态库中。所以,sql.Open本身特别轻量。 + **注意**: 该 API 成功创建的时候,并没有做权限等检查,只有在真正执行 Query 或者 Exec 的时候才能真正的去创建连接,并同时检查 user/password/host/port 是不是合法。另外,由于整个驱动程序大部分实现都下沉到 taosSql 所依赖的 libtaos 
动态库中。所以,sql.Open 本身特别轻量。 - `func (db *DB) Exec(query string, args ...interface{}) (Result, error)` - sql.Open内置的方法,用来执行非查询相关SQL + sql.Open 内置的方法,用来执行非查询相关 SQL - `func (db *DB) Query(query string, args ...interface{}) (*Rows, error)` - sql.Open内置的方法,用来执行查询语句 + sql.Open 内置的方法,用来执行查询语句 - `func (db *DB) Prepare(query string) (*Stmt, error)` - sql.Open内置的方法,Prepare creates a prepared statement for later queries or executions. + sql.Open 内置的方法,Prepare creates a prepared statement for later queries or executions. - `func (s *Stmt) Exec(args ...interface{}) (Result, error)` - sql.Open内置的方法,executes a prepared statement with the given arguments and returns a Result summarizing the effect of the statement. + sql.Open 内置的方法,executes a prepared statement with the given arguments and returns a Result summarizing the effect of the statement. - `func (s *Stmt) Query(args ...interface{}) (*Rows, error)` - sql.Open内置的方法,Query executes a prepared query statement with the given arguments and returns the query results as a \*Rows. + sql.Open 内置的方法,Query executes a prepared query statement with the given arguments and returns the query results as a \*Rows. - `func (s *Stmt) Close() error` - sql.Open内置的方法,Close closes the statement. + sql.Open 内置的方法,Close closes the statement. 
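上面的常用 API 中,`sql.Open` 的 dataSourceName 需要按文档给出的 `user:password@/tcp(host:port)/dbname` 格式拼接。下面给出一个拼接该 DSN 的简单示意(其中 `buildTaosDSN` 是本文为演示假设的辅助函数,并非 driver-go 提供的接口):

```go
package main

import "fmt"

// buildTaosDSN 按 user:password@/tcp(host:port)/dbname 的格式
// 拼接 taosSql 驱动使用的 dataSourceName。
// 注意:本函数仅为示意,不是 driver-go 自带的 API。
func buildTaosDSN(user, password, host string, port int, dbname string) string {
	return fmt.Sprintf("%s:%s@/tcp(%s:%d)/%s", user, password, host, port, dbname)
}

func main() {
	dsn := buildTaosDSN("root", "taosdata", "localhost", 6030, "test")
	fmt.Println(dsn) // root:taosdata@/tcp(localhost:6030)/test
	// 实际使用时将其传入 sql.Open:
	//   db, err := sql.Open("taosSql", dsn)
}
```

得到 DSN 后即可调用 `sql.Open("taosSql", dsn)`;如前所述,`sql.Open` 本身不校验连接参数,首次执行 Query 或 Exec 时才会真正建立连接并检查 user/password/host/port 是否合法。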
### 其他代码示例 [Consume Messages from Kafka](https://github.com/taosdata/go-demo-kafka) 是一个通过 Go 语言实现消费 Kafka 队列写入 TDengine 的示例程序,也可以作为通过 Go 连接 TDengine 的写法参考。 -## Node.js Connector +### Go RESTful的使用 + +#### 引入 + +```go restful +import ( + "database/sql" + _ "github.com/taosdata/driver-go/v2/taosRestful" +) +``` + +`go.mod ` 的文件 require 块使用 github.com/taosdata/driver-go/v2 develop 之后执行 `go mod tidy ` + +`sql.Open `的driverName 为 `taosRestful` + +#### DSN + +格式为: + +数据库用户名:数据库密码@连接方式(域名或ip:端口)/[数据库][?参数] -Node.js连接器支持的系统有: +样例: + +`root:taosdata@http(localhost:6041)/test?readBufferSize=52428800` + +参数: + +`disableCompression` 是否接受压缩数据,默认为 true 不接受压缩数据,如果传输数据使用 gzip 压缩设置为 false。 + +`readBufferSize` 读取数据的缓存区大小默认为 4K(4096),当查询结果数据量多时可以适当调大该值。 + +#### 使用限制 + +由于 RESTful 接口无状态所以 `use db` 语法不会生效,需要将 db 名称放到 SQL 语句中,如:`create table if not exists tb1 (ts timestamp, a int)`改为`create table if not exists test.tb1 (ts timestamp, a int)`否则将报错`[0x217] Database not specified or available`。 + +也可以将 db 名称放到 DSN 中,将 `root:taosdata@http(localhost:6041)/` 改为 `root:taosdata@http(localhost:6041)/test`,此方法在 TDengine 2.4.0.5 版本的 taosAdapter 开始支持。当指定的 db 不存在时执行 `create database` 语句不会报错,而执行针对该 db 的其他查询或写入操作会报错。完整示例如下: + +```go restful demo +package main + +import ( + "database/sql" + "fmt" + "time" + + _ "github.com/taosdata/driver-go/v2/taosRestful" +) + +func main() { + var taosDSN = "root:taosdata@http(localhost:6041)/test" + taos, err := sql.Open("taosRestful", taosDSN) + if err != nil { + fmt.Println("failed to connect TDengine, err:", err) + return + } + defer taos.Close() + taos.Exec("create database if not exists test") + taos.Exec("create table if not exists tb1 (ts timestamp, a int)") + _, err = taos.Exec("insert into tb1 values(now, 0)(now+1s,1)(now+2s,2)(now+3s,3)") + if err != nil { + fmt.Println("failed to insert, err:", err) + return + } + rows, err := taos.Query("select * from tb1") + if err != nil { + fmt.Println("failed to select from table, err:", err) + return + } + + defer 
rows.Close() + for rows.Next() { + var r struct { + ts time.Time + a int + } + err := rows.Scan(&r.ts, &r.a) + if err != nil { + fmt.Println("scan error:\n", err) + return + } + fmt.Println(r.ts, r.a) + } +} +``` + +#### 常见问题 + +- 无法找到包`github.com/taosdata/driver-go/v2/taosRestful` + + 将 `go.mod` 中 require 块对`github.com/taosdata/driver-go/v2`的引用改为`github.com/taosdata/driver-go/v2 develop`,之后执行 `go mod tidy`。 + +- stmt 相关接口崩溃 + + RESTful 不支持 stmt 相关接口,建议使用`db.Exec`和`db.Query`。 + +- 使用 `use db` 语句后执行其他语句报错 `[0x217] Database not specified or available` + + 在 RESTful 接口中 SQL 语句的执行无上下文关联,使用 `use db` 语句不会生效,解决办法见上方使用限制章节。 + +- 使用 taosSql 不报错使用 taosRestful 报错 `[0x217] Database not specified or available` + + 因为 RESTful 接口无状态,使用 `use db` 语句不会生效,解决办法见上方使用限制章节。 + +- 升级 `github.com/taosdata/driver-go/v2/taosRestful` + + 将 `go.mod` 文件中对 `github.com/taosdata/driver-go/v2` 的引用改为 `github.com/taosdata/driver-go/v2 develop`,之后执行 `go mod tidy`。 + +- readBufferSize 参数调大后无明显效果 + + readBufferSize 调大后会减少获取结果时 syscall 的调用。如果查询结果的数据量不大,修改该参数不会带来明显提升,如果该参数修改过大,瓶颈会在解析 JSON 数据。如果需要优化查询速度,需要根据实际情况调整该值来达到查询效果最优。 + +- disableCompression 参数设置为 false 时查询效率降低 + + 当 disableCompression 参数设置为 false 时查询结果会 gzip 压缩后传输,拿到数据后要先进行 gzip 解压。 + +## Node.js Connector + +Node.js 连接器支持的系统有: |**CPU类型** | x64(64bit) | | | aarch64 | aarch32 | | ------------ | ------------ | -------- | -------- | -------- | -------- | | **OS类型** | Linux | Win64 | Win32 | Linux | Linux | | **支持与否** | **支持** | **支持** | **支持** | **支持** | **支持** | -Node.js连接器的使用参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1957.html)。 +Node.js 连接器的使用参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1957.html)。 ### 安装准备 @@ -1236,56 +1372,57 @@ Node.js连接器的使用参见[视频教程](https://www.taosdata.com/blog/2020 ### 安装Node.js连接器 -用户可以通过[npm](https://www.npmjs.com/)来进行安装,也可以通过源代码*src/connector/nodejs/* 来进行安装。具体安装步骤如下: +用户可以通过 [npm](https://www.npmjs.com/) 来进行安装,也可以通过源代码 *src/connector/nodejs/* 来进行安装。具体安装步骤如下: -首先,通过[npm](https://www.npmjs.com/)安装node.js 连接器。 
+首先,通过 [npm](https://www.npmjs.com/) 安装 Node.js 连接器。 ```bash npm install td2.0-connector ``` -我们建议用户使用npm 安装node.js连接器。如果您没有安装npm,可以将*src/connector/nodejs/*拷贝到您的nodejs 项目目录下。 -我们使用[node-gyp](https://github.com/nodejs/node-gyp)和TDengine服务端进行交互。安装node.js连接器之前,还需要根据具体操作系统来安装下文提到的一些依赖工具。 +我们建议用户使用 npm 安装 Node.js 连接器。如果您没有安装 npm,可以将 *src/connector/nodejs/* 拷贝到您的 nodejs 项目目录下。 + +我们使用 [node-gyp](https://github.com/nodejs/node-gyp) 和 TDengine 服务端进行交互。安装 Node.js 连接器之前,还需要根据具体操作系统来安装下文提到的一些依赖工具。 ### Linux - `python` (建议`v2.7` , `v3.x.x` 目前还不支持) - `node` 2.0.6支持v12.x和v10.x,2.0.5及更早版本支持v10.x版本,其他版本可能存在包兼容性的问题。 - `make` -- c语言编译器比如[GCC](https://gcc.gnu.org) +- C 语言编译器比如 [GCC](https://gcc.gnu.org) ### Windows #### 安装方法1 -使用微软的[windows-build-tools](https://github.com/felixrieseberg/windows-build-tools)在`cmd` 命令行界面执行`npm install --global --production windows-build-tools` 即可安装所有的必备工具。 +使用微软的 [windows-build-tools](https://github.com/felixrieseberg/windows-build-tools) 在 `cmd` 命令行界面执行 `npm install --global --production windows-build-tools` 即可安装所有的必备工具。 #### 安装方法2 手动安装以下工具: -- 安装Visual Studio相关:[Visual Studio Build 工具](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) 或者 [Visual Studio 2017 Community](https://visualstudio.microsoft.com/pl/thank-you-downloading-visual-studio/?sku=Community) +- 安装 Visual Studio 相关:[Visual Studio Build 工具](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) 或者 [Visual Studio 2017 Community](https://visualstudio.microsoft.com/pl/thank-you-downloading-visual-studio/?sku=Community) - 安装 [Python](https://www.python.org/downloads/) 2.7(`v3.x.x` 暂不支持) 并执行 `npm config set python python2.7` - 进入`cmd`命令行界面,`npm config set msvs_version 2017` -如果以上步骤不能成功执行,可以参考微软的node.js用户手册[Microsoft's Node.js Guidelines for Windows](https://github.com/Microsoft/nodejs-guidelines/blob/master/windows-environment.md#compiling-native-addon-modules)。 +如果以上步骤不能成功执行,可以参考微软的 Node.js 用户手册 [Microsoft's Node.js 
Guidelines for Windows](https://github.com/Microsoft/nodejs-guidelines/blob/master/windows-environment.md#compiling-native-addon-modules)。 -如果在Windows 10 ARM 上使用ARM64 Node.js,还需添加 "Visual C++ compilers and libraries for ARM64" 和 "Visual C++ ATL for ARM64"。 +如果在 Windows 10 ARM 上使用 ARM64 Node.js,还需添加 "Visual C++ compilers and libraries for ARM64" 和 "Visual C++ ATL for ARM64"。 ### 示例程序 -示例程序源码位于install_directory/examples/nodejs,有: +示例程序源码位于 install_directory/examples/nodejs,有: -Node-example.js node.js示例源程序 +Node-example.js Node.js示例源程序 Node-example-raw.js ### 安装验证 -在安装好TDengine客户端后,使用nodejsChecker.js程序能够验证当前环境是否支持nodejs方式访问Tdengine。 +在安装好 TDengine 客户端后,使用 nodejsChecker.js 程序能够验证当前环境是否支持 nodejs 方式访问 TDengine。 验证方法: -1. 新建安装验证目录,例如:`~/tdengine-test`,拷贝github上nodejsChecker.js源程序。下载地址:(https://github.com/taosdata/TDengine/tree/develop/examples/nodejs/nodejsChecker.js)。 +1. 新建安装验证目录,例如:`~/tdengine-test`,拷贝 github 上 nodejsChecker.js 源程序。下载地址:`https://github.com/taosdata/TDengine/tree/develop/examples/nodejs/nodejsChecker.js`。 2. 在命令行中执行以下命令: @@ -1295,15 +1432,15 @@ npm install td2.0-connector node nodejsChecker.js host=localhost ``` -3. 执行以上步骤后,在命令行会输出nodejs连接Tdengine实例,并执行简答插入和查询的结果。 +3. 
执行以上步骤后,在命令行会输出 nodejs 连接 TDengine 实例,并执行简单插入和查询的结果。 ### Node.js连接器的使用 -以下是Node.js 连接器的一些基本使用方法,详细的使用方法可参考[TDengine Node.js connector](https://github.com/taosdata/TDengine/tree/develop/src/connector/nodejs)。 +以下是 Node.js 连接器的一些基本使用方法,详细的使用方法可参考[TDengine Node.js connector](https://github.com/taosdata/TDengine/tree/develop/src/connector/nodejs)。 #### 建立连接 -使用node.js连接器时,必须先`require td2.0-connector`,然后使用 `taos.connect` 函数建立到服务端的连接。例如如下代码: +使用 Node.js 连接器时,必须先 `require td2.0-connector`,然后使用 `taos.connect` 函数建立到服务端的连接。例如如下代码: ```javascript const taos = require('td2.0-connector'); @@ -1311,25 +1448,25 @@ var conn = taos.connect({host:"taosdemo.com", user:"root", password:"taosdata", var cursor = conn.cursor(); // Initializing a new cursor ``` -建立了一个到hostname为taosdemo.com,端口为6030(Tdengine的默认端口号)的连接。连接指定了用户名(root)和密码(taosdata)。taos.connect 函数必须提供的参数是`host`,其它参数在没有提供的情况下会使用如下的默认值。taos.connect返回了`cursor` 对象,使用cursor来执行sql语句。 +建立了一个到 hostname 为 taosdemo.com,端口为 6030(TDengine 的默认端口号)的连接。连接指定了用户名(root)和密码(taosdata)。taos.connect 函数必须提供的参数是 `host`,其它参数在没有提供的情况下会使用如下的默认值。taos.connect 返回了 `cursor` 对象,使用 cursor 来执行 SQL 语句。 -#### 执行SQL和插入数据 +#### 执行 SQL 和插入数据 -对于DDL语句(例如create database、create table、use等),可以使用cursor的execute方法。代码如下: +对于 DDL 语句(例如 create database、create table、use 等),可以使用 cursor 的 execute 方法。代码如下: ```js cursor.execute('create database if not exists test;') ``` -以上代码创建了一个名称为test的数据库。对于DDL语句,一般没有返回值,cursor的execute返回值为0。 +以上代码创建了一个名称为 `test` 的数据库。对于 DDL 语句,一般没有返回值,cursor 的 execute 返回值为 0。 -对于Insert语句,代码如下: +对于 Insert 语句,代码如下: ```js var affectRows = cursor.execute('insert into test.weather values(now, 22.3, 34);') ``` -execute方法的返回值为该语句影响的行数,上面的sql向test库的weather表中,插入了一条数据,则返回值affectRows为1。 +execute 方法的返回值为该语句影响的行数,上面的 SQL 向 test 库的 weather 表中,插入了一条数据,则返回值 affectRows 为 1。 TDengine 目前还不支持 delete 语句。但从 2.0.8.0 版本开始,可以通过 `CREATE DATABASE` 时指定的 UPDATE 参数来启用对数据行的 update。 @@ -1349,6 +1486,7 @@ promise.then(function(result) { result.pretty(); }); ``` + 
格式化查询语句还可以使用`query`的`bind`方法。如下面的示例:`query`会自动将提供的数值填入查询语句的`?`里。 ```javascript @@ -1357,6 +1495,7 @@ query.execute().then(function(result) { result.pretty(); }) ``` + 如果在`query`语句里提供第二个参数并设为`true`也可以立即获取查询结果。如下: ```javascript @@ -1369,6 +1508,7 @@ promise.then(function(result) { #### 关闭连接 在完成插入、查询等操作后,要关闭连接。代码如下: + ```js conn.close(); ``` @@ -1376,6 +1516,7 @@ conn.close(); #### 异步函数 异步查询数据库的操作和上面类似,只需要在`cursor.execute`, `TaosQuery.execute`等函数后面加上`_a`。 + ```javascript var promise1 = cursor.query('select count(*), avg(v1), avg(v2) from meter1;').execute_a() var promise2 = cursor.query('select count(*), avg(v1), avg(v2) from meter2;').execute_a(); @@ -1389,6 +1530,6 @@ promise2.then(function(result) { ### 示例 -[node-example.js](https://github.com/taosdata/tests/tree/master/examples/nodejs/node-example.js)提供了一个使用NodeJS 连接器建表,插入天气数据并查询插入的数据的代码示例。 +[node-example.js](https://github.com/taosdata/TDengine/blob/master/examples/nodejs/node-example.js) 提供了一个使用NodeJS 连接器建表,插入天气数据并查询插入的数据的代码示例。 -[node-example-raw.js](https://github.com/taosdata/tests/tree/master/examples/nodejs/node-example-raw.js)同样是一个使用NodeJS 连接器建表,插入天气数据并查询插入的数据的代码示例,但和上面不同的是,该示例只使用`cursor`。 +[node-example-raw.js](https://github.com/taosdata/TDengine/blob/master/examples/nodejs/node-example-raw.js) 同样是一个使用 NodeJS 连接器建表,插入天气数据并查询插入的数据的代码示例,但和上面不同的是,该示例只使用 `cursor`。 diff --git a/documentation20/cn/11.administrator/docs.md b/documentation20/cn/11.administrator/docs.md index 9756a9d85403b3434fe9eedbab5aeea18041d29e..e6ebd1ab9ee372daa2afe6fac0601c3f3d00898d 100644 --- a/documentation20/cn/11.administrator/docs.md +++ b/documentation20/cn/11.administrator/docs.md @@ -118,121 +118,1123 @@ taosd -C 下面仅仅列出一些重要的配置参数,更多的参数请看配置文件里的说明。各个参数的详细介绍及作用请看前述章节,而且这些参数的缺省配置都是可以工作的,一般无需设置。**注意:配置文件参数修改后,需要重启*taosd*服务,或客户端应用才能生效。** -| **#** | **配置参数名称** | **内部** | **SC** | **单位** | **含义** | **取值范围** | **缺省值** | **补充说明** | -| ----- | ----------------------- | -------- | -------- | -------- | 
------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | -| 1 | firstEP | | **SC** | | taosd启动时,主动连接的集群中首个dnode的end point | | localhost:6030 | | -| 2 | secondEP | YES | **SC** | | taosd启动时,如果firstEp连接不上,尝试连接集群中第二个dnode的end point | | 无 | | -| 3 | fqdn | | **SC** | | 数据节点的FQDN。如果习惯IP地址访问,可设置为该节点的IP地址。 | | 缺省为操作系统配置的第一个hostname。 | 这个参数值的长度需要控制在 96 个字符以内。 | -| 4 | serverPort | | **SC** | | taosd启动后,对外服务的端口号 | | 6030 | RESTful服务使用的端口号是在此基础上+11,即默认值为6041(注意2.4及后续版本使用 taosAdapter 提供 RESTful 接口)。 | -| 5 | logDir | | **SC** | | 日志文件目录,客户端和服务器的运行日志将写入该目录 | | /var/log/taos | | -| 6 | scriptDir | YES | **S** | | | | | | -| 7 | dataDir | | **S** | | 数据文件目录,所有的数据文件都将写入该目录 | | /var/lib/taos | | -| 8 | arbitrator | | **S** | | 系统中裁决器的end point | | 空 | | -| 9 | numOfThreadsPerCore | | **SC** | | 每个CPU核生成的队列消费者线程数量 | | 1.0 | | -| 10 | ratioOfQueryThreads | | **S** | | 设置查询线程的最大数量 | 0:表示只有1个查询线程;1:表示最大和CPU核数相等的查询线程;2:表示最大建立2倍CPU核数的查询线程。 | 1 | 该值可以为小数,即0.5表示最大建立CPU核数一半的查询线程。 | -| 11 | numOfMnodes | | **S** | | 系统中管理节点个数 | | 3 | | -| 12 | vnodeBak | | **S** | | 删除vnode时是否备份vnode目录 | 0:否,1:是 | 1 | | -| 13 | telemetryRePorting | | **S** | | 是否允许 TDengine 采集和上报基本使用信息 | 0:不允许;1:允许 | 1 | | -| 14 | balance | | **S** | | 是否启动负载均衡 | 0,1 | 1 | | -| 15 | balanceInterval | YES | **S** | 秒 | 管理节点在正常运行状态下,检查负载均衡的时间间隔 | 1-30000 | 300 | | -| 16 | role | | **S** | | dnode的可选角色 | 0:any(既可作为mnode,也可分配vnode);1:mgmt(只能作为mnode,不能分配vnode);2:dnode(不能作为mnode,只能分配vnode) | 0 | | -| 17 | maxTmerCtrl | | **SC** | 个 | 定时器个数 | 8-2048 | 512 | | -| 18 | monitorInterval | | **S** | 秒 | 监控数据库记录系统参数(CPU/内存)的时间间隔 | 1-600 | 30 | | -| 19 | offlineThreshold | | **S** | 秒 | dnode离线阈值,超过该时间将导致dnode离线 | 5-7200000 | 86400*10(10天) | | -| 20 | rpcTimer | | **SC** | 毫秒 | rpc重试时长 | 100-3000 | 300 | | -| 21 | rpcMaxTime | | **SC** | 秒 | 
rpc等待应答最大时长 | 100-7200 | 600 | | -| 22 | statusInterval | | **S** | 秒 | dnode向mnode报告状态间隔 | 1-10 | 1 | | -| 23 | shellActivityTimer | | **SC** | 秒 | shell客户端向mnode发送心跳间隔 | 1-120 | 3 | | -| 24 | tableMetaKeepTimer | | **S** | 秒 | 表的元数据cache时长 | 1-8640000 | 7200 | | -| 25 | minSlidingTime | | **S** | 毫秒 | 最小滑动窗口时长 | 10-1000000 | 10 | 支持us补值后,这个值就是1us了。 | -| 26 | minIntervalTime | | **S** | 毫秒 | 时间窗口最小值 | 1-1000000 | 10 | | -| 27 | stream | | **S** | | 是否启用连续查询(流计算功能) | 0:不允许;1:允许 | 1 | | -| 28 | maxStreamCompDelay | | **S** | 毫秒 | 连续查询启动最大延迟 | 10-1000000000 | 20000 | 为避免多个stream同时执行占用太多系统资源,程序中对stream的执行时间人为增加了一些随机的延时。maxFirstStreamCompDelay 是stream第一次执行前最少要等待的时间。streamCompDelayRatio 是延迟时间的计算系数,它乘以查询的 interval 后为延迟时间基准。maxStreamCompDelay是延迟时间基准的上限。实际延迟时间为一个不超过延迟时间基准的随机值。stream某次计算失败后需要重试,retryStreamCompDelay是重试的等待时间基准。实际重试等待时间为不超过等待时间基准的随机值。 | -| 29 | maxFirstStreamCompDelay | | **S** | 毫秒 | 第一次连续查询启动最大延迟 | 10-1000000000 | 10000 | | -| 30 | retryStreamCompDelay | | **S** | 毫秒 | 连续查询重试等待间隔 | 10-1000000000 | 10 | | -| 31 | streamCompDelayRatio | | **S** | | 连续查询的延迟时间计算系数 | 0.1-0.9 | 0.1 | | -| 32 | maxVgroupsPerDb | | **S** | | 每个DB中 能够使用的最大vnode个数 | 0-8192 | | | -| 33 | maxTablesPerVnode | | **S** | | 每个vnode中能够创建的最大表个数 | | 1000000 | | -| 34 | minTablesPerVnode | YES | **S** | | 每个vnode中必须创建的最小表个数 | | 1000 | | -| 35 | tableIncStepPerVnode | YES | **S** | | 每个vnode中超过最小表数后递增步长 | | 1000 | | -| 36 | cache | | **S** | MB | 内存块的大小 | | 16 | | -| 37 | blocks | | **S** | | 每个vnode(tsdb)中有多少cache大小的内存块。因此一个vnode的用的内存大小粗略为(cache * blocks) | | 6 | | -| 38 | days | | **S** | 天 | 数据文件存储数据的时间跨度 | | 10 | | -| 39 | keep | | **S** | 天 | 数据保留的天数 | | 3650 | | -| 40 | minRows | | **S** | | 文件块中记录的最小条数 | | 100 | | -| 41 | maxRows | | **S** | | 文件块中记录的最大条数 | | 4096 | | -| 42 | quorum | | **S** | | 多副本环境下指令执行的确认数要求 | 1,2 | 1 | | -| 43 | comp | | **S** | | 文件压缩标志位 | 0:关闭,1:一阶段压缩,2:两阶段压缩 | 2 | | -| 44 | walLevel | | **S** | | WAL级别 | 1:写wal, 但不执行fsync; 2:写wal, 而且执行fsync | 1 | | -| 45 | fsync 
| | **S** | 毫秒 | 当wal设置为2时,执行fsync的周期 | 最小为0,表示每次写入,立即执行fsync;最大为180000(三分钟) | 3000 | | -| 46 | replica | | **S** | | 副本个数 | 1-3 | 1 | | -| 47 | mqttHostName | YES | **S** | | mqtt uri | | | mqtt://username:password@hostname:1883/taos/ | -| 48 | mqttPort | YES | **S** | | mqtt client name | | | 1883 | -| 49 | mqttTopic | YES | **S** | | | | | /test | -| 50 | compressMsgSize | | **S** | bytes | 客户端与服务器之间进行消息通讯过程中,对通讯的消息进行压缩的阈值。如果要压缩消息,建议设置为64330字节,即大于64330字节的消息体才进行压缩。 | `0 `表示对所有的消息均进行压缩 >0: 超过该值的消息才进行压缩 -1: 不压缩 | -1 | | -| 51 | maxSQLLength | | **C** | bytes | 单条SQL语句允许的最长限制 | 65480-1048576 | 1048576 | | -| 52 | maxNumOfOrderedRes | | **SC** | | 支持超级表时间排序允许的最多记录数限制 | | 10万 | | -| 53 | timezone | | **SC** | | 时区 | | 从系统中动态获取当前的时区设置 | | -| 54 | locale | | **SC** | | 系统区位信息及编码格式 | | 系统中动态获取,如果自动获取失败,需要用户在配置文件设置或通过API设置 | | -| 55 | charset | | **SC** | | 字符集编码 | | 系统中动态获取,如果自动获取失败,需要用户在配置文件设置或通过API设置 | | -| 56 | maxShellConns | | **S** | | 一个dnode容许的连接数 | 10-50000000 | 5000 | | -| 57 | maxConnections | | **S** | | 一个数据库连接所容许的dnode连接数 | 1-100000 | 5000 | 实际测试下来,如果默认没有配,选 50 个 worker thread 会产生 Network unavailable | -| 58 | minimalLogDirGB | | **SC** | GB | 当日志文件夹的磁盘大小小于该值时,停止写日志 | | 0.1 | | -| 59 | minimalTmpDirGB | | **SC** | GB | 当日志文件夹的磁盘大小小于该值时,停止写临时文件 | | 0.1 | | -| 60 | minimalDataDirGB | | **S** | GB | 当日志文件夹的磁盘大小小于该值时,停止写时序数据 | | 0.1 | | -| 61 | mnodeEqualVnodeNum | | **S** | | 一个mnode等同于vnode消耗的个数 | | 4 | | -| 62 | http | | **S** | | 服务器内部的http服务开关。 | 0:关闭http服务, 1:激活http服务。 | 1 | | -| 63 | mqtt | YES | **S** | | 服务器内部的mqtt服务开关。 | 0:关闭mqtt服务, 1:激活mqtt服务。 | 0 | | -| 64 | monitor | | **S** | | 服务器内部的系统监控开关。监控主要负责收集物理节点的负载状况,包括CPU、内存、硬盘、网络带宽、HTTP请求量的监控记录,记录信息存储在`LOG`库中。 | 0:关闭监控服务, 1:激活监控服务。 | 0 | | -| 65 | httpEnableRecordSql | | **S** | | 内部使用,记录通过RESTFul接口,产生的SQL调用。taosAdapter 配置或有不同,请参考相应[文档](https://www.taosdata.com/cn/documentation/tools/adapter)。 | | 0 | 生成的文件(httpnote.0/httpnote.1),与服务端日志所在目录相同。 | -| 66 | httpMaxThreads | | **S** | | 
RESTFul接口的线程数。taosAdapter 配置或有不同,请参考相应[文档](https://www.taosdata.com/cn/documentation/tools/adapter)。 | | 2 | | -| 67 | telegrafUseFieldNum | YES | | | | | | | -| 68 | restfulRowLimit | | **S** | | RESTFul接口单次返回的记录条数。taosAdapter 配置或有不同,请参考相应[文档](https://www.taosdata.com/cn/documentation/tools/adapter)。 | | 10240 | 最大10,000,000 | -| 69 | numOfLogLines | | **SC** | | 单个日志文件允许的最大行数。 | | 10,000,000 | | -| 70 | asyncLog | | **SC** | | 日志写入模式 | 0:同步、1:异步 | 1 | | -| 71 | logKeepDays | | **SC** | 天 | 日志文件的最长保存时间 | | 0 | 大于0时,日志文件会被重命名为taosdlog.xxx,其中xxx为日志文件最后修改的时间戳。 | -| 72 | debugFlag | | **SC** | | 运行日志开关 | 131(输出错误和警告日志),135(输出错误、警告和调试日志),143(输出错误、警告、调试和跟踪日志) | 131或135(不同模块有不同的默认值) | | -| 73 | mDebugFlag | | **S** | | 管理模块的日志开关 | 同上 | 135 | | -| 74 | dDebugFlag | | **SC** | | dnode模块的日志开关 | 同上 | 135 | | -| 75 | sDebugFlag | | **SC** | | sync模块的日志开关 | 同上 | 135 | | -| 76 | wDebugFlag | | **SC** | | wal模块的日志开关 | 同上 | 135 | | -| 77 | sdbDebugFlag | | **SC** | | sdb模块的日志开关 | 同上 | 135 | | -| 78 | rpcDebugFlag | | **SC** | | rpc模块的日志开关 | 同上 | | | -| 79 | tmrDebugFlag | | **SC** | | 定时器模块的日志开关 | 同上 | | | -| 80 | cDebugFlag | | **C** | | client模块的日志开关 | 同上 | | | -| 81 | jniDebugFlag | | **C** | | jni模块的日志开关 | 同上 | | | -| 82 | odbcDebugFlag | | **C** | | odbc模块的日志开关 | 同上 | | | -| 83 | uDebugFlag | | **SC** | | 共用功能模块的日志开关 | 同上 | | | -| 84 | httpDebugFlag | | **S** | | http模块的日志开关 | 同上 | | | -| 85 | mqttDebugFlag | | **S** | | mqtt模块的日志开关 | 同上 | | | -| 86 | monitorDebugFlag | | **S** | | 监控模块的日志开关 | 同上 | | | -| 87 | qDebugFlag | | **SC** | | 查询模块的日志开关 | 同上 | | | -| 88 | vDebugFlag | | **SC** | | vnode模块的日志开关 | 同上 | | | -| 89 | tsdbDebugFlag | | **S** | | TSDB模块的日志开关 | 同上 | | | -| 90 | cqDebugFlag | | **SC** | | 连续查询模块的日志开关 | 同上 | | | -| 91 | tscEnableRecordSql | | **C** | | 是否记录客户端sql语句到文件 | 0:否,1:是 | 0 | 生成的文件(tscnote-xxxx.0/tscnote-xxx.1,xxxx是pid),与客户端日志所在目录相同。 | -| 92 | enableCoreFile | | **SC** | | 是否开启服务crash时生成core文件 | 0:否,1:是 | 1 | 不同的启动方式,生成core文件的目录如下:1、systemctl start 
taosd启动:生成的core在根目录下;2、手动启动,就在taosd执行目录下。 | -| 93 | gitinfo | YES | **SC** | | | 1 | | | -| 94 | gitinfoofInternal | YES | **SC** | | | 2 | | | -| 95 | Buildinfo | YES | **SC** | | | 3 | | | -| 96 | version | YES | **SC** | | | 4 | | | -| 97 | | | | | | | | | -| 98 | maxBinaryDisplayWidth | | **C** | | Taos shell中binary 和 nchar字段的显示宽度上限,超过此限制的部分将被隐藏 | 5 - | 30 | 实际上限按以下规则计算:如果字段值的长度大于 maxBinaryDisplayWidth,则显示上限为 **字段名长度** 和 **maxBinaryDisplayWidth** 的较大者。否则,上限为 **字段名长度** 和 **字段值长度** 的较大者。可在 shell 中通过命令 set max_binary_display_width nn动态修改此选项 | -| 99 | queryBufferSize | | **S** | MB | 为所有并发查询占用保留的内存大小。 | | | 计算规则可以根据实际应用可能的最大并发数和表的数字相乘,再乘 170 。(2.0.15 以前的版本中,此参数的单位是字节) | -| 100 | ratioOfQueryCores | | **S** | | 设置查询线程的最大数量。 | | | 最小值0 表示只有1个查询线程;最大值2表示最大建立2倍CPU核数的查询线程。默认为1,表示最大和CPU核数相等的查询线程。该值可以为小数,即0.5表示最大建立CPU核数一半的查询线程。 | -| 101 | update | | **S** | | 允许更新已存在的数据行 | 0:不允许更新;1:允许整行更新;2:允许部分列更新。(2.1.7.0 版本开始此参数支持设为 2,在此之前取值只能是 [0, 1]) | 0 | 2.0.8.0 版本之前,不支持此参数。 | -| 102 | cacheLast | | **S** | | 是否在内存中缓存子表的最近数据 | 0:关闭;1:缓存子表最近一行数据;2:缓存子表每一列的最近的非NULL值;3:同时打开缓存最近行和列功能。(2.1.2.0 版本开始此参数支持 0~3 的取值范围,在此之前取值只能是 [0, 1]) | 0 | 2.1.2.0 版本之前、2.0.20.7 版本之前在 taos.cfg 文件中不支持此参数。 | -| 103 | numOfCommitThreads | YES | **S** | | 设置写入线程的最大数量 | | | | -| 104 | maxWildCardsLength | | **C** | bytes | 设定 LIKE 算子的通配符字符串允许的最大长度 | 0-16384 | 100 | 2.1.6.1 版本新增。 | -| 105 | compressColData | | **S** | bytes | 客户端与服务器之间进行消息通讯过程中,对服务器端查询结果进行列压缩的阈值。 | 0: 对所有查询结果均进行压缩 >0: 查询结果中任意列大小超过该值的消息才进行压缩 -1: 不压缩 | -1 | 2.3.0.0 版本新增。 | -| 106 | tsdbMetaCompactRatio | | **C** | | tsdb meta文件中冗余数据超过多少阈值,开启meta文件的压缩功能 | 0:不开启,[1-100]:冗余数据比例 | 0 | | -| 107 | rpcForceTcp | | **SC**| | 强制使用TCP传输 | 0: 不开启 1: 开启 | 0 | 在网络比较差的环境中,建议开启。2.0版本新增。| -| 108 | maxNumOfDistinctRes | | **S**| | 允许返回的distinct结果最大行数 |默认值为10万,最大值1亿 | 10万 | 2.3版本新增。| -| 109 | clientMerge | | **C**| | 是否允许客户端对写入数据去重 |0:不开启,1:开启| 0 | 2.3版本新增。| -| 110 | httpDBNameMandatory | | **S**| | 是否在URL中输入 数据库名称|0:不开启,1:开启| 0 | 2.3版本新增。| -| 111 | 
maxRegexStringLen | | **C**| | 正则表达式最大允许长度 |默认值128,最大长度 16384 | 128 | 2.3版本新增。|
-
-**注意:**对于端口,TDengine会使用从serverPort起13个连续的TCP和UDP端口号,请务必在防火墙打开。因此如果是缺省配置,需要打开从6030到6042共13个端口,而且必须TCP和UDP都打开。(详细的端口情况请参见 [TDengine 2.0 端口说明](https://www.taosdata.com/cn/documentation/faq#port))
+
+1. **firstEp**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 服务端和客户端均适用 |
+    | 含义 | taosd 启动时,主动连接的集群中首个 dnode 的 end point |
+    | 缺省值 | localhost:6030 |
+
+2. **secondEp**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | Yes |
+    | 适用范围 | 服务端和客户端均适用 |
+    | 含义 | taosd 启动时,如果 firstEp 连接不上,尝试连接集群中第二个 dnode 的 end point |
+    | 缺省值 | 无 |
+
+3. **fqdn**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 服务端和客户端均适用 |
+    | 含义 | 数据节点的 FQDN。如果习惯 IP 地址访问,可设置为该节点的 IP 地址。 |
+    | 缺省值 | 缺省为操作系统配置的第一个 hostname。 |
+    | 补充说明 | 这个参数值的长度需要控制在 96 个字符以内。 |
+
+4. **serverPort**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 服务端和客户端均适用 |
+    | 含义 | taosd 启动后,对外服务的端口号 |
+    | 缺省值 | 6030 |
+    | 补充说明 | RESTful 服务使用的端口号是在此基础上+11,即默认值为 6041(注意 2.4 及后续版本使用 taosAdapter 提供 RESTful 接口)。 |
+
+5. **logDir**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 服务端和客户端均适用 |
+    | 含义 | 日志文件目录,客户端和服务器的运行日志将写入该目录 |
+    | 缺省值 | /var/log/taos |
+
+6. **scriptDir**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | Yes |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | |
+    | 缺省值 | |
+
+7. **dataDir**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 数据文件目录,所有的数据文件都将写入该目录 |
+    | 缺省值 | /var/lib/taos |
+
+8. **arbitrator**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 系统中裁决器的 end point |
+    | 缺省值 | 空 |
+
+9. **numOfThreadsPerCore**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 服务端和客户端均适用 |
+    | 含义 | 每个 CPU 核生成的队列消费者线程数量 |
+    | 缺省值 | 1.0 |
+
+10. **ratioOfQueryThreads**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 设置查询线程的最大数量 |
+    | 取值范围 | 0:表示只有 1 个查询线程
1:表示最大和 CPU 核数相等的查询线程
2:表示最大建立 2 倍 CPU 核数的查询线程。 |
+    | 缺省值 | 1 |
+    | 补充说明 | 该值可以为小数,即 0.5 表示最大建立 CPU 核数一半的查询线程。 |
+
+11. **numOfMnodes**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 系统中管理节点个数 |
+    | 缺省值 | 3 |
+
+12. **vnodeBak**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 删除 vnode 时是否备份 vnode 目录 |
+    | 取值范围 | 0:否,1:是 |
+    | 缺省值 | 1 |
+
+13. **telemetryReporting**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 是否允许 TDengine 采集和上报基本使用信息 |
+    | 取值范围 | 0:不允许
1:允许 | + | 缺省值 | 1 | + +14. **balance** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 是否启动负载均衡 | + | 取值范围 | 0,1 | + | 缺省值 | 1 | + +15. **balanceInterval** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | 仅服务端适用 | + | 含义 | 管理节点在正常运行状态下,检查负载均衡的时间间隔 | + | 单位| 秒 | + | 取值范围 | 1-30000 | + | 缺省值 | 300 | + +16. **role** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | dnode 的可选角色 | + | 取值范围 | 0:any(既可作为 mnode,也可分配 vnode)
1:mgmt(只能作为 mnode,不能分配 vnode)
2:dnode(不能作为 mnode,只能分配 vnode) |
+    | 缺省值 | 0 |
+
+17. **maxTmrCtrl**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 服务端和客户端均适用 |
+    | 含义 | 定时器个数 |
+    | 单位 | 个 |
+    | 取值范围 | 8-2048 |
+    | 缺省值 | 512 |
+
+18. **monitorInterval**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 监控数据库记录系统参数(CPU/内存)的时间间隔 |
+    | 单位 | 秒 |
+    | 取值范围 | 1-600 |
+    | 缺省值 | 30 |
+
+19. **offlineThreshold**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | dnode 离线阈值,超过该时间将导致 dnode 离线 |
+    | 单位 | 秒 |
+    | 取值范围 | 5-7200000 |
+    | 缺省值 | 86400\*10(10 天) |
+
+20. **rpcTimer**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 服务端和客户端均适用 |
+    | 含义 | rpc 重试时长 |
+    | 单位 | 毫秒 |
+    | 取值范围 | 100-3000 |
+    | 缺省值 | 300 |
+
+21. **rpcMaxTime**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 服务端和客户端均适用 |
+    | 含义 | rpc 等待应答最大时长 |
+    | 单位 | 秒 |
+    | 取值范围 | 100-7200 |
+    | 缺省值 | 600 |
+
+22. **statusInterval**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | dnode 向 mnode 报告状态间隔 |
+    | 单位 | 秒 |
+    | 取值范围 | 1-10 |
+    | 缺省值 | 1 |
+
+23. **shellActivityTimer**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 服务端和客户端均适用 |
+    | 含义 | shell 客户端向 mnode 发送心跳间隔 |
+    | 单位 | 秒 |
+    | 取值范围 | 1-120 |
+    | 缺省值 | 3 |
+
+24. **tableMetaKeepTimer**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 表的元数据 cache 时长 |
+    | 单位 | 秒 |
+    | 取值范围 | 1-8640000 |
+    | 缺省值 | 7200 |
+
+25. **minSlidingTime**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 最小滑动窗口时长 |
+    | 单位 | 毫秒 |
+    | 取值范围 | 10-1000000 |
+    | 缺省值 | 10 |
+    | 补充说明 | 支持 us 补值后,这个值就是 1us 了。 |
+
+26. **minIntervalTime**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 时间窗口最小值 |
+    | 单位 | 毫秒 |
+    | 取值范围 | 1-1000000 |
+    | 缺省值 | 10 |
+
+27. **stream**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 是否启用连续查询(流计算功能) |
+    | 取值范围 | 0:不允许
1:允许 | + | 缺省值 | 1 | + +28. **maxStreamCompDelay** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 连续查询启动最大延迟 | + | 单位| 毫秒 | + | 取值范围 | 10-1000000000 | + | 缺省值 | 20000 | + | 补充说明 | 为避免多个 stream 同时执行占用太多系统资源,程序中对 stream 的执行时间人为增加了一些随机的延时。
maxFirstStreamCompDelay 是 stream 第一次执行前最少要等待的时间。
streamCompDelayRatio 是延迟时间的计算系数,它乘以查询的 interval 后为延迟时间基准。
maxStreamCompDelay 是延迟时间基准的上限。
实际延迟时间为一个不超过延迟时间基准的随机值。
stream 某次计算失败后需要重试,retryStreamCompDelay 是重试的等待时间基准。
实际重试等待时间为不超过等待时间基准的随机值。 | + +29. **maxFirstStreamCompDelay** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 第一次连续查询启动最大延迟 | + | 单位| 毫秒 | + | 取值范围 | 10-1000000000 | + | 缺省值 | 10000 | + +30. **retryStreamCompDelay** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 连续查询重试等待间隔 | + | 单位| 毫秒 | + | 取值范围 | 10-1000000000 | + | 缺省值 | 10 | + +31. **streamCompDelayRatio** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 连续查询的延迟时间计算系数 | + | 取值范围 | 0.1-0.9 | + | 缺省值 | 0.1 | + +32. **maxVgroupsPerDb** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 每个 DB 中 能够使用的最大 vnode 个数 | + | 取值范围 | 0-8192 | + | 缺省值 | | + +33. **maxTablesPerVnode** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 每个 vnode 中能够创建的最大表个数 | + | 缺省值 | 1000000 | + +34. **minTablesPerVnode** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | 仅服务端适用 | + | 含义 | 每个 vnode 中必须创建的最小表个数 | + | 缺省值 | 1000 | + +35. **tableIncStepPerVnode** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | 仅服务端适用 | + | 含义 | 每个 vnode 中超过最小表数后递增步长 | + | 缺省值 | 1000 | + +36. **cache** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 内存块的大小 | + | 单位| MB | + | 缺省值 | 16 | + +37. **blocks** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 每个 vnode(tsdb)中有多少 cache 大小的内存块。因此一个 vnode 的用的内存大小粗略为(cache \* blocks) | + | 缺省值 | 6 | + +38. **days** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 数据文件存储数据的时间跨度 | + | 单位| 天 | + | 缺省值 | 10 | + +39. **keep** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 数据保留的天数 | + | 单位| 天 | + | 缺省值 | 3650 | + +40. **minRows** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 文件块中记录的最小条数 | + | 缺省值 | 100 | + +41. **maxRows** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 文件块中记录的最大条数 | + | 缺省值 | 4096 | + +42. 
**quorum** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 多副本环境下指令执行的确认数要求 | + | 取值范围 | 1,2 | + | 缺省值 | 1 | + +43. **comp** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 文件压缩标志位 | + | 取值范围 | 0:关闭,1:一阶段压缩,2:两阶段压缩 | + | 缺省值 | 2 | + +44. **walLevel** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | WAL 级别 | + | 取值范围 | 1:写 wal, 但不执行 fsync
2:写 wal, 而且执行 fsync | + | 缺省值 | 1 | + +45. **fsync** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 当 wal 设置为 2 时,执行 fsync 的周期 | + | 单位| 毫秒 | + | 取值范围 | 最小为 0,表示每次写入,立即执行 fsync
最大为 180000(三分钟) | + | 缺省值 | 3000 | + +46. **replica** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 副本个数 | + | 取值范围 | 1-3 | + | 缺省值 | 1 | + +47. **mqttHostName** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | 仅服务端适用 | + | 含义 | mqtt uri | + | 缺省值 | | + | 补充说明 | mqtt://username:password@hostname:1883/taos/ | + +48. **mqttPort** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | 仅服务端适用 | + | 含义 | mqtt client name | + | 缺省值 | | + | 补充说明 | 1883 | + +49. **mqttTopic** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | 仅服务端适用 | + | 含义 | | + | 缺省值 | | + | 补充说明 | /test | + +50. **compressMsgSize** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 客户端与服务器之间进行消息通讯过程中,对通讯的消息进行压缩的阈值。如果要压缩消息,建议设置为 64330 字节,即大于 64330 字节的消息体才进行压缩。 | + | 单位| bytes | + | 取值范围 | `0 `表示对所有的消息均进行压缩 >0: 超过该值的消息才进行压缩 -1: 不压缩 | + | 缺省值 | -1 | + +51. **maxSQLLength** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅客户端适用 | + | 含义 | 单条 SQL 语句允许的最长限制 | + | 单位| bytes | + | 取值范围 | 65480-1048576 | + | 缺省值 | 1048576 | + +52. **maxNumOfOrderedRes** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 支持超级表时间排序允许的最多记录数限制 | + | 缺省值 | 10 万 | + +53. **timezone** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 时区 | + | 缺省值 | 从系统中动态获取当前的时区设置 | + +54. **locale** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 系统区位信息及编码格式 | + | 缺省值 | 系统中动态获取,如果自动获取失败,需要用户在配置文件设置或通过 API 设置 | + +55. **charset** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 字符集编码 | + | 缺省值 | 系统中动态获取,如果自动获取失败,需要用户在配置文件设置或通过 API 设置 | + +56. **maxShellConns** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 一个 dnode 容许的连接数 | + | 取值范围 | 10-50000000 | + | 缺省值 | 5000 | + +57. 
**maxConnections** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 一个数据库连接所容许的 dnode 连接数 | + | 取值范围 | 1-100000 | + | 缺省值 | 5000 | + | 补充说明 | 实际测试下来,如果默认没有配,选 50 个 worker thread 会产生 Network unavailable | + +58. **minimalLogDirGB** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 当日志文件夹的磁盘大小小于该值时,停止写日志 | + | 单位| GB | + | 缺省值 | 0.1 | + +59. **minimalTmpDirGB** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 当日志文件夹的磁盘大小小于该值时,停止写临时文件 | + | 单位| GB | + | 缺省值 | 0.1 | + +60. **minimalDataDirGB** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 当日志文件夹的磁盘大小小于该值时,停止写时序数据 | + | 单位| GB | + | 缺省值 | 0.1 | + +61. **mnodeEqualVnodeNum** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 一个 mnode 等同于 vnode 消耗的个数 | + | 缺省值 | 4 | + +62. **http** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 服务器内部的 http 服务开关。 | + | 取值范围 | 0:关闭 http 服务, 1:激活 http 服务。 | + | 缺省值 | 1 | + +63. **mqtt** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | 仅服务端适用 | + | 含义 | 服务器内部的 mqtt 服务开关。 | + | 取值范围 | 0:关闭 mqtt 服务, 1:激活 mqtt 服务。 | + | 缺省值 | 0 | + +64. **monitor** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 服务器内部的系统监控开关。监控主要负责收集物理节点的负载状况,包括 CPU、内存、硬盘、网络带宽、HTTP 请求量的监控记录,记录信息存储在`LOG`库中。 | + | 取值范围 | 0:关闭监控服务, 1:激活监控服务。 | + | 缺省值 | 0 | + +65. **httpEnableRecordSql** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 内部使用,记录通过 RESTFul 接口,产生的 SQL 调用。taosAdapter 配置或有不同,请参考相应[文档](https://www.taosdata.com/cn/documentation/tools/adapter)。 | + | 缺省值 | 0 | + | 补充说明 | 生成的文件(httpnote.0/httpnote.1),与服务端日志所在目录相同。 | + +66. **httpMaxThreads** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | RESTFul 接口的线程数。taosAdapter 配置或有不同,请参考相应[文档](https://www.taosdata.com/cn/documentation/tools/adapter)。 | + | 缺省值 | 2 | + +67. 
**telegrafUseFieldNum** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | | + | 含义 | | + | 缺省值 | | + +68. **restfulRowLimit** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | RESTFul 接口单次返回的记录条数。taosAdapter 配置或有不同,请参考相应[文档](https://www.taosdata.com/cn/documentation/tools/adapter)。 | + | 缺省值 | 10240 | + | 补充说明 | 最大 10,000,000 | + +69. **numOfLogLines** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 单个日志文件允许的最大行数。 | + | 缺省值 | 10,000,000 | + +70. **asyncLog** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 日志写入模式 | + | 取值范围 | 0:同步、1:异步 | + | 缺省值 | 1 | + +71. **logKeepDays** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 日志文件的最长保存时间 | + | 单位| 天 | + | 缺省值 | 0 | + | 补充说明 | 大于 0 时,日志文件会被重命名为 taosdlog.xxx,其中 xxx 为日志文件最后修改的时间戳。 | + +72. **debugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 运行日志开关 | + | 取值范围 | 131(输出错误和警告日志),135(输出错误、警告和调试日志),143(输出错误、警告、调试和跟踪日志) | + | 缺省值 | 131 或 135(不同模块有不同的默认值) | + +73. **mDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 管理模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | 135 | + +74. **dDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | dnode 模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | 135 | + +75. **sDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | sync 模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | 135 | + +76. **wDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | wal 模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | 135 | + +77. **sdbDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | sdb 模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | 135 | + +78. **rpcDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | rpc 模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +79. 
**tmrDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 定时器模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +80. **cDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅客户端适用 | + | 含义 | client 模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +81. **jniDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅客户端适用 | + | 含义 | jni 模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +82. **odbcDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅客户端适用 | + | 含义 | odbc 模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +83. **uDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 共用功能模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +84. **httpDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | http 模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +85. **mqttDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | mqtt 模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +86. **monitorDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 监控模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +87. **qDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 查询模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +88. **vDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | vnode 模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +89. **tsdbDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | TSDB 模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +90. **cqDebugFlag** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 连续查询模块的日志开关 | + | 取值范围 | 同上 | + | 缺省值 | | + +91. **tscEnableRecordSql** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅客户端适用 | + | 含义 | 是否记录客户端 sql 语句到文件 | + | 取值范围 | 0:否,1:是 | + | 缺省值 | 0 | + | 补充说明 | 生成的文件(tscnote-xxxx.0/tscnote-xxx.1,xxxx 是 pid),与客户端日志所在目录相同。 | + +92. 
**enableCoreFile** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 是否开启服务 crash 时生成 core 文件 | + | 取值范围 | 0:否,1:是 | + | 缺省值 | 1 | + | 补充说明 | 不同的启动方式,生成 core 文件的目录如下:1、systemctl start taosd 启动:生成的 core 在根目录下
2、手动启动,就在 taosd 执行目录下。 | + +93. **gitinfo** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | | + | 取值范围 | 1 | + | 缺省值 | | + +94. **gitinfoofInternal** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | | + | 取值范围 | 2 | + | 缺省值 | | + +95. **Buildinfo** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | | + | 取值范围 | 3 | + | 缺省值 | | + +96. **version** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | | + | 取值范围 | 4 | + | 缺省值 | | + +97. **maxBinaryDisplayWidth** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅客户端适用 | + | 含义 | Taos shell 中 binary 和 nchar 字段的显示宽度上限,超过此限制的部分将被隐藏 | + | 取值范围 | 5 - | + | 缺省值 | 30 | + | 补充说明 | 实际上限按以下规则计算:如果字段值的长度大于 maxBinaryDisplayWidth,则显示上限为 **字段名长度** 和 **maxBinaryDisplayWidth** 的较大者。
否则,上限为 **字段名长度** 和 **字段值长度** 的较大者。
可在 shell 中通过命令 set max_binary_display_width nn 动态修改此选项 | + +98. **queryBufferSize** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 为所有并发查询占用保留的内存大小。 | + | 单位| MB | + | 缺省值 | | + | 补充说明 | 计算规则可以根据实际应用可能的最大并发数和表的数字相乘,再乘 170 。
(2.0.15 以前的版本中,此参数的单位是字节) | + +99. **ratioOfQueryCores** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 设置查询线程的最大数量。 | + | 缺省值 | | + | 补充说明 | 最小值 0 表示只有 1 个查询线程
最大值 2 表示最大建立 2 倍 CPU 核数的查询线程。
默认为 1,表示最大和 CPU 核数相等的查询线程。
该值可以为小数,即 0.5 表示最大建立 CPU 核数一半的查询线程。 | + +100. **update** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 允许更新已存在的数据行 | + | 取值范围 | 0:不允许更新
1:允许整行更新
2:允许部分列更新。(2.1.7.0 版本开始此参数支持设为 2,在此之前取值只能是 [0, 1]) | + | 缺省值 | 0 | + | 补充说明 | 2.0.8.0 版本之前,不支持此参数。 | + +101. **cacheLast** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 是否在内存中缓存子表的最近数据 | + | 取值范围 | 0:关闭
1:缓存子表最近一行数据
2:缓存子表每一列的最近的非 NULL 值
3:同时打开缓存最近行和列功能。(2.1.2.0 版本开始此参数支持 0 ~ 3 的取值范围,在此之前取值只能是 [0, 1]) | + | 缺省值 | 0 | + | 补充说明 | 2.1.2.0 版本之前、2.0.20.7 版本之前在 taos.cfg 文件中不支持此参数。 | + +102. **numOfCommitThreads** + + | 属性 | 说明 | + |---|---| + | 内部配置 | Yes | + | 适用范围 | 仅服务端适用 | + | 含义 | 设置写入线程的最大数量 | + | 缺省值 | | + +103. **maxWildCardsLength** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅客户端适用 | + | 含义 | 设定 LIKE 算子的通配符字符串允许的最大长度 | + | 单位| bytes | + | 取值范围 | 0-16384 | + | 缺省值 | 100 | + | 补充说明 | 2.1.6.1 版本新增。 | + +104. **compressColData** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅服务端适用 | + | 含义 | 客户端与服务器之间进行消息通讯过程中,对服务器端查询结果进行列压缩的阈值。 | + | 单位| bytes | + | 取值范围 | 0: 对所有查询结果均进行压缩 >0: 查询结果中任意列大小超过该值的消息才进行压缩 -1: 不压缩 | + | 缺省值 | -1 | + | 补充说明 | 2.3.0.0 版本新增。 | + +105. **tsdbMetaCompactRatio** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 仅客户端适用 | + | 含义 | tsdb meta 文件中冗余数据超过多少阈值,开启 meta 文件的压缩功能 | + | 取值范围 | 0:不开启,[1-100]:冗余数据比例 | + | 缺省值 | 0 | + +106. **rpcForceTcp** + + | 属性 | 说明 | + |---|---| + | 内部配置 | No | + | 适用范围 | 服务端和客户端均适用 | + | 含义 | 强制使用 TCP 传输 | + | 取值范围 | 0: 不开启 1: 开启 | + | 缺省值 | 0 | + | 补充说明 | 在网络比较差的环境中,建议开启。
2.0 版本新增。 |
+
+107. **maxNumOfDistinctRes**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 允许返回的 distinct 结果最大行数 |
+    | 取值范围 | 默认值为 10 万,最大值 1 亿 |
+    | 缺省值 | 10 万 |
+    | 补充说明 | 2.3 版本新增。 |
+
+108. **clientMerge**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅客户端适用 |
+    | 含义 | 是否允许客户端对写入数据去重 |
+    | 取值范围 | 0:不开启,1:开启 |
+    | 缺省值 | 0 |
+    | 补充说明 | 2.3 版本新增。 |
+
+109. **httpDBNameMandatory**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅服务端适用 |
+    | 含义 | 是否在 URL 中输入数据库名称 |
+    | 取值范围 | 0:不开启,1:开启 |
+    | 缺省值 | 0 |
+    | 补充说明 | 2.3 版本新增。 |
+
+110. **maxRegexStringLen**
+
+    | 属性 | 说明 |
+    |---|---|
+    | 内部配置 | No |
+    | 适用范围 | 仅客户端适用 |
+    | 含义 | 正则表达式最大允许长度 |
+    | 取值范围 | 默认值 128,最大长度 16384 |
+    | 缺省值 | 128 |
+    | 补充说明 | 2.3 版本新增。 |
+
+
+**注意:** 对于端口,TDengine会使用从serverPort起13个连续的TCP和UDP端口号,请务必在防火墙打开。因此如果是缺省配置,需要打开从6030到6042共13个端口,而且必须TCP和UDP都打开。(详细的端口情况请参见 [TDengine 2.0 端口说明](https://www.taosdata.com/cn/documentation/faq#port))

不同应用场景的数据往往具有不同的数据特征,比如保留天数、副本数、采集频次、记录大小、采集点的数量、压缩等都可完全不同。为获得在存储上的最高效率,TDengine提供如下存储相关的系统配置参数(既可以作为 create database 指令的参数,也可以写在 taos.cfg 配置文件中用来设定创建新数据库时所采用的默认值):
@@ -539,6 +1541,52 @@ select * from >> data.csv;

利用taosdump,用户可以根据需要选择导出所有数据库、一个数据库或者数据库中的一张表,所有数据或一时间段的数据,甚至仅仅表的定义。

具体使用方法,请参见博客:[TDengine DUMP工具使用指南](https://www.taosdata.com/blog/2020/03/09/1334.html)。
+
+## TSZ压缩算法
+
+**TSZ压缩算法简介**
+
+TSZ压缩算法是TDengine为浮点数据类型提供的更加丰富的压缩功能,可以实现浮点数的有损至无损全状态压缩。相比TDengine原有的压缩算法,TSZ压缩算法的压缩选项更丰富、压缩率更高;即使在无损状态下压缩浮点数,压缩率也比原有算法高出一倍。
+
+**适合场景**
+
+TSZ压缩算法的压缩率比原有算法高,但压缩时间会更长,即开启TSZ压缩算法后写入速度会有一些下降,通常情况下约为20%。写入速度受影响是因为压缩需要更多的CPU计算,从原始数据到数据压缩完成的交付时间变长,导致写入变慢。如果您的服务器CPU配置很高,这个影响会变小甚至没有。
+
+另外如果设备产生了大量的高精度浮点数,存储占用的空间非常庞大,但实际使用并不需要那么高的精度时,可以通过TSZ压缩的有损压缩功能,把精度压缩至指定的长度,节约存储空间。
+
+总结:适用于采集到大量浮点数、存储占用空间过大或存储空间不足、需要超高压缩率的场景。
+
+**使用步骤**
+
+- 检查版本支持,TDengine 2.4.0.10 及之后的版本都支持此功能。
+
+- 配置选项开启功能,在TDengine的配置文件taos.cfg增加一行以下内容,打开TSZ功能:
+
+  ```
+  lossyColumns float|double
+  ```
+
+- 根据自己需要配置其它选项,如果不配置都会按默认值处理。
+
+- 重启服务,配置生效。
+
+- 确认功能已开启,在服务启动过程中输出的信息如果有前面配置的内容,表明功能已生效:
+
+  ```
+  02/22 10:49:27.607990 00002933 UTL lossyColumns float|double
+  ```
+
+**注意事项**
+
+- 确认版本是否支持。
+
+- 除了服务器启动时输出的配置成功信息外,不会再有其它信息输出表明使用的是哪种压缩算法,可以通过比较配置前后数据库文件的大小来确认压缩效果。
+
+- 如果浮点数类型列较少,整体数据文件大小的变化会不太明显。
+
+- 此压缩产生的数据文件中浮点数据部分将不能被2.4.0.10以下的版本解析,即不向下兼容,使用时避免更换回旧版本,以免数据不能被读取出来。
+
+- 在使用过程中允许反复开启和关闭TSZ压缩选项的操作,前后两种压缩算法产生的数据都能正常读取。

## 系统连接、任务查询管理
diff --git a/documentation20/cn/12.taos-sql/docs.md b/documentation20/cn/12.taos-sql/docs.md
index 411ee5a34e6f72d56f30f33f55cb85687918df8a..05fbfb7389e18ce357103afeb0ec96b2c7b99673 100755
--- a/documentation20/cn/12.taos-sql/docs.md
+++ b/documentation20/cn/12.taos-sql/docs.md
@@ -4,7 +4,7 @@

TAOS SQL 是用户对 TDengine 进行数据写入和查询的主要工具。TAOS SQL 为了便于用户快速上手,在一定程度上提供类似于标准 SQL 的风格和模式。严格意义上,TAOS SQL 并不是也不试图提供 SQL 标准的语法。此外,由于 TDengine 针对的时序性结构化数据不提供删除功能,因此在 TAOS SQL 中不提供数据删除的相关功能。

-TAOS SQL 不支持关键字的缩写,例如 DESCRIBE 不能缩写为 DESC。
+TAOS SQL 目前仅支持 DESCRIBE 关键字的缩写,即 DESCRIBE 可以缩写为 DESC。

本章节 SQL 语法遵循如下约定:
@@ -135,7 +135,7 @@ CREATE DATABASE db_name PRECISION 'ns';
 ```mysql
 ALTER DATABASE db_name BLOCKS 100;
 ```
-  BLOCKS 参数是每个 VNODE (TSDB) 中有多少 cache 大小的内存块,因此一个 VNODE 的用的内存大小粗略为(cache * blocks)。取值范围 [3, 1000]。
+  BLOCKS 参数是每个 VNODE (TSDB) 中有多少 cache 大小的内存块,因此一个 VNODE 使用的内存大小粗略为(cache * blocks)。取值范围 [3, 10000]。

 ```mysql
 ALTER DATABASE db_name CACHELAST 0;
diff --git a/documentation20/en/00.index/docs.md b/documentation20/en/00.index/docs.md
index a31052b219ba02f817d59d78a715a237d4caa0c4..3aa60c912af057cb93e14f3574888791f99986a2 100644
--- a/documentation20/en/00.index/docs.md
+++ b/documentation20/en/00.index/docs.md
@@ -84,10 +84,9 @@ TDengine is a highly efficient platform to store, query, and analyze time-series

* [taosAdapter](/tools/adapter): a bridge/adapter between TDengine cluster and applications.
* [TDinsight](/tools/insight): monitoring TDengine cluster with Grafana.
+* [taosTools](/tools/taos-tools): taosTools is a collection of useful tools for TDengine.
* [taosdump](/tools/taosdump): backup tool for TDengine. Please install the `taosTools` package to use it.
* [taosBenchmark](/tools/taosbenchmark): stress test tool for TDengine.
-
-* [taosTools](/tools/taos-tools): taosTools are some useful tool collections for TDengine.
-

## [Connections with Other Tools](/connections)
diff --git a/documentation20/en/01.evaluation/docs.md b/documentation20/en/01.evaluation/docs.md
index b296ae999fbf63f65422993dde4586b6bec08497..88adfbd7e950303e9628f15652c80aaf20e2a54c 100644
--- a/documentation20/en/01.evaluation/docs.md
+++ b/documentation20/en/01.evaluation/docs.md
@@ -2,64 +2,71 @@

## About TDengine

-TDengine is an innovative Big Data processing product launched by TAOS Data in the face of the fast-growing Internet of Things (IoT) Big Data market and technical challenges. It does not rely on any third-party software, nor does it optimize or package any open-source database or stream computing product. Instead, it is a product independently developed after absorbing the advantages of many traditional relational databases, NoSQL databases, stream computing engines, message queues, and other software. TDengine has its own unique Big Data processing advantages in time-series space.
+TDengine is a high-performance, scalable time-series database with SQL support. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides caching, stream processing, data subscription and other functionalities to reduce the complexity and cost of development and operation. TDengine differentiates itself from other TSDBs with the following advantages.

-One of the modules of TDengine is the time-series database. However, in addition to this, to reduce the complexity of research and development and the difficulty of system operation, TDengine also provides functions such as caching, message queuing, subscription, stream computing, etc.
TDengine provides a full-stack technical solution for the processing of IoT and Industrial Internet BigData. It is an efficient and easy-to-use IoT Big Data platform. Compared with typical Big Data platforms such as Hadoop, TDengine has the following distinct characteristics:
+- **High Performance**: TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage and compute costs, with an innovatively designed and purpose-built storage engine.
+
-- **Performance improvement over 10 times**: An innovative data storage structure is defined, with every single core that can process at least 20,000 requests per second, insert millions of data points, and read more than 10 million data points, which is more than 10 times faster than other existing general database.
-- **Reduce the cost of hardware or cloud services to 1/5**: Due to its ultra-performance, TDengine’s computing resources consumption is less than 1/5 of other common Big Data solutions; through columnar storage and advanced compression algorithms, the storage consumption is less than 1/10 of other general databases.
-- **Full-stack time-series data processing engine**: Integrate database, message queue, cache, stream computing, and other functions, and the applications do not need to integrate with software such as Kafka/Redis/HBase/Spark/HDFS, thus greatly reducing the complexity cost of application development and maintenance.
-- **Highly Available and Horizontal Scalable**: With the distributed architecture and consistency algorithm, via multi-replication and clustering features, TDengine ensures high availability and horizontal scalability to support mission-critical applications.
-- **Zero operation cost & zero learning cost**: Installing clusters is simple and quick, with real-time backup built-in, and no need to split libraries or tables.
Similar to standard SQL, TDengine can support RESTful, Python/Java/C/C++/C#/Go/Node.js, and similar to MySQL with zero learning cost.
-- **Core is Open Sourced:** Except for some auxiliary features, the core of TDengine is open-sourced. Enterprise won't be locked by the database anymore. The ecosystem is more strong, products are more stable, and developer communities are more active.
+- **Scalable**: TDengine provides out-of-box scalability and high-availability through its native distributed design. Nodes can be added through simple configuration to achieve greater data processing power. In addition, this feature is open source.
-With TDengine, the total cost of ownership of typical IoT, Internet of Vehicles, and Industrial Internet Big Data platforms can be greatly reduced. However, since it makes full use of the characteristics of IoT time-series data, TDengine cannot be used to process general data from web crawlers, microblogs, WeChat, e-commerce, ERP, CRM, and other sources.
+- **SQL Support**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to handle time-series data better, and supporting convenient and flexible schemaless data ingestion.
+
+- **All in One**: TDengine has built-in caching, stream processing and data subscription functions. It is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software in some scenarios. It makes the system architecture much simpler, cost-effective and easier to maintain.
+
+- **Seamless Integration**: Without a single line of code, TDengine provides seamless, configurable integration with third-party tools such as Telegraf, Grafana, EMQ X, Prometheus, StatsD, collectd, etc. More third-party tools are being integrated.
+
+- **Zero Management**: Installation and cluster setup can be done in seconds. Data partitioning and sharding are executed automatically. TDengine’s running status can be monitored via Grafana or other DevOps tools.
+ +- **Zero Learning Cost**: With SQL as the query language, support for ubiquitous tools like Python, Java, C/C++, Go, Rust, Node.js connectors, there is zero learning cost. + +- **Interactive Console**: TDengine provides convenient console access to the database to run ad hoc queries, maintain the database, or manage the cluster without any programming. + +With TDengine, the total cost of ownership of typical IoT, Connected Vehicles, Industrial Internet, Energy, Financial, DevOps and other Big Data applications can be greatly reduced. Note that because TDengine makes full use of the characteristics of IoT time-series data and is highly optimized for it, TDengine cannot be used as a general purpose database engine to process general data from web crawlers, microblogs, WeChat, e-commerce, ERP, CRM, and other sources. ![TDengine Technology Ecosystem](../images/eco_system.png) +
Figure 1. TDengine Ecosystem
-
Figure 1. TDengine Technology Ecosystem
## Overall Scenarios of TDengine -As an IoT Big Data platform, the typical application scenarios of TDengine are mainly presented in the IoT category, with users having a certain amount of data. The following sections of this document are mainly aimed at IoT-relevant systems. Other systems, such as CRM, ERP, etc., are beyond the scope of this article. +As an IoT time-series Big Data platform, TDengine is optimal for application scenarios with the requirements described below. Therefore the following sections of this document are mainly aimed at IoT-relevant systems. Other systems, such as CRM, ERP, etc., are beyond the scope of this article. ### Characteristics and Requirements of Data Sources -From the perspective of data sources, designers can analyze the applicability of TDengine in target application systems as following. +From the perspective of data sources, designers can analyze the applicability of TDengine in target application systems as follows. | **Data Source Characteristics and Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** | | -------------------------------------------------------- | ------------------ | ----------------------- | ------------------- | :----------------------------------------------------------- | -| A huge amount of total data | | | √ | TDengine provides excellent scale-out functions in terms of capacity, and has a storage structure matching high compression ratio to achieve the best storage efficiency in the industry. | -| Data input velocity is occasionally or continuously huge | | | √ | TDengine's performance is much higher than other similar products. It can continuously process a large amount of input data in the same hardware environment, and provide a performance evaluation tool that can easily run in the user environment. 
| -| A huge amount of data sources | | | √ | TDengine is designed to include optimizations specifically for a huge amount of data sources, such as data writing and querying, which is especially suitable for efficiently processing massive (tens of millions or more) data sources. | +| A massive amount of total data | | | √ | TDengine provides excellent scale-out functions in terms of capacity, and has a storage structure with matching high compression ratio to achieve the best storage efficiency in the industry.| +| Data input velocity is extremely high | | | √ | TDengine's performance is much higher than that of other similar products. It can continuously process larger amounts of input data in the same hardware environment, and provides a performance evaluation tool that can easily run in the user environment. | +| A huge number of data sources | | | √ | TDengine is optimized specifically for a huge number of data sources. It is especially suitable for efficiently ingesting, writing and querying data from billions of data sources. | ### System Architecture Requirements | **System Architecture Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** | | ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ | -| Require a simple and reliable system architecture | | | √ | TDengine's system architecture is very simple and reliable, with its own message queue, cache, stream computing, monitoring and other functions, and no need to integrate any additional third-party products. | -| Require fault-tolerance and high-reliability | | | √ | TDengine has cluster functions to automatically provide high-reliability functions such as fault tolerance and disaster recovery. 
| -| Standardization specifications | | | √ | TDengine uses standard SQL language to provide main functions and follow standardization specifications. | +| A simple and reliable system architecture | | | √ | TDengine's system architecture is very simple and reliable, with its own message queue, cache, stream computing, monitoring and other functions. There is no need to integrate any additional third-party products. | +| Fault-tolerance and high-reliability | | | √ | TDengine has cluster functions to automatically provide high-reliability and high-availability functions such as fault tolerance and disaster recovery. | +| Standardization support | | | √ | TDengine supports standard SQL and also provides extensions specifically to analyze time-series data. | ### System Function Requirements -| **System Architecture Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** | +| **System Function Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** | | ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ | -| Require completed data processing algorithms built-in | | √ | | TDengine implements various general data processing algorithms, but has not properly handled all requirements of different industries, so special types of processing shall be processed at the application level. | -| Require a huge amount of crosstab queries | | √ | | This type of processing should be handled more by relational database systems, or TDengine and relational database systems should fit together to implement system functions. 
| +| Complete data processing algorithms built-in | | √ | | While TDengine implements various general data processing algorithms, industry specific algorithms and special types of processing will need to be implemented at the application level.| +| A large number of crosstab queries | | √ | | This type of processing is better handled by general purpose relational database systems but TDengine can work in concert with relational database systems to provide more complete solutions. | ### System Performance Requirements -| **System Architecture Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** | +| **System Performance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** | | ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ | -| Require larger total processing capacity | | | √ | TDengine’s cluster functions can easily improve processing capacity via multi-server-cooperating. | -| Require high-speed data processing | | | √ | TDengine’s storage and data processing are designed to be optimized for IoT, can generally improve the processing speed by multiple times than other similar products. | -| Require fast processing of fine-grained data | | | √ | TDengine has achieved the same level of performance with relational and NoSQL data processing systems. | +| Very large total processing capacity | | | √ | TDengine’s cluster functions can easily improve processing capacity via multi-server coordination. | +| Extremely high-speed data processing | | | √ | TDengine’s storage and data processing are optimized for IoT, and can process data many times faster than similar products.| +| Extremely fast processing of fine-grained data | | | √ | TDengine has achieved the same or better performance than other relational and NoSQL data processing systems. 
| ### System Maintenance Requirements -| **System Architecture Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** | +| **System Maintenance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** | | ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ | -| Require system with high-reliability | | | √ | TDengine has a very robust and reliable system architecture to implement simple and convenient daily operation with streamlined experiences for operators, thus human errors and accidents are eliminated to the greatest extent. | -| Require controllable operation learning cost | | | √ | As above. | -| Require abundant talent supply | √ | | | As a new-generation product, it’s still difficult to find talents with TDengine experiences from the market. However, the learning cost is low. As the vendor, we also provide extensive operation training and counseling services. | +| Native high-reliability | | | √ | TDengine has a very robust, reliable and easily configurable system architecture to simplify routine operation. Human errors and accidents are eliminated to the greatest extent, with a streamlined experience for operators. 
| +| Minimize learning and maintenance costs | | | √ | In addition to being easily configurable, standard SQL support and the Taos shell for ad hoc queries makes maintenance simpler, allows reuse and reduces learning costs.| +| Abundant talent supply | √ | | | Given the above, and given the extensive training and professional services provided by TDengine, it is easy to migrate from existing solutions or create a new and lasting solution based on TDengine.| diff --git a/documentation20/en/02.getting-started/03.install/docs.md b/documentation20/en/02.getting-started/03.install/docs.md index d12619cd3a79d6d83c93d9cd31a2a6b7fc296c6b..2ef2b54783ca529562872f15f7a1108e36201daf 100644 --- a/documentation20/en/02.getting-started/03.install/docs.md +++ b/documentation20/en/02.getting-started/03.install/docs.md @@ -11,24 +11,27 @@ TDengine open source version provides `deb` and `rpm` format installation packag - Go to the directory where the TDengine-server-2.0.0.0-Linux-x64.deb installation package is located and execute the following installation command. ``` -plum@ubuntu:~/git/taosv16$ sudo dpkg -i TDengine-server-2.0.0.0-Linux-x64.deb - -Selecting previously unselected package tdengine. -(Reading database ... 233181 files and directories currently installed.) -Preparing to unpack TDengine-server-2.0.0.0-Linux-x64.deb ... -Failed to stop taosd.service: Unit taosd.service not loaded. -Stop taosd service success! -Unpacking tdengine (2.0.0.0) ... -Setting up tdengine (2.0.0.0) ... -Start to install TDEngine... -Synchronizing state of taosd.service with SysV init with /lib/systemd/systemd-sysv-install... -Executing /lib/systemd/systemd-sysv-install enable taosd -insserv: warning: current start runlevel(s) (empty) of script `taosd' overrides LSB defaults (2 3 4 5). -insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `taosd' overrides LSB defaults (0 1 6). 
-Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one : +$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb +(Reading database ... 137504 files and directories currently installed.) +Preparing to unpack TDengine-server-2.4.0.7-Linux-x64.deb ... +TDengine is removed successfully! +Unpacking tdengine (2.4.0.7) over (2.4.0.7) ... +Setting up tdengine (2.4.0.7) ... +Start to install TDengine... + +System hostname is: shuduo-1804 + +Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join +OR leave it blank to build one: + +Enter your email address for priority support or enter empty to skip: +Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service. + To configure TDengine : edit /etc/taos/taos.cfg To start TDengine : sudo systemctl start taosd -To access TDengine : use taos in shell +To access TDengine : taos -h shuduo-1804 to login into TDengine server + + TDengine is installed successfully! ``` @@ -42,10 +45,10 @@ The same operation is performed for the other installation packages format. Uninstall command is below: ``` - plum@ubuntu:~/git/tdengine/debs$ sudo dpkg -r tdengine - (Reading database ... 233482 files and directories currently installed.) - Removing tdengine (2.0.0.0) ... - TDEngine is removed successfully! +$ sudo dpkg -r tdengine +(Reading database ... 137504 files and directories currently installed.) +Removing tdengine (2.4.0.7) ... +TDengine is removed successfully! ``` ## Install and unstall rpm package @@ -56,16 +59,27 @@ Uninstall command is below: - Go to the directory where the TDengine-server-2.0.0.0-Linux-x64.rpm installation package is located and execute the following installation command. ``` - [root@bogon x86_64]# rpm -iv TDengine-server-2.0.0.0-Linux-x64.rpm - Preparing packages... - TDengine-2.0.0.0-3.x86_64 - Start to install TDEngine... 
- Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service. - Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one : - To configure TDengine : edit /etc/taos/taos.cfg - To start TDengine : sudo systemctl start taosd - To access TDengine : use taos in shell - TDengine is installed successfully! +$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm +Preparing... ################################# [100%] +Updating / installing... + 1:tdengine-2.4.0.7-3 ################################# [100%] +Start to install TDengine... + +System hostname is: centos7 + +Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join +OR leave it blank to build one: + +Enter your email address for priority support or enter empty to skip: + +Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service. + +To configure TDengine : edit /etc/taos/taos.cfg +To start TDengine : sudo systemctl start taosd +To access TDengine : taos -h centos7 to login into TDengine server + + +TDengine is installed successfully! ``` ### Uninstall rpm @@ -73,8 +87,8 @@ Uninstall command is below: Uninstall command is following: ``` - [root@bogon x86_64]# rpm -e tdengine - TDEngine is removed successfully! +$ sudo rpm -e tdengine +TDengine is removed successfully! 
``` ## Install and uninstall tar.gz @@ -85,37 +99,47 @@ Uninstall command is following: - Go to the directory where the `TDengine-server-2.0.0.0-Linux-x64.tar.gz` installation package is located, unzip the file first, then enter the subdirectory and execute the install.sh installation script in it as follows ``` - plum@ubuntu:~/git/tdengine/release$ sudo tar -xzvf TDengine-server-2.0.0.0-Linux-x64.tar.gz - plum@ubuntu:~/git/tdengine/release$ ll - total 3796 - drwxr-xr-x 3 root root 4096 Aug 9 14:20 ./ - drwxrwxr-x 11 plum plum 4096 Aug 8 11:03 ../ - drwxr-xr-x 5 root root 4096 Aug 8 11:03 TDengine-server/ - -rw-r--r-- 1 root root 3871844 Aug 8 11:03 TDengine-server-2.0.0.0-Linux-x64.tar.gz - plum@ubuntu:~/git/tdengine/release$ cd TDengine-server/ - plum@ubuntu:~/git/tdengine/release/TDengine-server$ ll - total 2640 - drwxr-xr-x 5 root root 4096 Aug 8 11:03 ./ - drwxr-xr-x 3 root root 4096 Aug 9 14:20 ../ - drwxr-xr-x 5 root root 4096 Aug 8 11:03 connector/ - drwxr-xr-x 2 root root 4096 Aug 8 11:03 driver/ - drwxr-xr-x 8 root root 4096 Aug 8 11:03 examples/ - -rwxr-xr-x 1 root root 13095 Aug 8 11:03 install.sh* - -rw-r--r-- 1 root root 2651954 Aug 8 11:03 taos.tar.gz - plum@ubuntu:~/git/tdengine/release/TDengine-server$ sudo ./install.sh - This is ubuntu system - verType=server interactiveFqdn=yes - Start to install TDengine... - Synchronizing state of taosd.service with SysV init with /lib/systemd/systemd-sysv-install... - Executing /lib/systemd/systemd-sysv-install enable taosd - insserv: warning: current start runlevel(s) (empty) of script `taosd' overrides LSB defaults (2 3 4 5). - insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `taosd' overrides LSB defaults (0 1 6). 
- Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :hostname.taosdata.com:7030 - To configure TDengine : edit /etc/taos/taos.cfg - To start TDengine : sudo systemctl start taosd - To access TDengine : use taos in shell - Please run: taos -h hostname.taosdata.com:7030 to login into cluster, then execute : create dnode 'newDnodeFQDN:port'; in TAOS shell to add this new node into the clsuter - TDengine is installed successfully! +$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz +TDengine-enterprise-server-2.4.0.7/ +TDengine-enterprise-server-2.4.0.7/driver/ +TDengine-enterprise-server-2.4.0.7/driver/vercomp.txt +TDengine-enterprise-server-2.4.0.7/driver/libtaos.so.2.4.0.7 +TDengine-enterprise-server-2.4.0.7/install.sh +TDengine-enterprise-server-2.4.0.7/examples/ +... + +$ ll +total 43816 +drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./ +drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../ +drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-2.4.0.7/ +-rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz + +$ cd TDengine-enterprise-server-2.4.0.7/ + + $ ll +total 40784 +drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 ./ +drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ../ +drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 22 09:30 driver/ +drwxrwxr-x 10 ubuntu ubuntu 4096 Feb 22 09:30 examples/ +-rwxrwxr-x 1 ubuntu ubuntu 33294 Feb 22 09:30 install.sh* +-rw-rw-r-- 1 ubuntu ubuntu 41704288 Feb 22 09:30 taos.tar.gz + +$ sudo ./install.sh + +Start to update TDengine... +Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service. +Nginx for TDengine is updated successfully! 
+ +To configure TDengine : edit /etc/taos/taos.cfg +To configure Taos Adapter (if has) : edit /etc/taos/taosadapter.toml +To start TDengine : sudo systemctl start taosd +To access TDengine : use taos -h shuduo-1804 in shell OR from http://127.0.0.1:6060 + +TDengine is updated successfully! +Install taoskeeper as a standalone service +taoskeeper is installed, enable it by `systemctl enable taoskeeper` ``` Note: The install.sh install script asks for some configuration information through an interactive command line interface during execution. If you prefer a non-interactive installation, you can execute the install.sh script with the -e no parameter. Run . /install.sh -h command to see detailed information about all parameters. @@ -125,8 +149,11 @@ Note: The install.sh install script asks for some configuration information thro Uninstall command is following: ``` - plum@ubuntu:~/git/tdengine/release/TDengine-server$ rmtaos - TDEngine is removed successfully! +$ rmtaos +Nginx for TDengine is running, stopping it... +TDengine is removed successfully! + +taosKeeper is removed successfully! 
```

## Installation directory description

@@ -134,19 +161,19 @@

After TDengine is successfully installed, the main installation directory is /usr/local/taos, and the directory contents are as follows:

```
- plum@ubuntu:/usr/local/taos$ cd /usr/local/taos
- plum@ubuntu:/usr/local/taos$ ll
- total 36
- drwxr-xr-x 9 root root 4096 7 30 19:20 ./
- drwxr-xr-x 13 root root 4096 7 30 19:20 ../
- drwxr-xr-x 2 root root 4096 7 30 19:20 bin/
- drwxr-xr-x 2 root root 4096 7 30 19:20 cfg/
- lrwxrwxrwx 1 root root 13 7 30 19:20 data -> /var/lib/taos/
- drwxr-xr-x 2 root root 4096 7 30 19:20 driver/
- drwxr-xr-x 8 root root 4096 7 30 19:20 examples/
- drwxr-xr-x 2 root root 4096 7 30 19:20 include/
- drwxr-xr-x 2 root root 4096 7 30 19:20 init.d/
- lrwxrwxrwx 1 root root 13 7 30 19:20 log -> /var/log/taos/
+$ cd /usr/local/taos
+$ ll
+total 28
+drwxr-xr-x 7 root root 4096 Feb 22 09:34 ./
+drwxr-xr-x 12 root root 4096 Feb 22 09:34 ../
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 bin/
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 cfg/
+lrwxrwxrwx 1 root root 13 Feb 22 09:34 data -> /var/lib/taos/
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 driver/
+drwxr-xr-x 10 root root 4096 Feb 22 09:34 examples/
+drwxr-xr-x 2 root root 4096 Feb 22 09:34 include/
+lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
```

- Automatically generates the configuration file directory, database directory, and log directory.
@@ -171,7 +198,7 @@ file that comes with the installation package.

- For deb package installation, if the installation directory is manually deleted by mistake, the uninstallation, or reinstallation cannot be successful. In this case, you need to clear the installation information of the tdengine package by executing the following command:

```
- plum@ubuntu:~/git/tdengine/$ sudo rm -f /var/lib/dpkg/info/tdengine*
+$ sudo rm -f /var/lib/dpkg/info/tdengine*
```

Then just reinstall it.
- For the rpm package after installation, if the installation directory is manually deleted by mistake part of the uninstallation, or reinstallation can not be successful. In this case, you need to clear the installation information of the tdengine package by executing the following command: ``` - [root@bogon x86_64]# rpm -e --noscripts tdengine +$ sudo rpm -e --noscripts tdengine ``` Then just reinstall it. diff --git a/documentation20/en/02.getting-started/docs.md b/documentation20/en/02.getting-started/docs.md index 53cb2f2b194d93b653b2821356cd0e2f180e7c8d..bce50948f9a0ab13c6e488232c9aecdf1458973f 100644 --- a/documentation20/en/02.getting-started/docs.md +++ b/documentation20/en/02.getting-started/docs.md @@ -2,7 +2,7 @@ ## Quick Install -TDengine includes server, client, and ecological software and peripheral tools. Currently, version 2.0 of the server can only be installed and run on Linux and will support Windows, macOS, and other OSes in the future. The client can be installed and run on Windows or Linux. Applications on any operating system can use the RESTful interface to connect to the taosd server. After 2.4, TDengine includes taosAdapter to provide an easy-to-use and efficient way to ingest data including RESTful service. taosAdapter needs to be started manually as a stand-alone component. The early version uses an embedded HTTP component to provide the RESTful interface. +TDengine includes server, client, and ecosystem software and peripheral tools. Currently, version 2.0 of the server can only be installed and run on Linux and will support Windows, macOS, and other OSes in the future. The client can be installed and run on Windows or Linux. Applications on any operating system can use the RESTful interface to connect to the taosd server. Starting with 2.4, TDengine includes taosAdapter to provide an easy-to-use and efficient way to ingest data and includes a RESTful service. taosAdapter needs to be started manually as a stand-alone component. 
The earlier version uses an embedded HTTP component to provide the RESTful interface. TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms and will support ARM32, RISC-V, and other CPU architectures in the future. @@ -14,11 +14,11 @@ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengin Please refer to [Quickly Taste TDengine with Docker](https://www.taosdata.com/en/documentation/getting-started/docker) for the details. -For the time being, using Docker to deploy the client or server of TDengine for production environments is not recommended. However it is a convenient way to deploy TDengine for development purposes. In particular, it is easy to try TDengine in Mac OS X and Windows environments with Docker. +For the time being, we do not recommend using Docker to deploy the TDengine server or client in production environments. However it is a convenient way to deploy TDengine for development purposes. In particular, it is easy to try TDengine in Mac OS X and Windows environments with Docker. ### Install from Package -TDengine is very easy to install, from download to successful installation in just a few seconds. For ease of use, the standard server installation package includes the client application and sample code; if you only need the server application and C/C++ language support for the client connection, you can also download the lite version of the installation package only. The installation packages are available in `rpm` and `deb` formats, as well as `tar.gz` format for enterprise customers who need to facilitate use on specific operating systems. Releases include both stable and beta releases. We recommend the stable release for production use or testing. The beta release may contain more new features. You can choose to download from the following as needed: +TDengine is very easy to run; download to successful installation takes just a few seconds. 
For ease of use, the standard server installation package includes the client application and sample code. But if you only need the server application and C/C++ language support for the client connection, you can also download only the lite version of the installation package. The installation packages are available in `rpm` and `deb` formats, as well as `tar.gz` format for enterprise customers who need to facilitate use on specific operating systems. Releases include both stable and beta releases. We recommend the stable release for production use or testing. The beta release may contain more new features. You can choose to download from the following as needed:
    @@ -59,13 +59,13 @@ After installation, you can start the TDengine service by the `systemctl` comman systemctl start taosd ``` -Then check if the service is working now. +Then check if the service is working. ```bash systemctl status taosd ``` -If the service is running successfully, you can play around through TDengine shell `taos`. +If the service is running successfully, you can play around through the TDengine shell, `taos`. **Note:** @@ -120,7 +120,7 @@ ts | speed | Query OK, 2 row(s) in set (0.001700s) ``` -Besides the SQL commands, the system administrator can check system status, add or delete accounts, and manage the servers. +Besides executing SQL commands, the system administrator can check system status, add or delete accounts, and manage the servers. ### Shell Command Line Parameters diff --git a/documentation20/en/03.architecture/docs.md b/documentation20/en/03.architecture/docs.md index e152583035d729af797f080d344c1e0e031426a9..d621ae1408d363b771041bab079a7fbafbb6d070 100644 --- a/documentation20/en/03.architecture/docs.md +++ b/documentation20/en/03.architecture/docs.md @@ -157,7 +157,7 @@ Logical structure diagram of TDengine distributed architecture as following: ![TDengine architecture diagram](../images/architecture/structure.png)
    Figure 1: TDengine architecture diagram
    -A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDEngine application driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit. +A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDengine application driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit. **Physical node (pnode)**: A pnode is a computer that runs independently and has its own computing, storage and network capabilities. It can be a physical machine, virtual machine, or Docker container installed with OS. The physical node is identified by its configured FQDN (Fully Qualified Domain Name). TDengine relies entirely on FQDN for network communication. If you don't know about FQDN, please read the blog post "[All about FQDN of TDengine](https://www.taosdata.com/blog/2020/09/11/1824.html)". diff --git a/documentation20/en/07.advanced-features/docs.md b/documentation20/en/07.advanced-features/docs.md index 0bf10183c6babf82744e073ab0cd892602a381d9..799d9e2d68294f525d4a6f5aef2c94c774b218d9 100644 --- a/documentation20/en/07.advanced-features/docs.md +++ b/documentation20/en/07.advanced-features/docs.md @@ -9,8 +9,8 @@ Continuous query of TDengine adopts time-driven mode, which can be defined direc The continuous query provided by TDengine differs from the time window calculation in ordinary stream computing in the following ways: - Unlike the real-time feedback calculated results of stream computing, continuous query only starts calculation after the time window is closed. 
For example, if the time period is 1 day, the results of that day will only be generated after 23:59:59.
-- If a history record is written to the time interval that has been calculated, the continuous query will not re-calculate and will not push the new results to the user again.
-- TDengine server does not cache or save the client's status, nor does it provide Exactly-Once semantic guarantee. If the application crashes, the continuous query will be pull up again and starting time must be provided by the application.
+- If a history record is written to the time interval that has been calculated, the continuous query will not re-calculate and will not push the new results to the user again.
+- TDengine server does not cache or save the client's status, nor does it provide an Exactly-Once semantic guarantee. If the application crashes, the continuous query will be pulled up again and the starting time must be provided by the application.

### How to use continuous query

@@ -83,7 +83,7 @@
taos_consume
taos_unsubscribe
```

-Please refer to the [C/C++ Connector](https://www.taosdata.com/cn/documentation/connector/) for the documentation of these APIs. The following is still a smart meter scenario as an example to introduce their specific usage (please refer to the previous section "Continuous Query" for the structure of STables and sub-tables). The complete sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/tests/examples/c/subscribe.c).
+Please refer to the [C/C++ Connector](https://www.taosdata.com/cn/documentation/connector/) for the documentation of these APIs. The following is still a smart meter scenario as an example to introduce their specific usage (please refer to the previous section "Continuous Query" for the structure of STables and sub-tables). The complete sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
If we want to be notified and do some process when the current of a smart meter exceeds a certain limit (e.g. 10A), there are two methods: one is to query each sub-table separately, record the timestamp of the last piece of data after each query, and then only query all data after this timestamp: @@ -210,8 +210,8 @@ After introducing the code, let's take a look at the actual running effect. For You can compile and start the sample program by executing the following command in the directory where the sample code is located: ```shell -$ make -$ ./subscribe -sql='select * from meters where current > 10;' +make +./subscribe -sql='select * from meters where current > 10;' ``` After the sample program starts, open another terminal window, and the shell that starts TDengine inserts a data with a current of 12A into **D1001**: @@ -299,8 +299,8 @@ public class SubscribeDemo { try { if (null != subscribe) subscribe.close(true); // Close the subscription - if (connection != null) - connection.close(); + if (connection != null) + connection.close(); } catch (SQLException throwables) { throwables.printStackTrace(); } @@ -312,7 +312,7 @@ public class SubscribeDemo { Run the sample program. First, it consumes all the historical data that meets the query conditions: ```shell -# java -jar subscribe.jar +# java -jar subscribe.jar ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: Beijing.Chaoyang groupid : 2 ts: 1597464600000 current: 12.3 voltage: 220 phase: 2 location: Beijing.Chaoyang groupid : 2 @@ -357,4 +357,4 @@ This SQL statement will obtain the last recorded voltage value of all smart mete In scenarios of TDengine, alarm monitoring is a common requirement. Conceptually, it requires the program to filter out data that meet certain conditions from the data of the latest period of time, and calculate a result according to a defined formula based on these data. 
When the result meets certain conditions and lasts for a certain period of time, it will notify the user in some form.
-In order to meet the needs of users for alarm monitoring, TDengine provides this function in the form of an independent module. For its installation and use, please refer to the blog [How to Use TDengine for Alarm Monitoring](https://www.taosdata.com/blog/2020/04/14/1438.html).
\ No newline at end of file
+In order to meet the needs of users for alarm monitoring, TDengine provides this function in the form of an independent module. For its installation and use, please refer to the blog [How to Use TDengine for Alarm Monitoring](https://www.taosdata.com/blog/2020/04/14/1438.html).
diff --git a/documentation20/en/08.connector/01.java/docs.md b/documentation20/en/08.connector/01.java/docs.md
index 54df715641ca9de7bc19d142e57e14c2abbc1193..5fa0acbc1013fe5b4bd62751f80a34f5857161ee 100644
--- a/documentation20/en/08.connector/01.java/docs.md
+++ b/documentation20/en/08.connector/01.java/docs.md
@@ -54,25 +54,23 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(
## JDBC driver version and supported TDengine and JDK versions
-| taos-jdbcdriver | TDengine | JDK |
-| --------------- |--------------------|--------|
-| 2.0.36 | 2.4.0 and above | 1.8.x |
-| 2.0.35 | 2.3.0 and above | 1.8.x |
-| 2.0.33 - 2.0.34 | 2.0.3.0 and above | 1.8.x |
-| 2.0.31 - 2.0.32 | 2.1.3.0 and above | 1.8.x |
-| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.x | 1.8.x |
-| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.x | 1.8.x |
-| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.x | 1.8.x |
-| 1.0.3 | 1.6.1.x and above | 1.8.x |
-| 1.0.2 | 1.6.1.x and above | 1.8.x |
-| 1.0.1 | 1.6.1.x and above | 1.8.x |
+| taos-jdbcdriver version | TDengine 2.0.x.x version | TDengine 2.2.x.x version | TDengine 2.4.x.x version | JDK version |
+|---------------------| ----------------------| ----------------------| ----------------------| -------- |
+| 2.0.37 | X | X | 2.4.0.6 and above | 1.8.x |
+| 2.0.36 |
X | 2.2.2.11 and above | 2.4.0.0 - 2.4.0.5 | 1.8.x |
+| 2.0.35 | X | 2.2.2.11 and above | 2.3.0.0 - 2.4.0.5 | 1.8.x |
+| 2.0.33 - 2.0.34 | 2.0.3.0 and above | 2.2.0.0 and above | 2.4.0.0 - 2.4.0.5 | 1.8.x |
+| 2.0.31 - 2.0.32 | 2.1.3.0 - 2.1.7.7 | X | X | 1.8.x |
+| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.1 | X | X | 1.8.x |
+| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.4 | X | X | 1.8.x |
+| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.3 | X | X | 1.8.x |

## DataType in TDengine and Java connector

The TDengine supports the following data types and Java data types:

| TDengine DataType | JDBCType (driver version < 2.0.24) | JDBCType (driver version >= 2.0.24) |
-|-------------------|------------------------------------| ----------------------------------- |
+| ----------------- | ---------------------------------- | ----------------------------------- |
| TIMESTAMP | java.lang.Long | java.sql.Timestamp |
| INT | java.lang.Integer | java.lang.Integer |
| BIGINT | java.lang.Long | java.lang.Long |
@@ -314,7 +312,8 @@ The Java connector may report three types of error codes: JDBC Driver (error cod
### Write data through parameter binding
Starting with version 2.1.2.0, TDengine's JDBC-JNI implementation significantly improves support for data write (INSERT) scenarios with Parameter-Binding. When writing data in this way, you can avoid the resource consumption of SQL parsing, which can significantly improve write performance in many cases.
-Note:
+
+**Note**:
* Jdbc-restful implementations do not provide Parameter-Binding
* The following sample code is based on taos-jdbcdriver-2.0.36
* use setString to bind BINARY data, and use setNString to bind NCHAR data
@@ -322,6 +321,7 @@ Note:
Sample Code:
+
```java
public class ParameterBindingDemo {
@@ -578,15 +578,57 @@ public void setShort(int columnIndex, ArrayList list) throws SQLException
public void setString(int columnIndex, ArrayList list, int size) throws SQLException
public void setNString(int columnIndex, ArrayList list, int size) throws SQLException
```
+
+### Data Writing via Schemaless
+
+Starting with version 2.2.0.0, TDengine supports schemaless writing. The schemaless writing protocol is compatible with InfluxDB's Line Protocol and with OpenTSDB's telnet and JSON format protocols. Please see [Schemaless Writing](https://www.taosdata.com/docs/en/v2.0/insert#schemaless).
+
+**Note**:
+* Jdbc-restful implementations do not provide Schemaless-Writing
+* The following sample code is based on taos-jdbcdriver-2.0.36
+
+Sample Code:
+
+```java
+public class SchemalessInsertTest {
+    private static final String host = "127.0.0.1";
+    private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000";
+    private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0";
+    private static final String jsonDemo = "{\"metric\": \"meter_current\",\"timestamp\": 1346846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"Beijing\", \"id\": \"d1001\"}}";
+
+    public static void main(String[] args) throws SQLException {
+        final String url = "jdbc:TAOS://" + host + ":6030/?user=root&password=taosdata";
+        try (Connection connection = DriverManager.getConnection(url)) {
+            init(connection);
+
+            SchemalessWriter writer = new SchemalessWriter(connection);
+            writer.write(lineDemo, SchemalessProtocolType.LINE, SchemalessTimestampType.NANO_SECONDS);
+            writer.write(telnetDemo,
SchemalessProtocolType.TELNET, SchemalessTimestampType.MILLI_SECONDS); + writer.write(jsonDemo, SchemalessProtocolType.JSON, SchemalessTimestampType.NOT_CONFIGURED); + } + } + + private static void init(Connection connection) throws SQLException { + try (Statement stmt = connection.createStatement()) { + stmt.executeUpdate("drop database if exists test_schemaless"); + stmt.executeUpdate("create database if not exists test_schemaless"); + stmt.executeUpdate("use test_schemaless"); + } + } +} +``` + ### Set client configuration in JDBC -Starting with TDEngine-2.3.5.0, JDBC Driver supports setting TDengine client parameters on the first connection of a Java application. The Driver supports jdbcUrl and Properties to set client parameters in JDBC-JNI mode. -Note: +Starting with TDengine-2.3.5.0, JDBC Driver supports setting TDengine client parameters on the first connection of a Java application. The Driver supports jdbcUrl and Properties to set client parameters in JDBC-JNI mode. + +**Note**: * JDBC-RESTful does not support setting client parameters. * The client parameters set in the java application are process-level. To update the client parameters, the application needs to be restarted. This is because these client parameters are global that take effect the first time the application is set up. * The following sample code is based on taos-jdbcdriver-2.0.36. 
Sample Code:
+
```java
public class ClientParameterSetting {
    private static final String host = "127.0.0.1";
diff --git a/documentation20/en/08.connector/docs.md b/documentation20/en/08.connector/docs.md
index a9f83bc060c7d059e5b4ff2b95335fec794b2524..da38a7063373615c9f1853a0bebb55f9db0fa939 100644
--- a/documentation20/en/08.connector/docs.md
+++ b/documentation20/en/08.connector/docs.md
@@ -17,7 +17,7 @@ At present, TDengine connectors support a wide range of platforms, including har
| **C#** | ○ | ● | ● | ○ | ○ | ○ | ○ | -- | -- |
| **RESTful** | ● | ● | ● | ● | ● | ● | ○ | ○ | ○ |
-Note: ● stands for that has been verified by official tests; ○ stands for that has been verified by unofficial tests.
+Note: ● means verified by official tests; ○ means verified by unofficial tests.
Note:
@@ -30,9 +30,9 @@ Note:
The server should already have the TDengine server package installed. The connector driver installation steps are as follows:
-**Linux**
+### Linux
-**1. Download from TAOS Data website(https://www.taosdata.com/cn/all-downloads/)**
+**1. Download from TAOS Data [official website](https://www.taosdata.com/cn/all-downloads/)**
* X64 hardware environment: TDengine-client-2.x.x.x-Linux-x64.tar.gz
* ARM64 hardware environment: TDengine-client-2.x.x.x-Linux-aarch64.tar.gz
@@ -52,11 +52,11 @@ After extracting the package, you will see the following files (directories) in
*install_client.
sh*: Installation script for application driver -*taos.tar.gz*: Application driver installation package +*taos.tar.gz*: Application driver installation package -*driver*: TDengine application driver +*driver*: TDengine application driver -*connector*: Connectors for various programming languages (go/grafanaplugin/nodejs/python/JDBC) +*connector*: Connectors for various programming languages (go/grafanaplugin/nodejs/python/JDBC) *Examples*: Sample programs for various programming languages (C/C #/go/JDBC/MATLAB/python/R) @@ -74,7 +74,7 @@ Edit the taos.cfg file (default path/etc/taos/taos.cfg) and change firstEP to En **Windows x64/x86** -**1. Download from TAOS Data website(https://www.taosdata.com/cn/all-downloads/)** +**1. Download from TAOS Data [official website](https://www.taosdata.com/cn/all-downloads/)** * X64 hardware environment: TDengine-client-2.X.X.X-Windows-x64.exe * X86 hardware environment: TDengine-client-2.X.X.X-Windows-x86.exe @@ -87,15 +87,15 @@ Default installation path is: C:\TDengine, with following files(directories): *taos.exe*: taos shell command line program -*cfg*: configuration file directory +*cfg*: configuration file directory -*driver*: application driver dynamic link library +*driver*: application driver dynamic link library -*examples*: sample program bash/C/C #/go/JDBC/Python/Node.js +*examples*: sample program bash/C/C #/go/JDBC/Python/Node.js -*include*: header file +*include*: header file -*log*: log file +*log*: log file *unins000. exe*: uninstall program @@ -118,16 +118,16 @@ After the above installation and configuration completed, and confirm that the T If you execute taos directly under Linux shell, you should be able to connect to tdengine service normally and jump to taos shell interface. For Example: ```mysql -$ taos -Welcome to the TDengine shell from Linux, Client Version:2.0.5.0 -Copyright (c) 2017 by TAOS Data, Inc. All rights reserved. 
-taos> show databases; -name | created_time | ntables | vgroups | replica | quorum | days | keep1,keep2,keep(D) | cache(MB)| blocks | minrows | maxrows | wallevel | fsync | comp | precision | status | -========================================================================================================================================================================================================================= -test | 2020-10-14 10:35:48.617 | 10 | 1 | 1 | 1 | 2 | 3650,3650,3650 | 16| 6 | 100 | 4096 | 1 | 3000 | 2 | ms | ready | -log | 2020-10-12 09:08:21.651 | 4 | 1 | 1 | 1 | 10 | 30,30,30 | 1| 3 | 100 | 4096 | 1 | 3000 | 2 | us | ready | -Query OK, 2 row(s) in set (0.001198s) -taos> +$ taos +Welcome to the TDengine shell from Linux, Client Version:2.0.5.0 +Copyright (c) 2017 by TAOS Data, Inc. All rights reserved. +taos> show databases; +name | created_time | ntables | vgroups | replica | quorum | days | keep1,keep2,keep(D) | cache(MB)| blocks | minrows | maxrows | wallevel | fsync | comp | precision | status | +========================================================================================================================================================================================================================= +test | 2020-10-14 10:35:48.617 | 10 | 1 | 1 | 1 | 2 | 3650,3650,3650 | 16| 6 | 100 | 4096 | 1 | 3000 | 2 | ms | ready | +log | 2020-10-12 09:08:21.651 | 4 | 1 | 1 | 1 | 10 | 30,30,30 | 1| 3 | 100 | 4096 | 1 | 3000 | 2 | us | ready | +Query OK, 2 row(s) in set (0.001198s) +taos> ``` **Windows (x64/x86) environment:** @@ -135,16 +135,16 @@ taos> Under cmd, enter the c:\TDengine directory and directly execute taos.exe, and you should be able to connect to tdengine service normally and jump to taos shell interface. For example: ```mysql - C:\TDengine>taos - Welcome to the TDengine shell from Linux, Client Version:2.0.5.0 - Copyright (c) 2017 by TAOS Data, Inc. All rights reserved. 
- taos> show databases; - name | created_time | ntables | vgroups | replica | quorum | days | keep1,keep2,keep(D) | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | precision | status | - =================================================================================================================================================================================================================================================================== - test | 2020-10-14 10:35:48.617 | 10 | 1 | 1 | 1 | 2 | 3650,3650,3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | ms | ready | - log | 2020-10-12 09:08:21.651 | 4 | 1 | 1 | 1 | 10 | 30,30,30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | us | ready | - Query OK, 2 row(s) in set (0.045000s) - taos> + C:\TDengine>taos + Welcome to the TDengine shell from Linux, Client Version:2.0.5.0 + Copyright (c) 2017 by TAOS Data, Inc. All rights reserved. + taos> show databases; + name | created_time | ntables | vgroups | replica | quorum | days | keep1,keep2,keep(D) | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | precision | status | + =================================================================================================================================================================================================================================================================== + test | 2020-10-14 10:35:48.617 | 10 | 1 | 1 | 1 | 2 | 3650,3650,3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | ms | ready | + log | 2020-10-12 09:08:21.651 | 4 | 1 | 1 | 1 | 10 | 30,30,30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | us | ready | + Query OK, 2 row(s) in set (0.045000s) + taos> ``` ## C/C++ Connector @@ -167,7 +167,7 @@ Note: - The TDengine dynamic library needs to be linked at compiling. The library in Linux is *libtaos.so*, which installed at/usr/local/taos/driver. By Windows, it is taos.dll and installed at C:\ TDengine. 
- Unless otherwise specified, when the return value of API is an integer, 0 represents success, others are error codes representing the cause of failure, and when the return value is a pointer, NULL represents failure. -More sample codes for using C/C++ connectors, please visit https://github.com/taosdata/TDengine/tree/develop/examples/c. +More sample codes for using C/C++ connectors, please visit `https://github.com/taosdata/TDengine/tree/develop/examples/c`. ### Basic API @@ -185,6 +185,8 @@ Clean up the running environment and call this API before the application exits. Set client options, currently only time zone setting (_TSDB_OPTIONTIMEZONE) and encoding setting (_TSDB_OPTIONLOCALE) are supported. The time zone and encoding default to the current operating system settings. +When the return value is `0`, it means success, and when it is `-1`, it means failure. + - `char *taos_get_client_info()` Get version information of the client. @@ -202,7 +204,6 @@ Create a database connection and initialize the connection context. The paramete A null return value indicates a failure. The application needs to save the returned parameters for subsequent API calls. Note: The same process can connect to multiple taosd processes based on ip/port - - `char *taos_get_server_info(TAOS *taos)` Get version information of the server-side. @@ -211,6 +212,8 @@ Get version information of the server-side. Set the current default database to db. +The return value is the error code. + - `void taos_close(TAOS *taos)` Close the connection, where `taos` is the pointer returned by `taos_connect` function. @@ -286,11 +289,11 @@ Asynchronous APIs all need applications to provide corresponding callback functi Asynchronous APIs have relatively high requirements for users, who can selectively use them according to specific application scenarios. 
Here are three important asynchronous APIs: - `void taos_query_a(TAOS *taos, const char *sql, void (*fp)(void *param, TAOS_RES *, int code), void *param);` - Execute SQL statement asynchronously. + Execute SQL statement asynchronously. - * taos: The database connection returned by calling `taos_connect` - * sql: The SQL statement needed to execute - * fp: User-defined callback function, whose third parameter `code` is used to indicate whether the operation is successful, `0` for success, and negative number for failure (call `taos_errstr` to get the reason for failure). When defining the callback function, it mainly handles the second parameter `TAOS_RES *`, which is the result set returned by the query + * taos: The database connection returned by calling `taos_connect` + * sql: The SQL statement needed to execute + * fp: User-defined callback function, whose third parameter `code` is used to indicate whether the operation is successful, `0` for success, and negative number for failure (call `taos_errstr` to get the reason for failure). When defining the callback function, it mainly handles the second parameter `TAOS_RES *`, which is the result set returned by the query * param:the parameter for the callback - `void taos_fetch_rows_a(TAOS_RES *res, void (*fp)(void *param, TAOS_RES *, int numOfRows), void *param);` @@ -301,7 +304,6 @@ Asynchronous APIs have relatively high requirements for users, who can selective The asynchronous APIs of TDengine all use non-blocking calling mode. Applications can use multithreading to open multiple tables at the same time, and can query or insert to each open table at the same time. It should be pointed out that the **application client must ensure that the operation on the same table is completely serialized**, that is, when the insertion or query operation on the same table is not completed (when no result returned), the second insertion or query operation cannot be performed. 
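The per-table serialization requirement above can be made concrete with a small sketch (a hypothetical helper, written in Python rather than C purely to illustrate the invariant; it is not part of any TDengine connector): operations on the same table run under one lock, while operations on different tables may proceed concurrently, and each result is delivered via a callback in the style of `taos_query_a`.

```python
import threading
from collections import defaultdict

class PerTableSerializer:
    # Illustrative only: serialize operations per table while allowing
    # different tables to run concurrently, as required by the note above.
    def __init__(self):
        self._guard = threading.Lock()
        self._locks = defaultdict(threading.Lock)

    def _lock_for(self, table):
        with self._guard:  # defaultdict mutation is not thread-safe
            return self._locks[table]

    def submit(self, table, op, callback):
        # Result is handed to a user callback, mimicking the async style
        # of taos_query_a rather than a blocking return value.
        def worker():
            with self._lock_for(table):
                result = op()
            callback(result)
        t = threading.Thread(target=worker)
        t.start()
        return t
```

A caller would submit the next insert or query on a given table only through this dispatcher, so no two operations on that table ever overlap.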
- ### Parameter binding API @@ -549,7 +551,7 @@ Create subscription ```python # Create a subscription with the topic ‘test’ and a consumption cycle of 1000 milliseconds -# If the first parameter is True, it means restarting the subscription. +# If the first parameter is True, it means restarting the subscription. # If it is False and a subscription with the topic 'test 'has been created before, # it means continuing to consume the data of this subscription instead of restarting to consume all the data sub = conn.subscribe(True, "test", "select * from tb;", 1000) @@ -622,16 +624,16 @@ Refer to [JSON type instructions](https://www.taosdata.com/en/documentation/taos So far Python still does not completely support nanosecond type. Please refer to the link 1 and 2. The implementation of the python connector is to return an integer number for nanosecond value rather than datatime type as what ms and us do. The developer needs to handle it themselves. We recommend using pandas to_datetime() function. If Python officially support nanosecond in the future, TAOS Data might be possible to change the interface accordingly, which mean the application need change too. -1. https://stackoverflow.com/questions/10611328/parsing-datetime-strings-containing-nanoseconds -2. https://www.python.org/dev/peps/pep-0564/ +1. `https://stackoverflow.com/questions/10611328/parsing-datetime-strings-containing-nanoseconds` +2. `https://www.python.org/dev/peps/pep-0564/` #### Helper -Users can directly view the usage information of the module through Python's helper, or refer to the sample program in tests/examples/Python. The following are some common classes and methods: +Users can directly view the usage information of the module through Python's helper, or refer to the sample program in the `examples/Python` directory. The following are some common classes and methods: - *TDengineConnection* class -Refer to help (taos.TDEngineConnection) in python. 
This class corresponds to a connection between the client and TDengine. In the scenario of client multithreading, it is recommended that each thread apply for an independent connection instance, but not recommended that multiple threads share a connection.
+Refer to help (taos.TDengineConnection) in python. This class corresponds to a connection between the client and TDengine. In the scenario of client multithreading, it is recommended that each thread apply for an independent connection instance; it is not recommended that multiple threads share a connection.

- *TDengineCursor* class
@@ -643,7 +645,7 @@ Used to generate an instance of taos.TDengineConnection.
### Python connector code sample
-In tests/examples/python, we provide a sample Python program read_example. py to guide you to design your own write and query program. After installing the corresponding client, introduce the taos class through `import taos`. The steps are as follows:
+In the `examples/python` directory, we provide a sample Python program `read_example.py` to guide you in designing your own write and query program. After installing the corresponding client, import the taos module through `import taos`. The steps are as follows:
- Get the `TDengineConnection` object through `taos.connect`; a program needs only one such object, and it can be shared among multiple threads.
@@ -663,7 +665,7 @@ To support the development of various types of platforms, TDengine provides an A Note: One difference from the native connector is that the RESTful interface is stateless, so the `USE db_name` command has no effect and all references to table names and super table names require the database name to be specified. (Starting from version 2.2.0.0, we support specifying db_name in the RESTful url, in which case if the database name prefix is not specified in the SQL statement. Since version 2.4.0.0, RESTful service is provided by taosAdapter by default, which requires that db_name must be specified in the url.) -### HTTP request format +### HTTP request format ``` http://:/rest/sql @@ -674,7 +676,7 @@ Parameter description: - IP: Any host in the cluster - PORT: httpPort configuration item in the configuration file, defaulting to 6041 -For example: http://192.168.0.1:6041/rest/sql is a URL that points to an IP address of 192.168.0.1. +For example: `http://192.168.0.1:6041/rest/sql` is a URL that points to an IP address of 192.168.0.1. The header of HTTP request needs to carry identity authentication information. TDengine supports Basic authentication and custom authentication. Subsequent versions will provide standard and secure digital signature mechanism for identity authentication. @@ -720,7 +722,7 @@ The return value is in JSON format, as follows: ["2018-10-03 14:38:15.000", 12.6, ...] ], "rows": 2 -} +} ``` Description: @@ -873,32 +875,33 @@ Only some configuration parameters related to RESTful interface are listed below - httpEnableCompress: Compression is not supported by default. Currently, TDengine only supports gzip compression format - httpdebugflag: Logging switch, 131: error and alarm information only, 135: debugging information, 143: very detailed debugging information, default 131 - - ## CSharp Connector - * The C # connector supports: Linux 64/Windows x64/Windows x86. 
* The C# connector can be downloaded and included as a normal package from [Nuget.org](https://www.nuget.org/packages/TDengine.Connector/).
* On Windows, C# applications can use the native C interface of TDengine to perform all database operations, and future versions will provide the ORM (Dapper) framework driver.

### Installation preparation

-* For application driver installation, please refer to the[ steps of installing connector driver](https://www.taosdata.com/en/documentation/connector#driver).
+* For application driver installation, please refer to the [steps of installing connector driver](https://www.taosdata.com/en/documentation/connector#driver).
* .NET interface file TDengineDrivercs.cs and reference sample TDengineTest.cs are both located in the Windows client install_directory/examples/C# directory.
* Install [.NET SDK](https://dotnet.microsoft.com/download)

### Example Source Code
+
You can find sample code in the following locations:
+
* {client_install_directory}/examples/C#
* [github C# example source code](https://github.com/taosdata/TDengine/tree/develop/examples/C%2523)

**Tips:** TDengineTest.cs is one of the C# connector's sample programs; it includes basic examples such as connecting and executing SQL.

### Installation verification
+
Run {client_install_directory}/examples/C#/C#Checker/C#Checker.cs. You need to install the .NET SDK first.
+
```cmd
cd {client_install_directory}/examples/C#/C#Checker
//run c#checker.cs
@@ -906,24 +909,31 @@ dotnet run -- -h //dotnet run will build project first by default.
```

### How to use C# connector
+
On Windows systems, .NET applications can use the .NET interface of TDengine to perform all database operations. The steps to use it are as follows (you need to install the .NET SDK first):
+
+* Create a C# project.
+
``` cmd
mkdir test
-cd test
+cd test
dotnet new console
```
+
* Add TDengineDriver as a package through Nuget

``` cmd
dotnet add package TDengine.Connector
```
-* include the TDnengineDriver in you application's namespace
+
+* Include TDengineDriver in your application's namespace
+
```C#
using TDengineDriver;
```
+
* You can refer to [TDengineTest.cs](https://github.com/taosdata/TDengine/tree/develop/examples/C%2523/TDengineTest) to learn how to define a database connection, query, insert, and other basic data manipulations.

**Note:**
@@ -938,9 +948,9 @@ Maikebing.Data.Taos is an ADO.Net provider for TDengine that supports Linux, Win
```
// Download
-https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos
-// How to use
-https://www.taosdata.com/blog/2020/11/02/1901.html
+https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos
+// How to use
+https://www.taosdata.com/blog/2020/11/02/1901.html
```

## Go Connector
@@ -949,9 +959,9 @@ https://www.taosdata.com/blog/2020/11/02/1901.html
- For application driver installation, please refer to the [steps of installing connector driver](https://www.taosdata.com/en/documentation/connector#driver).
-The TDengine provides the GO driver taosSql. taosSql implements the GO language's built-in interface database/sql/driver. Users can access TDengine in the application by simply importing the package as follows, see https://github.com/taosdata/driver-go/blob/develop/taosSql/driver_test.go for details.
+TDengine provides the Go driver taosSql. taosSql implements the Go language's built-in interface database/sql/driver. Users can access TDengine in an application by simply importing the package as follows; see `https://github.com/taosdata/driver-go/blob/develop/taosSql/driver_test.go` for details.
-Sample code for using the Go connector can be found in https://github.com/taosdata/TDengine/tree/develop/examples/go .
+Sample code for using the Go connector can be found in `https://github.com/taosdata/TDengine/tree/develop/examples/go`. ```Go import ( @@ -963,8 +973,8 @@ import ( **It is recommended to use Go version 1.13 or above and turn on module support:** ```bash -go env -w GO111MODULE=on -go env -w GOPROXY=https://goproxy.io,direct +go env -w GO111MODULE=on +go env -w GOPROXY=https://goproxy.io,direct ``` `taosSql` v2 completed refactoring of the v1 version and separated the built-in database operation interface `database/sql/driver` to the directory `taosSql`, and put other advanced functions such as subscription and stmt into the directory `af`. @@ -1067,7 +1077,7 @@ After installing the TDengine client, the nodejsChecker.js program can verify wh Steps: -1. Create a new installation verification directory, for example: `~/tdengine-test`, copy the nodejsChecker.js source program on github. Download address: (https://github.com/taosdata/TDengine/tree/develop/examples/nodejs/nodejsChecker.js). +1. Create a new installation verification directory, for example: `~/tdengine-test`, copy the nodejsChecker.js source program on github. Download address: `https://github.com/taosdata/TDengine/tree/develop/examples/nodejs/nodejsChecker.js`. 2. Execute the following command: @@ -1132,7 +1142,7 @@ The results of the query can be obtained and printed through `query.execute()` f ```javascript var promise = query.execute(); promise.then(function(result) { - result.pretty(); + result.pretty(); }); ``` @@ -1171,6 +1181,6 @@ promise2.then(function(result) { ### Example -[node-example.js](https://github.com/taosdata/tests/tree/master/examples/nodejs/node-example.js) provides a code example that uses the NodeJS connector to create a table, insert weather data, and query the inserted data. 
+[node-example.js](https://github.com/taosdata/TDengine/blob/master/examples/nodejs/node-example.js) provides a code example that uses the NodeJS connector to create a table, insert weather data, and query the inserted data. -[node-example-raw.js](https://github.com/taosdata/tests/tree/master/examples/nodejs/node-example-raw.js) is also a code example that uses the NodeJS connector to create a table, insert weather data, and query the inserted data, but unlike the above, this example only uses cursor. +[node-example-raw.js](https://github.com/taosdata/TDengine/blob/master/examples/nodejs/node-example-raw.js) is also a code example that uses the NodeJS connector to create a table, insert weather data, and query the inserted data, but unlike the above, this example only uses cursor. diff --git a/documentation20/en/09.connections/docs.md b/documentation20/en/09.connections/docs.md index 224e6817323efe291cebd6098e2ce95ef761fcb8..9381e25cbc572de9b1850c067cb799530f7cb519 100644 --- a/documentation20/en/09.connections/docs.md +++ b/documentation20/en/09.connections/docs.md @@ -137,7 +137,7 @@ sql1 = [‘insert into tb values (now, 1)’] exec(conn, sql1) ``` -For more detailed examples, please refer to the examples\Matlab\TDEngineDemo.m file in the package. +For more detailed examples, please refer to the examples\matlab\TDengineDemo.m file in the package. ## R diff --git a/documentation20/en/13.faq/docs.md b/documentation20/en/13.faq/docs.md index 05507e26e5ab84a01e19d9ecced5e0464c1411f3..683a16e8895a3b9500bda7a3ab836c7b87dbaee2 100644 --- a/documentation20/en/13.faq/docs.md +++ b/documentation20/en/13.faq/docs.md @@ -52,19 +52,19 @@ When the client encountered a connection failure, please follow the following st - Local virtual machine: Check whether the network can be pinged, and try to avoid using localhost as hostname - Corporate server: If you are in a NAT network environment, be sure to check whether the server can return messages to the client -2. 
Make sure that the client and server version numbers are exactly the same, and the open source Community Edition and Enterprise Edition cannot be mixed. -3. On the server, execute systemctl status taosd to check the running status of *taosd*. If not running, start *taosd*. -4. Verify that the correct server FQDN (Fully Qualified Domain Name, which is available by executing the Linux command hostname-f on the server) is specified when the client connects. FQDN configuration reference: "[All about FQDN of TDengine](https://www.taosdata.com/blog/2020/09/11/1824.html)". -5. Ping the server FQDN. If there is no response, please check your network, DNS settings, or the system hosts file of the computer where the client is located. -6. Check the firewall settings (Ubuntu uses ufw status, CentOS uses firewall-cmd-list-port) to confirm that TCP/UDP ports 6030-6042 are open. -7. For JDBC (ODBC, Python, Go and other interfaces are similar) connections on Linux, make sure that libtaos.so is in the directory /usr/local/taos/driver, and /usr/local/taos/driver is in the system library function search path LD_LIBRARY_PATH. -8. For JDBC, ODBC, Python, Go, etc. connections on Windows, make sure that C:\ TDengine\ driver\ taos.dll is in your system library function search directory (it is recommended that taos.dll be placed in the directory C:\ Windows\ System32) -9. If the connection issue still exist - -1. - On Linux system, please use the command line tool nc to determine whether the TCP and UDP connections on the specified ports are unobstructed. Check whether the UDP port connection works: nc -vuz {hostIP} {port} Check whether the server-side TCP port connection works: nc -l {port}Check whether the client-side TCP port connection works: nc {hostIP} {port} - - Windows systems use the PowerShell command Net-TestConnection-ComputerName {fqdn} Port {port} to detect whether the service-segment port is accessed - -10. 
You can also use the built-in network connectivity detection function of taos program to verify whether the specified port connection between the server and the client is unobstructed (including TCP and UDP): [TDengine's Built-in Network Detection Tool Use Guide](https://www.taosdata.com/blog/2020/09/08/1816.html).
+3. Make sure that the client and server version numbers are exactly the same, and the open source Community Edition and Enterprise Edition cannot be mixed.
+4. On the server, execute systemctl status taosd to check the running status of *taosd*. If it is not running, start *taosd*.
+5. Verify that the correct server FQDN (Fully Qualified Domain Name, which is available by executing the Linux command hostname -f on the server) is specified when the client connects. FQDN configuration reference: "[All about FQDN of TDengine](https://www.taosdata.com/blog/2020/09/11/1824.html)".
+6. Ping the server FQDN. If there is no response, please check your network, DNS settings, or the system hosts file of the computer where the client is located.
+7. Check the firewall settings (Ubuntu uses ufw status, CentOS uses firewall-cmd --list-ports) to confirm that TCP/UDP ports 6030-6042 are open.
+8. For JDBC (ODBC, Python, Go and other interfaces are similar) connections on Linux, make sure that libtaos.so is in the directory /usr/local/taos/driver, and /usr/local/taos/driver is in the system library search path LD_LIBRARY_PATH.
+9. For JDBC, ODBC, Python, Go, etc. connections on Windows, make sure that C:\TDengine\driver\taos.dll is in your system library search directory (it is recommended that taos.dll be placed in the directory C:\Windows\System32).
+10. If the connection issue still exists
+
+   - On Linux systems, please use the command line tool nc to determine whether the TCP and UDP connections on the specified ports are unobstructed. 
Check whether the UDP port connection works: nc -vuz {hostIP} {port} Check whether the server-side TCP port connection works: nc -l {port} Check whether the client-side TCP port connection works: nc {hostIP} {port}
+   - Windows systems use the PowerShell command Test-NetConnection -ComputerName {fqdn} -Port {port} to check whether the server-side port is accessible
+
+11. You can also use the built-in network connectivity detection function of taos program to verify whether the specified port connection between the server and the client is unobstructed (including TCP and UDP): [TDengine's Built-in Network Detection Tool Use Guide](https://www.taosdata.com/blog/2020/09/08/1816.html).
diff --git a/importSampleData/.gitignore b/importSampleData/.gitignore
deleted file mode 100644
index 2283b63c52940e30b104289ce0c6c05cac75f197..0000000000000000000000000000000000000000
--- a/importSampleData/.gitignore
+++ /dev/null
@@ -1,17 +0,0 @@
-# Binaries for programs and plugins
-*.exe
-*.exe~
-*.dll
-*.so
-*.dylib
-
-# Test binary, built with `go test -c`
-*.test
-
-# Output of the go coverage tool, specifically when used with LiteIDE
-*.out
-
-# Dependency directories (remove the comment below to include it)
-# vendor/
-.idea/
-.vscode/
\ No newline at end of file
diff --git a/importSampleData/LICENSE b/importSampleData/LICENSE
deleted file mode 100644
index 0ad25db4bd1d86c452db3f9602ccdbe172438f52..0000000000000000000000000000000000000000
--- a/importSampleData/LICENSE
+++ /dev/null
@@ -1,661 +0,0 @@
-                    GNU AFFERO GENERAL PUBLIC LICENSE
-                       Version 3, 19 November 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc.
- Everyone is permitted to copy and distribute verbatim copies
- of this license document, but changing it is not allowed.
-
-                            Preamble
-
-  The GNU Affero General Public License is a free, copyleft license for
-software and other kinds of works, specifically designed to ensure
-cooperation with the community in the case of network server software.
- - The licenses for most software and other practical works are designed -to take away your freedom to share and change the works. By contrast, -our General Public Licenses are intended to guarantee your freedom to -share and change all versions of a program--to make sure it remains free -software for all its users. - - When we speak of free software, we are referring to freedom, not -price. Our General Public Licenses are designed to make sure that you -have the freedom to distribute copies of free software (and charge for -them if you wish), that you receive source code or can get it if you -want it, that you can change the software or use pieces of it in new -free programs, and that you know you can do these things. - - Developers that use our General Public Licenses protect your rights -with two steps: (1) assert copyright on the software, and (2) offer -you this License which gives you legal permission to copy, distribute -and/or modify the software. - - A secondary benefit of defending all users' freedom is that -improvements made in alternate versions of the program, if they -receive widespread use, become available for other developers to -incorporate. Many developers of free software are heartened and -encouraged by the resulting cooperation. However, in the case of -software used on network servers, this result may fail to come about. -The GNU General Public License permits making a modified version and -letting the public access it on a server without ever releasing its -source code to the public. - - The GNU Affero General Public License is designed specifically to -ensure that, in such cases, the modified source code becomes available -to the community. It requires the operator of a network server to -provide the source code of the modified version running there to the -users of that server. Therefore, public use of a modified version, on -a publicly accessible server, gives the public access to the source -code of the modified version. 
- - An older license, called the Affero General Public License and -published by Affero, was designed to accomplish similar goals. This is -a different license, not a version of the Affero GPL, but Affero has -released a new version of the Affero GPL which permits relicensing under -this license. - - The precise terms and conditions for copying, distribution and -modification follow. - - TERMS AND CONDITIONS - - 0. Definitions. - - "This License" refers to version 3 of the GNU Affero General Public License. - - "Copyright" also means copyright-like laws that apply to other kinds of -works, such as semiconductor masks. - - "The Program" refers to any copyrightable work licensed under this -License. Each licensee is addressed as "you". "Licensees" and -"recipients" may be individuals or organizations. - - To "modify" a work means to copy from or adapt all or part of the work -in a fashion requiring copyright permission, other than the making of an -exact copy. The resulting work is called a "modified version" of the -earlier work or a work "based on" the earlier work. - - A "covered work" means either the unmodified Program or a work based -on the Program. - - To "propagate" a work means to do anything with it that, without -permission, would make you directly or secondarily liable for -infringement under applicable copyright law, except executing it on a -computer or modifying a private copy. Propagation includes copying, -distribution (with or without modification), making available to the -public, and in some countries other activities as well. - - To "convey" a work means any kind of propagation that enables other -parties to make or receive copies. Mere interaction with a user through -a computer network, with no transfer of a copy, is not conveying. 
- - An interactive user interface displays "Appropriate Legal Notices" -to the extent that it includes a convenient and prominently visible -feature that (1) displays an appropriate copyright notice, and (2) -tells the user that there is no warranty for the work (except to the -extent that warranties are provided), that licensees may convey the -work under this License, and how to view a copy of this License. If -the interface presents a list of user commands or options, such as a -menu, a prominent item in the list meets this criterion. - - 1. Source Code. - - The "source code" for a work means the preferred form of the work -for making modifications to it. "Object code" means any non-source -form of a work. - - A "Standard Interface" means an interface that either is an official -standard defined by a recognized standards body, or, in the case of -interfaces specified for a particular programming language, one that -is widely used among developers working in that language. - - The "System Libraries" of an executable work include anything, other -than the work as a whole, that (a) is included in the normal form of -packaging a Major Component, but which is not part of that Major -Component, and (b) serves only to enable use of the work with that -Major Component, or to implement a Standard Interface for which an -implementation is available to the public in source code form. A -"Major Component", in this context, means a major essential component -(kernel, window system, and so on) of the specific operating system -(if any) on which the executable work runs, or a compiler used to -produce the work, or an object code interpreter used to run it. - - The "Corresponding Source" for a work in object code form means all -the source code needed to generate, install, and (for an executable -work) run the object code and to modify the work, including scripts to -control those activities. 
However, it does not include the work's -System Libraries, or general-purpose tools or generally available free -programs which are used unmodified in performing those activities but -which are not part of the work. For example, Corresponding Source -includes interface definition files associated with source files for -the work, and the source code for shared libraries and dynamically -linked subprograms that the work is specifically designed to require, -such as by intimate data communication or control flow between those -subprograms and other parts of the work. - - The Corresponding Source need not include anything that users -can regenerate automatically from other parts of the Corresponding -Source. - - The Corresponding Source for a work in source code form is that -same work. - - 2. Basic Permissions. - - All rights granted under this License are granted for the term of -copyright on the Program, and are irrevocable provided the stated -conditions are met. This License explicitly affirms your unlimited -permission to run the unmodified Program. The output from running a -covered work is covered by this License only if the output, given its -content, constitutes a covered work. This License acknowledges your -rights of fair use or other equivalent, as provided by copyright law. - - You may make, run and propagate covered works that you do not -convey, without conditions so long as your license otherwise remains -in force. You may convey covered works to others for the sole purpose -of having them make modifications exclusively for you, or provide you -with facilities for running those works, provided that you comply with -the terms of this License in conveying all material for which you do -not control copyright. Those thus making or running the covered works -for you must do so exclusively on your behalf, under your direction -and control, on terms that prohibit them from making any copies of -your copyrighted material outside their relationship with you. 
- - Conveying under any other circumstances is permitted solely under -the conditions stated below. Sublicensing is not allowed; section 10 -makes it unnecessary. - - 3. Protecting Users' Legal Rights From Anti-Circumvention Law. - - No covered work shall be deemed part of an effective technological -measure under any applicable law fulfilling obligations under article -11 of the WIPO copyright treaty adopted on 20 December 1996, or -similar laws prohibiting or restricting circumvention of such -measures. - - When you convey a covered work, you waive any legal power to forbid -circumvention of technological measures to the extent such circumvention -is effected by exercising rights under this License with respect to -the covered work, and you disclaim any intention to limit operation or -modification of the work as a means of enforcing, against the work's -users, your or third parties' legal rights to forbid circumvention of -technological measures. - - 4. Conveying Verbatim Copies. - - You may convey verbatim copies of the Program's source code as you -receive it, in any medium, provided that you conspicuously and -appropriately publish on each copy an appropriate copyright notice; -keep intact all notices stating that this License and any -non-permissive terms added in accord with section 7 apply to the code; -keep intact all notices of the absence of any warranty; and give all -recipients a copy of this License along with the Program. - - You may charge any price or no price for each copy that you convey, -and you may offer support or warranty protection for a fee. - - 5. Conveying Modified Source Versions. - - You may convey a work based on the Program, or the modifications to -produce it from the Program, in the form of source code under the -terms of section 4, provided that you also meet all of these conditions: - - a) The work must carry prominent notices stating that you modified - it, and giving a relevant date. 
- - b) The work must carry prominent notices stating that it is - released under this License and any conditions added under section - 7. This requirement modifies the requirement in section 4 to - "keep intact all notices". - - c) You must license the entire work, as a whole, under this - License to anyone who comes into possession of a copy. This - License will therefore apply, along with any applicable section 7 - additional terms, to the whole of the work, and all its parts, - regardless of how they are packaged. This License gives no - permission to license the work in any other way, but it does not - invalidate such permission if you have separately received it. - - d) If the work has interactive user interfaces, each must display - Appropriate Legal Notices; however, if the Program has interactive - interfaces that do not display Appropriate Legal Notices, your - work need not make them do so. - - A compilation of a covered work with other separate and independent -works, which are not by their nature extensions of the covered work, -and which are not combined with it such as to form a larger program, -in or on a volume of a storage or distribution medium, is called an -"aggregate" if the compilation and its resulting copyright are not -used to limit the access or legal rights of the compilation's users -beyond what the individual works permit. Inclusion of a covered work -in an aggregate does not cause this License to apply to the other -parts of the aggregate. - - 6. Conveying Non-Source Forms. - - You may convey a covered work in object code form under the terms -of sections 4 and 5, provided that you also convey the -machine-readable Corresponding Source under the terms of this License, -in one of these ways: - - a) Convey the object code in, or embodied in, a physical product - (including a physical distribution medium), accompanied by the - Corresponding Source fixed on a durable physical medium - customarily used for software interchange. 
- - b) Convey the object code in, or embodied in, a physical product - (including a physical distribution medium), accompanied by a - written offer, valid for at least three years and valid for as - long as you offer spare parts or customer support for that product - model, to give anyone who possesses the object code either (1) a - copy of the Corresponding Source for all the software in the - product that is covered by this License, on a durable physical - medium customarily used for software interchange, for a price no - more than your reasonable cost of physically performing this - conveying of source, or (2) access to copy the - Corresponding Source from a network server at no charge. - - c) Convey individual copies of the object code with a copy of the - written offer to provide the Corresponding Source. This - alternative is allowed only occasionally and noncommercially, and - only if you received the object code with such an offer, in accord - with subsection 6b. - - d) Convey the object code by offering access from a designated - place (gratis or for a charge), and offer equivalent access to the - Corresponding Source in the same way through the same place at no - further charge. You need not require recipients to copy the - Corresponding Source along with the object code. If the place to - copy the object code is a network server, the Corresponding Source - may be on a different server (operated by you or a third party) - that supports equivalent copying facilities, provided you maintain - clear directions next to the object code saying where to find the - Corresponding Source. Regardless of what server hosts the - Corresponding Source, you remain obligated to ensure that it is - available for as long as needed to satisfy these requirements. 
- - e) Convey the object code using peer-to-peer transmission, provided - you inform other peers where the object code and Corresponding - Source of the work are being offered to the general public at no - charge under subsection 6d. - - A separable portion of the object code, whose source code is excluded -from the Corresponding Source as a System Library, need not be -included in conveying the object code work. - - A "User Product" is either (1) a "consumer product", which means any -tangible personal property which is normally used for personal, family, -or household purposes, or (2) anything designed or sold for incorporation -into a dwelling. In determining whether a product is a consumer product, -doubtful cases shall be resolved in favor of coverage. For a particular -product received by a particular user, "normally used" refers to a -typical or common use of that class of product, regardless of the status -of the particular user or of the way in which the particular user -actually uses, or expects or is expected to use, the product. A product -is a consumer product regardless of whether the product has substantial -commercial, industrial or non-consumer uses, unless such uses represent -the only significant mode of use of the product. - - "Installation Information" for a User Product means any methods, -procedures, authorization keys, or other information required to install -and execute modified versions of a covered work in that User Product from -a modified version of its Corresponding Source. The information must -suffice to ensure that the continued functioning of the modified object -code is in no case prevented or interfered with solely because -modification has been made. 
- - If you convey an object code work under this section in, or with, or -specifically for use in, a User Product, and the conveying occurs as -part of a transaction in which the right of possession and use of the -User Product is transferred to the recipient in perpetuity or for a -fixed term (regardless of how the transaction is characterized), the -Corresponding Source conveyed under this section must be accompanied -by the Installation Information. But this requirement does not apply -if neither you nor any third party retains the ability to install -modified object code on the User Product (for example, the work has -been installed in ROM). - - The requirement to provide Installation Information does not include a -requirement to continue to provide support service, warranty, or updates -for a work that has been modified or installed by the recipient, or for -the User Product in which it has been modified or installed. Access to a -network may be denied when the modification itself materially and -adversely affects the operation of the network or violates the rules and -protocols for communication across the network. - - Corresponding Source conveyed, and Installation Information provided, -in accord with this section must be in a format that is publicly -documented (and with an implementation available to the public in -source code form), and must require no special password or key for -unpacking, reading or copying. - - 7. Additional Terms. - - "Additional permissions" are terms that supplement the terms of this -License by making exceptions from one or more of its conditions. -Additional permissions that are applicable to the entire Program shall -be treated as though they were included in this License, to the extent -that they are valid under applicable law. 
If additional permissions -apply only to part of the Program, that part may be used separately -under those permissions, but the entire Program remains governed by -this License without regard to the additional permissions. - - When you convey a copy of a covered work, you may at your option -remove any additional permissions from that copy, or from any part of -it. (Additional permissions may be written to require their own -removal in certain cases when you modify the work.) You may place -additional permissions on material, added by you to a covered work, -for which you have or can give appropriate copyright permission. - - Notwithstanding any other provision of this License, for material you -add to a covered work, you may (if authorized by the copyright holders of -that material) supplement the terms of this License with terms: - - a) Disclaiming warranty or limiting liability differently from the - terms of sections 15 and 16 of this License; or - - b) Requiring preservation of specified reasonable legal notices or - author attributions in that material or in the Appropriate Legal - Notices displayed by works containing it; or - - c) Prohibiting misrepresentation of the origin of that material, or - requiring that modified versions of such material be marked in - reasonable ways as different from the original version; or - - d) Limiting the use for publicity purposes of names of licensors or - authors of the material; or - - e) Declining to grant rights under trademark law for use of some - trade names, trademarks, or service marks; or - - f) Requiring indemnification of licensors and authors of that - material by anyone who conveys the material (or modified versions of - it) with contractual assumptions of liability to the recipient, for - any liability that these contractual assumptions directly impose on - those licensors and authors. - - All other non-permissive additional terms are considered "further -restrictions" within the meaning of section 10. 
If the Program as you -received it, or any part of it, contains a notice stating that it is -governed by this License along with a term that is a further -restriction, you may remove that term. If a license document contains -a further restriction but permits relicensing or conveying under this -License, you may add to a covered work material governed by the terms -of that license document, provided that the further restriction does -not survive such relicensing or conveying. - - If you add terms to a covered work in accord with this section, you -must place, in the relevant source files, a statement of the -additional terms that apply to those files, or a notice indicating -where to find the applicable terms. - - Additional terms, permissive or non-permissive, may be stated in the -form of a separately written license, or stated as exceptions; -the above requirements apply either way. - - 8. Termination. - - You may not propagate or modify a covered work except as expressly -provided under this License. Any attempt otherwise to propagate or -modify it is void, and will automatically terminate your rights under -this License (including any patent licenses granted under the third -paragraph of section 11). - - However, if you cease all violation of this License, then your -license from a particular copyright holder is reinstated (a) -provisionally, unless and until the copyright holder explicitly and -finally terminates your license, and (b) permanently, if the copyright -holder fails to notify you of the violation by some reasonable means -prior to 60 days after the cessation. - - Moreover, your license from a particular copyright holder is -reinstated permanently if the copyright holder notifies you of the -violation by some reasonable means, this is the first time you have -received notice of violation of this License (for any work) from that -copyright holder, and you cure the violation prior to 30 days after -your receipt of the notice. 
- - Termination of your rights under this section does not terminate the -licenses of parties who have received copies or rights from you under -this License. If your rights have been terminated and not permanently -reinstated, you do not qualify to receive new licenses for the same -material under section 10. - - 9. Acceptance Not Required for Having Copies. - - You are not required to accept this License in order to receive or -run a copy of the Program. Ancillary propagation of a covered work -occurring solely as a consequence of using peer-to-peer transmission -to receive a copy likewise does not require acceptance. However, -nothing other than this License grants you permission to propagate or -modify any covered work. These actions infringe copyright if you do -not accept this License. Therefore, by modifying or propagating a -covered work, you indicate your acceptance of this License to do so. - - 10. Automatic Licensing of Downstream Recipients. - - Each time you convey a covered work, the recipient automatically -receives a license from the original licensors, to run, modify and -propagate that work, subject to this License. You are not responsible -for enforcing compliance by third parties with this License. - - An "entity transaction" is a transaction transferring control of an -organization, or substantially all assets of one, or subdividing an -organization, or merging organizations. If propagation of a covered -work results from an entity transaction, each party to that -transaction who receives a copy of the work also receives whatever -licenses to the work the party's predecessor in interest had or could -give under the previous paragraph, plus a right to possession of the -Corresponding Source of the work from the predecessor in interest, if -the predecessor has it or can get it with reasonable efforts. - - You may not impose any further restrictions on the exercise of the -rights granted or affirmed under this License. 
For example, you may -not impose a license fee, royalty, or other charge for exercise of -rights granted under this License, and you may not initiate litigation -(including a cross-claim or counterclaim in a lawsuit) alleging that -any patent claim is infringed by making, using, selling, offering for -sale, or importing the Program or any portion of it. - - 11. Patents. - - A "contributor" is a copyright holder who authorizes use under this -License of the Program or a work on which the Program is based. The -work thus licensed is called the contributor's "contributor version". - - A contributor's "essential patent claims" are all patent claims -owned or controlled by the contributor, whether already acquired or -hereafter acquired, that would be infringed by some manner, permitted -by this License, of making, using, or selling its contributor version, -but do not include claims that would be infringed only as a -consequence of further modification of the contributor version. For -purposes of this definition, "control" includes the right to grant -patent sublicenses in a manner consistent with the requirements of -this License. - - Each contributor grants you a non-exclusive, worldwide, royalty-free -patent license under the contributor's essential patent claims, to -make, use, sell, offer for sale, import and otherwise run, modify and -propagate the contents of its contributor version. - - In the following three paragraphs, a "patent license" is any express -agreement or commitment, however denominated, not to enforce a patent -(such as an express permission to practice a patent or covenant not to -sue for patent infringement). To "grant" such a patent license to a -party means to make such an agreement or commitment not to enforce a -patent against the party. 
- - If you convey a covered work, knowingly relying on a patent license, -and the Corresponding Source of the work is not available for anyone -to copy, free of charge and under the terms of this License, through a -publicly available network server or other readily accessible means, -then you must either (1) cause the Corresponding Source to be so -available, or (2) arrange to deprive yourself of the benefit of the -patent license for this particular work, or (3) arrange, in a manner -consistent with the requirements of this License, to extend the patent -license to downstream recipients. "Knowingly relying" means you have -actual knowledge that, but for the patent license, your conveying the -covered work in a country, or your recipient's use of the covered work -in a country, would infringe one or more identifiable patents in that -country that you have reason to believe are valid. - - If, pursuant to or in connection with a single transaction or -arrangement, you convey, or propagate by procuring conveyance of, a -covered work, and grant a patent license to some of the parties -receiving the covered work authorizing them to use, propagate, modify -or convey a specific copy of the covered work, then the patent license -you grant is automatically extended to all recipients of the covered -work and works based on it. - - A patent license is "discriminatory" if it does not include within -the scope of its coverage, prohibits the exercise of, or is -conditioned on the non-exercise of one or more of the rights that are -specifically granted under this License. 
You may not convey a covered -work if you are a party to an arrangement with a third party that is -in the business of distributing software, under which you make payment -to the third party based on the extent of your activity of conveying -the work, and under which the third party grants, to any of the -parties who would receive the covered work from you, a discriminatory -patent license (a) in connection with copies of the covered work -conveyed by you (or copies made from those copies), or (b) primarily -for and in connection with specific products or compilations that -contain the covered work, unless you entered into that arrangement, -or that patent license was granted, prior to 28 March 2007. - - Nothing in this License shall be construed as excluding or limiting -any implied license or other defenses to infringement that may -otherwise be available to you under applicable patent law. - - 12. No Surrender of Others' Freedom. - - If conditions are imposed on you (whether by court order, agreement or -otherwise) that contradict the conditions of this License, they do not -excuse you from the conditions of this License. If you cannot convey a -covered work so as to satisfy simultaneously your obligations under this -License and any other pertinent obligations, then as a consequence you may -not convey it at all. For example, if you agree to terms that obligate you -to collect a royalty for further conveying from those to whom you convey -the Program, the only way you could satisfy both those terms and this -License would be to refrain entirely from conveying the Program. - - 13. Remote Network Interaction; Use with the GNU General Public License. 
- - Notwithstanding any other provision of this License, if you modify the -Program, your modified version must prominently offer all users -interacting with it remotely through a computer network (if your version -supports such interaction) an opportunity to receive the Corresponding -Source of your version by providing access to the Corresponding Source -from a network server at no charge, through some standard or customary -means of facilitating copying of software. This Corresponding Source -shall include the Corresponding Source for any work covered by version 3 -of the GNU General Public License that is incorporated pursuant to the -following paragraph. - - Notwithstanding any other provision of this License, you have -permission to link or combine any covered work with a work licensed -under version 3 of the GNU General Public License into a single -combined work, and to convey the resulting work. The terms of this -License will continue to apply to the part which is the covered work, -but the work with which it is combined will remain governed by version -3 of the GNU General Public License. - - 14. Revised Versions of this License. - - The Free Software Foundation may publish revised and/or new versions of -the GNU Affero General Public License from time to time. Such new versions -will be similar in spirit to the present version, but may differ in detail to -address new problems or concerns. - - Each version is given a distinguishing version number. If the -Program specifies that a certain numbered version of the GNU Affero General -Public License "or any later version" applies to it, you have the -option of following the terms and conditions either of that numbered -version or of any later version published by the Free Software -Foundation. If the Program does not specify a version number of the -GNU Affero General Public License, you may choose any version ever published -by the Free Software Foundation. 
- - If the Program specifies that a proxy can decide which future -versions of the GNU Affero General Public License can be used, that proxy's -public statement of acceptance of a version permanently authorizes you -to choose that version for the Program. - - Later license versions may give you additional or different -permissions. However, no additional obligations are imposed on any -author or copyright holder as a result of your choosing to follow a -later version. - - 15. Disclaimer of Warranty. - - THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY -APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT -HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY -OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, -THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR -PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM -IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF -ALL NECESSARY SERVICING, REPAIR OR CORRECTION. - - 16. Limitation of Liability. - - IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING -WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS -THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY -GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE -USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF -DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD -PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), -EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF -SUCH DAMAGES. - - 17. Interpretation of Sections 15 and 16. 
-
-  If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
-
-                     END OF TERMS AND CONDITIONS
-
-            How to Apply These Terms to Your New Programs
-
-  If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
-  To do so, attach the following notices to the program.  It is safest
-to attach them to the start of each source file to most effectively
-state the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-    <one line to give the program's name and a brief idea of what it does.>
-    Copyright (C) <year>  <name of author>
-
-    This program is free software: you can redistribute it and/or modify
-    it under the terms of the GNU Affero General Public License as published
-    by the Free Software Foundation, either version 3 of the License, or
-    (at your option) any later version.
-
-    This program is distributed in the hope that it will be useful,
-    but WITHOUT ANY WARRANTY; without even the implied warranty of
-    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-    GNU Affero General Public License for more details.
-
-    You should have received a copy of the GNU Affero General Public License
-    along with this program.  If not, see <https://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
-  If your software can interact with users remotely through a computer
-network, you should also make sure that it provides a way for users to
-get its source.  For example, if your program is a web application, its
-interface could display a "Source" link that leads users to an archive
-of the code. 
There are many ways you could offer source, and different
-solutions will be better for different programs; see section 13 for the
-specific requirements.
-
-  You should also get your employer (if you work as a programmer) or school,
-if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU AGPL, see
-<https://www.gnu.org/licenses/>.
diff --git a/importSampleData/README.md b/importSampleData/README.md
deleted file mode 100644
index c945cf52cb82723681f37efc42d3325f89011d39..0000000000000000000000000000000000000000
--- a/importSampleData/README.md
+++ /dev/null
@@ -1,239 +0,0 @@
-# 样例数据导入
-
-该工具可以根据用户提供的 `json` 或 `csv` 格式样例数据文件快速导入 `TDengine`,目前仅能在 Linux 上运行。
-
-为了体验写入和查询性能,可以对样例数据进行横向、纵向扩展。横向扩展是指将一个表(监测点)的数据克隆到多张表,纵向扩展是指将样例数据中的一段时间范围内的数据在时间轴上复制。该工具还支持历史数据导入至当前时间后持续导入,这样可以测试插入和查询并行进行的场景,以模拟真实环境。
-
-## 编译安装
-
-由于该工具使用 go 语言开发,编译之前需要先安装 go,具体请参考 [Getting Started][2]。执行以下命令即可编译成可执行文件 `bin/taosimport`。
-
-```shell
-go mod tidy
-go build -o bin/taosimport app/main.go
-```
-
-## 使用
-
-### 快速体验
-
-执行命令 `bin/taosimport` 会根据默认配置执行以下操作:
-
-1. 创建数据库
-
-   自动创建名称为 `test_yyyyMMdd` 的数据库,`yyyyMMdd` 是当前日期,如`20211111`。
-
-2. 创建超级表
-
-   根据配置文件 `config/cfg.toml` 中指定的 `sensor_info` 场景信息创建相应的超级表。
-   > 建表语句: create table s_sensor_info(ts timestamp, temperature int, humidity float) tags(location binary(20), color binary(16), devgroup int);
-
-3. 
自动建立子表并插入数据 - - 根据配置文件 `config/cfg.toml` 中 `sensor_info` 场景指定的 `data/sensor_info.csv` 样例数据进行横向扩展 `100` 倍(可通过 hnum 参数指定),即自动创建 `10*100=1000` 张子表(默认样例数据中有 10 张子表,每张表 100 条数据),启动 `10` 个线程(可通过 thread 参数指定)对每张子表循环导入 `1000` 次(可通过 vnum 参数指定)。 - -进入 `taos shell`,可运行如下查询验证: - -* 查询记录数 - - ```shell - taos> use test_yyyyMMdd; - taos> select count(*) from s_sensor_info; - ``` - -* 查询各个分组的记录数 - - ```shell - taos> select count(*) from s_sensor_info group by devgroup; - ``` - -* 按 1h 间隔查询各聚合指标 - - ```shell - taos> select count(temperature), sum(temperature), avg(temperature) from s_sensor_info interval(1h); - ``` - -* 查询指定位置最新上传指标 - - ```shell - taos> select last(*) from s_sensor_info where location = 'beijing'; - ``` - -> 更多查询及函数使用请参考 [数据查询][4] - -### 详细使用说明 - -执行命令 `bin/taosimport -h` 可以查看详细参数使用说明: - -* -cfg string - - 导入配置文件路径,包含样例数据文件相关描述及对应 TDengine 配置信息。默认使用 `config/cfg.toml`。 - -* -cases string - - 需要导入的场景名称,该名称可从 -cfg 指定的配置文件中 `[usecase]` 查看,可同时导入多个场景,中间使用逗号分隔,如:`sensor_info,camera_detection`,默认为 `sensor_info`。 - -* -hnum int - - 需要将样例数据进行横向扩展的倍数,假设原有样例数据包含 1 张子表 `t_0` 数据,指定 hnum 为 2 时会根据原有表名创建 `t_0、t_1` 两张子表。默认为 100。 - -* -vnum int - - 需要将样例数据进行纵向扩展的次数,如果设置为 0 代表将历史数据导入至当前时间后持续按照指定间隔导入。默认为 1000,表示将样例数据在时间轴上纵向复制1000 次。 - -* -delay int - - 当 vnum 设置为 0 时持续导入的时间间隔,默认为所有场景中最小记录间隔时间的一半,单位 ms。 - -* -tick int - - 打印统计信息的时间间隔,默认 2000 ms。 - -* -save int - - 是否保存统计信息到 tdengine 的 statistic 表中,1 是,0 否, 默认 0。 - -* -savetb string - - 当 save 为 1 时保存统计信息的表名, 默认 statistic。 - -* -auto int - - 是否自动生成样例数据中的主键时间戳,1 是,0 否, 默认 0。 - -* -start string - - 导入的记录开始时间,格式为 `"yyyy-MM-dd HH:mm:ss.SSS"`,不设置会使用样例数据中最小时间,设置后会忽略样例数据中的主键时间,会按照指定的 start 进行导入。如果 auto 为 1,则必须设置 start,默认为空。 - -* -interval int - - 导入的记录时间间隔,该设置只会在指定 `auto=1` 之后生效,否则会根据样例数据自动计算间隔时间。单位为毫秒,默认 1000。 - -* -thread int - - 执行导入数据的线程数目,默认为 10。 - -* -batch int - - 执行导入数据时的批量大小,默认为 100。批量是指一次写操作时,包含多少条记录。 - -* -host string - - 导入的 TDengine 服务器 IP,默认为 127.0.0.1。 - -* -port int - - 导入的 TDengine 服务器端口,默认为 6030。 - -* -user string - - 导入的 
TDengine 用户名,默认为 root。 - -* -password string - - 导入的 TDengine 用户密码,默认为 taosdata。 - -* -dropdb int - - 导入数据之前是否删除数据库,1 是,0 否, 默认 0。 - -* -db string - - 导入的 TDengine 数据库名称,默认为 test_yyyyMMdd。 - -* -dbparam string - - 当指定的数据库不存在时,自动创建数据库时可选项配置参数,如 `days 10 cache 16000 ablocks 4`,默认为空。 - -### 常见使用示例 - -* `bin/taosimport -cfg config/cfg.toml -cases sensor_info,camera_detection -hnum 1 -vnum 10` - - 执行上述命令后会将 sensor_info、camera_detection 两个场景的数据各导入 10 次。 - -* `bin/taosimport -cfg config/cfg.toml -cases sensor_info -hnum 2 -vnum 0 -start "2019-12-12 00:00:00.000" -interval 5000` - - 执行上述命令后会将 sensor_info 场景的数据横向扩展2倍从指定时间 `2019-12-12 00:00:00.000` 开始且记录间隔时间为 5000 毫秒开始导入,导入至当前时间后会自动持续导入。 - -### config/cfg.toml 配置文件说明 - -``` toml -# 传感器场景 -[sensor_info] # 场景名称 -format = "csv" # 样例数据文件格式,可以是 json 或 csv,具体字段应至少包含 subTableName、tags、fields 指定的字段。 -filePath = "data/sensor_info.csv" # 样例数据文件路径,程序会循环使用该文件数据 -separator = "," # csv 样例文件中字段分隔符,默认逗号 - -stname = "sensor_info" # 超级表名称 -subTableName = "devid" # 使用样例数据中指定字段当作子表名称一部分,子表名称格式为 t_subTableName_stname,扩展表名为 t_subTableName_stname_i。 -timestamp = "ts" # 使用 fields 中哪个字段当作主键,类型必须为 timestamp -timestampType="millisecond" # 样例数据中主键时间字段是 millisecond 还是 dateTime 格式 -#timestampTypeFormat = "2006-01-02 15:04:05.000" # 主键日期时间格式,timestampType 为 dateTime 时需要指定 -tags = [ - # 标签列表,name 为标签名称,type 为标签类型 - { name = "location", type = "binary(20)" }, - { name = "color", type = "binary(16)" }, - { name = "devgroup", type = "int" }, -] - -fields = [ - # 字段列表,name 为字段名称,type 为字段类型 - { name = "ts", type = "timestamp" }, - { name = "temperature", type = "int" }, - { name = "humidity", type = "float" }, -] - -# 摄像头检测场景 -[camera_detection] # 场景名称 -format = "json" # 样例数据文件格式,可以是 json 或 csv,具体字段应至少包含 subTableName、tags、fields 指定的字段。 -filePath = "data/camera_detection.json" # 样例数据文件路径,程序会循环使用该文件数据 -#separator = "," # csv 样例文件中字段分隔符,默认逗号, 如果是 json 文件可以不用配置 - -stname = "camera_detection" # 超级表名称 -subTableName = "sensor_id" # 使用样例数据中指定字段当作子表名称一部分,子表名称格式为 
t_subTableName_stname,扩展表名为 t_subTableName_stname_i。 -timestamp = "ts" # 使用 fields 中哪个字段当作主键,类型必须为 timestamp -timestampType="dateTime" # 样例数据中主键时间字段是 millisecond 还是 dateTime 格式 -timestampTypeFormat = "2006-01-02 15:04:05.000" # 主键日期时间格式,timestampType 为 dateTime 时需要指定 -tags = [ - # 标签列表,name 为标签名称,type 为标签类型 - { name = "home_id", type = "binary(30)" }, - { name = "object_type", type = "int" }, - { name = "object_kind", type = "binary(20)" }, -] - -fields = [ - # 字段列表,name 为字段名称,type 为字段类型 - { name = "ts", type = "timestamp" }, - { name = "states", type = "tinyint" }, - { name = "battery_voltage", type = "float" }, -] - -# other cases - -``` - -### 样例数据格式说明 - -#### json - -当配置文件 `config/cfg.toml` 中各场景的 format="json" 时,样例数据文件需要提供 tags 和 fields 字段列表中的字段值。样例数据格式如下: - -```json -{"home_id": "603", "sensor_id": "s100", "ts": "2019-01-01 00:00:00.000", "object_type": 1, "object_kind": "night", "battery_voltage": 0.8, "states": 1} -{"home_id": "604", "sensor_id": "s200", "ts": "2019-01-01 00:00:00.000", "object_type": 2, "object_kind": "day", "battery_voltage": 0.6, "states": 0} -``` - -#### csv - -当配置文件 `config/cfg.toml` 中各场景的 format="csv" 时,样例数据文件需要提供表头和对应的数据,其中字段分隔符由使用场景中 `separator` 指定,默认逗号。具体格式如下: - -```csv -devid,location,color,devgroup,ts,temperature,humidity -0, beijing, white, 0, 1575129600000, 16, 19.405091 -0, beijing, white, 0, 1575129601000, 22, 14.377142 -``` - -[1]: https://github.com/taosdata/TDengine -[2]: https://golang.org/doc/install -[3]: https://www.taosdata.com/cn/documentation/connector/#Go-Connector -[4]: https://www.taosdata.com/cn/documentation/taos-sql/#%E6%95%B0%E6%8D%AE%E6%9F%A5%E8%AF%A2 \ No newline at end of file diff --git a/importSampleData/app/main.go b/importSampleData/app/main.go deleted file mode 100644 index 3589c8c2a98f31e78c4dac3496f804605a0b2314..0000000000000000000000000000000000000000 --- a/importSampleData/app/main.go +++ /dev/null @@ -1,1024 +0,0 @@ -/* - * Copyright (c) 2019 TAOS Data, Inc. 
- * - * This program is free software: you can use, redistribute, and/or modify - * it under the terms of the GNU Affero General Public License, version 3 - * or later ("AGPL"), as published by the Free Software Foundation. - * - * This program is distributed in the hope that it will be useful, but WITHOUT - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or - * FITNESS FOR A PARTICULAR PURPOSE. - * - * You should have received a copy of the GNU Affero General Public License - * along with this program. If not, see . - */ - -package main - -import ( - "bufio" - "bytes" - "database/sql" - "encoding/json" - "flag" - "fmt" - "io" - "log" - "os" - "sort" - "strconv" - "strings" - "sync" - "time" - - dataImport "github.com/taosdata/TDengine/importSampleData/import" - - _ "github.com/taosdata/driver-go/taosSql" -) - -const ( - // 主键类型必须为 timestamp - TIMESTAMP = "timestamp" - - // 样例数据中主键时间字段是 millisecond 还是 dateTime 格式 - DATETIME = "datetime" - MILLISECOND = "millisecond" - - DefaultStartTime int64 = -1 - DefaultInterval int64 = 1 * 1000 // 导入的记录时间间隔,该设置只会在指定 auto=1 之后生效,否则会根据样例数据自动计算间隔时间。单位为毫秒,默认 1000。 - DefaultDelay int64 = -1 // - - // 当 save 为 1 时保存统计信息的表名, 默认 statistic。 - DefaultStatisticTable = "statistic" - - // 样例数据文件格式,可以是 json 或 csv - JsonFormat = "json" - CsvFormat = "csv" - - SuperTablePrefix = "s_" // 超级表前缀 - SubTablePrefix = "t_" // 子表前缀 - - DriverName = "taosSql" - StartTimeLayout = "2006-01-02 15:04:05.000" - InsertPrefix = "insert into " -) - -var ( - cfg string // 导入配置文件路径,包含样例数据文件相关描述及对应 TDengine 配置信息。默认使用 config/cfg.toml - cases string // 需要导入的场景名称,该名称可从 -cfg 指定的配置文件中 [usecase] 查看,可同时导入多个场景,中间使用逗号分隔,如:sensor_info,camera_detection,默认为 sensor_info - hnum int // 需要将样例数据进行横向扩展的倍数,假设原有样例数据包含 1 张子表 t_0 数据,指定 hnum 为 2 时会根据原有表名创建 t、t_1 两张子表。默认为 100。 - vnum int // 需要将样例数据进行纵向扩展的次数,如果设置为 0 代表将历史数据导入至当前时间后持续按照指定间隔导入。默认为 1000,表示将样例数据在时间轴上纵向复制1000 次 - thread int // 执行导入数据的线程数目,默认为 10 - batch int // 执行导入数据时的批量大小,默认为 100。批量是指一次写操作时,包含多少条记录 - 
auto int // 是否自动生成样例数据中的主键时间戳,1 是,0 否, 默认 0 - startTimeStr string // 导入的记录开始时间,格式为 "yyyy-MM-dd HH:mm:ss.SSS",不设置会使用样例数据中最小时间,设置后会忽略样例数据中的主键时间,会按照指定的 start 进行导入。如果 auto 为 1,则必须设置 start,默认为空 - interval int64 // 导入的记录时间间隔,该设置只会在指定 auto=1 之后生效,否则会根据样例数据自动计算间隔时间。单位为毫秒,默认 1000 - host string // 导入的 TDengine 服务器 IP,默认为 127.0.0.1 - port int // 导入的 TDengine 服务器端口,默认为 6030 - user string // 导入的 TDengine 用户名,默认为 root - password string // 导入的 TDengine 用户密码,默认为 taosdata - dropdb int // 导入数据之前是否删除数据库,1 是,0 否, 默认 0 - db string // 导入的 TDengine 数据库名称,默认为 test_yyyyMMdd - dbparam string // 当指定的数据库不存在时,自动创建数据库时可选项配置参数,如 days 10 cache 16000 ablocks 4,默认为空 - - dataSourceName string - startTime int64 - - superTableConfigMap = make(map[string]*superTableConfig) - subTableMap = make(map[string]*dataRows) - scaleTableNames []string - - scaleTableMap = make(map[string]*scaleTableInfo) - - successRows []int64 - lastStaticTime time.Time - lastTotalRows int64 - timeTicker *time.Ticker - delay int64 // 当 vnum 设置为 0 时持续导入的时间间隔,默认为所有场景中最小记录间隔时间的一半,单位 ms。 - tick int64 // 打印统计信息的时间间隔,默认 2000 ms。 - save int // 是否保存统计信息到 tdengine 的 statistic 表中,1 是,0 否, 默认 0。 - saveTable string // 当 save 为 1 时保存统计信息的表名, 默认 statistic。 -) - -type superTableConfig struct { - startTime int64 - endTime int64 - cycleTime int64 - avgInterval int64 - config dataImport.CaseConfig -} - -type scaleTableInfo struct { - scaleTableName string - subTableName string - insertRows int64 -} - -//type tableRows struct { -// tableName string // tableName -// value string // values(...) 
-//} - -type dataRows struct { - rows []map[string]interface{} - config dataImport.CaseConfig -} - -func (rows dataRows) Len() int { - return len(rows.rows) -} - -func (rows dataRows) Less(i, j int) bool { - iTime := getPrimaryKey(rows.rows[i][rows.config.Timestamp]) - jTime := getPrimaryKey(rows.rows[j][rows.config.Timestamp]) - return iTime < jTime -} - -func (rows dataRows) Swap(i, j int) { - rows.rows[i], rows.rows[j] = rows.rows[j], rows.rows[i] -} - -func getPrimaryKey(value interface{}) int64 { - val, _ := value.(int64) - //time, _ := strconv.ParseInt(str, 10, 64) - return val -} - -func init() { - parseArg() // parse argument - - if db == "" { - // 导入的 TDengine 数据库名称,默认为 test_yyyyMMdd - db = fmt.Sprintf("test_%s", time.Now().Format("20060102")) - } - - if auto == 1 && len(startTimeStr) == 0 { - log.Fatalf("startTime must be set when auto is 1, the format is \"yyyy-MM-dd HH:mm:ss.SSS\" ") - } - - if len(startTimeStr) != 0 { - t, err := time.ParseInLocation(StartTimeLayout, strings.TrimSpace(startTimeStr), time.Local) - if err != nil { - log.Fatalf("param startTime %s error, %s\n", startTimeStr, err) - } - - startTime = t.UnixNano() / 1e6 // as millisecond - } else { - startTime = DefaultStartTime - } - - dataSourceName = fmt.Sprintf("%s:%s@/tcp(%s:%d)/", user, password, host, port) - - printArg() - - log.SetFlags(log.Ldate | log.Ltime | log.Lshortfile) -} - -func main() { - - importConfig := dataImport.LoadConfig(cfg) - - var caseMinInterval int64 = -1 - - for _, userCase := range strings.Split(cases, ",") { - caseConfig, ok := importConfig.UserCases[userCase] - - if !ok { - log.Println("not exist case: ", userCase) - continue - } - - checkUserCaseConfig(userCase, &caseConfig) - - // read file as map array - fileRows := readFile(caseConfig) - log.Printf("case [%s] sample data file contains %d rows.\n", userCase, len(fileRows.rows)) - - if len(fileRows.rows) == 0 { - log.Printf("there is no valid line in file %s\n", caseConfig.FilePath) - continue - } - - _, 
exists := superTableConfigMap[caseConfig.StName] - if !exists { - superTableConfigMap[caseConfig.StName] = &superTableConfig{config: caseConfig} - } else { - log.Fatalf("the stname of case %s already exist.\n", caseConfig.StName) - } - - var start, cycleTime, avgInterval int64 = getSuperTableTimeConfig(fileRows) - - // set super table's startTime, cycleTime and avgInterval - superTableConfigMap[caseConfig.StName].startTime = start - superTableConfigMap[caseConfig.StName].cycleTime = cycleTime - superTableConfigMap[caseConfig.StName].avgInterval = avgInterval - - if caseMinInterval == -1 || caseMinInterval > avgInterval { - caseMinInterval = avgInterval - } - - startStr := time.Unix(0, start*int64(time.Millisecond)).Format(StartTimeLayout) - log.Printf("case [%s] startTime %s(%d), average dataInterval %d ms, cycleTime %d ms.\n", userCase, startStr, start, avgInterval, cycleTime) - } - - if DefaultDelay == delay { - // default delay - delay = caseMinInterval / 2 - if delay < 1 { - delay = 1 - } - log.Printf("actual delay is %d ms.", delay) - } - - superTableNum := len(superTableConfigMap) - if superTableNum == 0 { - log.Fatalln("no valid file, exited") - } - - start := time.Now() - // create super table - createSuperTable(superTableConfigMap) - log.Printf("create %d superTable ,used %d ms.\n", superTableNum, time.Since(start)/1e6) - - // create sub table - start = time.Now() - createSubTable(subTableMap) - log.Printf("create %d times of %d subtable ,all %d tables, used %d ms.\n", hnum, len(subTableMap), len(scaleTableMap), time.Since(start)/1e6) - - subTableNum := len(scaleTableMap) - - if subTableNum < thread { - thread = subTableNum - } - - filePerThread := subTableNum / thread - leftFileNum := subTableNum % thread - - var wg sync.WaitGroup - - start = time.Now() - - successRows = make([]int64, thread) - - startIndex, endIndex := 0, filePerThread - for i := 0; i < thread; i++ { - // start thread - if i < leftFileNum { - endIndex++ - } - wg.Add(1) - - go 
insertData(i, startIndex, endIndex, &wg, successRows) - startIndex, endIndex = endIndex, endIndex+filePerThread - } - - lastStaticTime = time.Now() - timeTicker = time.NewTicker(time.Millisecond * time.Duration(tick)) - go staticSpeed() - wg.Wait() - - usedTime := time.Since(start) - - total := getTotalRows(successRows) - - log.Printf("finished insert %d rows, used %d ms, speed %d rows/s", total, usedTime/1e6, total*1e3/usedTime.Milliseconds()) - - if vnum == 0 { - // continue waiting for insert data - wait := make(chan string) - v := <-wait - log.Printf("program receive %s, exited.\n", v) - } else { - timeTicker.Stop() - } - -} - -func staticSpeed() { - - connection := getConnection() - defer connection.Close() - - if save == 1 { - _, _ = connection.Exec("use " + db) - _, err := connection.Exec("create table if not exists " + saveTable + "(ts timestamp, speed int)") - if err != nil { - log.Fatalf("create %s Table error: %s\n", saveTable, err) - } - } - - for { - <-timeTicker.C - - currentTime := time.Now() - usedTime := currentTime.UnixNano() - lastStaticTime.UnixNano() - - total := getTotalRows(successRows) - currentSuccessRows := total - lastTotalRows - - speed := currentSuccessRows * 1e9 / usedTime - log.Printf("insert %d rows, used %d ms, speed %d rows/s", currentSuccessRows, usedTime/1e6, speed) - - if save == 1 { - insertSql := fmt.Sprintf("insert into %s values(%d, %d)", saveTable, currentTime.UnixNano()/1e6, speed) - _, _ = connection.Exec(insertSql) - } - - lastStaticTime = currentTime - lastTotalRows = total - } - -} - -func getTotalRows(successRows []int64) int64 { - var total int64 = 0 - for j := 0; j < len(successRows); j++ { - total += successRows[j] - } - return total -} - -func getSuperTableTimeConfig(fileRows dataRows) (start, cycleTime, avgInterval int64) { - if auto == 1 { - // use auto generate data time - start = startTime - avgInterval = interval - maxTableRows := normalizationDataWithSameInterval(fileRows, avgInterval) - cycleTime = 
maxTableRows*avgInterval + avgInterval - - } else { - - // use the sample data primary timestamp - sort.Sort(fileRows) // sort the file data by the primaryKey - minTime := getPrimaryKey(fileRows.rows[0][fileRows.config.Timestamp]) - maxTime := getPrimaryKey(fileRows.rows[len(fileRows.rows)-1][fileRows.config.Timestamp]) - - start = minTime // default startTime use the minTime - // 设置了start时间的话 按照start来 - if DefaultStartTime != startTime { - start = startTime - } - - tableNum := normalizationData(fileRows, minTime) - - if minTime == maxTime { - avgInterval = interval - cycleTime = tableNum*avgInterval + avgInterval - } else { - avgInterval = (maxTime - minTime) / int64(len(fileRows.rows)) * tableNum - cycleTime = maxTime - minTime + avgInterval - } - - } - return -} - -func createSubTable(subTableMaps map[string]*dataRows) { - - connection := getConnection() - defer connection.Close() - - _, _ = connection.Exec("use " + db) - - createTablePrefix := "create table if not exists " - var buffer bytes.Buffer - for subTableName := range subTableMaps { - - superTableName := getSuperTableName(subTableMaps[subTableName].config.StName) - firstRowValues := subTableMaps[subTableName].rows[0] // the first rows values as tags - - // create table t using superTable tags(...); - for i := 0; i < hnum; i++ { - tableName := getScaleSubTableName(subTableName, i) - - scaleTableMap[tableName] = &scaleTableInfo{ - subTableName: subTableName, - insertRows: 0, - } - scaleTableNames = append(scaleTableNames, tableName) - - buffer.WriteString(createTablePrefix) - buffer.WriteString(tableName) - buffer.WriteString(" using ") - buffer.WriteString(superTableName) - buffer.WriteString(" tags(") - for _, tag := range subTableMaps[subTableName].config.Tags { - tagValue := fmt.Sprintf("%v", firstRowValues[strings.ToLower(tag.Name)]) - buffer.WriteString("'" + tagValue + "'") - buffer.WriteString(",") - } - buffer.Truncate(buffer.Len() - 1) - buffer.WriteString(")") - - createTableSql := 
buffer.String() - buffer.Reset() - - //log.Printf("create table: %s\n", createTableSql) - _, err := connection.Exec(createTableSql) - if err != nil { - log.Fatalf("create table error: %s\n", err) - } - } - } -} - -func createSuperTable(superTableConfigMap map[string]*superTableConfig) { - - connection := getConnection() - defer connection.Close() - - if dropdb == 1 { - dropDbSql := "drop database if exists " + db - _, err := connection.Exec(dropDbSql) // drop database if exists - if err != nil { - log.Fatalf("drop database error: %s\n", err) - } - log.Printf("dropdb: %s\n", dropDbSql) - } - - createDbSql := "create database if not exists " + db + " " + dbparam - - _, err := connection.Exec(createDbSql) // create database if not exists - if err != nil { - log.Fatalf("create database error: %s\n", err) - } - log.Printf("createDb: %s\n", createDbSql) - - _, _ = connection.Exec("use " + db) - - prefix := "create table if not exists " - var buffer bytes.Buffer - //CREATE TABLE ( TIMESTAMP, field_name1 field_type,…) TAGS(tag_name tag_type, …) - for key := range superTableConfigMap { - - buffer.WriteString(prefix) - buffer.WriteString(getSuperTableName(key)) - buffer.WriteString("(") - - superTableConf := superTableConfigMap[key] - - buffer.WriteString(superTableConf.config.Timestamp) - buffer.WriteString(" timestamp, ") - - for _, field := range superTableConf.config.Fields { - buffer.WriteString(field.Name + " " + field.Type + ",") - } - - buffer.Truncate(buffer.Len() - 1) - buffer.WriteString(") tags( ") - - for _, tag := range superTableConf.config.Tags { - buffer.WriteString(tag.Name + " " + tag.Type + ",") - } - - buffer.Truncate(buffer.Len() - 1) - buffer.WriteString(")") - - createSql := buffer.String() - buffer.Reset() - - //log.Printf("superTable: %s\n", createSql) - _, err = connection.Exec(createSql) - if err != nil { - log.Fatalf("create supertable error: %s\n", err) - } - } - -} - -func getScaleSubTableName(subTableName string, hNum int) string { - if hNum 
== 0 { - return subTableName - } - return fmt.Sprintf("%s_%d", subTableName, hNum) -} - -func getSuperTableName(stName string) string { - return SuperTablePrefix + stName -} - -/** -* normalizationData , and return the num of subTables - */ -func normalizationData(fileRows dataRows, minTime int64) int64 { - - var tableNum int64 = 0 - for _, row := range fileRows.rows { - // get subTableName - tableValue := getSubTableNameValue(row[fileRows.config.SubTableName]) - if len(tableValue) == 0 { - continue - } - - row[fileRows.config.Timestamp] = getPrimaryKey(row[fileRows.config.Timestamp]) - minTime - - subTableName := getSubTableName(tableValue, fileRows.config.StName) - - value, ok := subTableMap[subTableName] - if !ok { - subTableMap[subTableName] = &dataRows{ - rows: []map[string]interface{}{row}, - config: fileRows.config, - } - - tableNum++ - } else { - value.rows = append(value.rows, row) - } - } - return tableNum -} - -// return the maximum table rows -func normalizationDataWithSameInterval(fileRows dataRows, avgInterval int64) int64 { - // subTableMap - currSubTableMap := make(map[string]*dataRows) - for _, row := range fileRows.rows { - // get subTableName - tableValue := getSubTableNameValue(row[fileRows.config.SubTableName]) - if len(tableValue) == 0 { - continue - } - - subTableName := getSubTableName(tableValue, fileRows.config.StName) - - value, ok := currSubTableMap[subTableName] - if !ok { - row[fileRows.config.Timestamp] = 0 - currSubTableMap[subTableName] = &dataRows{ - rows: []map[string]interface{}{row}, - config: fileRows.config, - } - } else { - row[fileRows.config.Timestamp] = int64(len(value.rows)) * avgInterval - value.rows = append(value.rows, row) - } - - } - - var maxRows, tableRows = 0, 0 - for tableName := range currSubTableMap { - tableRows = len(currSubTableMap[tableName].rows) - subTableMap[tableName] = currSubTableMap[tableName] // add to global subTableMap - if tableRows > maxRows { - maxRows = tableRows - } - } - - return 
int64(maxRows) -} - -func getSubTableName(subTableValue string, superTableName string) string { - return SubTablePrefix + subTableValue + "_" + superTableName -} - -func insertData(threadIndex, start, end int, wg *sync.WaitGroup, successRows []int64) { - connection := getConnection() - defer connection.Close() - defer wg.Done() - - _, _ = connection.Exec("use " + db) // use db - - log.Printf("thread-%d start insert into [%d, %d) subtables.\n", threadIndex, start, end) - - num := 0 - subTables := scaleTableNames[start:end] - var buffer bytes.Buffer - for { - var currSuccessRows int64 - var appendRows int - var lastTableName string - - buffer.WriteString(InsertPrefix) - - for _, tableName := range subTables { - - subTableInfo := subTableMap[scaleTableMap[tableName].subTableName] - subTableRows := int64(len(subTableInfo.rows)) - superTableConf := superTableConfigMap[subTableInfo.config.StName] - - tableStartTime := superTableConf.startTime - var tableEndTime int64 - if vnum == 0 { - // need continue generate data - tableEndTime = time.Now().UnixNano() / 1e6 - } else { - tableEndTime = tableStartTime + superTableConf.cycleTime*int64(vnum) - superTableConf.avgInterval - } - - insertRows := scaleTableMap[tableName].insertRows - - for { - loopNum := insertRows / subTableRows - rowIndex := insertRows % subTableRows - currentRow := subTableInfo.rows[rowIndex] - - currentTime := getPrimaryKey(currentRow[subTableInfo.config.Timestamp]) + loopNum*superTableConf.cycleTime + tableStartTime - if currentTime <= tableEndTime { - // append - - if lastTableName != tableName { - buffer.WriteString(tableName) - buffer.WriteString(" values") - } - lastTableName = tableName - - buffer.WriteString("(") - buffer.WriteString(fmt.Sprintf("%v", currentTime)) - buffer.WriteString(",") - - for _, field := range subTableInfo.config.Fields { - buffer.WriteString(getFieldValue(currentRow[strings.ToLower(field.Name)],field.Type)) - buffer.WriteString(",") - } - - buffer.Truncate(buffer.Len() - 1) - 
buffer.WriteString(") ") - - appendRows++ - insertRows++ - if appendRows == batch { - // executeBatch - insertSql := buffer.String() - affectedRows := executeBatchInsert(insertSql, connection) - - successRows[threadIndex] += affectedRows - currSuccessRows += affectedRows - - buffer.Reset() - buffer.WriteString(InsertPrefix) - lastTableName = "" - appendRows = 0 - } - } else { - // finished insert current table - break - } - } - - scaleTableMap[tableName].insertRows = insertRows - - } - - // left := len(rows) - if appendRows > 0 { - // executeBatch - insertSql := buffer.String() - affectedRows := executeBatchInsert(insertSql, connection) - - successRows[threadIndex] += affectedRows - currSuccessRows += affectedRows - - buffer.Reset() - } - - // log.Printf("thread-%d finished insert %d rows, used %d ms.", threadIndex, currSuccessRows, time.Since(threadStartTime)/1e6) - - if vnum != 0 { - // thread finished insert data - // log.Printf("thread-%d exit\n", threadIndex) - break - } - - if num == 0 { - wg.Done() //finished insert history data - num++ - } - - if currSuccessRows == 0 { - // log.Printf("thread-%d start to sleep %d ms.", threadIndex, delay) - time.Sleep(time.Duration(delay) * time.Millisecond) - } - - // need continue insert data - } - -} - -func executeBatchInsert(insertSql string, connection *sql.DB) int64 { - result, err := connection.Exec(insertSql) - if err != nil { - log.Printf("execute insertSql %s error, %s\n", insertSql, err) - return 0 - } - affected, _ := result.RowsAffected() - if affected < 0 { - affected = 0 - } - return affected -} - -func getFieldValue(fieldValue interface{},fieldtype interface{}) string { - if fieldtype == "timestamp" || fieldtype == "bigint" { - return fmt.Sprintf("%v", fieldValue) - } - return fmt.Sprintf("'%v'", fieldValue) -} - -func getConnection() *sql.DB { - db, err := sql.Open(DriverName, dataSourceName) - if err != nil { - panic(err) - } - return db -} - -func getSubTableNameValue(suffix interface{}) string { - 
return fmt.Sprintf("%v", suffix)
-}
-
-func readFile(config dataImport.CaseConfig) dataRows {
-	fileFormat := strings.ToLower(config.Format)
-	if fileFormat == JsonFormat {
-		return readJSONFile(config)
-	} else if fileFormat == CsvFormat {
-		return readCSVFile(config)
-	}
-
-	log.Printf("the file %s is not supported yet\n", config.FilePath)
-	return dataRows{}
-}
-
-func readCSVFile(config dataImport.CaseConfig) dataRows {
-	var rows dataRows
-	f, err := os.Open(config.FilePath)
-	if err != nil {
-		log.Printf("Error: %s, %s\n", config.FilePath, err)
-		return rows
-	}
-	defer f.Close()
-
-	r := bufio.NewReader(f)
-
-	// read the first line as the title row
-	lineBytes, _, err := r.ReadLine()
-	if err == io.EOF {
-		log.Printf("the file %s is empty\n", config.FilePath)
-		return rows
-	}
-	line := strings.ToLower(string(lineBytes))
-	titles := strings.Split(line, config.Separator)
-	if len(titles) < 3 {
-		// need suffix, primaryKey and at least one other field
-		log.Printf("the first line of file %s should be the title row, with at least 3 fields.\n", config.FilePath)
-		return rows
-	}
-
-	rows.config = config
-
-	var lineNum = 0
-	for {
-		// read a data row
-		lineBytes, _, err = r.ReadLine()
-		lineNum++
-		if err == io.EOF {
-			break
-		}
-		// fmt.Println(line)
-		rowData := strings.Split(string(lineBytes), config.Separator)
-
-		dataMap := make(map[string]interface{})
-		for i, title := range titles {
-			title = strings.TrimSpace(title)
-			if i < len(rowData) {
-				dataMap[title] = strings.TrimSpace(rowData[i])
-			} else {
-				dataMap[title] = ""
-			}
-		}
-
-		// check whether the timestamp field is present
-		if !existMapKeyAndNotEmpty(config.Timestamp, dataMap) {
-			log.Printf("the Timestamp[%s] of line %d is empty, will be filtered.\n", config.Timestamp, lineNum)
-			continue
-		}
-
-		// check whether the primary key is valid
-		primaryKeyValue := getPrimaryKeyMilliSec(config.Timestamp, config.TimestampType, config.TimestampTypeFormat, dataMap)
-		if primaryKeyValue == -1 {
-			log.Printf("the Timestamp[%s] of line %d is not valid, will be filtered.\n", config.Timestamp, lineNum)
-			continue
-		}
-
-		dataMap[config.Timestamp] = primaryKeyValue
-
-		rows.rows = append(rows.rows, dataMap)
-	}
-	return rows
-}
-
-func readJSONFile(config dataImport.CaseConfig) dataRows {
-
-	var rows dataRows
-	f, err := os.Open(config.FilePath)
-	if err != nil {
-		log.Printf("Error: %s, %s\n", config.FilePath, err)
-		return rows
-	}
-	defer f.Close()
-
-	r := bufio.NewReader(f)
-	//log.Printf("file size %d\n", r.Size())
-
-	rows.config = config
-	var lineNum = 0
-	for {
-		lineBytes, _, err := r.ReadLine()
-		lineNum++
-		if err == io.EOF {
-			break
-		}
-
-		line := make(map[string]interface{})
-		err = json.Unmarshal(lineBytes, &line)
-
-		if err != nil {
-			log.Printf("line [%d] of file %s parse error, reason: %s\n", lineNum, config.FilePath, err)
-			continue
-		}
-
-		// transfer the keys to lowercase
-		lowerMapKey(line)
-
-		if !existMapKeyAndNotEmpty(config.SubTableName, line) {
-			log.Printf("the SubTableName[%s] of line %d is empty, will be filtered.\n", config.SubTableName, lineNum)
-			continue
-		}
-
-		primaryKeyValue := getPrimaryKeyMilliSec(config.Timestamp, config.TimestampType, config.TimestampTypeFormat, line)
-		if primaryKeyValue == -1 {
-			log.Printf("the Timestamp[%s] of line %d is not valid, will be filtered.\n", config.Timestamp, lineNum)
-			continue
-		}
-
-		line[config.Timestamp] = primaryKeyValue
-
-		rows.rows = append(rows.rows, line)
-	}
-
-	return rows
-}
-
-/**
-* get the primary key as milliseconds, otherwise return -1
- */
-func getPrimaryKeyMilliSec(key string, valueType string, valueFormat string, line map[string]interface{}) int64 {
-	if !existMapKeyAndNotEmpty(key, line) {
-		return -1
-	}
-	if DATETIME == valueType {
-		// transfer the datetime to milliseconds
-		return parseMillisecond(line[key], valueFormat)
-	}
-
-	// otherwise the value is already a millisecond number
-	value, err := strconv.ParseInt(fmt.Sprintf("%v", line[key]), 10, 64)
-	if err != nil {
-		return -1
-	}
-	return value
-}
-
-// parseMillisecond parses the dateStr to milliseconds, 
return -1 on failure
-func parseMillisecond(str interface{}, layout string) int64 {
-	value, ok := str.(string)
-	if !ok {
-		return -1
-	}
-
-	t, err := time.ParseInLocation(layout, strings.TrimSpace(value), time.Local)
-
-	if err != nil {
-		log.Println(err)
-		return -1
-	}
-	return t.UnixNano() / 1e6
-}
-
-// lowerMapKey transfers all map keys to lowercase
-func lowerMapKey(maps map[string]interface{}) {
-	for key := range maps {
-		value := maps[key]
-		delete(maps, key)
-		maps[strings.ToLower(key)] = value
-	}
-}
-
-// existMapKeyAndNotEmpty reports whether the key exists; an existing key whose string value is empty counts as missing
-func existMapKeyAndNotEmpty(key string, maps map[string]interface{}) bool {
-	value, ok := maps[key]
-	if !ok {
-		return false
-	}
-
-	str, isString := value.(string)
-	if isString && len(str) == 0 {
-		return false
-	}
-	return true
-}
-
-func checkUserCaseConfig(caseName string, caseConfig *dataImport.CaseConfig) {
-
-	if len(caseConfig.StName) == 0 {
-		log.Fatalf("the stname of case %s can't be empty\n", caseName)
-	}
-
-	caseConfig.StName = strings.ToLower(caseConfig.StName)
-
-	if len(caseConfig.Tags) == 0 {
-		log.Fatalf("the tags of case %s can't be empty\n", caseName)
-	}
-
-	if len(caseConfig.Fields) == 0 {
-		log.Fatalf("the fields of case %s can't be empty\n", caseName)
-	}
-
-	if len(caseConfig.SubTableName) == 0 {
-		log.Fatalf("the suffix of case %s can't be empty\n", caseName)
-	}
-
-	caseConfig.SubTableName = strings.ToLower(caseConfig.SubTableName)
-
-	caseConfig.Timestamp = strings.ToLower(caseConfig.Timestamp)
-
-	var timestampExist = false
-	for i, field := range caseConfig.Fields {
-		if strings.EqualFold(field.Name, caseConfig.Timestamp) {
-			if strings.ToLower(field.Type) != TIMESTAMP {
-				log.Fatalf("case %s's primaryKey %s field type is %s, it must be timestamp\n", caseName, caseConfig.Timestamp, field.Type)
-			}
-			timestampExist = true
-			if i < len(caseConfig.Fields)-1 {
-				// delete a middle item: a = a[:i+copy(a[i:], a[i+1:])]
-				caseConfig.Fields = caseConfig.Fields[:i+copy(caseConfig.Fields[i:], caseConfig.Fields[i+1:])]
-			} else {
-				// delete 
the last item
-				caseConfig.Fields = caseConfig.Fields[:len(caseConfig.Fields)-1]
-			}
-			break
-		}
-	}
-
-	if !timestampExist {
-		log.Fatalf("case %s primaryKey %s does not exist in fields\n", caseName, caseConfig.Timestamp)
-	}
-
-	caseConfig.TimestampType = strings.ToLower(caseConfig.TimestampType)
-	if caseConfig.TimestampType != MILLISECOND && caseConfig.TimestampType != DATETIME {
-		log.Fatalf("case %s's timestampType %s is invalid, it can only be millisecond or datetime\n", caseName, caseConfig.TimestampType)
-	}
-
-	if caseConfig.TimestampType == DATETIME && len(caseConfig.TimestampTypeFormat) == 0 {
-		log.Fatalf("case %s's timestampTypeFormat %s can't be empty when timestampType is datetime\n", caseName, caseConfig.TimestampTypeFormat)
-	}
-
-}
-
-func parseArg() {
-	flag.StringVar(&cfg, "cfg", "config/cfg.toml", "configuration file which describes useCase and data format.")
-	flag.StringVar(&cases, "cases", "sensor_info", "useCase for dataset to be imported. Multiple choices can be separated by comma, for example, -cases sensor_info,camera_detection.")
-	flag.IntVar(&hnum, "hnum", 100, "magnification factor of the sample tables. For example, if hnum is 100 and there are 10 tables in the sample data, then 10x100=1000 tables will be created in the database.")
-	flag.IntVar(&vnum, "vnum", 1000, "copies of the sample records in each table. If set to 0, this program will never stop simulating and importing data even if the timestamp has passed the current time.")
-	flag.Int64Var(&delay, "delay", DefaultDelay, "the delay time interval (millisecond) to continue generating data when vnum is set to 0.")
-	flag.Int64Var(&tick, "tick", 2000, "the tick time interval (millisecond) to print statistic info.")
-	flag.IntVar(&save, "save", 0, "whether to save the statistical info into the 'statistic' table. 0 is disabled and 1 is enabled.")
-	flag.StringVar(&saveTable, "savetb", DefaultStatisticTable, "the table to save 'statistic' info when save is set to 1.")
-	flag.IntVar(&thread, "thread", 10, "number of threads to import data.")
-	flag.IntVar(&batch, "batch", 100, "rows of records in one import batch.")
-	flag.IntVar(&auto, "auto", 0, "whether to use the startTime and interval specified by users when simulating the data. 0 is disabled and 1 is enabled.")
-	flag.StringVar(&startTimeStr, "start", "", "the starting timestamp of simulated data, in the format of yyyy-MM-dd HH:mm:ss.SSS. If not specified, the earliest timestamp in the sample data will be set as the startTime.")
-	flag.Int64Var(&interval, "interval", DefaultInterval, "time interval between two consecutive records, in the unit of millisecond. Only valid when auto is 1.")
-	flag.StringVar(&host, "host", "127.0.0.1", "TDengine server IP.")
-	flag.IntVar(&port, "port", 6030, "TDengine server port.")
-	flag.StringVar(&user, "user", "root", "user name to log in to the database.")
-	flag.StringVar(&password, "password", "taosdata", "password of the TDengine user.")
-	flag.IntVar(&dropdb, "dropdb", 0, "whether to drop the existing database. 
1 is yes and 0 otherwise.")
-	flag.StringVar(&db, "db", "", "name of the database to store data.")
-	flag.StringVar(&dbparam, "dbparam", "", "database configurations used when it is created.")
-
-	flag.Parse()
-}
-
-func printArg() {
-	fmt.Println("used params:")
-	fmt.Println("-cfg:", cfg)
-	fmt.Println("-cases:", cases)
-	fmt.Println("-hnum:", hnum)
-	fmt.Println("-vnum:", vnum)
-	fmt.Println("-delay:", delay)
-	fmt.Println("-tick:", tick)
-	fmt.Println("-save:", save)
-	fmt.Println("-savetb:", saveTable)
-	fmt.Println("-thread:", thread)
-	fmt.Println("-batch:", batch)
-	fmt.Println("-auto:", auto)
-	fmt.Println("-start:", startTimeStr)
-	fmt.Println("-interval:", interval)
-	fmt.Println("-host:", host)
-	fmt.Println("-port:", port)
-	fmt.Println("-user:", user)
-	fmt.Println("-password:", password)
-	fmt.Println("-dropdb:", dropdb)
-	fmt.Println("-db:", db)
-	fmt.Println("-dbparam:", dbparam)
-}
diff --git a/importSampleData/config/cfg.toml b/importSampleData/config/cfg.toml
deleted file mode 100644
index 545bab071ad66af2f59447b3449c6606e2ff1078..0000000000000000000000000000000000000000
--- a/importSampleData/config/cfg.toml
+++ /dev/null
@@ -1,53 +0,0 @@
-# sensor scenario
-[sensor_info] # use case name
-format = "csv" # format of the sample data file, json or csv; the file should at least contain the fields specified by subTableName, tags and fields.
-filePath = "data/sensor_info.csv" # path of the sample data file; the program cycles through the data in this file.
-separator = "," # field separator of the csv sample file, comma by default.
-
-stname = "sensor_info" # super table name.
-subTableName = "devid" # field of the sample data used as part of the sub-table name; sub-table names take the form t_subTableName_stname, and scaled-out tables t_subTableName_stname_i.
-timestamp = "ts" # which field in fields is used as the primary key; its type must be timestamp.
-timestampType="millisecond" # whether the primary-key time field in the sample data is in millisecond or dateTime format.
-#timestampTypeFormat = "2006-01-02 15:04:05.000" # datetime layout of the primary key; required when timestampType is dateTime.
-tags = [
-    # tag list; name is the tag name and type is the tag type.
-    { name = "location", type = "binary(20)" },
-    { name = "color", type = "binary(16)" },
-    { name = "devgroup", type = "int" },
-]
-
-fields = [
-    # field list; name is the field name and type is the field type.
-    # besides the primary key, other fields can also be declared as type = "timestamp"; their values then accept both the '2006-01-02 15:04:05.000' and the millisecond format.
-    # they can also be declared as type = "bigint", in which case only the millisecond format is accepted.
-    { name = "ts", type = "timestamp" },
-    { name = "temperature", type = "int" },
-    { name = "humidity", type = "float" },
-]
-
-# camera detection scenario
-[camera_detection] # use case name
-format = "json" # format of the sample data file, json or csv; the file should at least contain the fields specified by subTableName, tags and fields.
-filePath = "data/camera_detection.json" # path of the sample data file; the program cycles through the data in this file.
-#separator = "," # field separator of the csv sample file, comma by default; not needed for json files.
-
-stname = "camera_detection" # super table name.
-subTableName = "sensor_id" # field of the sample data used as part of the sub-table name; sub-table names take the form t_subTableName_stname, and scaled-out tables t_subTableName_stname_i.
-timestamp = "ts" # which field in fields is used as the primary key; its type must be timestamp.
-timestampType="dateTime" # whether the primary-key time field in the sample data is in millisecond or dateTime format.
-timestampTypeFormat = "2006-01-02 15:04:05.000" # datetime layout of the primary key; required when timestampType is dateTime.
-tags = [
-    # tag list; name is the tag name and type is the tag type.
-    { name = "home_id", type = "binary(30)" },
-    { name = "object_type", type = "int" },
-    { name = "object_kind", type = "binary(20)" },
-]
-
-fields = [
-    # field list; name is the field name and type is the field type.
-    { name = "ts", type = "timestamp" },
-    { name = "states", type = "tinyint" },
-    { name = "battery_voltage", type = "float" },
-]
-
-# other case
\ No newline at end of file
diff --git a/importSampleData/dashboard/sensor_info.json b/importSampleData/dashboard/sensor_info.json
deleted file mode 100644
index 6dcf5505f2a1a2db3a10cb9c7bed47ac5dc3687c..0000000000000000000000000000000000000000
--- a/importSampleData/dashboard/sensor_info.json
+++ /dev/null
@@ -1,380 +0,0 @@
-{
-  "annotations": {
-    "list": [
-      {
-        "builtIn": 1,
-        "datasource": "-- Grafana --",
-        "enable": true,
-        "hide": true,
-        "iconColor": "rgba(0, 211, 255, 1)",
-        "name": "Annotations & Alerts",
-        "type": "dashboard"
-      }
-    ]
-  },
-  "editable": true,
-  "gnetId": null,
-  "graphTooltip": 0,
-  "id": 7,
-  "links": [],
-  "panels": [
-    {
-      "cacheTimeout": null,
-      "colorBackground": false,
-      "colorValue": true,
-      "colors": [
-        "#299c46",
-        
"rgba(237, 129, 40, 0.89)", - "#d44a3a" - ], - "datasource": null, - "format": "celsius", - "gauge": { - "maxValue": 100, - "minValue": 0, - "show": false, - "thresholdLabels": false, - "thresholdMarkers": true - }, - "gridPos": { - "h": 8, - "w": 12, - "x": 0, - "y": 0 - }, - "id": 6, - "interval": null, - "links": [], - "mappingType": 1, - "mappingTypes": [ - { - "name": "value to text", - "value": 1 - }, - { - "name": "range to text", - "value": 2 - } - ], - "maxDataPoints": 100, - "nullPointMode": "connected", - "nullText": null, - "options": {}, - "postfix": "", - "postfixFontSize": "50%", - "prefix": "", - "prefixFontSize": "50%", - "rangeMaps": [ - { - "from": "null", - "text": "N/A", - "to": "null" - } - ], - "sparkline": { - "fillColor": "rgba(31, 118, 189, 0.18)", - "full": false, - "lineColor": "rgb(31, 120, 193)", - "show": true, - "ymax": null, - "ymin": null - }, - "tableColumn": "", - "targets": [ - { - "alias": "lastest_temperature", - "refId": "A", - "sql": "select ts, temp from test.stream_temp_last where ts >= $from and ts < $to", - "target": "select metric", - "type": "timeserie" - } - ], - "thresholds": "20,30", - "timeFrom": null, - "timeShift": null, - "title": "最新温度", - "type": "singlestat", - "valueFontSize": "80%", - "valueMaps": [ - { - "op": "=", - "text": "N/A", - "value": "null" - } - ], - "valueName": "current" - }, - { - "datasource": null, - "gridPos": { - "h": 8, - "w": 12, - "x": 12, - "y": 0 - }, - "id": 8, - "options": { - "fieldOptions": { - "calcs": [ - "last" - ], - "defaults": { - "decimals": 2, - "mappings": [], - "max": 100, - "min": 0, - "thresholds": [ - { - "color": "green", - "value": null - }, - { - "color": "red", - "value": 80 - } - ], - "title": "" - }, - "override": {}, - "values": false - }, - "orientation": "auto", - "showThresholdLabels": false, - "showThresholdMarkers": true - }, - "pluginVersion": "6.4.3", - "targets": [ - { - "alias": "maxHumidity", - "refId": "A", - "sql": "select ts, humidity from 
test.stream_humidity_max where ts >= $from and ts < $to", - "target": "select metric", - "type": "timeserie" - } - ], - "timeFrom": null, - "timeShift": null, - "title": "最大湿度", - "type": "gauge" - }, - { - "aliasColors": {}, - "bars": true, - "dashLength": 10, - "dashes": false, - "datasource": null, - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 10, - "w": 12, - "x": 0, - "y": 8 - }, - "id": 4, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": false, - "linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "alias": "avgTemperature", - "refId": "A", - "sql": "select ts, temp from test.stream_temp_avg where ts >= $from and ts < $to", - "target": "select metric", - "type": "timeserie" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "平均温度", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "celsius", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - }, - { - "aliasColors": {}, - "bars": false, - "dashLength": 10, - "dashes": false, - "datasource": null, - "fill": 1, - "fillGradient": 0, - "gridPos": { - "h": 10, - "w": 12, - "x": 12, - "y": 8 - }, - "id": 10, - "legend": { - "avg": false, - "current": false, - "max": false, - "min": false, - "show": true, - "total": false, - "values": false - }, - "lines": true, - 
"linewidth": 1, - "nullPointMode": "null", - "options": { - "dataLinks": [] - }, - "percentage": false, - "pointradius": 2, - "points": false, - "renderer": "flot", - "seriesOverrides": [], - "spaceLength": 10, - "stack": false, - "steppedLine": false, - "targets": [ - { - "alias": "max", - "refId": "A", - "sql": "select ts, max_temp from test.stream_sensor where ts >= $from and ts < $to", - "target": "select metric", - "type": "timeserie" - }, - { - "alias": "avg", - "refId": "B", - "sql": "select ts, avg_temp from test.stream_sensor where ts >= $from and ts < $to", - "target": "select metric", - "type": "timeserie" - }, - { - "alias": "min", - "refId": "C", - "sql": "select ts, min_temp from test.stream_sensor where ts >= $from and ts < $to", - "target": "select metric", - "type": "timeserie" - } - ], - "thresholds": [], - "timeFrom": null, - "timeRegions": [], - "timeShift": null, - "title": "某传感器", - "tooltip": { - "shared": true, - "sort": 0, - "value_type": "individual" - }, - "type": "graph", - "xaxis": { - "buckets": null, - "mode": "time", - "name": null, - "show": true, - "values": [] - }, - "yaxes": [ - { - "format": "celsius", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - }, - { - "format": "short", - "label": null, - "logBase": 1, - "max": null, - "min": null, - "show": true - } - ], - "yaxis": { - "align": false, - "alignLevel": null - } - } - ], - "refresh": "5s", - "schemaVersion": 20, - "style": "dark", - "tags": [], - "templating": { - "list": [] - }, - "time": { - "from": "now-5m", - "to": "now" - }, - "timepicker": { - "refresh_intervals": [ - "5s", - "10s", - "30s", - "1m", - "5m", - "15m", - "30m", - "1h", - "2h", - "1d" - ] - }, - "timezone": "", - "title": "sensor_info", - "uid": "dGSoaTLWz", - "version": 2 -} \ No newline at end of file diff --git a/importSampleData/data/camera_detection.json b/importSampleData/data/camera_detection.json deleted file mode 100644 index 
cf67e38fa71255fc63ada2a05f1891e2e509fc2f..0000000000000000000000000000000000000000 --- a/importSampleData/data/camera_detection.json +++ /dev/null @@ -1,1000 +0,0 @@ -{"battery_voltage":0.80233014,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:00.000"} -{"battery_voltage":0.83228004,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:01.000"} -{"battery_voltage":0.7123188,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:02.000"} -{"battery_voltage":0.5328185,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:03.000"} -{"battery_voltage":0.54848474,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:04.000"} -{"battery_voltage":0.7576063,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:05.000"} -{"battery_voltage":0.60713196,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:06.000"} -{"battery_voltage":0.65902907,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:07.000"} -{"battery_voltage":0.64151704,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:08.000"} -{"battery_voltage":0.8395423,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:09.000"} -{"battery_voltage":0.60159343,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:10.000"} -{"battery_voltage":0.7853366,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:11.000"} 
-{"battery_voltage":0.6465571,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:12.000"} -{"battery_voltage":0.8762865,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:13.000"} -{"battery_voltage":0.9326675,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:14.000"} -{"battery_voltage":0.76191014,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:15.000"} -{"battery_voltage":0.57916415,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:16.000"} -{"battery_voltage":0.98762083,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:17.000"} -{"battery_voltage":0.7974043,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:18.000"} -{"battery_voltage":0.8460123,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:19.000"} -{"battery_voltage":0.5866331,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:20.000"} -{"battery_voltage":0.7720778,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:21.000"} -{"battery_voltage":0.7115761,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:22.000"} -{"battery_voltage":0.62677026,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:23.000"} -{"battery_voltage":0.8943025,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:24.000"} 
-{"battery_voltage":0.94027156,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:25.000"} -{"battery_voltage":0.94718087,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:26.000"} -{"battery_voltage":0.9884584,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:27.000"} -{"battery_voltage":0.6111447,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:28.000"} -{"battery_voltage":0.6207575,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:29.000"} -{"battery_voltage":0.9664232,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:30.000"} -{"battery_voltage":0.9005275,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:31.000"} -{"battery_voltage":0.59146243,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:32.000"} -{"battery_voltage":0.948496,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:33.000"} -{"battery_voltage":0.98946464,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:34.000"} -{"battery_voltage":0.5454186,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:35.000"} -{"battery_voltage":0.9634934,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:36.000"} -{"battery_voltage":0.673977,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:37.000"} 
-{"battery_voltage":0.8554536,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:38.000"} -{"battery_voltage":0.8247447,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:39.000"} -{"battery_voltage":0.87791175,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:40.000"} -{"battery_voltage":0.56532556,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:41.000"} -{"battery_voltage":0.9481709,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:42.000"} -{"battery_voltage":0.8605739,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:43.000"} -{"battery_voltage":0.54276025,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:44.000"} -{"battery_voltage":0.8113642,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:45.000"} -{"battery_voltage":0.6184113,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:46.000"} -{"battery_voltage":0.59362304,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:47.000"} -{"battery_voltage":0.8140491,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:48.000"} -{"battery_voltage":0.6406652,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:49.000"} -{"battery_voltage":0.7174562,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:50.000"} 
-{"battery_voltage":0.77507347,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:51.000"}
-{"battery_voltage":0.8645904,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:52.000"}
-{"battery_voltage":0.5002569,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:53.000"}
-{"battery_voltage":0.6999919,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:54.000"}
-{"battery_voltage":0.8019891,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:55.000"}
-{"battery_voltage":0.51483566,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:56.000"}
-{"battery_voltage":0.5014215,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:57.000"}
-{"battery_voltage":0.7949171,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:00:58.000"}
-{"battery_voltage":0.90770257,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:00:59.000"}
-{"battery_voltage":0.7292212,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:00.000"}
-{"battery_voltage":0.5131326,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:01.000"}
-{"battery_voltage":0.6248466,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:02.000"}
-{"battery_voltage":0.6237333,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:03.000"}
-{"battery_voltage":0.79631186,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:04.000"}
-{"battery_voltage":0.84691906,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:05.000"}
-{"battery_voltage":0.76960504,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:06.000"}
-{"battery_voltage":0.8753815,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:07.000"}
-{"battery_voltage":0.8765806,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:08.000"}
-{"battery_voltage":0.6778836,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:09.000"}
-{"battery_voltage":0.615915,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:10.000"}
-{"battery_voltage":0.7491971,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:11.000"}
-{"battery_voltage":0.51259696,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:12.000"}
-{"battery_voltage":0.79469156,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:13.000"}
-{"battery_voltage":0.7860434,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:14.000"}
-{"battery_voltage":0.70588136,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:15.000"}
-{"battery_voltage":0.7458037,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:16.000"}
-{"battery_voltage":0.8986043,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:17.000"}
-{"battery_voltage":0.8915175,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:18.000"}
-{"battery_voltage":0.56520694,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:19.000"}
-{"battery_voltage":0.86991286,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:20.000"}
-{"battery_voltage":0.5491919,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:21.000"}
-{"battery_voltage":0.5498648,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:22.000"}
-{"battery_voltage":0.5380951,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:23.000"}
-{"battery_voltage":0.57982546,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:24.000"}
-{"battery_voltage":0.6613053,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:25.000"}
-{"battery_voltage":0.7854258,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:26.000"}
-{"battery_voltage":0.84208757,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:27.000"}
-{"battery_voltage":0.7622499,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:28.000"}
-{"battery_voltage":0.8581842,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:29.000"}
-{"battery_voltage":0.506413,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:30.000"}
-{"battery_voltage":0.54901546,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:31.000"}
-{"battery_voltage":0.9132271,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":1,"ts":"2019-12-01 00:01:32.000"}
-{"battery_voltage":0.6721575,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:33.000"}
-{"battery_voltage":0.6082356,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:34.000"}
-{"battery_voltage":0.70103544,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:35.000"}
-{"battery_voltage":0.58433986,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:36.000"}
-{"battery_voltage":0.91396403,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:37.000"}
-{"battery_voltage":0.52896315,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:38.000"}
-{"battery_voltage":0.7057702,"home_id":"603","object_kind":"night","object_type":1,"sensor_id":"s100","states":0,"ts":"2019-12-01 00:01:39.000"}
-{"battery_voltage":0.89037704,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:00.000"}
-{"battery_voltage":0.5267473,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:01.000"}
-{"battery_voltage":0.6253811,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:02.000"}
-{"battery_voltage":0.986941,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:03.000"}
-{"battery_voltage":0.51076686,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:04.000"}
-{"battery_voltage":0.54648507,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:05.000"}
-{"battery_voltage":0.6559428,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:06.000"}
-{"battery_voltage":0.7436196,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:07.000"}
-{"battery_voltage":0.83591455,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:08.000"}
-{"battery_voltage":0.9501376,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:09.000"}
-{"battery_voltage":0.65966564,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:10.000"}
-{"battery_voltage":0.7002162,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:11.000"}
-{"battery_voltage":0.8225194,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:12.000"}
-{"battery_voltage":0.6697984,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:13.000"}
-{"battery_voltage":0.6181637,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:14.000"}
-{"battery_voltage":0.51787734,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:15.000"}
-{"battery_voltage":0.8129183,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:16.000"}
-{"battery_voltage":0.5362242,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:17.000"}
-{"battery_voltage":0.93992245,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:18.000"}
-{"battery_voltage":0.92375016,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:19.000"}
-{"battery_voltage":0.6239222,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:20.000"}
-{"battery_voltage":0.5375186,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:21.000"}
-{"battery_voltage":0.81466585,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:22.000"}
-{"battery_voltage":0.8160017,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:23.000"}
-{"battery_voltage":0.5074137,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:24.000"}
-{"battery_voltage":0.5343781,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:25.000"}
-{"battery_voltage":0.8245942,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:26.000"}
-{"battery_voltage":0.91740286,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:27.000"}
-{"battery_voltage":0.8306966,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:28.000"}
-{"battery_voltage":0.65525514,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:29.000"}
-{"battery_voltage":0.9835472,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:30.000"}
-{"battery_voltage":0.6547742,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:31.000"}
-{"battery_voltage":0.7086629,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:32.000"}
-{"battery_voltage":0.70336837,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:33.000"}
-{"battery_voltage":0.9790882,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:34.000"}
-{"battery_voltage":0.8958361,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:35.000"}
-{"battery_voltage":0.50759065,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:36.000"}
-{"battery_voltage":0.9523881,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:37.000"}
-{"battery_voltage":0.52146083,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:38.000"}
-{"battery_voltage":0.6739295,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:39.000"}
-{"battery_voltage":0.91997373,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:40.000"}
-{"battery_voltage":0.5621818,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:41.000"}
-{"battery_voltage":0.9174738,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:42.000"}
-{"battery_voltage":0.5038406,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:43.000"}
-{"battery_voltage":0.68513376,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:44.000"}
-{"battery_voltage":0.821602,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:45.000"}
-{"battery_voltage":0.89556265,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:46.000"}
-{"battery_voltage":0.67343193,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:47.000"}
-{"battery_voltage":0.91104645,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:48.000"}
-{"battery_voltage":0.79959714,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:49.000"}
-{"battery_voltage":0.7067905,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:50.000"}
-{"battery_voltage":0.95580685,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:51.000"}
-{"battery_voltage":0.6144588,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:52.000"}
-{"battery_voltage":0.67538255,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:53.000"}
-{"battery_voltage":0.65190107,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:54.000"}
-{"battery_voltage":0.8357633,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:55.000"}
-{"battery_voltage":0.9815697,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:56.000"}
-{"battery_voltage":0.90397054,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:57.000"}
-{"battery_voltage":0.9738802,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:00:58.000"}
-{"battery_voltage":0.9766294,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:00:59.000"}
-{"battery_voltage":0.5907954,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:00.000"}
-{"battery_voltage":0.9156205,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:01.000"}
-{"battery_voltage":0.92765516,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:02.000"}
-{"battery_voltage":0.63674736,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:03.000"}
-{"battery_voltage":0.95488065,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:04.000"}
-{"battery_voltage":0.7493162,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:05.000"}
-{"battery_voltage":0.98794764,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:06.000"}
-{"battery_voltage":0.5224953,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:07.000"}
-{"battery_voltage":0.9759531,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:08.000"}
-{"battery_voltage":0.76789546,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:09.000"}
-{"battery_voltage":0.9325875,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:10.000"}
-{"battery_voltage":0.7892754,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:11.000"}
-{"battery_voltage":0.7753079,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:12.000"}
-{"battery_voltage":0.7549327,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:13.000"}
-{"battery_voltage":0.745397,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:14.000"}
-{"battery_voltage":0.6312453,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:15.000"}
-{"battery_voltage":0.68574333,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:16.000"}
-{"battery_voltage":0.70787597,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:17.000"}
-{"battery_voltage":0.9508138,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:18.000"}
-{"battery_voltage":0.6369623,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:19.000"}
-{"battery_voltage":0.92772424,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:20.000"}
-{"battery_voltage":0.9945661,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:21.000"}
-{"battery_voltage":0.585473,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:22.000"}
-{"battery_voltage":0.7667257,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:23.000"}
-{"battery_voltage":0.9067954,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:24.000"}
-{"battery_voltage":0.62860376,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:25.000"}
-{"battery_voltage":0.66754717,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:26.000"}
-{"battery_voltage":0.5024399,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:27.000"}
-{"battery_voltage":0.6147868,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:28.000"}
-{"battery_voltage":0.9749687,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:29.000"}
-{"battery_voltage":0.9813121,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:30.000"}
-{"battery_voltage":0.85633135,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:31.000"}
-{"battery_voltage":0.70376605,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:32.000"}
-{"battery_voltage":0.6737342,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:33.000"}
-{"battery_voltage":0.79878306,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:34.000"}
-{"battery_voltage":0.91642797,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:35.000"}
-{"battery_voltage":0.96835375,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:36.000"}
-{"battery_voltage":0.86015654,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:37.000"}
-{"battery_voltage":0.725077,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":0,"ts":"2019-12-01 00:01:38.000"}
-{"battery_voltage":0.736246,"home_id":"604","object_kind":"day","object_type":2,"sensor_id":"s101","states":1,"ts":"2019-12-01 00:01:39.000"}
-{"battery_voltage":0.68116575,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:00.000"}
-{"battery_voltage":0.5239342,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:01.000"}
-{"battery_voltage":0.8781051,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:02.000"}
-{"battery_voltage":0.61049944,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:03.000"}
-{"battery_voltage":0.6954212,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:04.000"}
-{"battery_voltage":0.57484275,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:05.000"}
-{"battery_voltage":0.88279426,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:06.000"}
-{"battery_voltage":0.727722,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:07.000"}
-{"battery_voltage":0.54098475,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:08.000"}
-{"battery_voltage":0.6331909,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:09.000"}
-{"battery_voltage":0.5495351,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:10.000"}
-{"battery_voltage":0.57960176,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:11.000"}
-{"battery_voltage":0.8157383,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:12.000"}
-{"battery_voltage":0.9837526,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:13.000"}
-{"battery_voltage":0.66909057,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:14.000"}
-{"battery_voltage":0.918733,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:15.000"}
-{"battery_voltage":0.75111043,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:16.000"}
-{"battery_voltage":0.73151976,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:17.000"}
-{"battery_voltage":0.87203634,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:18.000"}
-{"battery_voltage":0.6242085,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:19.000"}
-{"battery_voltage":0.7118511,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:20.000"}
-{"battery_voltage":0.8284241,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:21.000"}
-{"battery_voltage":0.81839544,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:22.000"}
-{"battery_voltage":0.6934307,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:23.000"}
-{"battery_voltage":0.5631822,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:24.000"}
-{"battery_voltage":0.7556696,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:25.000"}
-{"battery_voltage":0.9973032,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:26.000"}
-{"battery_voltage":0.8636595,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:27.000"}
-{"battery_voltage":0.7570118,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:28.000"}
-{"battery_voltage":0.7728013,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:29.000"}
-{"battery_voltage":0.6466422,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:30.000"}
-{"battery_voltage":0.57088935,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:31.000"}
-{"battery_voltage":0.8156741,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:32.000"}
-{"battery_voltage":0.5007058,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:33.000"}
-{"battery_voltage":0.94389606,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:34.000"}
-{"battery_voltage":0.7980893,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:35.000"}
-{"battery_voltage":0.9149192,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:36.000"}
-{"battery_voltage":0.5329674,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:37.000"}
-{"battery_voltage":0.667759,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:38.000"}
-{"battery_voltage":0.8095149,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:39.000"}
-{"battery_voltage":0.66232204,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:40.000"}
-{"battery_voltage":0.54209346,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:41.000"}
-{"battery_voltage":0.8437841,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:42.000"}
-{"battery_voltage":0.51106554,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:43.000"}
-{"battery_voltage":0.5391229,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:44.000"}
-{"battery_voltage":0.6142876,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:45.000"}
-{"battery_voltage":0.63602245,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:46.000"}
-{"battery_voltage":0.83091503,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:47.000"}
-{"battery_voltage":0.98437226,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:48.000"}
-{"battery_voltage":0.6822,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:49.000"}
-{"battery_voltage":0.60308766,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:50.000"}
-{"battery_voltage":0.88321567,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:51.000"}
-{"battery_voltage":0.64395475,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:52.000"}
-{"battery_voltage":0.726102,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:53.000"}
-{"battery_voltage":0.6945282,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:54.000"}
-{"battery_voltage":0.5037642,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:55.000"}
-{"battery_voltage":0.50224465,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:56.000"}
-{"battery_voltage":0.61892045,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:57.000"}
-{"battery_voltage":0.8965783,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:00:58.000"}
-{"battery_voltage":0.72004735,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:00:59.000"}
-{"battery_voltage":0.89201033,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:00.000"}
-{"battery_voltage":0.55109394,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:01.000"}
-{"battery_voltage":0.5819292,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:02.000"}
-{"battery_voltage":0.56059873,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:03.000"}
-{"battery_voltage":0.99916655,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:04.000"}
-{"battery_voltage":0.5516443,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:05.000"}
-{"battery_voltage":0.65729505,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:06.000"}
-{"battery_voltage":0.57163346,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:07.000"}
-{"battery_voltage":0.843902,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:08.000"}
-{"battery_voltage":0.51640797,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:09.000"}
-{"battery_voltage":0.6674092,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:10.000"}
-{"battery_voltage":0.67429006,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:11.000"}
-{"battery_voltage":0.95735073,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:12.000"}
-{"battery_voltage":0.5792276,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:13.000"}
-{"battery_voltage":0.63157403,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:14.000"}
-{"battery_voltage":0.59447736,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:15.000"}
-{"battery_voltage":0.8206818,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:16.000"}
-{"battery_voltage":0.8141984,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:17.000"}
-{"battery_voltage":0.66849256,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:18.000"}
-{"battery_voltage":0.71412754,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:19.000"}
-{"battery_voltage":0.6733996,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:20.000"}
-{"battery_voltage":0.9024965,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:21.000"}
-{"battery_voltage":0.6886468,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:22.000"}
-{"battery_voltage":0.7236516,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:23.000"}
-{"battery_voltage":0.5494264,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:24.000"}
-{"battery_voltage":0.51326233,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:25.000"}
-{"battery_voltage":0.89173627,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:26.000"}
-{"battery_voltage":0.98756754,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:27.000"}
-{"battery_voltage":0.7213226,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:28.000"}
-{"battery_voltage":0.8062184,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:29.000"}
-{"battery_voltage":0.5482464,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:30.000"}
-{"battery_voltage":0.61909574,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:31.000"}
-{"battery_voltage":0.7190039,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:32.000"}
-{"battery_voltage":0.60273135,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:33.000"}
-{"battery_voltage":0.7350895,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":1,"ts":"2019-12-01 00:01:34.000"}
-{"battery_voltage":0.5447789,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:35.000"}
-{"battery_voltage":0.509202,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:36.000"}
-{"battery_voltage":0.97541416,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:37.000"}
-{"battery_voltage":0.7516321,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:38.000"}
-{"battery_voltage":0.7726933,"home_id":"605","object_kind":"all","object_type":3,"sensor_id":"s102","states":0,"ts":"2019-12-01 00:01:39.000"}
-{"battery_voltage":0.60115623,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:00.000"}
-{"battery_voltage":0.9755862,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:01.000"}
-{"battery_voltage":0.9823349,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:02.000"}
-{"battery_voltage":0.6357885,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:03.000"}
-{"battery_voltage":0.6279355,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:04.000"}
-{"battery_voltage":0.59463865,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:05.000"}
-{"battery_voltage":0.67826885,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:06.000"}
-{"battery_voltage":0.8077018,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:07.000"}
-{"battery_voltage":0.8912208,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:08.000"}
-{"battery_voltage":0.8821316,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:09.000"}
-{"battery_voltage":0.56158596,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:10.000"}
-{"battery_voltage":0.76752067,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:11.000"}
-{"battery_voltage":0.6092849,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:12.000"}
-{"battery_voltage":0.8139862,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:13.000"}
-{"battery_voltage":0.7290665,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:14.000"}
-{"battery_voltage":0.93346804,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:15.000"}
-{"battery_voltage":0.7031946,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:16.000"}
-{"battery_voltage":0.73181903,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:17.000"}
-{"battery_voltage":0.8115653,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:18.000"}
-{"battery_voltage":0.66609514,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:19.000"}
-{"battery_voltage":0.8918715,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:20.000"}
-{"battery_voltage":0.89229536,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:21.000"}
-{"battery_voltage":0.6547448,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:22.000"}
-{"battery_voltage":0.5263817,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:23.000"}
-{"battery_voltage":0.69104654,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:24.000"}
-{"battery_voltage":0.64589655,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:25.000"}
-{"battery_voltage":0.7149786,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:26.000"}
-{"battery_voltage":0.6625407,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:27.000"}
-{"battery_voltage":0.7064498,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:28.000"}
-{"battery_voltage":0.8864048,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:29.000"}
-{"battery_voltage":0.56908727,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:30.000"}
-{"battery_voltage":0.66720784,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:31.000"}
-{"battery_voltage":0.8207879,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:32.000"}
-{"battery_voltage":0.7704214,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:33.000"}
-{"battery_voltage":0.74916565,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:34.000"}
-{"battery_voltage":0.53460443,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:35.000"}
-{"battery_voltage":0.70717573,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:36.000"}
-{"battery_voltage":0.9661542,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:37.000"} -{"battery_voltage":0.8559648,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:38.000"} -{"battery_voltage":0.5753055,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:39.000"} -{"battery_voltage":0.8062254,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:40.000"} -{"battery_voltage":0.8050467,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:41.000"} -{"battery_voltage":0.5420858,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:42.000"} -{"battery_voltage":0.89997375,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:43.000"} -{"battery_voltage":0.5517962,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:44.000"} -{"battery_voltage":0.7491184,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:45.000"} -{"battery_voltage":0.9720428,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:46.000"} -{"battery_voltage":0.8925575,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:47.000"} -{"battery_voltage":0.80679524,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:48.000"} -{"battery_voltage":0.80774236,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:49.000"} 
-{"battery_voltage":0.53613126,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:50.000"} -{"battery_voltage":0.9552542,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:51.000"} -{"battery_voltage":0.9303039,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:52.000"} -{"battery_voltage":0.9168983,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:53.000"} -{"battery_voltage":0.78906983,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:54.000"} -{"battery_voltage":0.5393992,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:55.000"} -{"battery_voltage":0.7752098,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:56.000"} -{"battery_voltage":0.7393297,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:00:57.000"} -{"battery_voltage":0.5901948,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:58.000"} -{"battery_voltage":0.82910055,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:00:59.000"} -{"battery_voltage":0.88593745,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:00.000"} -{"battery_voltage":0.60122955,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:01.000"} -{"battery_voltage":0.878977,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:02.000"} 
-{"battery_voltage":0.75698256,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:03.000"} -{"battery_voltage":0.50624055,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:04.000"} -{"battery_voltage":0.9885113,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:05.000"} -{"battery_voltage":0.74340963,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:06.000"} -{"battery_voltage":0.9759798,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:07.000"} -{"battery_voltage":0.73438704,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:08.000"} -{"battery_voltage":0.7121439,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:09.000"} -{"battery_voltage":0.7707707,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:10.000"} -{"battery_voltage":0.8732446,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:11.000"} -{"battery_voltage":0.8968997,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:12.000"} -{"battery_voltage":0.82115555,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:13.000"} -{"battery_voltage":0.85465467,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:14.000"} -{"battery_voltage":0.7902354,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:15.000"} 
-{"battery_voltage":0.50993747,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:16.000"} -{"battery_voltage":0.8614131,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:17.000"} -{"battery_voltage":0.92145103,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:18.000"} -{"battery_voltage":0.9863989,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:19.000"} -{"battery_voltage":0.58747536,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:20.000"} -{"battery_voltage":0.8356127,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:21.000"} -{"battery_voltage":0.8804123,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:22.000"} -{"battery_voltage":0.54516625,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:23.000"} -{"battery_voltage":0.54958564,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:24.000"} -{"battery_voltage":0.5939968,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:25.000"} -{"battery_voltage":0.5792352,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:26.000"} -{"battery_voltage":0.5488316,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:27.000"} -{"battery_voltage":0.9730228,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:28.000"} 
-{"battery_voltage":0.5745121,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:29.000"} -{"battery_voltage":0.8696457,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:30.000"} -{"battery_voltage":0.94995236,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:31.000"} -{"battery_voltage":0.9038729,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:32.000"} -{"battery_voltage":0.7729239,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:33.000"} -{"battery_voltage":0.6789726,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:34.000"} -{"battery_voltage":0.8997017,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:35.000"} -{"battery_voltage":0.72364557,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:36.000"} -{"battery_voltage":0.88753945,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":0,"ts":"2019-12-01 00:01:37.000"} -{"battery_voltage":0.7016446,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:38.000"} -{"battery_voltage":0.53595066,"home_id":"606","object_kind":"night","object_type":1,"sensor_id":"s103","states":1,"ts":"2019-12-01 00:01:39.000"} -{"battery_voltage":0.8033614,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:00.000"} -{"battery_voltage":0.8147938,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:01.000"} 
-{"battery_voltage":0.6050153,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:02.000"} -{"battery_voltage":0.7920519,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:03.000"} -{"battery_voltage":0.733798,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:04.000"} -{"battery_voltage":0.7512984,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:05.000"} -{"battery_voltage":0.972511,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:06.000"} -{"battery_voltage":0.8678342,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:07.000"} -{"battery_voltage":0.5627333,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:08.000"} -{"battery_voltage":0.50696725,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:09.000"} -{"battery_voltage":0.7697411,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:10.000"} -{"battery_voltage":0.7384832,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:11.000"} -{"battery_voltage":0.57802075,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:12.000"} -{"battery_voltage":0.6342828,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:13.000"} -{"battery_voltage":0.8889152,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:14.000"} 
-{"battery_voltage":0.7986384,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:15.000"} -{"battery_voltage":0.7695893,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:16.000"} -{"battery_voltage":0.6342156,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:17.000"} -{"battery_voltage":0.82402253,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:18.000"} -{"battery_voltage":0.9537116,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:19.000"} -{"battery_voltage":0.85123,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:20.000"} -{"battery_voltage":0.94443214,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:21.000"} -{"battery_voltage":0.81446874,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:22.000"} -{"battery_voltage":0.5079787,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:23.000"} -{"battery_voltage":0.82231855,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:24.000"} -{"battery_voltage":0.54318166,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:25.000"} -{"battery_voltage":0.887102,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:26.000"} -{"battery_voltage":0.7985031,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:27.000"} 
-{"battery_voltage":0.9324222,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:28.000"} -{"battery_voltage":0.9568784,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:29.000"} -{"battery_voltage":0.84419024,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:30.000"} -{"battery_voltage":0.63686687,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:31.000"} -{"battery_voltage":0.862638,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:32.000"} -{"battery_voltage":0.63915664,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:33.000"} -{"battery_voltage":0.94823104,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:34.000"} -{"battery_voltage":0.80180836,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:35.000"} -{"battery_voltage":0.56163365,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:36.000"} -{"battery_voltage":0.60698605,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:37.000"} -{"battery_voltage":0.90496016,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:38.000"} -{"battery_voltage":0.79479086,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:39.000"} -{"battery_voltage":0.5411746,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:40.000"} 
-{"battery_voltage":0.7360853,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:41.000"} -{"battery_voltage":0.8097295,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:42.000"} -{"battery_voltage":0.7171494,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:43.000"} -{"battery_voltage":0.849315,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:44.000"} -{"battery_voltage":0.663502,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:45.000"} -{"battery_voltage":0.51946706,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:46.000"} -{"battery_voltage":0.85430115,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:47.000"} -{"battery_voltage":0.82286215,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:48.000"} -{"battery_voltage":0.9102302,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:49.000"} -{"battery_voltage":0.94066036,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:50.000"} -{"battery_voltage":0.8434773,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:51.000"} -{"battery_voltage":0.95908654,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:52.000"} -{"battery_voltage":0.5931864,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:53.000"} 
-{"battery_voltage":0.9871588,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:54.000"} -{"battery_voltage":0.8742759,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:55.000"} -{"battery_voltage":0.50797683,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:56.000"} -{"battery_voltage":0.56906056,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:57.000"} -{"battery_voltage":0.9103812,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:00:58.000"} -{"battery_voltage":0.61753106,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:00:59.000"} -{"battery_voltage":0.7401742,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:00.000"} -{"battery_voltage":0.95390666,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:01.000"} -{"battery_voltage":0.5069772,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:02.000"} -{"battery_voltage":0.51301944,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:03.000"} -{"battery_voltage":0.72201246,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:04.000"} -{"battery_voltage":0.8913778,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:05.000"} -{"battery_voltage":0.976287,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:06.000"} 
-{"battery_voltage":0.991058,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:07.000"} -{"battery_voltage":0.99977124,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:08.000"} -{"battery_voltage":0.7334305,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:09.000"} -{"battery_voltage":0.552872,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:10.000"} -{"battery_voltage":0.7832855,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:11.000"} -{"battery_voltage":0.70349,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:12.000"} -{"battery_voltage":0.964519,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:13.000"} -{"battery_voltage":0.74284106,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:14.000"} -{"battery_voltage":0.66428864,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:15.000"} -{"battery_voltage":0.5493044,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:16.000"} -{"battery_voltage":0.74065554,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:17.000"} -{"battery_voltage":0.96337205,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:18.000"} -{"battery_voltage":0.67027295,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:19.000"} 
-{"battery_voltage":0.81034344,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:20.000"} -{"battery_voltage":0.6549411,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:21.000"} -{"battery_voltage":0.5835841,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:22.000"} -{"battery_voltage":0.96476233,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:23.000"} -{"battery_voltage":0.7508897,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:24.000"} -{"battery_voltage":0.5903082,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:25.000"} -{"battery_voltage":0.7541075,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:26.000"} -{"battery_voltage":0.8509584,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:27.000"} -{"battery_voltage":0.58535063,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:28.000"} -{"battery_voltage":0.51696,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:29.000"} -{"battery_voltage":0.8245963,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:30.000"} -{"battery_voltage":0.5676064,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:31.000"} -{"battery_voltage":0.9954416,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:32.000"} 
-{"battery_voltage":0.6617937,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:33.000"} -{"battery_voltage":0.5499162,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:34.000"} -{"battery_voltage":0.64593154,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":1,"ts":"2019-12-01 00:01:35.000"} -{"battery_voltage":0.946115,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:36.000"} -{"battery_voltage":0.5849637,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:37.000"} -{"battery_voltage":0.68064904,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:38.000"} -{"battery_voltage":0.8852545,"home_id":"603","object_kind":"day","object_type":2,"sensor_id":"s104","states":0,"ts":"2019-12-01 00:01:39.000"} -{"battery_voltage":0.70754087,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:00.000"} -{"battery_voltage":0.6483855,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:01.000"} -{"battery_voltage":0.5671366,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:02.000"} -{"battery_voltage":0.76337266,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:03.000"} -{"battery_voltage":0.9920288,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:04.000"} -{"battery_voltage":0.5574518,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:05.000"} 
-{"battery_voltage":0.59904534,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:06.000"}
-{"battery_voltage":0.6480302,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:07.000"}
-{"battery_voltage":0.63429725,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:08.000"}
-{"battery_voltage":0.85299885,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:09.000"}
-{"battery_voltage":0.77297366,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:10.000"}
-{"battery_voltage":0.7668507,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:11.000"}
-{"battery_voltage":0.57824785,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:12.000"}
-{"battery_voltage":0.76801443,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:13.000"}
-{"battery_voltage":0.8984245,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:14.000"}
-{"battery_voltage":0.52167296,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:15.000"}
-{"battery_voltage":0.8797653,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:16.000"}
-{"battery_voltage":0.70621747,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:17.000"}
-{"battery_voltage":0.8416389,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:18.000"}
-{"battery_voltage":0.5681568,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:19.000"}
-{"battery_voltage":0.9125648,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:20.000"}
-{"battery_voltage":0.5100865,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:21.000"}
-{"battery_voltage":0.9596597,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:22.000"}
-{"battery_voltage":0.5011256,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:23.000"}
-{"battery_voltage":0.8343365,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:24.000"}
-{"battery_voltage":0.64652085,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:25.000"}
-{"battery_voltage":0.6358192,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:26.000"}
-{"battery_voltage":0.92160124,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:27.000"}
-{"battery_voltage":0.909333,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:28.000"}
-{"battery_voltage":0.95970964,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:29.000"}
-{"battery_voltage":0.94331,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:30.000"}
-{"battery_voltage":0.65175146,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:31.000"}
-{"battery_voltage":0.69886935,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:32.000"}
-{"battery_voltage":0.9866854,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:33.000"}
-{"battery_voltage":0.5484814,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:34.000"}
-{"battery_voltage":0.6101544,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:35.000"}
-{"battery_voltage":0.8419212,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:36.000"}
-{"battery_voltage":0.6960639,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:37.000"}
-{"battery_voltage":0.8068489,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:38.000"}
-{"battery_voltage":0.68448293,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:39.000"}
-{"battery_voltage":0.8672006,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:40.000"}
-{"battery_voltage":0.9113866,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:41.000"}
-{"battery_voltage":0.8871064,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:42.000"}
-{"battery_voltage":0.96817946,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:43.000"}
-{"battery_voltage":0.5816642,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:44.000"}
-{"battery_voltage":0.6309987,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:45.000"}
-{"battery_voltage":0.9452791,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:46.000"}
-{"battery_voltage":0.98369205,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:47.000"}
-{"battery_voltage":0.7123141,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:48.000"}
-{"battery_voltage":0.9546062,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:49.000"}
-{"battery_voltage":0.92401385,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:50.000"}
-{"battery_voltage":0.59127367,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:51.000"}
-{"battery_voltage":0.87045366,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:52.000"}
-{"battery_voltage":0.8465115,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:53.000"}
-{"battery_voltage":0.91188776,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:54.000"}
-{"battery_voltage":0.61064494,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:55.000"}
-{"battery_voltage":0.84154475,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:00:56.000"}
-{"battery_voltage":0.69890535,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:57.000"}
-{"battery_voltage":0.57661706,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:58.000"}
-{"battery_voltage":0.89222425,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:00:59.000"}
-{"battery_voltage":0.56609154,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:00.000"}
-{"battery_voltage":0.9224727,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:01.000"}
-{"battery_voltage":0.8360301,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:02.000"}
-{"battery_voltage":0.91405284,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:03.000"}
-{"battery_voltage":0.8875489,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:04.000"}
-{"battery_voltage":0.6775255,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:05.000"}
-{"battery_voltage":0.71002764,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:06.000"}
-{"battery_voltage":0.7901696,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:07.000"}
-{"battery_voltage":0.84012544,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:08.000"}
-{"battery_voltage":0.7698927,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:09.000"}
-{"battery_voltage":0.6951759,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:10.000"}
-{"battery_voltage":0.5941455,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:11.000"}
-{"battery_voltage":0.8753067,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:12.000"}
-{"battery_voltage":0.8527192,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:13.000"}
-{"battery_voltage":0.7162281,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:14.000"}
-{"battery_voltage":0.96830696,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:15.000"}
-{"battery_voltage":0.82742965,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:16.000"}
-{"battery_voltage":0.62583256,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:17.000"}
-{"battery_voltage":0.8133428,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:18.000"}
-{"battery_voltage":0.73012495,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:19.000"}
-{"battery_voltage":0.8870168,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:20.000"}
-{"battery_voltage":0.592625,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:21.000"}
-{"battery_voltage":0.58833945,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:22.000"}
-{"battery_voltage":0.6206717,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:23.000"}
-{"battery_voltage":0.6431462,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:24.000"}
-{"battery_voltage":0.8724054,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:25.000"}
-{"battery_voltage":0.79947186,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:26.000"}
-{"battery_voltage":0.9971847,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:27.000"}
-{"battery_voltage":0.9268321,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:28.000"}
-{"battery_voltage":0.82837874,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:29.000"}
-{"battery_voltage":0.5304892,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:30.000"}
-{"battery_voltage":0.6329912,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:31.000"}
-{"battery_voltage":0.90618366,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:32.000"}
-{"battery_voltage":0.5784858,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:33.000"}
-{"battery_voltage":0.7942324,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:34.000"}
-{"battery_voltage":0.6310129,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:35.000"}
-{"battery_voltage":0.9656929,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:36.000"}
-{"battery_voltage":0.9464745,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:37.000"}
-{"battery_voltage":0.5906156,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":0,"ts":"2019-12-01 00:01:38.000"}
-{"battery_voltage":0.57623565,"home_id":"604","object_kind":"all","object_type":3,"sensor_id":"s105","states":1,"ts":"2019-12-01 00:01:39.000"}
-{"battery_voltage":0.8002974,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:00.000"}
-{"battery_voltage":0.65368044,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:01.000"}
-{"battery_voltage":0.71293247,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:02.000"}
-{"battery_voltage":0.9082031,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:03.000"}
-{"battery_voltage":0.7811729,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:04.000"}
-{"battery_voltage":0.96570766,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:05.000"}
-{"battery_voltage":0.8413833,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:06.000"}
-{"battery_voltage":0.5964865,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:07.000"}
-{"battery_voltage":0.8187906,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:08.000"}
-{"battery_voltage":0.95528543,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:09.000"}
-{"battery_voltage":0.8641478,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:10.000"}
-{"battery_voltage":0.9830004,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:11.000"}
-{"battery_voltage":0.88352764,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:12.000"}
-{"battery_voltage":0.9232228,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:13.000"}
-{"battery_voltage":0.95486975,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:14.000"}
-{"battery_voltage":0.94609356,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:15.000"}
-{"battery_voltage":0.61100274,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:16.000"}
-{"battery_voltage":0.5691416,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:17.000"}
-{"battery_voltage":0.9360826,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:18.000"}
-{"battery_voltage":0.8925245,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:19.000"}
-{"battery_voltage":0.6242925,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:20.000"}
-{"battery_voltage":0.7285948,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:21.000"}
-{"battery_voltage":0.74059856,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:22.000"}
-{"battery_voltage":0.64874685,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:23.000"}
-{"battery_voltage":0.7564658,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:24.000"}
-{"battery_voltage":0.98491573,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:25.000"}
-{"battery_voltage":0.598005,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:26.000"}
-{"battery_voltage":0.88058275,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:27.000"}
-{"battery_voltage":0.54105055,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:28.000"}
-{"battery_voltage":0.93672323,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:29.000"}
-{"battery_voltage":0.82872415,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:30.000"}
-{"battery_voltage":0.6971599,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:31.000"}
-{"battery_voltage":0.6769042,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:32.000"}
-{"battery_voltage":0.6805867,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:33.000"}
-{"battery_voltage":0.6872542,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:34.000"}
-{"battery_voltage":0.82297754,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:35.000"}
-{"battery_voltage":0.81444764,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:36.000"}
-{"battery_voltage":0.69297683,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:37.000"}
-{"battery_voltage":0.8391928,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:38.000"}
-{"battery_voltage":0.80736417,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:39.000"}
-{"battery_voltage":0.7868073,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:40.000"}
-{"battery_voltage":0.77172005,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:41.000"}
-{"battery_voltage":0.5137727,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:42.000"}
-{"battery_voltage":0.95526296,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:43.000"}
-{"battery_voltage":0.938064,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:44.000"}
-{"battery_voltage":0.9020388,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:45.000"}
-{"battery_voltage":0.9114888,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:46.000"}
-{"battery_voltage":0.6880104,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:47.000"}
-{"battery_voltage":0.9375304,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:48.000"}
-{"battery_voltage":0.7244901,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:49.000"}
-{"battery_voltage":0.82105714,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:50.000"}
-{"battery_voltage":0.6234149,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:51.000"}
-{"battery_voltage":0.92923963,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:52.000"}
-{"battery_voltage":0.6733919,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:53.000"}
-{"battery_voltage":0.76741683,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:54.000"}
-{"battery_voltage":0.5319273,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:00:55.000"}
-{"battery_voltage":0.68805224,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:56.000"}
-{"battery_voltage":0.7300814,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:57.000"}
-{"battery_voltage":0.6131429,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:58.000"}
-{"battery_voltage":0.6922425,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:00:59.000"}
-{"battery_voltage":0.9727907,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:00.000"}
-{"battery_voltage":0.82986295,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:01.000"}
-{"battery_voltage":0.5132921,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:02.000"}
-{"battery_voltage":0.77134275,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:03.000"}
-{"battery_voltage":0.5777383,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:04.000"}
-{"battery_voltage":0.7101292,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:05.000"}
-{"battery_voltage":0.6752328,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:06.000"}
-{"battery_voltage":0.6355128,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:07.000"}
-{"battery_voltage":0.9268579,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:08.000"}
-{"battery_voltage":0.8940948,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:09.000"}
-{"battery_voltage":0.8045571,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:10.000"}
-{"battery_voltage":0.6397352,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:11.000"}
-{"battery_voltage":0.5142179,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:12.000"}
-{"battery_voltage":0.57437795,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:13.000"}
-{"battery_voltage":0.5779674,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:14.000"}
-{"battery_voltage":0.5777746,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:15.000"}
-{"battery_voltage":0.79977393,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:16.000"}
-{"battery_voltage":0.91564786,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:17.000"}
-{"battery_voltage":0.83601356,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:18.000"}
-{"battery_voltage":0.60413766,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:19.000"}
-{"battery_voltage":0.98716986,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:20.000"}
-{"battery_voltage":0.93296355,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:21.000"}
-{"battery_voltage":0.90041673,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:22.000"}
-{"battery_voltage":0.5376759,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:23.000"}
-{"battery_voltage":0.71533316,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:24.000"}
-{"battery_voltage":0.69811344,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:25.000"}
-{"battery_voltage":0.9715346,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:26.000"}
-{"battery_voltage":0.9206581,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:27.000"}
-{"battery_voltage":0.8165749,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:28.000"}
-{"battery_voltage":0.6838542,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:29.000"}
-{"battery_voltage":0.87848604,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:30.000"}
-{"battery_voltage":0.67027926,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:31.000"}
-{"battery_voltage":0.90292645,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:32.000"}
-{"battery_voltage":0.58885974,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:33.000"}
-{"battery_voltage":0.6755761,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:34.000"}
-{"battery_voltage":0.58424705,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:35.000"}
-{"battery_voltage":0.8706522,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:36.000"}
-{"battery_voltage":0.5665725,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:37.000"}
-{"battery_voltage":0.8853537,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":0,"ts":"2019-12-01 00:01:38.000"}
-{"battery_voltage":0.74042374,"home_id":"605","object_kind":"night","object_type":1,"sensor_id":"s106","states":1,"ts":"2019-12-01 00:01:39.000"}
-{"battery_voltage":0.7546813,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:00.000"}
-{"battery_voltage":0.6428457,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:01.000"}
-{"battery_voltage":0.8217722,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:02.000"}
-{"battery_voltage":0.5497275,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:03.000"}
-{"battery_voltage":0.549164,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:04.000"}
-{"battery_voltage":0.99488986,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:05.000"}
-{"battery_voltage":0.65951693,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:06.000"}
-{"battery_voltage":0.98187494,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:07.000"}
-{"battery_voltage":0.51635957,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:08.000"}
-{"battery_voltage":0.71983063,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:09.000"}
-{"battery_voltage":0.9287454,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:10.000"}
-{"battery_voltage":0.764307,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:11.000"}
-{"battery_voltage":0.7559774,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:12.000"}
-{"battery_voltage":0.8555727,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:13.000"}
-{"battery_voltage":0.74285305,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:14.000"}
-{"battery_voltage":0.8345988,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:15.000"}
-{"battery_voltage":0.80865055,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:16.000"}
-{"battery_voltage":0.6373774,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:17.000"}
-{"battery_voltage":0.70070326,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:18.000"}
-{"battery_voltage":0.7702416,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:19.000"}
-{"battery_voltage":0.8708988,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:20.000"}
-{"battery_voltage":0.7460189,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:21.000"}
-{"battery_voltage":0.8054011,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:22.000"}
-{"battery_voltage":0.70088184,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:23.000"}
-{"battery_voltage":0.97855425,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:24.000"}
-{"battery_voltage":0.92553365,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:25.000"}
-{"battery_voltage":0.8004091,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:26.000"}
-{"battery_voltage":0.58621615,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:27.000"}
-{"battery_voltage":0.8544398,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:28.000"}
-{"battery_voltage":0.93507946,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:29.000"}
-{"battery_voltage":0.981555,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:30.000"}
-{"battery_voltage":0.6559863,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:31.000"}
-{"battery_voltage":0.589917,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:32.000"}
-{"battery_voltage":0.77023107,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:33.000"}
-{"battery_voltage":0.8414885,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:34.000"}
-{"battery_voltage":0.92723155,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:35.000"}
-{"battery_voltage":0.68667865,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:36.000"}
-{"battery_voltage":0.6563879,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:37.000"}
-{"battery_voltage":0.5494162,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:38.000"}
-{"battery_voltage":0.73033655,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:39.000"}
-{"battery_voltage":0.8967389,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:40.000"}
-{"battery_voltage":0.93003184,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:41.000"}
-{"battery_voltage":0.5939365,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:42.000"}
-{"battery_voltage":0.8320396,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:43.000"}
-{"battery_voltage":0.99154466,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:44.000"}
-{"battery_voltage":0.9142281,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:45.000"}
-{"battery_voltage":0.9949862,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:46.000"}
-{"battery_voltage":0.7782185,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:47.000"}
-{"battery_voltage":0.5089121,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:48.000"}
-{"battery_voltage":0.73104143,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:49.000"}
-{"battery_voltage":0.8676681,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:50.000"}
-{"battery_voltage":0.6835471,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:51.000"}
-{"battery_voltage":0.7104448,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:52.000"}
-{"battery_voltage":0.8338785,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:53.000"}
-{"battery_voltage":0.78650606,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:54.000"}
-{"battery_voltage":0.86156666,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:55.000"}
-{"battery_voltage":0.67074865,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:56.000"}
-{"battery_voltage":0.92131823,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:57.000"}
-{"battery_voltage":0.6692456,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:00:58.000"}
-{"battery_voltage":0.70075643,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:00:59.000"}
-{"battery_voltage":0.810084,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:00.000"}
-{"battery_voltage":0.5218424,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:01.000"}
-{"battery_voltage":0.66221285,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:02.000"}
-{"battery_voltage":0.8589293,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:03.000"}
-{"battery_voltage":0.85367,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:04.000"}
-{"battery_voltage":0.76111495,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:05.000"}
-{"battery_voltage":0.5683803,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:06.000"}
-{"battery_voltage":0.965793,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:07.000"}
-{"battery_voltage":0.97445166,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:08.000"}
-{"battery_voltage":0.64657986,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:09.000"}
-{"battery_voltage":0.8598856,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:10.000"}
-{"battery_voltage":0.9699453,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:11.000"}
-{"battery_voltage":0.77614653,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:12.000"}
-{"battery_voltage":0.73633116,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:13.000"}
-{"battery_voltage":0.66921216,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:14.000"}
-{"battery_voltage":0.61229855,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:15.000"}
-{"battery_voltage":0.9456196,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:16.000"}
-{"battery_voltage":0.8569248,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:17.000"}
-{"battery_voltage":0.5586567,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:18.000"}
-{"battery_voltage":0.5249643,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:19.000"} -{"battery_voltage":0.51541376,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:20.000"} -{"battery_voltage":0.9897876,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:21.000"} -{"battery_voltage":0.5684158,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:22.000"} -{"battery_voltage":0.7586645,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:23.000"} -{"battery_voltage":0.57831913,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:24.000"} -{"battery_voltage":0.5272984,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:25.000"} -{"battery_voltage":0.8490623,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:26.000"} -{"battery_voltage":0.61126375,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:27.000"} -{"battery_voltage":0.6298294,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:28.000"} -{"battery_voltage":0.58072305,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:29.000"} -{"battery_voltage":0.54520565,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:30.000"} -{"battery_voltage":0.65894264,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:31.000"} 
-{"battery_voltage":0.55736834,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:32.000"} -{"battery_voltage":0.9139086,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:33.000"} -{"battery_voltage":0.59066606,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:34.000"} -{"battery_voltage":0.65324485,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:35.000"} -{"battery_voltage":0.52651376,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:36.000"} -{"battery_voltage":0.79430807,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:37.000"} -{"battery_voltage":0.68324184,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":0,"ts":"2019-12-01 00:01:38.000"} -{"battery_voltage":0.9977864,"home_id":"606","object_kind":"day","object_type":2,"sensor_id":"s107","states":1,"ts":"2019-12-01 00:01:39.000"} -{"battery_voltage":0.86721027,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:00.000"} -{"battery_voltage":0.91057515,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:01.000"} -{"battery_voltage":0.6340915,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:02.000"} -{"battery_voltage":0.9256289,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:03.000"} -{"battery_voltage":0.524389,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:04.000"} 
-{"battery_voltage":0.900969,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:05.000"} -{"battery_voltage":0.70975065,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:06.000"} -{"battery_voltage":0.6816068,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:07.000"} -{"battery_voltage":0.60286266,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:08.000"} -{"battery_voltage":0.64431405,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:09.000"} -{"battery_voltage":0.8481047,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:10.000"} -{"battery_voltage":0.74875927,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:11.000"} -{"battery_voltage":0.553125,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:12.000"} -{"battery_voltage":0.89230585,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:13.000"} -{"battery_voltage":0.8484179,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:14.000"} -{"battery_voltage":0.508562,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:15.000"} -{"battery_voltage":0.6212453,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:16.000"} -{"battery_voltage":0.8540254,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:17.000"} 
-{"battery_voltage":0.5535025,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:18.000"} -{"battery_voltage":0.73381513,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:19.000"} -{"battery_voltage":0.64239544,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:20.000"} -{"battery_voltage":0.55263007,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:21.000"} -{"battery_voltage":0.6341637,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:22.000"} -{"battery_voltage":0.7654568,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:23.000"} -{"battery_voltage":0.92196476,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:24.000"} -{"battery_voltage":0.9304836,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:25.000"} -{"battery_voltage":0.6291903,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:26.000"} -{"battery_voltage":0.72140205,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:27.000"} -{"battery_voltage":0.8851147,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:28.000"} -{"battery_voltage":0.80896443,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:29.000"} -{"battery_voltage":0.63162744,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:30.000"} 
-{"battery_voltage":0.539704,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:31.000"} -{"battery_voltage":0.9556397,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:32.000"} -{"battery_voltage":0.6092425,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:33.000"} -{"battery_voltage":0.64407754,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:34.000"} -{"battery_voltage":0.8924454,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:35.000"} -{"battery_voltage":0.6341453,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:36.000"} -{"battery_voltage":0.80927,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:37.000"} -{"battery_voltage":0.6517041,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:38.000"} -{"battery_voltage":0.597603,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:39.000"} -{"battery_voltage":0.89759815,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:40.000"} -{"battery_voltage":0.91360915,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:41.000"} -{"battery_voltage":0.77801263,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:42.000"} -{"battery_voltage":0.6941989,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:43.000"} 
-{"battery_voltage":0.5947089,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:44.000"} -{"battery_voltage":0.5456626,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:45.000"} -{"battery_voltage":0.607256,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:46.000"} -{"battery_voltage":0.61421853,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:47.000"} -{"battery_voltage":0.63586694,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:48.000"} -{"battery_voltage":0.6851482,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:49.000"} -{"battery_voltage":0.6763804,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:50.000"} -{"battery_voltage":0.82943195,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:51.000"} -{"battery_voltage":0.50045407,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:52.000"} -{"battery_voltage":0.7916049,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:53.000"} -{"battery_voltage":0.7013703,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:54.000"} -{"battery_voltage":0.6699885,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:55.000"} -{"battery_voltage":0.8420504,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:56.000"} 
-{"battery_voltage":0.51466507,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:00:57.000"} -{"battery_voltage":0.90366614,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:58.000"} -{"battery_voltage":0.85084975,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:00:59.000"} -{"battery_voltage":0.9257466,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:00.000"} -{"battery_voltage":0.5102245,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:01.000"} -{"battery_voltage":0.96108043,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:02.000"} -{"battery_voltage":0.5486912,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:03.000"} -{"battery_voltage":0.89654887,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:04.000"} -{"battery_voltage":0.8891253,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:05.000"} -{"battery_voltage":0.5727371,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:06.000"} -{"battery_voltage":0.783249,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:07.000"} -{"battery_voltage":0.5469513,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:08.000"} -{"battery_voltage":0.9613789,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:09.000"} 
-{"battery_voltage":0.69545364,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:10.000"} -{"battery_voltage":0.86351585,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:11.000"} -{"battery_voltage":0.8531561,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:12.000"} -{"battery_voltage":0.7359057,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:13.000"} -{"battery_voltage":0.621776,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:14.000"} -{"battery_voltage":0.6604511,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:15.000"} -{"battery_voltage":0.5478783,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:16.000"} -{"battery_voltage":0.66255414,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:17.000"} -{"battery_voltage":0.9499179,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:18.000"} -{"battery_voltage":0.88831574,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:19.000"} -{"battery_voltage":0.7580633,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:20.000"} -{"battery_voltage":0.9225353,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:21.000"} -{"battery_voltage":0.6692219,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:22.000"} 
-{"battery_voltage":0.61912835,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:23.000"} -{"battery_voltage":0.99582875,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:24.000"} -{"battery_voltage":0.87902415,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:25.000"} -{"battery_voltage":0.67628443,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:26.000"} -{"battery_voltage":0.83687174,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:27.000"} -{"battery_voltage":0.9943143,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:28.000"} -{"battery_voltage":0.6201099,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:29.000"} -{"battery_voltage":0.674114,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:30.000"} -{"battery_voltage":0.67246366,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:31.000"} -{"battery_voltage":0.8239599,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:32.000"} -{"battery_voltage":0.70240146,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:33.000"} -{"battery_voltage":0.69047993,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:34.000"} -{"battery_voltage":0.7104731,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:35.000"} 
-{"battery_voltage":0.9709189,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:36.000"} -{"battery_voltage":0.94283324,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:37.000"} -{"battery_voltage":0.9247738,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":0,"ts":"2019-12-01 00:01:38.000"} -{"battery_voltage":0.552071,"home_id":"603","object_kind":"all","object_type":3,"sensor_id":"s108","states":1,"ts":"2019-12-01 00:01:39.000"} -{"battery_voltage":0.9636625,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:00.000"} -{"battery_voltage":0.6839534,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:01.000"} -{"battery_voltage":0.9954566,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:02.000"} -{"battery_voltage":0.86396515,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:03.000"} -{"battery_voltage":0.61659896,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:04.000"} -{"battery_voltage":0.71129197,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:05.000"} -{"battery_voltage":0.905854,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:06.000"} -{"battery_voltage":0.7705264,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:07.000"} -{"battery_voltage":0.7981062,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:08.000"} 
-{"battery_voltage":0.7636435,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:09.000"} -{"battery_voltage":0.9345093,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:10.000"} -{"battery_voltage":0.5336993,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:11.000"} -{"battery_voltage":0.80898285,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:12.000"} -{"battery_voltage":0.89411414,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:13.000"} -{"battery_voltage":0.56510407,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:14.000"} -{"battery_voltage":0.7335708,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:15.000"} -{"battery_voltage":0.93600345,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:16.000"} -{"battery_voltage":0.9809611,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:17.000"} -{"battery_voltage":0.59630346,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:18.000"} -{"battery_voltage":0.8613196,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:19.000"} -{"battery_voltage":0.62268347,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:20.000"} -{"battery_voltage":0.9764792,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:21.000"} 
-{"battery_voltage":0.8251264,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:22.000"} -{"battery_voltage":0.95377636,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:23.000"} -{"battery_voltage":0.8570133,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:24.000"} -{"battery_voltage":0.92142123,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:25.000"} -{"battery_voltage":0.8477019,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:26.000"} -{"battery_voltage":0.8612052,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:27.000"} -{"battery_voltage":0.59492385,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:28.000"} -{"battery_voltage":0.87671703,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:29.000"} -{"battery_voltage":0.9056556,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:30.000"} -{"battery_voltage":0.93940216,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:31.000"} -{"battery_voltage":0.8290224,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:32.000"} -{"battery_voltage":0.5113568,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:33.000"} -{"battery_voltage":0.59223604,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:34.000"} 
-{"battery_voltage":0.51160496,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:35.000"} -{"battery_voltage":0.54997766,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:36.000"} -{"battery_voltage":0.8167529,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:37.000"} -{"battery_voltage":0.73863506,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:38.000"} -{"battery_voltage":0.7665298,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:39.000"} -{"battery_voltage":0.82101595,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:40.000"} -{"battery_voltage":0.97279453,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:41.000"} -{"battery_voltage":0.5629725,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:42.000"} -{"battery_voltage":0.53847814,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:43.000"} -{"battery_voltage":0.589947,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:44.000"} -{"battery_voltage":0.98508626,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:45.000"} -{"battery_voltage":0.84777415,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:46.000"} -{"battery_voltage":0.68025327,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:47.000"} 
-{"battery_voltage":0.6514157,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:48.000"} -{"battery_voltage":0.5478574,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:49.000"} -{"battery_voltage":0.8615689,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:50.000"} -{"battery_voltage":0.9215113,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:51.000"} -{"battery_voltage":0.5097517,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:52.000"} -{"battery_voltage":0.99524146,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:53.000"} -{"battery_voltage":0.62237006,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:54.000"} -{"battery_voltage":0.7505579,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:55.000"} -{"battery_voltage":0.6049488,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:56.000"} -{"battery_voltage":0.6638993,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:57.000"} -{"battery_voltage":0.8366454,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:00:58.000"} -{"battery_voltage":0.5381521,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:00:59.000"} -{"battery_voltage":0.662686,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:00.000"} 
-{"battery_voltage":0.6258177,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:01.000"} -{"battery_voltage":0.64257276,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:02.000"} -{"battery_voltage":0.65594685,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:03.000"} -{"battery_voltage":0.57828206,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:04.000"} -{"battery_voltage":0.786163,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:05.000"} -{"battery_voltage":0.6895987,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:06.000"} -{"battery_voltage":0.904716,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:07.000"} -{"battery_voltage":0.5041426,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:08.000"} -{"battery_voltage":0.66904837,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:09.000"} -{"battery_voltage":0.7101751,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:10.000"} -{"battery_voltage":0.69509715,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:11.000"} -{"battery_voltage":0.6266739,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:12.000"} -{"battery_voltage":0.97146165,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:13.000"} 
-{"battery_voltage":0.71578836,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:14.000"} -{"battery_voltage":0.7764681,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:15.000"} -{"battery_voltage":0.94571376,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:16.000"} -{"battery_voltage":0.7120625,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:17.000"} -{"battery_voltage":0.98183215,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:18.000"} -{"battery_voltage":0.9253825,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:19.000"} -{"battery_voltage":0.53743166,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:20.000"} -{"battery_voltage":0.69378746,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:21.000"} -{"battery_voltage":0.784279,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:22.000"} -{"battery_voltage":0.87504184,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:23.000"} -{"battery_voltage":0.7890485,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:24.000"} -{"battery_voltage":0.9394257,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:25.000"} -{"battery_voltage":0.7325297,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:26.000"} 
-{"battery_voltage":0.79771256,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:27.000"} -{"battery_voltage":0.5948397,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:28.000"} -{"battery_voltage":0.5982751,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:29.000"} -{"battery_voltage":0.5305714,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:30.000"} -{"battery_voltage":0.9328362,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:31.000"} -{"battery_voltage":0.60514575,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:32.000"} -{"battery_voltage":0.64315695,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:33.000"} -{"battery_voltage":0.61862606,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:34.000"} -{"battery_voltage":0.9997138,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":1,"ts":"2019-12-01 00:01:35.000"} -{"battery_voltage":0.9584835,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:36.000"} -{"battery_voltage":0.74601066,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:37.000"} -{"battery_voltage":0.5287202,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:38.000"} -{"battery_voltage":0.9887305,"home_id":"604","object_kind":"night","object_type":1,"sensor_id":"s109","states":0,"ts":"2019-12-01 00:01:39.000"} diff --git a/importSampleData/data/sensor_info.csv b/importSampleData/data/sensor_info.csv 
deleted file mode 100644 index c5ff898118e59dcfc0eb24d03db7b326b5fb9342..0000000000000000000000000000000000000000 --- a/importSampleData/data/sensor_info.csv +++ /dev/null @@ -1,1001 +0,0 @@ -devid,location,color,devgroup,ts,temperature,humidity -0, beijing, white, 0, 1575129600000, 16, 19.405091 -0, beijing, white, 0, 1575129601000, 22, 14.377142 -0, beijing, white, 0, 1575129602000, 16, 16.868231 -0, beijing, white, 0, 1575129603000, 20, 11.565193 -0, beijing, white, 0, 1575129604000, 31, 13.009119 -0, beijing, white, 0, 1575129605000, 29, 18.136400 -0, beijing, white, 0, 1575129606000, 17, 13.806572 -0, beijing, white, 0, 1575129607000, 23, 14.688898 -0, beijing, white, 0, 1575129608000, 26, 12.931019 -0, beijing, white, 0, 1575129609000, 32, 12.185531 -0, beijing, white, 0, 1575129610000, 30, 13.608714 -0, beijing, white, 0, 1575129611000, 23, 18.624914 -0, beijing, white, 0, 1575129612000, 22, 12.970826 -0, beijing, white, 0, 1575129613000, 22, 12.065827 -0, beijing, white, 0, 1575129614000, 25, 16.967192 -0, beijing, white, 0, 1575129615000, 16, 10.283031 -0, beijing, white, 0, 1575129616000, 22, 16.072534 -0, beijing, white, 0, 1575129617000, 24, 10.794536 -0, beijing, white, 0, 1575129618000, 32, 10.591207 -0, beijing, white, 0, 1575129619000, 20, 13.015227 -0, beijing, white, 0, 1575129620000, 28, 15.410999 -0, beijing, white, 0, 1575129621000, 29, 12.785076 -0, beijing, white, 0, 1575129622000, 28, 15.305857 -0, beijing, white, 0, 1575129623000, 33, 12.820810 -0, beijing, white, 0, 1575129624000, 34, 13.618055 -0, beijing, white, 0, 1575129625000, 32, 12.971123 -0, beijing, white, 0, 1575129626000, 24, 10.974546 -0, beijing, white, 0, 1575129627000, 15, 10.742910 -0, beijing, white, 0, 1575129628000, 23, 16.810783 -0, beijing, white, 0, 1575129629000, 18, 13.115224 -0, beijing, white, 0, 1575129630000, 26, 17.418490 -0, beijing, white, 0, 1575129631000, 20, 17.302315 -0, beijing, white, 0, 1575129632000, 21, 14.283571 -0, beijing, white, 0, 1575129633000, 
16, 16.826535 -0, beijing, white, 0, 1575129634000, 18, 19.222123 -0, beijing, white, 0, 1575129635000, 18, 14.931420 -0, beijing, white, 0, 1575129636000, 17, 19.549454 -0, beijing, white, 0, 1575129637000, 22, 16.908388 -0, beijing, white, 0, 1575129638000, 32, 15.637796 -0, beijing, white, 0, 1575129639000, 31, 15.517650 -0, beijing, white, 0, 1575129640000, 18, 14.038033 -0, beijing, white, 0, 1575129641000, 32, 19.859647 -0, beijing, white, 0, 1575129642000, 16, 13.220840 -0, beijing, white, 0, 1575129643000, 28, 16.445398 -0, beijing, white, 0, 1575129644000, 26, 16.695753 -0, beijing, white, 0, 1575129645000, 33, 13.696928 -0, beijing, white, 0, 1575129646000, 21, 15.352819 -0, beijing, white, 0, 1575129647000, 15, 12.388407 -0, beijing, white, 0, 1575129648000, 27, 11.267529 -0, beijing, white, 0, 1575129649000, 20, 14.103228 -0, beijing, white, 0, 1575129650000, 20, 16.250950 -0, beijing, white, 0, 1575129651000, 30, 16.236088 -0, beijing, white, 0, 1575129652000, 22, 18.305339 -0, beijing, white, 0, 1575129653000, 25, 17.360686 -0, beijing, white, 0, 1575129654000, 25, 14.978681 -0, beijing, white, 0, 1575129655000, 33, 14.096183 -0, beijing, white, 0, 1575129656000, 26, 10.019039 -0, beijing, white, 0, 1575129657000, 19, 19.158213 -0, beijing, white, 0, 1575129658000, 22, 15.593924 -0, beijing, white, 0, 1575129659000, 26, 18.780118 -0, beijing, white, 0, 1575129660000, 21, 16.001656 -0, beijing, white, 0, 1575129661000, 16, 18.458328 -0, beijing, white, 0, 1575129662000, 21, 16.417843 -0, beijing, white, 0, 1575129663000, 28, 11.736558 -0, beijing, white, 0, 1575129664000, 34, 18.143946 -0, beijing, white, 0, 1575129665000, 27, 10.303225 -0, beijing, white, 0, 1575129666000, 20, 19.756748 -0, beijing, white, 0, 1575129667000, 22, 12.940063 -0, beijing, white, 0, 1575129668000, 23, 11.509640 -0, beijing, white, 0, 1575129669000, 19, 18.319309 -0, beijing, white, 0, 1575129670000, 19, 16.278346 -0, beijing, white, 0, 1575129671000, 27, 10.898361 -0, 
beijing, white, 0, 1575129672000, 31, 13.922162 -0, beijing, white, 0, 1575129673000, 15, 19.296116 -0, beijing, white, 0, 1575129674000, 26, 15.885763 -0, beijing, white, 0, 1575129675000, 15, 15.525804 -0, beijing, white, 0, 1575129676000, 19, 19.579539 -0, beijing, white, 0, 1575129677000, 20, 11.073811 -0, beijing, white, 0, 1575129678000, 16, 13.932510 -0, beijing, white, 0, 1575129679000, 17, 11.900328 -0, beijing, white, 0, 1575129680000, 22, 16.540414 -0, beijing, white, 0, 1575129681000, 33, 15.203803 -0, beijing, white, 0, 1575129682000, 17, 11.518434 -0, beijing, white, 0, 1575129683000, 17, 13.152081 -0, beijing, white, 0, 1575129684000, 18, 11.378041 -0, beijing, white, 0, 1575129685000, 21, 15.390745 -0, beijing, white, 0, 1575129686000, 30, 15.127818 -0, beijing, white, 0, 1575129687000, 19, 16.530402 -0, beijing, white, 0, 1575129688000, 32, 16.542701 -0, beijing, white, 0, 1575129689000, 26, 16.366442 -0, beijing, white, 0, 1575129690000, 25, 10.306822 -0, beijing, white, 0, 1575129691000, 15, 13.691117 -0, beijing, white, 0, 1575129692000, 15, 13.476817 -0, beijing, white, 0, 1575129693000, 25, 12.529998 -0, beijing, white, 0, 1575129694000, 22, 15.550021 -0, beijing, white, 0, 1575129695000, 20, 15.064971 -0, beijing, white, 0, 1575129696000, 24, 13.313683 -0, beijing, white, 0, 1575129697000, 23, 17.002879 -0, beijing, white, 0, 1575129698000, 30, 19.991595 -0, beijing, white, 0, 1575129699000, 15, 11.116746 -1, shanghai, black, 1, 1575129600000, 24, 10.921176 -1, shanghai, black, 1, 1575129601000, 26, 17.146958 -1, shanghai, black, 1, 1575129602000, 21, 18.486329 -1, shanghai, black, 1, 1575129603000, 34, 12.125609 -1, shanghai, black, 1, 1575129604000, 22, 19.451948 -1, shanghai, black, 1, 1575129605000, 23, 16.458334 -1, shanghai, black, 1, 1575129606000, 18, 14.484644 -1, shanghai, black, 1, 1575129607000, 33, 10.824797 -1, shanghai, black, 1, 1575129608000, 34, 14.001883 -1, shanghai, black, 1, 1575129609000, 32, 19.498832 -1, shanghai, 
black, 1, 1575129610000, 30, 14.993855 -1, shanghai, black, 1, 1575129611000, 28, 10.198087 -1, shanghai, black, 1, 1575129612000, 32, 14.286884 -1, shanghai, black, 1, 1575129613000, 25, 18.874475 -1, shanghai, black, 1, 1575129614000, 21, 17.650082 -1, shanghai, black, 1, 1575129615000, 15, 17.275773 -1, shanghai, black, 1, 1575129616000, 17, 15.130875 -1, shanghai, black, 1, 1575129617000, 16, 17.242291 -1, shanghai, black, 1, 1575129618000, 15, 19.777635 -1, shanghai, black, 1, 1575129619000, 29, 18.321979 -1, shanghai, black, 1, 1575129620000, 15, 19.133991 -1, shanghai, black, 1, 1575129621000, 16, 18.351038 -1, shanghai, black, 1, 1575129622000, 31, 17.517406 -1, shanghai, black, 1, 1575129623000, 34, 10.969342 -1, shanghai, black, 1, 1575129624000, 28, 15.838347 -1, shanghai, black, 1, 1575129625000, 19, 19.982738 -1, shanghai, black, 1, 1575129626000, 24, 19.854656 -1, shanghai, black, 1, 1575129627000, 34, 13.320561 -1, shanghai, black, 1, 1575129628000, 15, 19.560206 -1, shanghai, black, 1, 1575129629000, 15, 11.843907 -1, shanghai, black, 1, 1575129630000, 19, 18.332418 -1, shanghai, black, 1, 1575129631000, 30, 18.058718 -1, shanghai, black, 1, 1575129632000, 16, 17.185304 -1, shanghai, black, 1, 1575129633000, 29, 18.958033 -1, shanghai, black, 1, 1575129634000, 25, 10.187132 -1, shanghai, black, 1, 1575129635000, 33, 14.235532 -1, shanghai, black, 1, 1575129636000, 19, 14.326982 -1, shanghai, black, 1, 1575129637000, 29, 18.557044 -1, shanghai, black, 1, 1575129638000, 19, 16.590305 -1, shanghai, black, 1, 1575129639000, 21, 15.034868 -1, shanghai, black, 1, 1575129640000, 27, 10.231096 -1, shanghai, black, 1, 1575129641000, 17, 12.611756 -1, shanghai, black, 1, 1575129642000, 32, 13.148048 -1, shanghai, black, 1, 1575129643000, 20, 18.997501 -1, shanghai, black, 1, 1575129644000, 34, 11.001994 -1, shanghai, black, 1, 1575129645000, 24, 17.698891 -1, shanghai, black, 1, 1575129646000, 16, 12.623819 -1, shanghai, black, 1, 1575129647000, 26, 12.146537 
-1, shanghai, black, 1, 1575129648000, 28, 13.511343 -1, shanghai, black, 1, 1575129649000, 34, 15.783513 -1, shanghai, black, 1, 1575129650000, 23, 11.198505 -1, shanghai, black, 1, 1575129651000, 23, 10.537856 -1, shanghai, black, 1, 1575129652000, 29, 13.241740 -1, shanghai, black, 1, 1575129653000, 30, 13.492887 -1, shanghai, black, 1, 1575129654000, 21, 19.687462 -1, shanghai, black, 1, 1575129655000, 21, 12.079431 -1, shanghai, black, 1, 1575129656000, 29, 13.022024 -1, shanghai, black, 1, 1575129657000, 34, 11.340842 -1, shanghai, black, 1, 1575129658000, 18, 16.408648 -1, shanghai, black, 1, 1575129659000, 22, 18.098742 -1, shanghai, black, 1, 1575129660000, 29, 19.427574 -1, shanghai, black, 1, 1575129661000, 26, 14.946804 -1, shanghai, black, 1, 1575129662000, 18, 17.107439 -1, shanghai, black, 1, 1575129663000, 31, 14.076329 -1, shanghai, black, 1, 1575129664000, 32, 19.443971 -1, shanghai, black, 1, 1575129665000, 31, 12.886383 -1, shanghai, black, 1, 1575129666000, 20, 14.525845 -1, shanghai, black, 1, 1575129667000, 24, 13.153620 -1, shanghai, black, 1, 1575129668000, 22, 17.515631 -1, shanghai, black, 1, 1575129669000, 24, 16.697146 -1, shanghai, black, 1, 1575129670000, 34, 14.588845 -1, shanghai, black, 1, 1575129671000, 17, 14.815298 -1, shanghai, black, 1, 1575129672000, 20, 19.506232 -1, shanghai, black, 1, 1575129673000, 28, 17.425147 -1, shanghai, black, 1, 1575129674000, 15, 10.661514 -1, shanghai, black, 1, 1575129675000, 20, 19.254679 -1, shanghai, black, 1, 1575129676000, 24, 14.094194 -1, shanghai, black, 1, 1575129677000, 31, 10.972616 -1, shanghai, black, 1, 1575129678000, 15, 10.044447 -1, shanghai, black, 1, 1575129679000, 32, 11.093067 -1, shanghai, black, 1, 1575129680000, 33, 12.570554 -1, shanghai, black, 1, 1575129681000, 28, 19.264114 -1, shanghai, black, 1, 1575129682000, 23, 13.038871 -1, shanghai, black, 1, 1575129683000, 20, 11.764896 -1, shanghai, black, 1, 1575129684000, 19, 17.051371 -1, shanghai, black, 1, 1575129685000, 
18, 12.503689 -1, shanghai, black, 1, 1575129686000, 28, 17.512406 -1, shanghai, black, 1, 1575129687000, 28, 18.409932 -1, shanghai, black, 1, 1575129688000, 22, 10.132855 -1, shanghai, black, 1, 1575129689000, 23, 18.993715 -1, shanghai, black, 1, 1575129690000, 26, 10.430004 -1, shanghai, black, 1, 1575129691000, 21, 10.510941 -1, shanghai, black, 1, 1575129692000, 26, 14.756974 -1, shanghai, black, 1, 1575129693000, 32, 10.407199 -1, shanghai, black, 1, 1575129694000, 29, 12.601247 -1, shanghai, black, 1, 1575129695000, 25, 19.604975 -1, shanghai, black, 1, 1575129696000, 22, 12.293202 -1, shanghai, black, 1, 1575129697000, 19, 17.564823 -1, shanghai, black, 1, 1575129698000, 28, 13.389774 -1, shanghai, black, 1, 1575129699000, 31, 19.859944 -2, guangzhou, green, 2, 1575129600000, 17, 12.496550 -2, guangzhou, green, 2, 1575129601000, 29, 17.897172 -2, guangzhou, green, 2, 1575129602000, 34, 16.574690 -2, guangzhou, green, 2, 1575129603000, 15, 16.575054 -2, guangzhou, green, 2, 1575129604000, 34, 19.192545 -2, guangzhou, green, 2, 1575129605000, 19, 15.203920 -2, guangzhou, green, 2, 1575129606000, 28, 12.481825 -2, guangzhou, green, 2, 1575129607000, 30, 16.997891 -2, guangzhou, green, 2, 1575129608000, 24, 15.122720 -2, guangzhou, green, 2, 1575129609000, 20, 16.220016 -2, guangzhou, green, 2, 1575129610000, 16, 11.405753 -2, guangzhou, green, 2, 1575129611000, 26, 19.440151 -2, guangzhou, green, 2, 1575129612000, 24, 12.457920 -2, guangzhou, green, 2, 1575129613000, 30, 15.369806 -2, guangzhou, green, 2, 1575129614000, 27, 16.716676 -2, guangzhou, green, 2, 1575129615000, 32, 17.338548 -2, guangzhou, green, 2, 1575129616000, 28, 14.234738 -2, guangzhou, green, 2, 1575129617000, 34, 19.530447 -2, guangzhou, green, 2, 1575129618000, 15, 14.551896 -2, guangzhou, green, 2, 1575129619000, 21, 17.198856 -2, guangzhou, green, 2, 1575129620000, 19, 17.425909 -2, guangzhou, green, 2, 1575129621000, 29, 16.825216 -2, guangzhou, green, 2, 1575129622000, 28, 12.485828 
-2, guangzhou, green, 2, 1575129623000, 25, 17.699710 -2, guangzhou, green, 2, 1575129624000, 30, 12.866378 -2, guangzhou, green, 2, 1575129625000, 18, 11.985615 -2, guangzhou, green, 2, 1575129626000, 24, 16.359533 -2, guangzhou, green, 2, 1575129627000, 20, 14.123154 -2, guangzhou, green, 2, 1575129628000, 23, 11.311899 -2, guangzhou, green, 2, 1575129629000, 29, 18.450350 -2, guangzhou, green, 2, 1575129630000, 29, 17.783038 -2, guangzhou, green, 2, 1575129631000, 22, 16.543795 -2, guangzhou, green, 2, 1575129632000, 25, 13.939652 -2, guangzhou, green, 2, 1575129633000, 22, 15.658666 -2, guangzhou, green, 2, 1575129634000, 24, 14.524828 -2, guangzhou, green, 2, 1575129635000, 15, 16.428353 -2, guangzhou, green, 2, 1575129636000, 16, 18.103802 -2, guangzhou, green, 2, 1575129637000, 28, 10.814747 -2, guangzhou, green, 2, 1575129638000, 21, 14.906347 -2, guangzhou, green, 2, 1575129639000, 25, 16.276587 -2, guangzhou, green, 2, 1575129640000, 28, 17.932145 -2, guangzhou, green, 2, 1575129641000, 34, 12.543257 -2, guangzhou, green, 2, 1575129642000, 21, 14.202174 -2, guangzhou, green, 2, 1575129643000, 19, 12.169968 -2, guangzhou, green, 2, 1575129644000, 31, 15.638443 -2, guangzhou, green, 2, 1575129645000, 23, 13.675736 -2, guangzhou, green, 2, 1575129646000, 20, 19.002998 -2, guangzhou, green, 2, 1575129647000, 34, 14.451299 -2, guangzhou, green, 2, 1575129648000, 29, 16.676133 -2, guangzhou, green, 2, 1575129649000, 31, 10.066270 -2, guangzhou, green, 2, 1575129650000, 26, 17.824551 -2, guangzhou, green, 2, 1575129651000, 34, 18.082416 -2, guangzhou, green, 2, 1575129652000, 28, 16.099497 -2, guangzhou, green, 2, 1575129653000, 16, 12.265096 -2, guangzhou, green, 2, 1575129654000, 34, 12.468646 -2, guangzhou, green, 2, 1575129655000, 16, 11.534757 -2, guangzhou, green, 2, 1575129656000, 16, 19.092035 -2, guangzhou, green, 2, 1575129657000, 20, 13.272631 -2, guangzhou, green, 2, 1575129658000, 19, 14.302918 -2, guangzhou, green, 2, 1575129659000, 31, 10.996095 
-2, guangzhou, green, 2, 1575129660000, 17, 15.220791 -2, guangzhou, green, 2, 1575129661000, 28, 18.482870 -2, guangzhou, green, 2, 1575129662000, 17, 15.654042 -2, guangzhou, green, 2, 1575129663000, 30, 12.753545 -2, guangzhou, green, 2, 1575129664000, 18, 19.292998 -2, guangzhou, green, 2, 1575129665000, 33, 12.108711 -2, guangzhou, green, 2, 1575129666000, 34, 14.724292 -2, guangzhou, green, 2, 1575129667000, 28, 13.754784 -2, guangzhou, green, 2, 1575129668000, 22, 17.879010 -2, guangzhou, green, 2, 1575129669000, 27, 10.963891 -2, guangzhou, green, 2, 1575129670000, 32, 15.231074 -2, guangzhou, green, 2, 1575129671000, 24, 11.802718 -2, guangzhou, green, 2, 1575129672000, 21, 13.681011 -2, guangzhou, green, 2, 1575129673000, 19, 10.910179 -2, guangzhou, green, 2, 1575129674000, 29, 13.944866 -2, guangzhou, green, 2, 1575129675000, 18, 17.558532 -2, guangzhou, green, 2, 1575129676000, 19, 13.186824 -2, guangzhou, green, 2, 1575129677000, 25, 12.784448 -2, guangzhou, green, 2, 1575129678000, 28, 15.774681 -2, guangzhou, green, 2, 1575129679000, 29, 11.104902 -2, guangzhou, green, 2, 1575129680000, 16, 13.809837 -2, guangzhou, green, 2, 1575129681000, 16, 18.830369 -2, guangzhou, green, 2, 1575129682000, 32, 11.798459 -2, guangzhou, green, 2, 1575129683000, 17, 11.893725 -2, guangzhou, green, 2, 1575129684000, 16, 11.646352 -2, guangzhou, green, 2, 1575129685000, 30, 16.511740 -2, guangzhou, green, 2, 1575129686000, 27, 11.837594 -2, guangzhou, green, 2, 1575129687000, 26, 17.312381 -2, guangzhou, green, 2, 1575129688000, 16, 12.512595 -2, guangzhou, green, 2, 1575129689000, 27, 10.224634 -2, guangzhou, green, 2, 1575129690000, 31, 15.000720 -2, guangzhou, green, 2, 1575129691000, 18, 12.810097 -2, guangzhou, green, 2, 1575129692000, 24, 19.154830 -2, guangzhou, green, 2, 1575129693000, 29, 17.029148 -2, guangzhou, green, 2, 1575129694000, 25, 19.416777 -2, guangzhou, green, 2, 1575129695000, 17, 17.692554 -2, guangzhou, green, 2, 1575129696000, 25, 10.939226 
-2, guangzhou, green, 2, 1575129697000, 23, 10.632203 -2, guangzhou, green, 2, 1575129698000, 21, 17.977449 -2, guangzhou, green, 2, 1575129699000, 20, 14.047369 -3, shenzhen, yellow, 0, 1575129600000, 17, 13.181688 -3, shenzhen, yellow, 0, 1575129601000, 26, 17.912070 -3, shenzhen, yellow, 0, 1575129602000, 28, 11.660286 -3, shenzhen, yellow, 0, 1575129603000, 28, 16.496510 -3, shenzhen, yellow, 0, 1575129604000, 32, 16.164662 -3, shenzhen, yellow, 0, 1575129605000, 16, 19.604285 -3, shenzhen, yellow, 0, 1575129606000, 33, 19.308120 -3, shenzhen, yellow, 0, 1575129607000, 16, 16.755204 -3, shenzhen, yellow, 0, 1575129608000, 33, 10.658284 -3, shenzhen, yellow, 0, 1575129609000, 30, 17.241293 -3, shenzhen, yellow, 0, 1575129610000, 16, 18.088522 -3, shenzhen, yellow, 0, 1575129611000, 31, 15.455248 -3, shenzhen, yellow, 0, 1575129612000, 29, 10.505713 -3, shenzhen, yellow, 0, 1575129613000, 28, 16.189388 -3, shenzhen, yellow, 0, 1575129614000, 16, 14.723009 -3, shenzhen, yellow, 0, 1575129615000, 27, 15.670388 -3, shenzhen, yellow, 0, 1575129616000, 29, 16.080214 -3, shenzhen, yellow, 0, 1575129617000, 18, 18.544671 -3, shenzhen, yellow, 0, 1575129618000, 23, 16.947663 -3, shenzhen, yellow, 0, 1575129619000, 15, 16.917797 -3, shenzhen, yellow, 0, 1575129620000, 25, 17.888324 -3, shenzhen, yellow, 0, 1575129621000, 34, 18.520162 -3, shenzhen, yellow, 0, 1575129622000, 29, 10.271190 -3, shenzhen, yellow, 0, 1575129623000, 26, 11.781460 -3, shenzhen, yellow, 0, 1575129624000, 16, 17.737292 -3, shenzhen, yellow, 0, 1575129625000, 15, 13.730896 -3, shenzhen, yellow, 0, 1575129626000, 28, 12.161647 -3, shenzhen, yellow, 0, 1575129627000, 33, 15.012675 -3, shenzhen, yellow, 0, 1575129628000, 28, 12.880752 -3, shenzhen, yellow, 0, 1575129629000, 28, 12.418301 -3, shenzhen, yellow, 0, 1575129630000, 16, 15.744831 -3, shenzhen, yellow, 0, 1575129631000, 23, 10.551453 -3, shenzhen, yellow, 0, 1575129632000, 32, 11.782227 -3, shenzhen, yellow, 0, 1575129633000, 32, 16.531595 
-3, shenzhen, yellow, 0, 1575129634000, 19, 12.512090 -3, shenzhen, yellow, 0, 1575129635000, 22, 16.554170 -3, shenzhen, yellow, 0, 1575129636000, 20, 12.593234 -3, shenzhen, yellow, 0, 1575129637000, 23, 10.267977 -3, shenzhen, yellow, 0, 1575129638000, 19, 18.470475 -3, shenzhen, yellow, 0, 1575129639000, 27, 11.479857 -3, shenzhen, yellow, 0, 1575129640000, 29, 17.387964 -3, shenzhen, yellow, 0, 1575129641000, 28, 18.605927 -3, shenzhen, yellow, 0, 1575129642000, 28, 14.150780 -3, shenzhen, yellow, 0, 1575129643000, 30, 12.112675 -3, shenzhen, yellow, 0, 1575129644000, 20, 12.126206 -3, shenzhen, yellow, 0, 1575129645000, 34, 11.627235 -3, shenzhen, yellow, 0, 1575129646000, 34, 18.202179 -3, shenzhen, yellow, 0, 1575129647000, 30, 12.447241 -3, shenzhen, yellow, 0, 1575129648000, 15, 12.542049 -3, shenzhen, yellow, 0, 1575129649000, 34, 12.043278 -3, shenzhen, yellow, 0, 1575129650000, 26, 15.254272 -3, shenzhen, yellow, 0, 1575129651000, 33, 14.655641 -3, shenzhen, yellow, 0, 1575129652000, 21, 17.835511 -3, shenzhen, yellow, 0, 1575129653000, 30, 18.979520 -3, shenzhen, yellow, 0, 1575129654000, 26, 12.942195 -3, shenzhen, yellow, 0, 1575129655000, 29, 19.775977 -3, shenzhen, yellow, 0, 1575129656000, 31, 14.242160 -3, shenzhen, yellow, 0, 1575129657000, 15, 10.568320 -3, shenzhen, yellow, 0, 1575129658000, 21, 12.407690 -3, shenzhen, yellow, 0, 1575129659000, 23, 14.165327 -3, shenzhen, yellow, 0, 1575129660000, 27, 11.292074 -3, shenzhen, yellow, 0, 1575129661000, 18, 11.011734 -3, shenzhen, yellow, 0, 1575129662000, 22, 18.100115 -3, shenzhen, yellow, 0, 1575129663000, 18, 11.857615 -3, shenzhen, yellow, 0, 1575129664000, 20, 15.402887 -3, shenzhen, yellow, 0, 1575129665000, 32, 17.952958 -3, shenzhen, yellow, 0, 1575129666000, 16, 15.407510 -3, shenzhen, yellow, 0, 1575129667000, 23, 17.344025 -3, shenzhen, yellow, 0, 1575129668000, 34, 13.251864 -3, shenzhen, yellow, 0, 1575129669000, 31, 15.406216 -3, shenzhen, yellow, 0, 1575129670000, 19, 16.385551 
-3, shenzhen, yellow, 0, 1575129671000, 32, 13.493399 -3, shenzhen, yellow, 0, 1575129672000, 27, 11.856057 -3, shenzhen, yellow, 0, 1575129673000, 30, 12.977649 -3, shenzhen, yellow, 0, 1575129674000, 19, 18.339123 -3, shenzhen, yellow, 0, 1575129675000, 23, 16.442236 -3, shenzhen, yellow, 0, 1575129676000, 18, 19.140272 -3, shenzhen, yellow, 0, 1575129677000, 27, 16.562737 -3, shenzhen, yellow, 0, 1575129678000, 16, 10.993309 -3, shenzhen, yellow, 0, 1575129679000, 27, 15.137385 -3, shenzhen, yellow, 0, 1575129680000, 15, 18.754543 -3, shenzhen, yellow, 0, 1575129681000, 26, 10.116102 -3, shenzhen, yellow, 0, 1575129682000, 29, 14.024587 -3, shenzhen, yellow, 0, 1575129683000, 31, 14.016558 -3, shenzhen, yellow, 0, 1575129684000, 19, 10.671284 -3, shenzhen, yellow, 0, 1575129685000, 32, 14.641297 -3, shenzhen, yellow, 0, 1575129686000, 18, 12.823655 -3, shenzhen, yellow, 0, 1575129687000, 30, 19.260822 -3, shenzhen, yellow, 0, 1575129688000, 30, 16.105202 -3, shenzhen, yellow, 0, 1575129689000, 22, 10.230556 -3, shenzhen, yellow, 0, 1575129690000, 17, 10.732315 -3, shenzhen, yellow, 0, 1575129691000, 31, 15.320282 -3, shenzhen, yellow, 0, 1575129692000, 24, 17.208577 -3, shenzhen, yellow, 0, 1575129693000, 16, 12.506668 -3, shenzhen, yellow, 0, 1575129694000, 17, 18.911875 -3, shenzhen, yellow, 0, 1575129695000, 15, 12.665488 -3, shenzhen, yellow, 0, 1575129696000, 18, 11.283357 -3, shenzhen, yellow, 0, 1575129697000, 15, 13.186590 -3, shenzhen, yellow, 0, 1575129698000, 34, 15.659293 -3, shenzhen, yellow, 0, 1575129699000, 30, 12.898771 -4, hangzhou, blue, 1, 1575129600000, 33, 18.262612 -4, hangzhou, blue, 1, 1575129601000, 20, 11.612149 -4, hangzhou, blue, 1, 1575129602000, 26, 17.261305 -4, hangzhou, blue, 1, 1575129603000, 27, 19.240210 -4, hangzhou, blue, 1, 1575129604000, 27, 17.412985 -4, hangzhou, blue, 1, 1575129605000, 19, 12.835781 -4, hangzhou, blue, 1, 1575129606000, 24, 13.087003 -4, hangzhou, blue, 1, 1575129607000, 24, 13.701138 -4, hangzhou, 
blue, 1, 1575129608000, 31, 10.076716 -4, hangzhou, blue, 1, 1575129609000, 27, 14.703408 -4, hangzhou, blue, 1, 1575129610000, 19, 17.503874 -4, hangzhou, blue, 1, 1575129611000, 21, 18.607839 -4, hangzhou, blue, 1, 1575129612000, 16, 15.416387 -4, hangzhou, blue, 1, 1575129613000, 19, 19.477280 -4, hangzhou, blue, 1, 1575129614000, 15, 17.374174 -4, hangzhou, blue, 1, 1575129615000, 30, 10.732940 -4, hangzhou, blue, 1, 1575129616000, 33, 16.863960 -4, hangzhou, blue, 1, 1575129617000, 16, 10.413205 -4, hangzhou, blue, 1, 1575129618000, 27, 14.130482 -4, hangzhou, blue, 1, 1575129619000, 19, 10.731398 -4, hangzhou, blue, 1, 1575129620000, 27, 11.713011 -4, hangzhou, blue, 1, 1575129621000, 26, 19.063695 -4, hangzhou, blue, 1, 1575129622000, 26, 16.309728 -4, hangzhou, blue, 1, 1575129623000, 33, 12.229796 -4, hangzhou, blue, 1, 1575129624000, 16, 15.176824 -4, hangzhou, blue, 1, 1575129625000, 31, 12.417684 -4, hangzhou, blue, 1, 1575129626000, 31, 17.284961 -4, hangzhou, blue, 1, 1575129627000, 24, 12.530188 -4, hangzhou, blue, 1, 1575129628000, 32, 15.067641 -4, hangzhou, blue, 1, 1575129629000, 32, 18.546511 -4, hangzhou, blue, 1, 1575129630000, 21, 13.049847 -4, hangzhou, blue, 1, 1575129631000, 19, 17.509510 -4, hangzhou, blue, 1, 1575129632000, 24, 13.289143 -4, hangzhou, blue, 1, 1575129633000, 18, 19.179227 -4, hangzhou, blue, 1, 1575129634000, 25, 18.128126 -4, hangzhou, blue, 1, 1575129635000, 26, 19.627125 -4, hangzhou, blue, 1, 1575129636000, 25, 16.090493 -4, hangzhou, blue, 1, 1575129637000, 19, 19.093488 -4, hangzhou, blue, 1, 1575129638000, 32, 17.563422 -4, hangzhou, blue, 1, 1575129639000, 16, 12.867582 -4, hangzhou, blue, 1, 1575129640000, 32, 11.606473 -4, hangzhou, blue, 1, 1575129641000, 31, 12.321989 -4, hangzhou, blue, 1, 1575129642000, 30, 17.043967 -4, hangzhou, blue, 1, 1575129643000, 20, 14.553511 -4, hangzhou, blue, 1, 1575129644000, 34, 19.068052 -4, hangzhou, blue, 1, 1575129645000, 18, 15.992107 -4, hangzhou, blue, 1, 1575129646000, 
34, 11.308531 -4, hangzhou, blue, 1, 1575129647000, 18, 19.053088 -4, hangzhou, blue, 1, 1575129648000, 25, 18.617738 -4, hangzhou, blue, 1, 1575129649000, 25, 14.190978 -4, hangzhou, blue, 1, 1575129650000, 22, 18.049969 -4, hangzhou, blue, 1, 1575129651000, 19, 16.890290 -4, hangzhou, blue, 1, 1575129652000, 26, 10.055835 -4, hangzhou, blue, 1, 1575129653000, 32, 18.772190 -4, hangzhou, blue, 1, 1575129654000, 18, 15.347443 -4, hangzhou, blue, 1, 1575129655000, 19, 15.611078 -4, hangzhou, blue, 1, 1575129656000, 24, 11.345082 -4, hangzhou, blue, 1, 1575129657000, 27, 10.883929 -4, hangzhou, blue, 1, 1575129658000, 25, 19.810161 -4, hangzhou, blue, 1, 1575129659000, 33, 10.159027 -4, hangzhou, blue, 1, 1575129660000, 20, 11.900341 -4, hangzhou, blue, 1, 1575129661000, 24, 12.395535 -4, hangzhou, blue, 1, 1575129662000, 25, 13.832159 -4, hangzhou, blue, 1, 1575129663000, 26, 15.066722 -4, hangzhou, blue, 1, 1575129664000, 24, 12.441406 -4, hangzhou, blue, 1, 1575129665000, 22, 16.281200 -4, hangzhou, blue, 1, 1575129666000, 21, 14.116693 -4, hangzhou, blue, 1, 1575129667000, 28, 12.441770 -4, hangzhou, blue, 1, 1575129668000, 18, 11.402083 -4, hangzhou, blue, 1, 1575129669000, 28, 15.167379 -4, hangzhou, blue, 1, 1575129670000, 16, 15.433220 -4, hangzhou, blue, 1, 1575129671000, 23, 10.211150 -4, hangzhou, blue, 1, 1575129672000, 19, 19.501424 -4, hangzhou, blue, 1, 1575129673000, 18, 17.974835 -4, hangzhou, blue, 1, 1575129674000, 26, 12.904804 -4, hangzhou, blue, 1, 1575129675000, 27, 17.012268 -4, hangzhou, blue, 1, 1575129676000, 34, 11.223162 -4, hangzhou, blue, 1, 1575129677000, 34, 11.008873 -4, hangzhou, blue, 1, 1575129678000, 18, 13.466623 -4, hangzhou, blue, 1, 1575129679000, 25, 11.714342 -4, hangzhou, blue, 1, 1575129680000, 32, 15.193444 -4, hangzhou, blue, 1, 1575129681000, 17, 13.998644 -4, hangzhou, blue, 1, 1575129682000, 27, 12.180101 -4, hangzhou, blue, 1, 1575129683000, 17, 16.405635 -4, hangzhou, blue, 1, 1575129684000, 33, 16.027225 -4, 
hangzhou, blue, 1, 1575129685000, 28, 17.864308 -4, hangzhou, blue, 1, 1575129686000, 20, 16.057140 -4, hangzhou, blue, 1, 1575129687000, 26, 17.240991 -4, hangzhou, blue, 1, 1575129688000, 31, 11.178153 -4, hangzhou, blue, 1, 1575129689000, 29, 11.688910 -4, hangzhou, blue, 1, 1575129690000, 24, 15.830195 -4, hangzhou, blue, 1, 1575129691000, 33, 13.083720 -4, hangzhou, blue, 1, 1575129692000, 25, 15.003569 -4, hangzhou, blue, 1, 1575129693000, 16, 14.412837 -4, hangzhou, blue, 1, 1575129694000, 26, 18.930523 -4, hangzhou, blue, 1, 1575129695000, 19, 10.657332 -4, hangzhou, blue, 1, 1575129696000, 28, 11.193432 -4, hangzhou, blue, 1, 1575129697000, 17, 18.000253 -4, hangzhou, blue, 1, 1575129698000, 21, 15.908098 -4, hangzhou, blue, 1, 1575129699000, 25, 14.506726 -5, nanjing, white, 2, 1575129600000, 20, 17.327941 -5, nanjing, white, 2, 1575129601000, 18, 14.271766 -5, nanjing, white, 2, 1575129602000, 26, 19.593114 -5, nanjing, white, 2, 1575129603000, 19, 13.142911 -5, nanjing, white, 2, 1575129604000, 27, 15.166424 -5, nanjing, white, 2, 1575129605000, 28, 11.804980 -5, nanjing, white, 2, 1575129606000, 24, 17.625403 -5, nanjing, white, 2, 1575129607000, 19, 11.373316 -5, nanjing, white, 2, 1575129608000, 34, 19.434849 -5, nanjing, white, 2, 1575129609000, 31, 14.078995 -5, nanjing, white, 2, 1575129610000, 27, 11.647533 -5, nanjing, white, 2, 1575129611000, 25, 16.624403 -5, nanjing, white, 2, 1575129612000, 28, 12.862567 -5, nanjing, white, 2, 1575129613000, 20, 18.218963 -5, nanjing, white, 2, 1575129614000, 17, 10.021056 -5, nanjing, white, 2, 1575129615000, 30, 16.042743 -5, nanjing, white, 2, 1575129616000, 26, 11.424560 -5, nanjing, white, 2, 1575129617000, 21, 10.094065 -5, nanjing, white, 2, 1575129618000, 31, 15.982905 -5, nanjing, white, 2, 1575129619000, 17, 15.925533 -5, nanjing, white, 2, 1575129620000, 30, 15.622108 -5, nanjing, white, 2, 1575129621000, 18, 19.320662 -5, nanjing, white, 2, 1575129622000, 19, 14.068873 -5, nanjing, white, 2, 
1575129623000, 15, 15.213653 -5, nanjing, white, 2, 1575129624000, 32, 16.028939 -5, nanjing, white, 2, 1575129625000, 28, 17.858151 -5, nanjing, white, 2, 1575129626000, 18, 11.261528 -5, nanjing, white, 2, 1575129627000, 21, 10.262692 -5, nanjing, white, 2, 1575129628000, 27, 13.190850 -5, nanjing, white, 2, 1575129629000, 17, 15.404541 -5, nanjing, white, 2, 1575129630000, 27, 10.852643 -5, nanjing, white, 2, 1575129631000, 23, 13.134271 -5, nanjing, white, 2, 1575129632000, 22, 19.928938 -5, nanjing, white, 2, 1575129633000, 19, 10.683633 -5, nanjing, white, 2, 1575129634000, 29, 15.450679 -5, nanjing, white, 2, 1575129635000, 20, 17.032495 -5, nanjing, white, 2, 1575129636000, 21, 16.081343 -5, nanjing, white, 2, 1575129637000, 31, 15.173797 -5, nanjing, white, 2, 1575129638000, 17, 18.062092 -5, nanjing, white, 2, 1575129639000, 22, 14.139422 -5, nanjing, white, 2, 1575129640000, 30, 15.335309 -5, nanjing, white, 2, 1575129641000, 30, 18.381148 -5, nanjing, white, 2, 1575129642000, 28, 15.640517 -5, nanjing, white, 2, 1575129643000, 15, 10.603125 -5, nanjing, white, 2, 1575129644000, 18, 12.096534 -5, nanjing, white, 2, 1575129645000, 27, 17.015026 -5, nanjing, white, 2, 1575129646000, 24, 15.616134 -5, nanjing, white, 2, 1575129647000, 32, 15.552120 -5, nanjing, white, 2, 1575129648000, 18, 13.846167 -5, nanjing, white, 2, 1575129649000, 32, 15.406105 -5, nanjing, white, 2, 1575129650000, 19, 14.396603 -5, nanjing, white, 2, 1575129651000, 34, 15.660214 -5, nanjing, white, 2, 1575129652000, 29, 19.035787 -5, nanjing, white, 2, 1575129653000, 26, 14.746065 -5, nanjing, white, 2, 1575129654000, 29, 14.144764 -5, nanjing, white, 2, 1575129655000, 32, 11.953327 -5, nanjing, white, 2, 1575129656000, 16, 11.546639 -5, nanjing, white, 2, 1575129657000, 20, 12.779206 -5, nanjing, white, 2, 1575129658000, 16, 16.364659 -5, nanjing, white, 2, 1575129659000, 29, 10.204467 -5, nanjing, white, 2, 1575129660000, 22, 18.824781 -5, nanjing, white, 2, 1575129661000, 26, 
18.795199 -5, nanjing, white, 2, 1575129662000, 16, 12.142987 -5, nanjing, white, 2, 1575129663000, 30, 13.810269 -5, nanjing, white, 2, 1575129664000, 28, 19.670323 -5, nanjing, white, 2, 1575129665000, 17, 10.776152 -5, nanjing, white, 2, 1575129666000, 31, 18.095779 -5, nanjing, white, 2, 1575129667000, 34, 12.720668 -5, nanjing, white, 2, 1575129668000, 27, 18.285647 -5, nanjing, white, 2, 1575129669000, 18, 15.929034 -5, nanjing, white, 2, 1575129670000, 27, 10.397290 -5, nanjing, white, 2, 1575129671000, 29, 12.914206 -5, nanjing, white, 2, 1575129672000, 29, 11.560832 -5, nanjing, white, 2, 1575129673000, 21, 15.487904 -5, nanjing, white, 2, 1575129674000, 28, 11.585003 -5, nanjing, white, 2, 1575129675000, 30, 15.042832 -5, nanjing, white, 2, 1575129676000, 23, 12.408045 -5, nanjing, white, 2, 1575129677000, 15, 17.353187 -5, nanjing, white, 2, 1575129678000, 31, 18.084138 -5, nanjing, white, 2, 1575129679000, 34, 10.756624 -5, nanjing, white, 2, 1575129680000, 19, 13.270267 -5, nanjing, white, 2, 1575129681000, 27, 16.639891 -5, nanjing, white, 2, 1575129682000, 31, 14.671892 -5, nanjing, white, 2, 1575129683000, 27, 10.554016 -5, nanjing, white, 2, 1575129684000, 16, 14.507173 -5, nanjing, white, 2, 1575129685000, 19, 11.977540 -5, nanjing, white, 2, 1575129686000, 26, 13.286239 -5, nanjing, white, 2, 1575129687000, 30, 17.858074 -5, nanjing, white, 2, 1575129688000, 24, 19.446978 -5, nanjing, white, 2, 1575129689000, 21, 19.698453 -5, nanjing, white, 2, 1575129690000, 21, 19.494527 -5, nanjing, white, 2, 1575129691000, 34, 11.911972 -5, nanjing, white, 2, 1575129692000, 16, 16.283904 -5, nanjing, white, 2, 1575129693000, 29, 12.346139 -5, nanjing, white, 2, 1575129694000, 25, 10.589538 -5, nanjing, white, 2, 1575129695000, 23, 16.730700 -5, nanjing, white, 2, 1575129696000, 33, 16.858111 -5, nanjing, white, 2, 1575129697000, 27, 13.779923 -5, nanjing, white, 2, 1575129698000, 20, 11.035122 -5, nanjing, white, 2, 1575129699000, 34, 10.444430 -6, wuhan, 
black, 0, 1575129600000, 30, 13.948532 -6, wuhan, black, 0, 1575129601000, 28, 12.860198 -6, wuhan, black, 0, 1575129602000, 32, 14.979606 -6, wuhan, black, 0, 1575129603000, 22, 11.844284 -6, wuhan, black, 0, 1575129604000, 16, 19.507148 -6, wuhan, black, 0, 1575129605000, 22, 14.315308 -6, wuhan, black, 0, 1575129606000, 19, 13.773210 -6, wuhan, black, 0, 1575129607000, 31, 18.224420 -6, wuhan, black, 0, 1575129608000, 28, 15.962573 -6, wuhan, black, 0, 1575129609000, 32, 12.855757 -6, wuhan, black, 0, 1575129610000, 32, 11.010859 -6, wuhan, black, 0, 1575129611000, 33, 11.110190 -6, wuhan, black, 0, 1575129612000, 24, 18.628721 -6, wuhan, black, 0, 1575129613000, 30, 16.044831 -6, wuhan, black, 0, 1575129614000, 29, 14.617854 -6, wuhan, black, 0, 1575129615000, 31, 15.591157 -6, wuhan, black, 0, 1575129616000, 31, 12.486593 -6, wuhan, black, 0, 1575129617000, 21, 17.680152 -6, wuhan, black, 0, 1575129618000, 27, 10.341043 -6, wuhan, black, 0, 1575129619000, 28, 13.359138 -6, wuhan, black, 0, 1575129620000, 30, 19.654174 -6, wuhan, black, 0, 1575129621000, 28, 18.037469 -6, wuhan, black, 0, 1575129622000, 25, 18.404051 -6, wuhan, black, 0, 1575129623000, 16, 14.856599 -6, wuhan, black, 0, 1575129624000, 29, 19.552920 -6, wuhan, black, 0, 1575129625000, 17, 13.434096 -6, wuhan, black, 0, 1575129626000, 27, 17.019559 -6, wuhan, black, 0, 1575129627000, 26, 15.173058 -6, wuhan, black, 0, 1575129628000, 32, 12.826764 -6, wuhan, black, 0, 1575129629000, 26, 17.535447 -6, wuhan, black, 0, 1575129630000, 21, 14.249137 -6, wuhan, black, 0, 1575129631000, 17, 17.047627 -6, wuhan, black, 0, 1575129632000, 27, 16.650397 -6, wuhan, black, 0, 1575129633000, 15, 13.081019 -6, wuhan, black, 0, 1575129634000, 31, 16.957137 -6, wuhan, black, 0, 1575129635000, 16, 14.120849 -6, wuhan, black, 0, 1575129636000, 20, 19.559244 -6, wuhan, black, 0, 1575129637000, 24, 17.951023 -6, wuhan, black, 0, 1575129638000, 28, 12.034821 -6, wuhan, black, 0, 1575129639000, 27, 19.410968 -6, wuhan, 
black, 0, 1575129640000, 32, 19.163660 -6, wuhan, black, 0, 1575129641000, 19, 18.268331 -6, wuhan, black, 0, 1575129642000, 17, 13.487017 -6, wuhan, black, 0, 1575129643000, 15, 19.085113 -6, wuhan, black, 0, 1575129644000, 31, 18.786878 -6, wuhan, black, 0, 1575129645000, 25, 17.901693 -6, wuhan, black, 0, 1575129646000, 16, 13.458948 -6, wuhan, black, 0, 1575129647000, 17, 16.372939 -6, wuhan, black, 0, 1575129648000, 20, 16.547324 -6, wuhan, black, 0, 1575129649000, 22, 14.801144 -6, wuhan, black, 0, 1575129650000, 16, 15.819640 -6, wuhan, black, 0, 1575129651000, 24, 16.569364 -6, wuhan, black, 0, 1575129652000, 29, 13.750153 -6, wuhan, black, 0, 1575129653000, 16, 14.846974 -6, wuhan, black, 0, 1575129654000, 23, 15.937862 -6, wuhan, black, 0, 1575129655000, 32, 19.969213 -6, wuhan, black, 0, 1575129656000, 17, 16.589262 -6, wuhan, black, 0, 1575129657000, 16, 15.983127 -6, wuhan, black, 0, 1575129658000, 32, 19.981177 -6, wuhan, black, 0, 1575129659000, 30, 15.526706 -6, wuhan, black, 0, 1575129660000, 30, 11.473325 -6, wuhan, black, 0, 1575129661000, 34, 14.734314 -6, wuhan, black, 0, 1575129662000, 31, 19.298395 -6, wuhan, black, 0, 1575129663000, 22, 16.150773 -6, wuhan, black, 0, 1575129664000, 18, 10.211251 -6, wuhan, black, 0, 1575129665000, 23, 16.773732 -6, wuhan, black, 0, 1575129666000, 22, 14.005852 -6, wuhan, black, 0, 1575129667000, 17, 13.159840 -6, wuhan, black, 0, 1575129668000, 26, 13.747615 -6, wuhan, black, 0, 1575129669000, 26, 14.601900 -6, wuhan, black, 0, 1575129670000, 29, 10.489225 -6, wuhan, black, 0, 1575129671000, 21, 16.890829 -6, wuhan, black, 0, 1575129672000, 26, 11.081302 -6, wuhan, black, 0, 1575129673000, 26, 19.336692 -6, wuhan, black, 0, 1575129674000, 22, 13.601869 -6, wuhan, black, 0, 1575129675000, 19, 11.627652 -6, wuhan, black, 0, 1575129676000, 19, 13.767122 -6, wuhan, black, 0, 1575129677000, 17, 15.320825 -6, wuhan, black, 0, 1575129678000, 16, 13.546837 -6, wuhan, black, 0, 1575129679000, 26, 19.562339 -6, wuhan, 
black, 0, 1575129680000, 24, 18.861545 -6, wuhan, black, 0, 1575129681000, 22, 11.048994 -6, wuhan, black, 0, 1575129682000, 32, 18.633559 -6, wuhan, black, 0, 1575129683000, 24, 11.423349 -6, wuhan, black, 0, 1575129684000, 31, 10.958536 -6, wuhan, black, 0, 1575129685000, 27, 16.700368 -6, wuhan, black, 0, 1575129686000, 32, 19.383603 -6, wuhan, black, 0, 1575129687000, 25, 12.817186 -6, wuhan, black, 0, 1575129688000, 21, 19.289010 -6, wuhan, black, 0, 1575129689000, 21, 18.514933 -6, wuhan, black, 0, 1575129690000, 22, 19.214387 -6, wuhan, black, 0, 1575129691000, 33, 11.673355 -6, wuhan, black, 0, 1575129692000, 23, 18.321138 -6, wuhan, black, 0, 1575129693000, 29, 11.371021 -6, wuhan, black, 0, 1575129694000, 32, 10.531389 -6, wuhan, black, 0, 1575129695000, 18, 15.921944 -6, wuhan, black, 0, 1575129696000, 27, 16.780309 -6, wuhan, black, 0, 1575129697000, 29, 12.028908 -6, wuhan, black, 0, 1575129698000, 32, 14.714637 -6, wuhan, black, 0, 1575129699000, 29, 12.753968 -7, suzhou, green, 1, 1575129600000, 24, 15.501768 -7, suzhou, green, 1, 1575129601000, 18, 17.583403 -7, suzhou, green, 1, 1575129602000, 15, 14.919566 -7, suzhou, green, 1, 1575129603000, 34, 11.870620 -7, suzhou, green, 1, 1575129604000, 29, 13.098385 -7, suzhou, green, 1, 1575129605000, 16, 17.498160 -7, suzhou, green, 1, 1575129606000, 30, 19.744556 -7, suzhou, green, 1, 1575129607000, 33, 16.558870 -7, suzhou, green, 1, 1575129608000, 16, 12.532103 -7, suzhou, green, 1, 1575129609000, 16, 16.504603 -7, suzhou, green, 1, 1575129610000, 25, 11.681246 -7, suzhou, green, 1, 1575129611000, 30, 10.620805 -7, suzhou, green, 1, 1575129612000, 22, 16.687937 -7, suzhou, green, 1, 1575129613000, 25, 17.911474 -7, suzhou, green, 1, 1575129614000, 32, 11.036519 -7, suzhou, green, 1, 1575129615000, 29, 16.162914 -7, suzhou, green, 1, 1575129616000, 30, 10.425992 -7, suzhou, green, 1, 1575129617000, 34, 19.630803 -7, suzhou, green, 1, 1575129618000, 29, 17.739556 -7, suzhou, green, 1, 1575129619000, 32, 
17.805220 -7, suzhou, green, 1, 1575129620000, 23, 15.547236 -7, suzhou, green, 1, 1575129621000, 19, 13.928559 -7, suzhou, green, 1, 1575129622000, 34, 15.063669 -7, suzhou, green, 1, 1575129623000, 33, 16.968293 -7, suzhou, green, 1, 1575129624000, 24, 17.425284 -7, suzhou, green, 1, 1575129625000, 29, 12.856950 -7, suzhou, green, 1, 1575129626000, 16, 10.769358 -7, suzhou, green, 1, 1575129627000, 19, 19.106196 -7, suzhou, green, 1, 1575129628000, 15, 18.987306 -7, suzhou, green, 1, 1575129629000, 18, 19.311755 -7, suzhou, green, 1, 1575129630000, 20, 11.854711 -7, suzhou, green, 1, 1575129631000, 17, 11.268703 -7, suzhou, green, 1, 1575129632000, 28, 18.451425 -7, suzhou, green, 1, 1575129633000, 30, 15.813294 -7, suzhou, green, 1, 1575129634000, 28, 14.549649 -7, suzhou, green, 1, 1575129635000, 30, 18.777474 -7, suzhou, green, 1, 1575129636000, 28, 18.789080 -7, suzhou, green, 1, 1575129637000, 22, 12.038230 -7, suzhou, green, 1, 1575129638000, 15, 10.294816 -7, suzhou, green, 1, 1575129639000, 18, 19.396735 -7, suzhou, green, 1, 1575129640000, 20, 17.763178 -7, suzhou, green, 1, 1575129641000, 27, 17.413355 -7, suzhou, green, 1, 1575129642000, 29, 12.723483 -7, suzhou, green, 1, 1575129643000, 29, 12.753222 -7, suzhou, green, 1, 1575129644000, 25, 11.097518 -7, suzhou, green, 1, 1575129645000, 27, 15.300300 -7, suzhou, green, 1, 1575129646000, 34, 11.625943 -7, suzhou, green, 1, 1575129647000, 25, 16.646308 -7, suzhou, green, 1, 1575129648000, 31, 10.940592 -7, suzhou, green, 1, 1575129649000, 25, 18.853796 -7, suzhou, green, 1, 1575129650000, 23, 16.183418 -7, suzhou, green, 1, 1575129651000, 34, 15.379113 -7, suzhou, green, 1, 1575129652000, 15, 10.424659 -7, suzhou, green, 1, 1575129653000, 25, 10.196040 -7, suzhou, green, 1, 1575129654000, 24, 15.591199 -7, suzhou, green, 1, 1575129655000, 31, 17.032220 -7, suzhou, green, 1, 1575129656000, 30, 14.349576 -7, suzhou, green, 1, 1575129657000, 21, 14.315072 -7, suzhou, green, 1, 1575129658000, 18, 12.297491 
-7, suzhou, green, 1, 1575129659000, 27, 13.134474 -7, suzhou, green, 1, 1575129660000, 28, 16.510527 -7, suzhou, green, 1, 1575129661000, 21, 17.905938 -7, suzhou, green, 1, 1575129662000, 16, 14.310720 -7, suzhou, green, 1, 1575129663000, 33, 12.415139 -7, suzhou, green, 1, 1575129664000, 28, 19.899145 -7, suzhou, green, 1, 1575129665000, 32, 18.874009 -7, suzhou, green, 1, 1575129666000, 34, 16.834873 -7, suzhou, green, 1, 1575129667000, 16, 18.383447 -7, suzhou, green, 1, 1575129668000, 29, 11.365641 -7, suzhou, green, 1, 1575129669000, 34, 13.137474 -7, suzhou, green, 1, 1575129670000, 18, 13.566243 -7, suzhou, green, 1, 1575129671000, 27, 16.454975 -7, suzhou, green, 1, 1575129672000, 21, 10.957562 -7, suzhou, green, 1, 1575129673000, 24, 14.916977 -7, suzhou, green, 1, 1575129674000, 28, 12.449565 -7, suzhou, green, 1, 1575129675000, 20, 10.217084 -7, suzhou, green, 1, 1575129676000, 32, 15.026526 -7, suzhou, green, 1, 1575129677000, 20, 10.291223 -7, suzhou, green, 1, 1575129678000, 24, 13.561227 -7, suzhou, green, 1, 1575129679000, 26, 10.091348 -7, suzhou, green, 1, 1575129680000, 25, 13.574391 -7, suzhou, green, 1, 1575129681000, 33, 17.308216 -7, suzhou, green, 1, 1575129682000, 15, 11.635235 -7, suzhou, green, 1, 1575129683000, 31, 19.967076 -7, suzhou, green, 1, 1575129684000, 25, 11.849431 -7, suzhou, green, 1, 1575129685000, 31, 16.161484 -7, suzhou, green, 1, 1575129686000, 20, 15.716389 -7, suzhou, green, 1, 1575129687000, 22, 17.486091 -7, suzhou, green, 1, 1575129688000, 29, 10.390956 -7, suzhou, green, 1, 1575129689000, 18, 18.549987 -7, suzhou, green, 1, 1575129690000, 21, 12.367505 -7, suzhou, green, 1, 1575129691000, 30, 12.345558 -7, suzhou, green, 1, 1575129692000, 17, 14.100245 -7, suzhou, green, 1, 1575129693000, 19, 11.093554 -7, suzhou, green, 1, 1575129694000, 26, 13.614985 -7, suzhou, green, 1, 1575129695000, 28, 13.753683 -7, suzhou, green, 1, 1575129696000, 21, 12.691688 -7, suzhou, green, 1, 1575129697000, 29, 17.595583 -7, 
suzhou, green, 1, 1575129698000, 20, 13.184472 -7, suzhou, green, 1, 1575129699000, 17, 14.349156 -8, haerbing, yellow, 2, 1575129600000, 28, 13.254039 -8, haerbing, yellow, 2, 1575129601000, 21, 17.815564 -8, haerbing, yellow, 2, 1575129602000, 19, 11.209747 -8, haerbing, yellow, 2, 1575129603000, 26, 16.861074 -8, haerbing, yellow, 2, 1575129604000, 31, 11.504868 -8, haerbing, yellow, 2, 1575129605000, 34, 19.224629 -8, haerbing, yellow, 2, 1575129606000, 23, 11.358596 -8, haerbing, yellow, 2, 1575129607000, 31, 12.635280 -8, haerbing, yellow, 2, 1575129608000, 26, 11.433395 -8, haerbing, yellow, 2, 1575129609000, 17, 13.468466 -8, haerbing, yellow, 2, 1575129610000, 33, 14.519953 -8, haerbing, yellow, 2, 1575129611000, 15, 14.241436 -8, haerbing, yellow, 2, 1575129612000, 16, 13.055456 -8, haerbing, yellow, 2, 1575129613000, 17, 13.772431 -8, haerbing, yellow, 2, 1575129614000, 19, 12.057286 -8, haerbing, yellow, 2, 1575129615000, 19, 13.647710 -8, haerbing, yellow, 2, 1575129616000, 20, 15.103685 -8, haerbing, yellow, 2, 1575129617000, 18, 16.627761 -8, haerbing, yellow, 2, 1575129618000, 26, 18.441795 -8, haerbing, yellow, 2, 1575129619000, 15, 18.348824 -8, haerbing, yellow, 2, 1575129620000, 32, 18.431012 -8, haerbing, yellow, 2, 1575129621000, 17, 10.795047 -8, haerbing, yellow, 2, 1575129622000, 34, 10.793828 -8, haerbing, yellow, 2, 1575129623000, 18, 16.664458 -8, haerbing, yellow, 2, 1575129624000, 22, 16.533227 -8, haerbing, yellow, 2, 1575129625000, 15, 12.870278 -8, haerbing, yellow, 2, 1575129626000, 31, 17.592231 -8, haerbing, yellow, 2, 1575129627000, 17, 10.092316 -8, haerbing, yellow, 2, 1575129628000, 22, 10.988946 -8, haerbing, yellow, 2, 1575129629000, 17, 14.493579 -8, haerbing, yellow, 2, 1575129630000, 20, 11.943546 -8, haerbing, yellow, 2, 1575129631000, 28, 19.871601 -8, haerbing, yellow, 2, 1575129632000, 16, 16.607235 -8, haerbing, yellow, 2, 1575129633000, 19, 10.197650 -8, haerbing, yellow, 2, 1575129634000, 19, 10.742104 -8, 
haerbing, yellow, 2, 1575129635000, 30, 18.785863 -8, haerbing, yellow, 2, 1575129636000, 16, 14.827333 -8, haerbing, yellow, 2, 1575129637000, 28, 13.826542 -8, haerbing, yellow, 2, 1575129638000, 16, 18.638533 -8, haerbing, yellow, 2, 1575129639000, 24, 17.832974 -8, haerbing, yellow, 2, 1575129640000, 31, 14.904558 -8, haerbing, yellow, 2, 1575129641000, 32, 16.034774 -8, haerbing, yellow, 2, 1575129642000, 33, 16.879997 -8, haerbing, yellow, 2, 1575129643000, 18, 16.981511 -8, haerbing, yellow, 2, 1575129644000, 19, 18.554924 -8, haerbing, yellow, 2, 1575129645000, 28, 12.138742 -8, haerbing, yellow, 2, 1575129646000, 27, 17.938497 -8, haerbing, yellow, 2, 1575129647000, 25, 16.919425 -8, haerbing, yellow, 2, 1575129648000, 15, 17.739521 -8, haerbing, yellow, 2, 1575129649000, 26, 16.017035 -8, haerbing, yellow, 2, 1575129650000, 20, 14.530903 -8, haerbing, yellow, 2, 1575129651000, 32, 10.938258 -8, haerbing, yellow, 2, 1575129652000, 18, 15.265134 -8, haerbing, yellow, 2, 1575129653000, 25, 11.227825 -8, haerbing, yellow, 2, 1575129654000, 32, 15.839538 -8, haerbing, yellow, 2, 1575129655000, 20, 12.813906 -8, haerbing, yellow, 2, 1575129656000, 34, 14.348205 -8, haerbing, yellow, 2, 1575129657000, 23, 13.158134 -8, haerbing, yellow, 2, 1575129658000, 27, 18.320920 -8, haerbing, yellow, 2, 1575129659000, 31, 10.848533 -8, haerbing, yellow, 2, 1575129660000, 21, 13.549193 -8, haerbing, yellow, 2, 1575129661000, 21, 10.043014 -8, haerbing, yellow, 2, 1575129662000, 17, 13.852666 -8, haerbing, yellow, 2, 1575129663000, 20, 13.046154 -8, haerbing, yellow, 2, 1575129664000, 15, 15.538251 -8, haerbing, yellow, 2, 1575129665000, 25, 15.422191 -8, haerbing, yellow, 2, 1575129666000, 23, 17.912156 -8, haerbing, yellow, 2, 1575129667000, 31, 10.870706 -8, haerbing, yellow, 2, 1575129668000, 15, 15.348852 -8, haerbing, yellow, 2, 1575129669000, 15, 19.605174 -8, haerbing, yellow, 2, 1575129670000, 20, 12.633162 -8, haerbing, yellow, 2, 1575129671000, 23, 15.347140 -8, 
haerbing, yellow, 2, 1575129672000, 23, 19.131427 -8, haerbing, yellow, 2, 1575129673000, 28, 17.031277 -8, haerbing, yellow, 2, 1575129674000, 25, 12.871234 -8, haerbing, yellow, 2, 1575129675000, 27, 12.112865 -8, haerbing, yellow, 2, 1575129676000, 28, 14.989160 -8, haerbing, yellow, 2, 1575129677000, 34, 12.925199 -8, haerbing, yellow, 2, 1575129678000, 30, 11.244869 -8, haerbing, yellow, 2, 1575129679000, 34, 13.189385 -8, haerbing, yellow, 2, 1575129680000, 32, 12.347545 -8, haerbing, yellow, 2, 1575129681000, 29, 14.551418 -8, haerbing, yellow, 2, 1575129682000, 30, 14.502223 -8, haerbing, yellow, 2, 1575129683000, 32, 13.304706 -8, haerbing, yellow, 2, 1575129684000, 25, 12.030741 -8, haerbing, yellow, 2, 1575129685000, 17, 16.387617 -8, haerbing, yellow, 2, 1575129686000, 15, 19.766795 -8, haerbing, yellow, 2, 1575129687000, 21, 16.533866 -8, haerbing, yellow, 2, 1575129688000, 17, 11.657003 -8, haerbing, yellow, 2, 1575129689000, 34, 12.667008 -8, haerbing, yellow, 2, 1575129690000, 22, 15.673815 -8, haerbing, yellow, 2, 1575129691000, 22, 15.767975 -8, haerbing, yellow, 2, 1575129692000, 31, 19.982548 -8, haerbing, yellow, 2, 1575129693000, 29, 19.036149 -8, haerbing, yellow, 2, 1575129694000, 24, 16.044736 -8, haerbing, yellow, 2, 1575129695000, 19, 12.138802 -8, haerbing, yellow, 2, 1575129696000, 28, 17.771396 -8, haerbing, yellow, 2, 1575129697000, 31, 16.321497 -8, haerbing, yellow, 2, 1575129698000, 25, 15.864515 -8, haerbing, yellow, 2, 1575129699000, 25, 16.492443 -9, shijiazhuang, blue, 0, 1575129600000, 23, 16.002889 -9, shijiazhuang, blue, 0, 1575129601000, 26, 17.034610 -9, shijiazhuang, blue, 0, 1575129602000, 29, 12.892319 -9, shijiazhuang, blue, 0, 1575129603000, 34, 15.321807 -9, shijiazhuang, blue, 0, 1575129604000, 29, 12.562642 -9, shijiazhuang, blue, 0, 1575129605000, 32, 17.190246 -9, shijiazhuang, blue, 0, 1575129606000, 19, 15.361774 -9, shijiazhuang, blue, 0, 1575129607000, 26, 15.022364 -9, shijiazhuang, blue, 0, 1575129608000, 
31, 14.837084 -9, shijiazhuang, blue, 0, 1575129609000, 25, 11.554289 -9, shijiazhuang, blue, 0, 1575129610000, 21, 15.313973 -9, shijiazhuang, blue, 0, 1575129611000, 27, 18.621783 -9, shijiazhuang, blue, 0, 1575129612000, 31, 18.018101 -9, shijiazhuang, blue, 0, 1575129613000, 23, 14.421450 -9, shijiazhuang, blue, 0, 1575129614000, 28, 10.833142 -9, shijiazhuang, blue, 0, 1575129615000, 33, 18.169837 -9, shijiazhuang, blue, 0, 1575129616000, 21, 18.772730 -9, shijiazhuang, blue, 0, 1575129617000, 24, 18.893146 -9, shijiazhuang, blue, 0, 1575129618000, 24, 10.290187 -9, shijiazhuang, blue, 0, 1575129619000, 23, 17.393345 -9, shijiazhuang, blue, 0, 1575129620000, 30, 12.949215 -9, shijiazhuang, blue, 0, 1575129621000, 19, 19.267621 -9, shijiazhuang, blue, 0, 1575129622000, 33, 14.831735 -9, shijiazhuang, blue, 0, 1575129623000, 21, 14.711125 -9, shijiazhuang, blue, 0, 1575129624000, 16, 17.168485 -9, shijiazhuang, blue, 0, 1575129625000, 17, 16.426433 -9, shijiazhuang, blue, 0, 1575129626000, 19, 13.879050 -9, shijiazhuang, blue, 0, 1575129627000, 21, 18.308168 -9, shijiazhuang, blue, 0, 1575129628000, 17, 10.845681 -9, shijiazhuang, blue, 0, 1575129629000, 20, 10.238272 -9, shijiazhuang, blue, 0, 1575129630000, 19, 19.424976 -9, shijiazhuang, blue, 0, 1575129631000, 31, 13.885909 -9, shijiazhuang, blue, 0, 1575129632000, 15, 19.264740 -9, shijiazhuang, blue, 0, 1575129633000, 30, 12.460645 -9, shijiazhuang, blue, 0, 1575129634000, 27, 17.608036 -9, shijiazhuang, blue, 0, 1575129635000, 25, 13.493812 -9, shijiazhuang, blue, 0, 1575129636000, 19, 10.955939 -9, shijiazhuang, blue, 0, 1575129637000, 24, 11.956587 -9, shijiazhuang, blue, 0, 1575129638000, 15, 19.141381 -9, shijiazhuang, blue, 0, 1575129639000, 24, 14.801530 -9, shijiazhuang, blue, 0, 1575129640000, 17, 14.347318 -9, shijiazhuang, blue, 0, 1575129641000, 29, 14.803237 -9, shijiazhuang, blue, 0, 1575129642000, 28, 10.342297 -9, shijiazhuang, blue, 0, 1575129643000, 29, 19.368282 -9, shijiazhuang, blue, 
0, 1575129644000, 31, 17.491654 -9, shijiazhuang, blue, 0, 1575129645000, 18, 13.161736 -9, shijiazhuang, blue, 0, 1575129646000, 17, 16.067354 -9, shijiazhuang, blue, 0, 1575129647000, 18, 13.736465 -9, shijiazhuang, blue, 0, 1575129648000, 23, 19.103276 -9, shijiazhuang, blue, 0, 1575129649000, 29, 16.075892 -9, shijiazhuang, blue, 0, 1575129650000, 21, 10.728566 -9, shijiazhuang, blue, 0, 1575129651000, 15, 18.921849 -9, shijiazhuang, blue, 0, 1575129652000, 24, 16.914709 -9, shijiazhuang, blue, 0, 1575129653000, 19, 13.501651 -9, shijiazhuang, blue, 0, 1575129654000, 19, 13.538347 -9, shijiazhuang, blue, 0, 1575129655000, 16, 13.261095 -9, shijiazhuang, blue, 0, 1575129656000, 32, 16.315746 -9, shijiazhuang, blue, 0, 1575129657000, 27, 16.400939 -9, shijiazhuang, blue, 0, 1575129658000, 24, 13.321819 -9, shijiazhuang, blue, 0, 1575129659000, 27, 19.070181 -9, shijiazhuang, blue, 0, 1575129660000, 27, 13.040922 -9, shijiazhuang, blue, 0, 1575129661000, 32, 10.872530 -9, shijiazhuang, blue, 0, 1575129662000, 28, 16.428657 -9, shijiazhuang, blue, 0, 1575129663000, 32, 13.883854 -9, shijiazhuang, blue, 0, 1575129664000, 33, 14.299554 -9, shijiazhuang, blue, 0, 1575129665000, 30, 16.445130 -9, shijiazhuang, blue, 0, 1575129666000, 15, 18.059404 -9, shijiazhuang, blue, 0, 1575129667000, 21, 12.348847 -9, shijiazhuang, blue, 0, 1575129668000, 32, 13.315378 -9, shijiazhuang, blue, 0, 1575129669000, 17, 15.689507 -9, shijiazhuang, blue, 0, 1575129670000, 22, 15.591808 -9, shijiazhuang, blue, 0, 1575129671000, 27, 16.386065 -9, shijiazhuang, blue, 0, 1575129672000, 25, 10.564803 -9, shijiazhuang, blue, 0, 1575129673000, 20, 12.276544 -9, shijiazhuang, blue, 0, 1575129674000, 26, 15.828786 -9, shijiazhuang, blue, 0, 1575129675000, 18, 12.236420 -9, shijiazhuang, blue, 0, 1575129676000, 15, 19.439522 -9, shijiazhuang, blue, 0, 1575129677000, 19, 19.831531 -9, shijiazhuang, blue, 0, 1575129678000, 22, 17.115744 -9, shijiazhuang, blue, 0, 1575129679000, 29, 19.879456 -9, 
shijiazhuang, blue, 0, 1575129680000, 34, 10.207136
-9, shijiazhuang, blue, 0, 1575129681000, 16, 17.633523
-9, shijiazhuang, blue, 0, 1575129682000, 15, 14.227873
-9, shijiazhuang, blue, 0, 1575129683000, 34, 12.027768
-9, shijiazhuang, blue, 0, 1575129684000, 22, 11.376610
-9, shijiazhuang, blue, 0, 1575129685000, 21, 11.711299
-9, shijiazhuang, blue, 0, 1575129686000, 33, 14.281126
-9, shijiazhuang, blue, 0, 1575129687000, 31, 10.895302
-9, shijiazhuang, blue, 0, 1575129688000, 31, 13.971350
-9, shijiazhuang, blue, 0, 1575129689000, 15, 15.262790
-9, shijiazhuang, blue, 0, 1575129690000, 23, 12.440568
-9, shijiazhuang, blue, 0, 1575129691000, 32, 19.731267
-9, shijiazhuang, blue, 0, 1575129692000, 22, 10.518092
-9, shijiazhuang, blue, 0, 1575129693000, 34, 17.863021
-9, shijiazhuang, blue, 0, 1575129694000, 28, 11.478909
-9, shijiazhuang, blue, 0, 1575129695000, 16, 15.075524
-9, shijiazhuang, blue, 0, 1575129696000, 16, 10.292127
-9, shijiazhuang, blue, 0, 1575129697000, 22, 13.716012
-9, shijiazhuang, blue, 0, 1575129698000, 32, 10.906551
-9, shijiazhuang, blue, 0, 1575129699000, 19, 18.386868
\ No newline at end of file
diff --git a/importSampleData/go.mod b/importSampleData/go.mod
deleted file mode 100644
index d2e58d302b3c917922206cbfc3a7d5afef8266c9..0000000000000000000000000000000000000000
--- a/importSampleData/go.mod
+++ /dev/null
@@ -1,8 +0,0 @@
-module github.com/taosdata/TDengine/importSampleData
-
-go 1.13
-
-require (
-	github.com/pelletier/go-toml v1.9.0
-	github.com/taosdata/driver-go v0.0.0-20210415143420-d99751356e28
-)
diff --git a/importSampleData/import/import_config.go b/importSampleData/import/import_config.go
deleted file mode 100644
index 68587a35197b3cca496892514415f072ad48ce86..0000000000000000000000000000000000000000
--- a/importSampleData/import/import_config.go
+++ /dev/null
@@ -1,81 +0,0 @@
-/*
- * Copyright (c) 2019 TAOS Data, Inc.
- *
- * This program is free software: you can use, redistribute, and/or modify
- * it under the terms of the GNU Affero General Public License, version 3
- * or later ("AGPL"), as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.
- *
- * You should have received a copy of the GNU Affero General Public License
- * along with this program. If not, see .
- */
-
-package dataimport
-
-import (
-	"encoding/json"
-	"fmt"
-	"path/filepath"
-	"sync"
-
-	"github.com/pelletier/go-toml"
-)
-
-var (
-	cfg  Config
-	once sync.Once
-)
-
-// Config include all scene import config
-type Config struct {
-	UserCases map[string]CaseConfig
-}
-
-// CaseConfig include the sample data config and tdengine config
-type CaseConfig struct {
-	Format              string
-	FilePath            string
-	Separator           string
-	StName              string
-	SubTableName        string
-	Timestamp           string
-	TimestampType       string
-	TimestampTypeFormat string
-	Tags                []FieldInfo
-	Fields              []FieldInfo
-}
-
-// FieldInfo is field or tag info
-type FieldInfo struct {
-	Name string
-	Type string
-}
-
-// LoadConfig will load the specified file config
-func LoadConfig(filePath string) Config {
-	once.Do(func() {
-		filePath, err := filepath.Abs(filePath)
-		if err != nil {
-			panic(err)
-		}
-		fmt.Printf("parse toml file once. filePath: %s\n", filePath)
-		tree, err := toml.LoadFile(filePath)
-		if err != nil {
-			panic(err)
-		}
-
-		bytes, err := json.Marshal(tree.ToMap())
-		if err != nil {
-			panic(err)
-		}
-
-		err = json.Unmarshal(bytes, &cfg.UserCases)
-		if err != nil {
-			panic(err)
-		}
-	})
-	return cfg
-}
diff --git a/packaging/cfg/jh_taos.cfg b/packaging/cfg/jh_taos.cfg
deleted file mode 100644
index ba3fa8e462d2c9bcbb87046204cb7bd87fa27f9d..0000000000000000000000000000000000000000
--- a/packaging/cfg/jh_taos.cfg
+++ /dev/null
@@ -1,286 +0,0 @@
-########################################################
-# #
-# jh_iot Configuration #
-# Any questions, please email jhkj@njsteel.com.cn #
-# #
-########################################################
-
-# first fully qualified domain name (FQDN) for jh_iot system
-# firstEp hostname:6030
-
-# local fully qualified domain name (FQDN)
-# fqdn hostname
-
-# first port number for the connection (12 continuous UDP/TCP port number are used)
-# serverPort 6030
-
-# log file's directory
-# logDir /var/log/jh_taos
-
-# data file's directory
-# dataDir /var/lib/jh_taos
-
-# temporary file's directory
-# tempDir /tmp/
-
-# the arbitrator's fully qualified domain name (FQDN) for jh_iot system, for cluster only
-# arbitrator arbitrator_hostname:6042
-
-# number of threads per CPU core
-# numOfThreadsPerCore 1.0
-
-# number of threads to commit cache data
-# numOfCommitThreads 4
-
-# the proportion of total CPU cores available for query processing
-# 2.0: the query threads will be set to double of the CPU cores.
-# 1.0: all CPU cores are available for query processing [default].
-# 0.5: only half of the CPU cores are available for query.
-# 0.0: only one core available.
-# ratioOfQueryCores 1.0
-
-# the last_row/first/last aggregator will not change the original column name in the result fields
-keepColumnName 1
-
-# number of management nodes in the system
-# numOfMnodes 3
-
-# enable/disable backuping vnode directory when removing vnode
-# vnodeBak 1
-
-# enable/disable installation / usage report
-# telemetryReporting 1
-
-# enable/disable load balancing
-# balance 1
-
-# role for dnode. 0 - any, 1 - mnode, 2 - dnode
-# role 0
-
-# max timer control blocks
-# maxTmrCtrl 512
-
-# time interval of system monitor, seconds
-# monitorInterval 30
-
-# number of seconds allowed for a dnode to be offline, for cluster only
-# offlineThreshold 864000
-
-# RPC re-try timer, millisecond
-# rpcTimer 300
-
-# RPC maximum time for ack, seconds.
-# rpcMaxTime 600
-
-# time interval of dnode status reporting to mnode, seconds, for cluster only
-# statusInterval 1
-
-# time interval of heart beat from shell to dnode, seconds
-# shellActivityTimer 3
-
-# minimum sliding window time, milli-second
-# minSlidingTime 10
-
-# minimum time window, milli-second
-# minIntervalTime 10
-
-# maximum delay before launching a stream computation, milli-second
-# maxStreamCompDelay 20000
-
-# maximum delay before launching a stream computation for the first time, milli-second
-# maxFirstStreamCompDelay 10000
-
-# retry delay when a stream computation fails, milli-second
-# retryStreamCompDelay 10
-
-# the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9
-# streamCompDelayRatio 0.1
-
-# max number of vgroups per db, 0 means configured automatically
-# maxVgroupsPerDb 0
-
-# max number of tables per vnode
-# maxTablesPerVnode 1000000
-
-# cache block size (Mbyte)
-# cache 16
-
-# number of cache blocks per vnode
-# blocks 6
-
-# number of days per DB file
-# days 10
-
-# number of days to keep DB file
-# keep 3650
-
-# minimum rows of records in file block
-# minRows 100
-
-# maximum rows of records in file block
-# maxRows 4096
-
-# the number of acknowledgments required for successful data writing
-# quorum 1
-
-# enable/disable compression
-# comp 2
-
-# write ahead log (WAL) level, 0: no wal; 1: write wal, but no fsync; 2: write wal, and call fsync
-# walLevel 1
-
-# if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away
-# fsync 3000
-
-# number of replications, for cluster only
-# replica 1
-
-# the compressed rpc message, option:
-# -1 (no compression)
-# 0 (all message compressed),
-# > 0 (rpc message body which larger than this value will be compressed)
-# compressMsgSize -1
-
-# max length of an SQL
-# maxSQLLength 65480
-
-# max length of WildCards
-# maxWildCardsLength 100
-
-# the maximum number of records allowed for super table time sorting
-# maxNumOfOrderedRes 100000
-
-# system time zone
-# timezone Asia/Shanghai (CST, +0800)
-# system time zone (for windows 10)
-# timezone UTC-8
-
-# system locale
-# locale en_US.UTF-8
-
-# default system charset
-# charset UTF-8
-
-# max number of connections allowed in dnode
-# maxShellConns 5000
-
-# max number of connections allowed in client
-# maxConnections 5000
-
-# stop writing logs when the disk size of the log folder is less than this value
-# minimalLogDirGB 0.1
-
-# stop writing temporary files when the disk size of the tmp folder is less than this value
-# minimalTmpDirGB 0.1
-
-# if disk free space is less than this value, server service exit directly within startup process
-# minimalDataDirGB 0.1
-
-# One mnode is equal to the number of vnode consumed
-# mnodeEqualVnodeNum 4
-
-# enable/disable http service
-# http 1
-
-# enable/disable system monitor
-# monitor 1
-
-# enable/disable recording the SQL statements via restful interface
-# httpEnableRecordSql 0
-
-# number of threads used to process http requests
-# httpMaxThreads 2
-
-# maximum number of rows returned by the restful interface
-# restfulRowLimit 10240
-
-# The following parameter is used to limit the maximum number of lines in log files.
-# max number of lines per log filters
-# numOfLogLines 10000000
-
-# enable/disable async log
-# asyncLog 1
-
-# time of keeping log files, days
-# logKeepDays 0
-
-
-# The following parameters are used for debug purpose only.
-# debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR
-# 131: output warning and error
-# 135: output debug, warning and error
-# 143: output trace, debug, warning and error to log
-# 199: output debug, warning and error to both screen and file
-# 207: output trace, debug, warning and error to both screen and file
-
-# debug flag for all log type, take effect when non-zero value
-# debugFlag 0
-
-# debug flag for meta management messages
-# mDebugFlag 135
-
-# debug flag for dnode messages
-# dDebugFlag 135
-
-# debug flag for sync module
-# sDebugFlag 135
-
-# debug flag for WAL
-# wDebugFlag 135
-
-# debug flag for SDB
-# sdbDebugFlag 135
-
-# debug flag for RPC
-# rpcDebugFlag 131
-
-# debug flag for TIMER
-# tmrDebugFlag 131
-
-# debug flag for jh_iot client
-# cDebugFlag 131
-
-# debug flag for JNI
-# jniDebugFlag 131
-
-# debug flag for storage
-# uDebugFlag 131
-
-# debug flag for http server
-# httpDebugFlag 131
-
-# debug flag for monitor
-# monDebugFlag 131
-
-# debug flag for query
-# qDebugFlag 131
-
-# debug flag for vnode
-# vDebugFlag 131
-
-# debug flag for TSDB
-# tsdbDebugFlag 131
-
-# debug flag for continue query
-# cqDebugFlag 131
-
-# enable/disable recording the SQL in client
-# enableRecordSql 0
-
-# generate core file when service crash
-# enableCoreFile 1
-
-# maximum display width of binary and nchar fields in the shell. The parts exceeding this limit will be hidden
-# maxBinaryDisplayWidth 30
-
-# enable/disable stream (continuous query)
-# stream 1
-
-# in retrieve blocking model, only in 50% query threads will be used in query processing in dnode
-# retrieveBlockingModel 0
-
-# the maximum allowed query buffer size in MB during query processing for each data node
-# -1 no limit (default)
-# 0 no query allowed, queries are disabled
-# queryBufferSize -1
-
diff --git a/packaging/cfg/jh_taosd.service b/packaging/cfg/jh_taosd.service
deleted file mode 100644
index d02eb406131b5273785753178b7d6326203bfaef..0000000000000000000000000000000000000000
--- a/packaging/cfg/jh_taosd.service
+++ /dev/null
@@ -1,21 +0,0 @@
-[Unit]
-Description=jh_iot server service
-After=network-online.target
-Wants=network-online.target
-
-[Service]
-Type=simple
-ExecStart=/usr/bin/jh_taosd
-ExecStartPre=/usr/local/jh_taos/bin/startPre.sh
-TimeoutStopSec=1000000s
-LimitNOFILE=infinity
-LimitNPROC=infinity
-LimitCORE=infinity
-TimeoutStartSec=0
-StandardOutput=null
-Restart=always
-StartLimitBurst=3
-StartLimitInterval=60s
-
-[Install]
-WantedBy=multi-user.target
diff --git a/packaging/cfg/khserver.service b/packaging/cfg/khserver.service
deleted file mode 100644
index 005afaddc06f6ba8e02112c074a7b3575d5974de..0000000000000000000000000000000000000000
--- a/packaging/cfg/khserver.service
+++ /dev/null
@@ -1,21 +0,0 @@
-[Unit]
-Description=KingHistorian server service
-After=network-online.target
-Wants=network-online.target
-
-[Service]
-Type=simple
-ExecStart=/usr/bin/khserver
-ExecStartPre=/usr/local/kinghistorian/bin/startPre.sh
-TimeoutStopSec=1000000s
-LimitNOFILE=infinity
-LimitNPROC=infinity
-LimitCORE=infinity
-TimeoutStartSec=0
-StandardOutput=null
-Restart=always
-StartLimitBurst=3
-StartLimitInterval=60s
-
-[Install]
-WantedBy=multi-user.target
diff --git a/packaging/cfg/kinghistorian.cfg b/packaging/cfg/kinghistorian.cfg
deleted file mode 100644
index 0d0d0b9f3eaae6539f9e431d51bb532270179226..0000000000000000000000000000000000000000
--- a/packaging/cfg/kinghistorian.cfg
+++ /dev/null
@@ -1,286 +0,0 @@
-########################################################
-# #
-# KingHistorian Configuration #
-# Any questions, please email support@wellintech.com #
-# #
-########################################################
-
-# first fully qualified domain name (FQDN) for KingHistorian system
-# firstEp hostname:6030
-
-# local fully qualified domain name (FQDN)
-# fqdn hostname
-
-# first port number for the connection (12 continuous UDP/TCP port number are used)
-# serverPort 6030
-
-# log file's directory
-# logDir /var/log/kinghistorian
-
-# data file's directory
-# dataDir /var/lib/kinghistorian
-
-# temporary file's directory
-# tempDir /tmp/
-
-# the arbitrator's fully qualified domain name (FQDN) for KingHistorian system, for cluster only
-# arbitrator arbitrator_hostname:6042
-
-# number of threads per CPU core
-# numOfThreadsPerCore 1.0
-
-# number of threads to commit cache data
-# numOfCommitThreads 4
-
-# the proportion of total CPU cores available for query processing
-# 2.0: the query threads will be set to double of the CPU cores.
-# 1.0: all CPU cores are available for query processing [default].
-# 0.5: only half of the CPU cores are available for query.
-# 0.0: only one core available.
-# ratioOfQueryCores 1.0
-
-# the last_row/first/last aggregator will not change the original column name in the result fields
-keepColumnName 1
-
-# number of management nodes in the system
-# numOfMnodes 3
-
-# enable/disable backuping vnode directory when removing vnode
-# vnodeBak 1
-
-# enable/disable installation / usage report
-# telemetryReporting 1
-
-# enable/disable load balancing
-# balance 1
-
-# role for dnode.
0 - any, 1 - mnode, 2 - dnode -# role 0 - -# max timer control blocks -# maxTmrCtrl 512 - -# time interval of system monitor, seconds -# monitorInterval 30 - -# number of seconds allowed for a dnode to be offline, for cluster only -# offlineThreshold 864000 - -# RPC re-try timer, millisecond -# rpcTimer 300 - -# RPC maximum time for ack, seconds. -# rpcMaxTime 600 - -# time interval of dnode status reporting to mnode, seconds, for cluster only -# statusInterval 1 - -# time interval of heart beat from shell to dnode, seconds -# shellActivityTimer 3 - -# minimum sliding window time, milli-second -# minSlidingTime 10 - -# minimum time window, milli-second -# minIntervalTime 10 - -# maximum delay before launching a stream computation, milli-second -# maxStreamCompDelay 20000 - -# maximum delay before launching a stream computation for the first time, milli-second -# maxFirstStreamCompDelay 10000 - -# retry delay when a stream computation fails, milli-second -# retryStreamCompDelay 10 - -# the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9 -# streamCompDelayRatio 0.1 - -# max number of vgroups per db, 0 means configured automatically -# maxVgroupsPerDb 0 - -# max number of tables per vnode -# maxTablesPerVnode 1000000 - -# cache block size (Mbyte) -# cache 16 - -# number of cache blocks per vnode -# blocks 6 - -# number of days per DB file -# days 10 - -# number of days to keep DB file -# keep 3650 - -# minimum rows of records in file block -# minRows 100 - -# maximum rows of records in file block -# maxRows 4096 - -# the number of acknowledgments required for successful data writing -# quorum 1 - -# enable/disable compression -# comp 2 - -# write ahead log (WAL) level, 0: no wal; 1: write wal, but no fysnc; 2: write wal, and call fsync -# walLevel 1 - -# if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away -# fsync 3000 - -# number of replications, for cluster 
only -# replica 1 - -# the compressed rpc message, option: -# -1 (no compression) -# 0 (all message compressed), -# > 0 (rpc message body which larger than this value will be compressed) -# compressMsgSize -1 - -# max length of an SQL -# maxSQLLength 65480 - -# max length of WildCards -# maxWildCardsLength 100 - -# the maximum number of records allowed for super table time sorting -# maxNumOfOrderedRes 100000 - -# system time zone -# timezone Asia/Shanghai (CST, +0800) -# system time zone (for windows 10) -# timezone UTC-8 - -# system locale -# locale en_US.UTF-8 - -# default system charset -# charset UTF-8 - -# max number of connections allowed in dnode -# maxShellConns 5000 - -# max number of connections allowed in client -# maxConnections 5000 - -# stop writing logs when the disk size of the log folder is less than this value -# minimalLogDirGB 0.1 - -# stop writing temporary files when the disk size of the tmp folder is less than this value -# minimalTmpDirGB 0.1 - -# if disk free space is less than this value, khserver service exit directly within startup process -# minimalDataDirGB 0.1 - -# One mnode is equal to the number of vnode consumed -# mnodeEqualVnodeNum 4 - -# enbale/disable http service -# http 1 - -# enable/disable system monitor -# monitor 1 - -# enable/disable recording the SQL statements via restful interface -# httpEnableRecordSql 0 - -# number of threads used to process http requests -# httpMaxThreads 2 - -# maximum number of rows returned by the restful interface -# restfulRowLimit 10240 - -# The following parameter is used to limit the maximum number of lines in log files. -# max number of lines per log filters -# numOfLogLines 10000000 - -# enable/disable async log -# asyncLog 1 - -# time of keeping log files, days -# logKeepDays 0 - - -# The following parameters are used for debug purpose only. 
-# debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR -# 131: output warning and error -# 135: output debug, warning and error -# 143: output trace, debug, warning and error to log -# 199: output debug, warning and error to both screen and file -# 207: output trace, debug, warning and error to both screen and file - -# debug flag for all log type, take effect when non-zero value -# debugFlag 0 - -# debug flag for meta management messages -# mDebugFlag 135 - -# debug flag for dnode messages -# dDebugFlag 135 - -# debug flag for sync module -# sDebugFlag 135 - -# debug flag for WAL -# wDebugFlag 135 - -# debug flag for SDB -# sdbDebugFlag 135 - -# debug flag for RPC -# rpcDebugFlag 131 - -# debug flag for TIMER -# tmrDebugFlag 131 - -# debug flag for KingHistorian client -# cDebugFlag 131 - -# debug flag for JNI -# jniDebugFlag 131 - -# debug flag for storage -# uDebugFlag 131 - -# debug flag for http server -# httpDebugFlag 131 - -# debug flag for monitor -# monDebugFlag 131 - -# debug flag for query -# qDebugFlag 131 - -# debug flag for vnode -# vDebugFlag 131 - -# debug flag for TSDB -# tsdbDebugFlag 131 - -# debug flag for continue query -# cqDebugFlag 131 - -# enable/disable recording the SQL in kinghistorian client -# enableRecordSql 0 - -# generate core file when service crash -# enableCoreFile 1 - -# maximum display width of binary and nchar fields in the shell. 
The parts exceeding this limit will be hidden -# maxBinaryDisplayWidth 30 - -# enable/disable stream (continuous query) -# stream 1 - -# in retrieve blocking model, only in 50% query threads will be used in query processing in dnode -# retrieveBlockingModel 0 - -# the maximum allowed query buffer size in MB during query processing for each data node -# -1 no limit (default) -# 0 no query allowed, queries are disabled -# queryBufferSize -1 - diff --git a/packaging/cfg/power.cfg b/packaging/cfg/power.cfg deleted file mode 100644 index 6f5e910a28c0471666243e275975243bc77d2fc5..0000000000000000000000000000000000000000 --- a/packaging/cfg/power.cfg +++ /dev/null @@ -1,286 +0,0 @@ -######################################################## -# # -# PowerDB Configuration # -# Any questions, please email support@taosdata.com # -# # -######################################################## - -# first fully qualified domain name (FQDN) for PowerDB system -# firstEp hostname:6030 - -# local fully qualified domain name (FQDN) -# fqdn hostname - -# first port number for the connection (12 continuous UDP/TCP port number are used) -# serverPort 6030 - -# log file's directory -# logDir /var/log/power - -# data file's directory -# dataDir /var/lib/power - -# temporary file's directory -# tempDir /tmp/ - -# the arbitrator's fully qualified domain name (FQDN) for PowerDB system, for cluster only -# arbitrator arbitrator_hostname:6042 - -# number of threads per CPU core -# numOfThreadsPerCore 1.0 - -# number of threads to commit cache data -# numOfCommitThreads 4 - -# the proportion of total CPU cores available for query processing -# 2.0: the query threads will be set to double of the CPU cores. -# 1.0: all CPU cores are available for query processing [default]. -# 0.5: only half of the CPU cores are available for query. -# 0.0: only one core available. 
-# ratioOfQueryCores 1.0 - -# the last_row/first/last aggregator will not change the original column name in the result fields -keepColumnName 1 - -# number of management nodes in the system -# numOfMnodes 3 - -# enable/disable backuping vnode directory when removing vnode -# vnodeBak 1 - -# enable/disable installation / usage report -# telemetryReporting 1 - -# enable/disable load balancing -# balance 1 - -# role for dnode. 0 - any, 1 - mnode, 2 - dnode -# role 0 - -# max timer control blocks -# maxTmrCtrl 512 - -# time interval of system monitor, seconds -# monitorInterval 30 - -# number of seconds allowed for a dnode to be offline, for cluster only -# offlineThreshold 864000 - -# RPC re-try timer, millisecond -# rpcTimer 300 - -# RPC maximum time for ack, seconds. -# rpcMaxTime 600 - -# time interval of dnode status reporting to mnode, seconds, for cluster only -# statusInterval 1 - -# time interval of heart beat from shell to dnode, seconds -# shellActivityTimer 3 - -# minimum sliding window time, milli-second -# minSlidingTime 10 - -# minimum time window, milli-second -# minIntervalTime 10 - -# maximum delay before launching a stream computation, milli-second -# maxStreamCompDelay 20000 - -# maximum delay before launching a stream computation for the first time, milli-second -# maxFirstStreamCompDelay 10000 - -# retry delay when a stream computation fails, milli-second -# retryStreamCompDelay 10 - -# the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9 -# streamCompDelayRatio 0.1 - -# max number of vgroups per db, 0 means configured automatically -# maxVgroupsPerDb 0 - -# max number of tables per vnode -# maxTablesPerVnode 1000000 - -# cache block size (Mbyte) -# cache 16 - -# number of cache blocks per vnode -# blocks 6 - -# number of days per DB file -# days 10 - -# number of days to keep DB file -# keep 3650 - -# minimum rows of records in file block -# minRows 100 - -# maximum rows of records in 
file block -# maxRows 4096 - -# the number of acknowledgments required for successful data writing -# quorum 1 - -# enable/disable compression -# comp 2 - -# write ahead log (WAL) level, 0: no wal; 1: write wal, but no fysnc; 2: write wal, and call fsync -# walLevel 1 - -# if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away -# fsync 3000 - -# number of replications, for cluster only -# replica 1 - -# the compressed rpc message, option: -# -1 (no compression) -# 0 (all message compressed), -# > 0 (rpc message body which larger than this value will be compressed) -# compressMsgSize -1 - -# max length of an SQL -# maxSQLLength 65480 - -# max length of WildCards -# maxWildCardsLength 100 - -# the maximum number of records allowed for super table time sorting -# maxNumOfOrderedRes 100000 - -# system time zone -# timezone Asia/Shanghai (CST, +0800) -# system time zone (for windows 10) -# timezone UTC-8 - -# system locale -# locale en_US.UTF-8 - -# default system charset -# charset UTF-8 - -# max number of connections allowed in dnode -# maxShellConns 5000 - -# max number of connections allowed in client -# maxConnections 5000 - -# stop writing logs when the disk size of the log folder is less than this value -# minimalLogDirGB 0.1 - -# stop writing temporary files when the disk size of the tmp folder is less than this value -# minimalTmpDirGB 0.1 - -# if disk free space is less than this value, powerd service exit directly within startup process -# minimalDataDirGB 0.1 - -# One mnode is equal to the number of vnode consumed -# mnodeEqualVnodeNum 4 - -# enbale/disable http service -# http 1 - -# enable/disable system monitor -# monitor 1 - -# enable/disable recording the SQL statements via restful interface -# httpEnableRecordSql 0 - -# number of threads used to process http requests -# httpMaxThreads 2 - -# maximum number of rows returned by the restful interface -# restfulRowLimit 10240 - -# The following parameter is used 
to limit the maximum number of lines in log files. -# max number of lines per log filters -# numOfLogLines 10000000 - -# enable/disable async log -# asyncLog 1 - -# time of keeping log files, days -# logKeepDays 0 - - -# The following parameters are used for debug purpose only. -# debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR -# 131: output warning and error -# 135: output debug, warning and error -# 143: output trace, debug, warning and error to log -# 199: output debug, warning and error to both screen and file -# 207: output trace, debug, warning and error to both screen and file - -# debug flag for all log type, take effect when non-zero value -# debugFlag 0 - -# debug flag for meta management messages -# mDebugFlag 135 - -# debug flag for dnode messages -# dDebugFlag 135 - -# debug flag for sync module -# sDebugFlag 135 - -# debug flag for WAL -# wDebugFlag 135 - -# debug flag for SDB -# sdbDebugFlag 135 - -# debug flag for RPC -# rpcDebugFlag 131 - -# debug flag for TAOS TIMER -# tmrDebugFlag 131 - -# debug flag for TDengine client -# cDebugFlag 131 - -# debug flag for JNI -# jniDebugFlag 131 - -# debug flag for storage -# uDebugFlag 131 - -# debug flag for http server -# httpDebugFlag 131 - -# debug flag for monitor -# monDebugFlag 131 - -# debug flag for query -# qDebugFlag 131 - -# debug flag for vnode -# vDebugFlag 131 - -# debug flag for TSDB -# tsdbDebugFlag 131 - -# debug flag for continue query -# cqDebugFlag 131 - -# enable/disable recording the SQL in power client -# enableRecordSql 0 - -# generate core file when service crash -# enableCoreFile 1 - -# maximum display width of binary and nchar fields in the shell. 
The parts exceeding this limit will be hidden -# maxBinaryDisplayWidth 30 - -# enable/disable stream (continuous query) -# stream 1 - -# in retrieve blocking model, only in 50% query threads will be used in query processing in dnode -# retrieveBlockingModel 0 - -# the maximum allowed query buffer size in MB during query processing for each data node -# -1 no limit (default) -# 0 no query allowed, queries are disabled -# queryBufferSize -1 - diff --git a/packaging/cfg/powerd.service b/packaging/cfg/powerd.service deleted file mode 100644 index 5aaad07ee8e992e74d6bdbdd36fafbe2236ab658..0000000000000000000000000000000000000000 --- a/packaging/cfg/powerd.service +++ /dev/null @@ -1,21 +0,0 @@ -[Unit] -Description=Power server service -After=network-online.target -Wants=network-online.target - -[Service] -Type=simple -ExecStart=/usr/bin/powerd -ExecStartPre=/usr/local/power/bin/startPre.sh -TimeoutStopSec=1000000s -LimitNOFILE=infinity -LimitNPROC=infinity -LimitCORE=infinity -TimeoutStartSec=0 -StandardOutput=null -Restart=always -StartLimitBurst=3 -StartLimitInterval=60s - -[Install] -WantedBy=multi-user.target diff --git a/packaging/cfg/prodb.cfg b/packaging/cfg/prodb.cfg deleted file mode 100644 index f84ea63bd6791b050c67befbc0c16ecb0ee553f1..0000000000000000000000000000000000000000 --- a/packaging/cfg/prodb.cfg +++ /dev/null @@ -1,286 +0,0 @@ -######################################################## -# # -# ProDB Configuration # -# Any questions, please email support@hanatech.com.cn # -# # -######################################################## - -# first fully qualified domain name (FQDN) for ProDB system -# firstEp hostname:6030 - -# local fully qualified domain name (FQDN) -# fqdn hostname - -# first port number for the connection (12 continuous UDP/TCP port number are used) -# serverPort 6030 - -# log file's directory -# logDir /var/log/ProDB - -# data file's directory -# dataDir /var/lib/ProDB - -# temporary file's directory -# tempDir /tmp/ - -# the 
arbitrator's fully qualified domain name (FQDN) for ProDB system, for cluster only -# arbitrator arbitrator_hostname:6042 - -# number of threads per CPU core -# numOfThreadsPerCore 1.0 - -# number of threads to commit cache data -# numOfCommitThreads 4 - -# the proportion of total CPU cores available for query processing -# 2.0: the query threads will be set to double of the CPU cores. -# 1.0: all CPU cores are available for query processing [default]. -# 0.5: only half of the CPU cores are available for query. -# 0.0: only one core available. -# ratioOfQueryCores 1.0 - -# the last_row/first/last aggregator will not change the original column name in the result fields -keepColumnName 1 - -# number of management nodes in the system -# numOfMnodes 3 - -# enable/disable backuping vnode directory when removing vnode -# vnodeBak 1 - -# enable/disable installation / usage report -# telemetryReporting 1 - -# enable/disable load balancing -# balance 1 - -# role for dnode. 0 - any, 1 - mnode, 2 - dnode -# role 0 - -# max timer control blocks -# maxTmrCtrl 512 - -# time interval of system monitor, seconds -# monitorInterval 30 - -# number of seconds allowed for a dnode to be offline, for cluster only -# offlineThreshold 864000 - -# RPC re-try timer, millisecond -# rpcTimer 300 - -# RPC maximum time for ack, seconds. 
-# rpcMaxTime 600 - -# time interval of dnode status reporting to mnode, seconds, for cluster only -# statusInterval 1 - -# time interval of heart beat from shell to dnode, seconds -# shellActivityTimer 3 - -# minimum sliding window time, milli-second -# minSlidingTime 10 - -# minimum time window, milli-second -# minIntervalTime 10 - -# maximum delay before launching a stream computation, milli-second -# maxStreamCompDelay 20000 - -# maximum delay before launching a stream computation for the first time, milli-second -# maxFirstStreamCompDelay 10000 - -# retry delay when a stream computation fails, milli-second -# retryStreamCompDelay 10 - -# the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9 -# streamCompDelayRatio 0.1 - -# max number of vgroups per db, 0 means configured automatically -# maxVgroupsPerDb 0 - -# max number of tables per vnode -# maxTablesPerVnode 1000000 - -# cache block size (Mbyte) -# cache 16 - -# number of cache blocks per vnode -# blocks 6 - -# number of days per DB file -# days 10 - -# number of days to keep DB file -# keep 3650 - -# minimum rows of records in file block -# minRows 100 - -# maximum rows of records in file block -# maxRows 4096 - -# the number of acknowledgments required for successful data writing -# quorum 1 - -# enable/disable compression -# comp 2 - -# write ahead log (WAL) level, 0: no wal; 1: write wal, but no fysnc; 2: write wal, and call fsync -# walLevel 1 - -# if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away -# fsync 3000 - -# number of replications, for cluster only -# replica 1 - -# the compressed rpc message, option: -# -1 (no compression) -# 0 (all message compressed), -# > 0 (rpc message body which larger than this value will be compressed) -# compressMsgSize -1 - -# max length of an SQL -# maxSQLLength 65480 - -# max length of WildCards -# maxWildCardsLength 100 - -# the maximum number of records 
allowed for super table time sorting -# maxNumOfOrderedRes 100000 - -# system time zone -# timezone Asia/Shanghai (CST, +0800) -# system time zone (for windows 10) -# timezone UTC-8 - -# system locale -# locale en_US.UTF-8 - -# default system charset -# charset UTF-8 - -# max number of connections allowed in dnode -# maxShellConns 5000 - -# max number of connections allowed in client -# maxConnections 5000 - -# stop writing logs when the disk size of the log folder is less than this value -# minimalLogDirGB 0.1 - -# stop writing temporary files when the disk size of the tmp folder is less than this value -# minimalTmpDirGB 0.1 - -# if disk free space is less than this value, prodbs service exit directly within startup process -# minimalDataDirGB 0.1 - -# One mnode is equal to the number of vnode consumed -# mnodeEqualVnodeNum 4 - -# enbale/disable http service -# http 1 - -# enable/disable system monitor -# monitor 1 - -# enable/disable recording the SQL statements via restful interface -# httpEnableRecordSql 0 - -# number of threads used to process http requests -# httpMaxThreads 2 - -# maximum number of rows returned by the restful interface -# restfulRowLimit 10240 - -# The following parameter is used to limit the maximum number of lines in log files. -# max number of lines per log filters -# numOfLogLines 10000000 - -# enable/disable async log -# asyncLog 1 - -# time of keeping log files, days -# logKeepDays 0 - - -# The following parameters are used for debug purpose only. 
-# debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR -# 131: output warning and error -# 135: output debug, warning and error -# 143: output trace, debug, warning and error to log -# 199: output debug, warning and error to both screen and file -# 207: output trace, debug, warning and error to both screen and file - -# debug flag for all log type, take effect when non-zero value -# debugFlag 0 - -# debug flag for meta management messages -# mDebugFlag 135 - -# debug flag for dnode messages -# dDebugFlag 135 - -# debug flag for sync module -# sDebugFlag 135 - -# debug flag for WAL -# wDebugFlag 135 - -# debug flag for SDB -# sdbDebugFlag 135 - -# debug flag for RPC -# rpcDebugFlag 131 - -# debug flag for TAOS TIMER -# tmrDebugFlag 131 - -# debug flag for ProDB client -# cDebugFlag 131 - -# debug flag for JNI -# jniDebugFlag 131 - -# debug flag for storage -# uDebugFlag 131 - -# debug flag for http server -# httpDebugFlag 131 - -# debug flag for monitor -# monDebugFlag 131 - -# debug flag for query -# qDebugFlag 131 - -# debug flag for vnode -# vDebugFlag 131 - -# debug flag for TSDB -# tsdbDebugFlag 131 - -# debug flag for continue query -# cqDebugFlag 131 - -# enable/disable recording the SQL in prodb client -# enableRecordSql 0 - -# generate core file when service crash -# enableCoreFile 1 - -# maximum display width of binary and nchar fields in the shell. 
The parts exceeding this limit will be hidden -# maxBinaryDisplayWidth 30 - -# enable/disable stream (continuous query) -# stream 1 - -# in retrieve blocking model, only in 50% query threads will be used in query processing in dnode -# retrieveBlockingModel 0 - -# the maximum allowed query buffer size in MB during query processing for each data node -# -1 no limit (default) -# 0 no query allowed, queries are disabled -# queryBufferSize -1 - diff --git a/packaging/cfg/prodbs.service b/packaging/cfg/prodbs.service deleted file mode 100644 index 4d5108989474a1a3e9c3c7a11c6b1136fb16c67c..0000000000000000000000000000000000000000 --- a/packaging/cfg/prodbs.service +++ /dev/null @@ -1,21 +0,0 @@ -[Unit] -Description=ProDB server service -After=network-online.target -Wants=network-online.target - -[Service] -Type=simple -ExecStart=/usr/bin/prodbs -ExecStartPre=/usr/local/ProDB/bin/startPre.sh -TimeoutStopSec=1000000s -LimitNOFILE=infinity -LimitNPROC=infinity -LimitCORE=infinity -TimeoutStartSec=0 -StandardOutput=null -Restart=always -StartLimitBurst=3 -StartLimitInterval=60s - -[Install] -WantedBy=multi-user.target diff --git a/packaging/cfg/tq.cfg b/packaging/cfg/tq.cfg deleted file mode 100644 index 284335a4ad9d5b60cc36a0d722e8c6994f20a812..0000000000000000000000000000000000000000 --- a/packaging/cfg/tq.cfg +++ /dev/null @@ -1,286 +0,0 @@ -######################################################## -# # -# TQueue Configuration # -# Any questions, please email support@taosdata.com # -# # -######################################################## - -# first fully qualified domain name (FQDN) for TQueue system -# firstEp hostname:6030 - -# local fully qualified domain name (FQDN) -# fqdn hostname - -# first port number for the connection (12 continuous UDP/TCP port number are used) -# serverPort 6030 - -# log file's directory -# logDir /var/log/tq - -# data file's directory -# dataDir /var/lib/tq - -# temporary file's directory -# tempDir /tmp/ - -# the arbitrator's fully 
qualified domain name (FQDN) for TQueue system, for cluster only -# arbitrator arbitrator_hostname:6042 - -# number of threads per CPU core -# numOfThreadsPerCore 1.0 - -# number of threads to commit cache data -# numOfCommitThreads 4 - -# the proportion of total CPU cores available for query processing -# 2.0: the query threads will be set to double of the CPU cores. -# 1.0: all CPU cores are available for query processing [default]. -# 0.5: only half of the CPU cores are available for query. -# 0.0: only one core available. -# ratioOfQueryCores 1.0 - -# the last_row/first/last aggregator will not change the original column name in the result fields -keepColumnName 1 - -# number of management nodes in the system -# numOfMnodes 3 - -# enable/disable backuping vnode directory when removing vnode -# vnodeBak 1 - -# enable/disable installation / usage report -# telemetryReporting 1 - -# enable/disable load balancing -# balance 1 - -# role for dnode. 0 - any, 1 - mnode, 2 - dnode -# role 0 - -# max timer control blocks -# maxTmrCtrl 512 - -# time interval of system monitor, seconds -# monitorInterval 30 - -# number of seconds allowed for a dnode to be offline, for cluster only -# offlineThreshold 864000 - -# RPC re-try timer, millisecond -# rpcTimer 300 - -# RPC maximum time for ack, seconds. 
-# rpcMaxTime 600 - -# time interval of dnode status reporting to mnode, seconds, for cluster only -# statusInterval 1 - -# time interval of heart beat from shell to dnode, seconds -# shellActivityTimer 3 - -# minimum sliding window time, milli-second -# minSlidingTime 10 - -# minimum time window, milli-second -# minIntervalTime 10 - -# maximum delay before launching a stream computation, milli-second -# maxStreamCompDelay 20000 - -# maximum delay before launching a stream computation for the first time, milli-second -# maxFirstStreamCompDelay 10000 - -# retry delay when a stream computation fails, milli-second -# retryStreamCompDelay 10 - -# the delayed time for launching a stream computation, from 0.1(default, 10% of whole computing time window) to 0.9 -# streamCompDelayRatio 0.1 - -# max number of vgroups per db, 0 means configured automatically -# maxVgroupsPerDb 0 - -# max number of tables per vnode -# maxTablesPerVnode 1000000 - -# cache block size (Mbyte) -# cache 16 - -# number of cache blocks per vnode -# blocks 6 - -# number of days per DB file -# days 10 - -# number of days to keep DB file -# keep 3650 - -# minimum rows of records in file block -# minRows 100 - -# maximum rows of records in file block -# maxRows 4096 - -# the number of acknowledgments required for successful data writing -# quorum 1 - -# enable/disable compression -# comp 2 - -# write ahead log (WAL) level, 0: no wal; 1: write wal, but no fysnc; 2: write wal, and call fsync -# walLevel 1 - -# if walLevel is set to 2, the cycle of fsync being executed, if set to 0, fsync is called right away -# fsync 3000 - -# number of replications, for cluster only -# replica 1 - -# the compressed rpc message, option: -# -1 (no compression) -# 0 (all message compressed), -# > 0 (rpc message body which larger than this value will be compressed) -# compressMsgSize -1 - -# max length of an SQL -# maxSQLLength 65480 - -# max length of WildCards -# maxWildCardsLength 100 - -# the maximum number of records 
allowed for super table time sorting -# maxNumOfOrderedRes 100000 - -# system time zone -# timezone Asia/Shanghai (CST, +0800) -# system time zone (for windows 10) -# timezone UTC-8 - -# system locale -# locale en_US.UTF-8 - -# default system charset -# charset UTF-8 - -# max number of connections allowed in dnode -# maxShellConns 5000 - -# max number of connections allowed in client -# maxConnections 5000 - -# stop writing logs when the disk size of the log folder is less than this value -# minimalLogDirGB 0.1 - -# stop writing temporary files when the disk size of the tmp folder is less than this value -# minimalTmpDirGB 0.1 - -# if disk free space is less than this value, tqd service exit directly within startup process -# minimalDataDirGB 0.1 - -# One mnode is equal to the number of vnode consumed -# mnodeEqualVnodeNum 4 - -# enbale/disable http service -# http 1 - -# enable/disable system monitor -# monitor 1 - -# enable/disable recording the SQL statements via restful interface -# httpEnableRecordSql 0 - -# number of threads used to process http requests -# httpMaxThreads 2 - -# maximum number of rows returned by the restful interface -# restfulRowLimit 10240 - -# The following parameter is used to limit the maximum number of lines in log files. -# max number of lines per log filters -# numOfLogLines 10000000 - -# enable/disable async log -# asyncLog 1 - -# time of keeping log files, days -# logKeepDays 0 - - -# The following parameters are used for debug purpose only. 
-# debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR -# 131: output warning and error -# 135: output debug, warning and error -# 143: output trace, debug, warning and error to log -# 199: output debug, warning and error to both screen and file -# 207: output trace, debug, warning and error to both screen and file - -# debug flag for all log type, take effect when non-zero value -# debugFlag 0 - -# debug flag for meta management messages -# mDebugFlag 135 - -# debug flag for dnode messages -# dDebugFlag 135 - -# debug flag for sync module -# sDebugFlag 135 - -# debug flag for WAL -# wDebugFlag 135 - -# debug flag for SDB -# sdbDebugFlag 135 - -# debug flag for RPC -# rpcDebugFlag 131 - -# debug flag for TAOS TIMER -# tmrDebugFlag 131 - -# debug flag for TQueue client -# cDebugFlag 131 - -# debug flag for JNI -# jniDebugFlag 131 - -# debug flag for storage -# uDebugFlag 131 - -# debug flag for http server -# httpDebugFlag 131 - -# debug flag for monitor -# monDebugFlag 131 - -# debug flag for query -# qDebugFlag 131 - -# debug flag for vnode -# vDebugFlag 131 - -# debug flag for TSDB -# tsdbDebugFlag 131 - -# debug flag for continue query -# cqDebugFlag 131 - -# enable/disable recording the SQL in tq client -# enableRecordSql 0 - -# generate core file when service crash -# enableCoreFile 1 - -# maximum display width of binary and nchar fields in the shell. 
The parts exceeding this limit will be hidden -# maxBinaryDisplayWidth 30 - -# enable/disable stream (continuous query) -# stream 1 - -# in retrieve blocking model, only in 50% query threads will be used in query processing in dnode -# retrieveBlockingModel 0 - -# the maximum allowed query buffer size in MB during query processing for each data node -# -1 no limit (default) -# 0 no query allowed, queries are disabled -# queryBufferSize -1 - diff --git a/packaging/cfg/tqd.service b/packaging/cfg/tqd.service deleted file mode 100644 index 805a019f12a1eb26f16ddb3c2be0ae49e9f9b0e0..0000000000000000000000000000000000000000 --- a/packaging/cfg/tqd.service +++ /dev/null @@ -1,21 +0,0 @@ -[Unit] -Description=TQ server service -After=network-online.target -Wants=network-online.target - -[Service] -Type=simple -ExecStart=/usr/bin/tqd -ExecStartPre=/usr/local/tq/bin/startPre.sh -TimeoutStopSec=1000000s -LimitNOFILE=infinity -LimitNPROC=infinity -LimitCORE=infinity -TimeoutStartSec=0 -StandardOutput=null -Restart=always -StartLimitBurst=3 -StartLimitInterval=60s - -[Install] -WantedBy=multi-user.target diff --git a/packaging/docker/dockerbuild.sh b/packaging/docker/dockerbuild.sh index 3729131c0e20859488d0a7c0c100463c818aaf8c..2483973111a871a0cc958675531cb8a85d73fdac 100755 --- a/packaging/docker/dockerbuild.sh +++ b/packaging/docker/dockerbuild.sh @@ -13,6 +13,7 @@ set -e # set parameters by default value cpuType="" +cpuTypeAlias="" version="" passWord="" pkgFile="" @@ -88,8 +89,18 @@ cp -f ${comunityArchiveDir}/${pkgFile} . echo "dirName=${dirName}" +if [[ "${cpuType}" == "x64" ]] || [[ "${cpuType}" == "amd64" ]]; then + cpuTypeAlias="amd64" +elif [[ "${cpuType}" == "aarch64" ]]; then + cpuTypeAlias="arm64" +elif [[ "${cpuType}" == "aarch32" ]]; then + cpuTypeAlias="armhf" +else + echo "Unknown cpuType: ${cpuType}" + exit 1 +fi -docker build --rm -f "Dockerfile" --network=host -t tdengine/tdengine-${dockername}:${version} "." 
--build-arg pkgFile=${pkgFile} --build-arg dirName=${dirName} --build-arg cpuType=${cpuType} +docker build --rm -f "Dockerfile" --network=host -t tdengine/tdengine-${dockername}:${version} "." --build-arg pkgFile=${pkgFile} --build-arg dirName=${dirName} --build-arg cpuType=${cpuTypeAlias} docker login -u tdengine -p ${passWord} #replace the docker registry username and password docker push tdengine/tdengine-${dockername}:${version} diff --git a/packaging/release.sh b/packaging/release.sh index 207444377c1195762506ac2ada8338b3bd105885..a96b3129992ea829e9d70fb11d41647760e480c8 100755 --- a/packaging/release.sh +++ b/packaging/release.sh @@ -4,13 +4,6 @@ set -e #set -x -scriptDir=$(dirname $(readlink -f $0)) - -source $scriptDir/sed_power.sh -source $scriptDir/sed_tq.sh -source $scriptDir/sed_pro.sh -source $scriptDir/sed_kh.sh -source $scriptDir/sed_jh.sh # release.sh -v [cluster | edge] # -c [aarch32 | aarch64 | x64 | x86 | mips64 ...] @@ -18,7 +11,7 @@ source $scriptDir/sed_jh.sh # -V [stable | beta] # -l [full | lite] # -s [static | dynamic] -# -d [taos | power | tq | pro | kh | jh] +# -d [taos | ...] # -n [2.0.0.3] # -m [2.0.0.0] # -H [ false | true] @@ -30,10 +23,10 @@ cpuType=x64 # [aarch32 | aarch64 | x64 | x86 | mips64 ...] osType=Linux # [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | Ningsi60 | Ningsi80 |...] pagMode=full # [full | lite] soMode=dynamic # [static | dynamic] -dbName=taos # [taos | power | tq | pro | kh | jh] +dbName=taos # [taos | ...] allocator=glibc # [glibc | jemalloc] verNumber="" -verNumberComp="1.0.0.0" +verNumberComp="2.0.0.0" httpdBuild=false while getopts "hv:V:c:o:l:s:d:a:n:m:H:" arg; do @@ -90,7 +83,7 @@ while getopts "hv:V:c:o:l:s:d:a:n:m:H:" arg; do echo " -l [full | lite] " echo " -a [glibc | jemalloc] " echo " -s [static | dynamic] " - echo " -d [taos | power | tq | pro | kh | jh] " + echo " -d [taos | ...] 
" echo " -n [version number] " echo " -m [compatible version number] " echo " -H [false | true] " @@ -107,14 +100,14 @@ echo "verMode=${verMode} verType=${verType} cpuType=${cpuType} osType=${osType} curr_dir=$(pwd) -if [ "$osType" != "Darwin" ]; then - script_dir="$(dirname $(readlink -f $0))" - top_dir="$(readlink -f ${script_dir}/..)" -else +if [ "$osType" == "Darwin" ]; then script_dir=$(dirname $0) cd ${script_dir} script_dir="$(pwd)" top_dir=${script_dir}/.. +else + script_dir="$(dirname $(readlink -f $0))" + top_dir="$(readlink -f ${script_dir}/..)" fi csudo="" @@ -184,7 +177,7 @@ else gitinfoOfInternal=NULL fi -cd ${curr_dir} +cd "${curr_dir}" # 2. cmake executable file compile_dir="${top_dir}/debug" @@ -206,6 +199,7 @@ else fi if [[ "$dbName" != "taos" ]]; then + source ${enterprise_dir}/packaging/oem/sed_$dbName.sh replace_community_$dbName fi @@ -231,11 +225,9 @@ if [[ "$cpuType" == "x64" ]] || [[ "$cpuType" == "aarch64" ]] || [[ "$cpuType" = # community-version compile cmake ../ -DCPUTYPE=${cpuType} -DOSTYPE=${osType} -DSOMODE=${soMode} -DDBNAME=${dbName} -DVERTYPE=${verType} -DVERDATE="${build_time}" -DGITINFO=${gitinfo} -DGITINFOI=${gitinfoOfInternal} -DVERNUMBER=${verNumber} -DVERCOMPATIBLE=${verNumberComp} -DPAGMODE=${pagMode} -DBUILD_HTTP=${BUILD_HTTP} -DBUILD_TOOLS=${BUILD_TOOLS} ${allocator_macro} else - if [[ "$dbName" != "taos" ]]; then replace_enterprise_$dbName fi - cmake ../../ -DCPUTYPE=${cpuType} -DOSTYPE=${osType} -DSOMODE=${soMode} -DDBNAME=${dbName} -DVERTYPE=${verType} -DVERDATE="${build_time}" -DGITINFO=${gitinfo} -DGITINFOI=${gitinfoOfInternal} -DVERNUMBER=${verNumber} -DVERCOMPATIBLE=${verNumberComp} -DBUILD_HTTP=${BUILD_HTTP} -DBUILD_TOOLS=${BUILD_TOOLS} ${allocator_macro} fi else @@ -256,7 +248,7 @@ cd ${curr_dir} # 3. 
Call the corresponding script for packaging if [ "$osType" != "Darwin" ]; then - if [[ "$verMode" != "cluster" ]] && [[ "$cpuType" == "x64" ]] && [[ "$dbName" == "taos" ]]; then + if [[ "$verMode" != "cluster" ]] && [[ "$pagMode" == "full" ]] && [[ "$cpuType" == "x64" ]] && [[ "$dbName" == "taos" ]]; then ret='0' command -v dpkg >/dev/null 2>&1 || { ret='1'; } if [ "$ret" -eq 0 ]; then @@ -312,14 +304,12 @@ if [ "$osType" != "Darwin" ]; then echo "====do tar.gz package for all systems====" cd ${script_dir}/tools - if [ "$verMode" == "cluster" ]; then - ${csudo}./makepkg.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${verNumberComp} -# ${csudo}./makeclient.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} -# ${csudo}./makearbi.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} - fi + ${csudo}./makepkg.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${verNumberComp} ${dbName} + ${csudo}./makeclient.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName} + ${csudo}./makearbi.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} else # only make client for Darwin cd ${script_dir}/tools - ./makeclient.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${dbName} + ./makeclient.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName} fi diff --git a/packaging/sed_jh.bat b/packaging/sed_jh.bat deleted file mode 100644 index f7ce46562c913f4a6043c872fdec0104d0153d46..0000000000000000000000000000000000000000 --- a/packaging/sed_jh.bat +++ /dev/null @@ -1,76 +0,0 @@ -set sed="C:\Program Files\Git\usr\bin\sed.exe" -set community_dir=%1 - -::cmake\install.inc -%sed% -i 
"s/C:\/TDengine/C:\/jh_iot/g" %community_dir%\cmake\install.inc -%sed% -i "s/taos\.cfg/jh_taos\.cfg/g" %community_dir%\cmake\install.inc -%sed% -i "s/taos\.exe/jh_taos\.exe/g" %community_dir%\cmake\install.inc -%sed% -i "s/taosdemo\.exe/jhdemo\.exe/g" %community_dir%\cmake\install.inc -%sed% -i "/src\/connector/d" %community_dir%\cmake\install.inc -%sed% -i "/tests\/examples/d" %community_dir%\cmake\install.inc -::src\kit\shell\CMakeLists.txt -%sed% -i "s/OUTPUT_NAME taos/OUTPUT_NAME jh_taos/g" %community_dir%\src\kit\shell\CMakeLists.txt -::src\kit\shell\inc\shell.h -%sed% -i "s/taos_history/jh_taos_history/g" %community_dir%\src\kit\shell\inc\shell.h -::src\inc\taosdef.h -%sed% -i "s/\"taosdata\"/\"jhdata\"/g" %community_dir%\src\inc\taosdef.h -::src\util\src\tconfig.c -%sed% -i "s/taos\.cfg/jh_taos\.cfg/g" %community_dir%\src\util\src\tconfig.c -%sed% -i "s/etc\/taos/etc\/jh_taos/g" %community_dir%\src\util\src\tconfig.c -::src\kit\taosdemo\CMakeLists.txt -%sed% -i "s/ADD_EXECUTABLE(taosdemo/ADD_EXECUTABLE(jhdemo/g" %community_dir%\src\kit\taosdemo\CMakeLists.txt -%sed% -i "s/TARGET_LINK_LIBRARIES(taosdemo/TARGET_LINK_LIBRARIES(jhdemo/g" %community_dir%\src\kit\taosdemo\CMakeLists.txt -::src\kit\taosdemo\taosdemo.c -%sed% -i "s/taosdemo --help/jhdemo --help/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/taosdemo --usage/jhdemo --usage/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/Usage: taosdemo/Usage: jhdemo/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/taosdemo is simulating/jhdemo is simulating/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/taosdemo version/jhdemo version/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/\"taosdata\"/\"jhdata\"/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/support@taosdata\.com/jhkj@njsteel\.com\.cn/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/taosc, rest, and stmt/jh_taos, rest, and stmt/g" %community_dir%\src\kit\taosdemo\taosdemo.c 
-%sed% -i "s/taosdemo uses/jhdemo uses/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/use 'taosc'/use 'jh_taos'/g" %community_dir%\src\kit\taosdemo\taosdemo.c -::src\util\src\tlog.c -%sed% -i "s/log\/taos/log\/jh_taos/g" %community_dir%\src\util\src\tlog.c -::src\dnode\src\dnodeSystem.c -%sed% -i "s/TDengine/jh_iot/g" %community_dir%\src\dnode\src\dnodeSystem.c -::src\dnode\src\dnodeMain.c -%sed% -i "s/TDengine/jh_iot/g" %community_dir%\src\dnode\src\dnodeMain.c -%sed% -i "s/taosdlog/jh_taosdlog/g" %community_dir%\src\dnode\src\dnodeMain.c -::src\client\src\tscSystem.c -%sed% -i "s/taoslog/jh_taoslog/g" %community_dir%\src\client\src\tscSystem.c -::src\util\src\tnote.c -%sed% -i "s/taosinfo/jh_taosinfo/g" %community_dir%\src\util\src\tnote.c -::src\dnode\CMakeLists.txt -%sed% -i "s/taos\.cfg/jh_taos\.cfg/g" %community_dir%\src\dnode\CMakeLists.txt -::src\kit\taosdump\taosdump.c -%sed% -i "s/support@taosdata\.com/jhkj@njsteel\.com\.cn/g" %community_dir%\src\kit\taosdump\taosdump.c -%sed% -i "s/Default is taosdata/Default is jhdata/g" %community_dir%\src\kit\taosdump\taosdump.c -%sed% -i "s/\"taosdata\"/\"jhdata\"/g" %community_dir%\src\kit\taosdump\taosdump.c -%sed% -i "s/TDengine/jh_iot/g" %community_dir%\src\kit\taosdump\taosdump.c -%sed% -i "s/taos\/taos\.cfg/jh_taos\/jh_taos\.cfg/g" %community_dir%\src\kit\taosdump\taosdump.c -::src\os\src\linux\linuxEnv.c -%sed% -i "s/etc\/taos/etc\/jh_taos/g" %community_dir%\src\os\src\linux\linuxEnv.c -%sed% -i "s/lib\/taos/lib\/jh_taos/g" %community_dir%\src\os\src\linux\linuxEnv.c -%sed% -i "s/log\/taos/log\/jh_taos/g" %community_dir%\src\os\src\linux\linuxEnv.c -::src\kit\shell\src\shellDarwin.c -%sed% -i "s/TDengine shell/jh_iot shell/g" %community_dir%\src\kit\shell\src\shellDarwin.c -%sed% -i "s/2020 by TAOS Data/2021 by Jinheng Technology/g" %community_dir%\src\kit\shell\src\shellDarwin.c -::src\kit\shell\src\shellLinux.c -%sed% -i "s/support@taosdata\.com/jhkj@njsteel\.com\.cn/g" 
%community_dir%\src\kit\shell\src\shellLinux.c -%sed% -i "s/TDengine shell/jh_iot shell/g" %community_dir%\src\kit\shell\src\shellLinux.c -%sed% -i "s/2020 by TAOS Data/2021 by Jinheng Technology/g" %community_dir%\src\kit\shell\src\shellLinux.c -::src\os\src\windows\wEnv.c -%sed% -i "s/TDengine/jh_iot/g" %community_dir%\src\os\src\windows\wEnv.c -::src\kit\shell\src\shellEngine.c -%sed% -i "s/TDengine shell/jh_iot shell/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/2020 by TAOS Data, Inc/2021 by Jinheng Technology, Inc/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/taos connect failed/jh_taos connect failed/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/\"taos^> \"/\"jh_taos^> \"/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/\" -^> \"/\" -^> \"/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/prompt_size = 6/prompt_size = 9/g" %community_dir%\src\kit\shell\src\shellEngine.c -::src\rpc\src\rpcMain.c -%sed% -i "s/taos connections/jh_taos connections/g" %community_dir%\src\rpc\src\rpcMain.c -::src\plugins\monitor\src\monMain.c -%sed% -i "s/taosd is quiting/jh_taosd is quiting/g" %community_dir%\src\plugins\monitor\src\monMain.c diff --git a/packaging/sed_jh.sh b/packaging/sed_jh.sh deleted file mode 100755 index 0c288bee76c0745f5d3cf3b23d4aa103c1897c22..0000000000000000000000000000000000000000 --- a/packaging/sed_jh.sh +++ /dev/null @@ -1,162 +0,0 @@ -#!/bin/bash - -function replace_community_jh() { - # cmake/install.inc - sed -i "s/C:\/TDengine/C:\/jh_iot/g" ${top_dir}/cmake/install.inc - sed -i "s/taos\.cfg/jh_taos\.cfg/g" ${top_dir}/cmake/install.inc - sed -i "s/taos\.exe/jh_taos\.exe/g" ${top_dir}/cmake/install.inc - # src/kit/shell/CMakeLists.txt - sed -i "s/OUTPUT_NAME taos/OUTPUT_NAME jh_taos/g" ${top_dir}/src/kit/shell/CMakeLists.txt - # src/kit/shell/inc/shell.h - sed -i "s/taos_history/jh_taos_history/g" ${top_dir}/src/kit/shell/inc/shell.h - # src/inc/taosdef.h - sed -i 
"s/\"taosdata\"/\"jhdata\"/g" ${top_dir}/src/inc/taosdef.h - # src/util/src/tconfig.c - sed -i "s/taos\.cfg/jh_taos\.cfg/g" ${top_dir}/src/util/src/tconfig.c - sed -i "s/etc\/taos/etc\/jh_taos/g" ${top_dir}/src/util/src/tconfig.c - sed -i "s/taos config/jh_taos config/g" ${top_dir}/src/util/src/tconfig.c - # src/util/src/tlog.c - sed -i "s/log\/taos/log\/jh_taos/g" ${top_dir}/src/util/src/tlog.c - # src/dnode/src/dnodeSystem.c - sed -i "s/TDengine/jh_taos/g" ${top_dir}/src/dnode/src/dnodeSystem.c - sed -i "s/TDengine/jh_taos/g" ${top_dir}/src/dnode/src/dnodeMain.c - sed -i "s/taosdlog/jh_taosdlog/g" ${top_dir}/src/dnode/src/dnodeMain.c - # src/client/src/tscSystem.c - sed -i "s/taoslog/jh_taoslog/g" ${top_dir}/src/client/src/tscSystem.c - # src/util/src/tnote.c - sed -i "s/taosinfo/jh_taosinfo/g" ${top_dir}/src/util/src/tnote.c - # src/dnode/CMakeLists.txt - sed -i "s/taos\.cfg/jh_taos\.cfg/g" ${top_dir}/src/dnode/CMakeLists.txt - echo "SET_TARGET_PROPERTIES(taosd PROPERTIES OUTPUT_NAME jh_taosd)" >>${top_dir}/src/dnode/CMakeLists.txt - # src/os/src/linux/linuxEnv.c - sed -i "s/etc\/taos/etc\/jh_taos/g" ${top_dir}/src/os/src/linux/linuxEnv.c - sed -i "s/lib\/taos/lib\/jh_taos/g" ${top_dir}/src/os/src/linux/linuxEnv.c - sed -i "s/log\/taos/log\/jh_taos/g" ${top_dir}/src/os/src/linux/linuxEnv.c - # src/kit/shell/src/shellDarwin.c - sed -i "s/TDengine shell/jh_iot shell/g" ${top_dir}/src/kit/shell/src/shellDarwin.c - sed -i "s/2020 by TAOS Data/2021 by Jinheng Technology/g" ${top_dir}/src/kit/shell/src/shellDarwin.c - # src/kit/shell/src/shellLinux.c - sed -i "s/support@taosdata\.com/jhkj@njsteel\.com\.cn/g" ${top_dir}/src/kit/shell/src/shellLinux.c - sed -i "s/TDengine shell/jh_iot shell/g" ${top_dir}/src/kit/shell/src/shellLinux.c - sed -i "s/2020 by TAOS Data/2021 by Jinheng Technology/g" ${top_dir}/src/kit/shell/src/shellLinux.c - # src/os/src/windows/wEnv.c - sed -i "s/C:\/TDengine/C:\/jh_iot/g" ${top_dir}/src/os/src/windows/wEnv.c - # 
src/kit/shell/src/shellEngine.c - sed -i "s/TDengine shell/jh_iot shell/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/2020 by TAOS Data, Inc/2021 by Jinheng Technology, Inc/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/taos connect failed/jh_taos connect failed/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/\"taos> \"/\"jh_taos> \"/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/\" -> \"/\" -> \"/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/prompt_size = 6/prompt_size = 9/g" ${top_dir}/src/kit/shell/src/shellEngine.c - # src/rpc/src/rpcMain.c - sed -i "s/taos connections/jh_taos connections/g" ${top_dir}/src/rpc/src/rpcMain.c - # src/plugins/monitor/src/monMain.c - sed -i "s/taosd is quiting/jh_taosd is quiting/g" ${top_dir}/src/plugins/monitor/src/monMain.c - - # packaging/tools/makepkg.sh - sed -i "s/productName=\"TDengine\"/productName=\"jh_iot\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/serverName=\"taosd\"/serverName=\"jh_taosd\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/clientName=\"taos\"/clientName=\"jh_taos\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"jh_taos\.cfg\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"jh_taos\.tar\.gz\"/g" ${top_dir}/packaging/tools/makepkg.sh - # packaging/tools/remove.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/jh_taos\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/serverName=\"taosd\"/serverName=\"jh_taosd\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/clientName=\"taos\"/clientName=\"jh_taos\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmjh\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/productName=\"TDengine\"/productName=\"jh_iot\"/g" ${top_dir}/packaging/tools/remove.sh - # packaging/tools/startPre.sh - sed -i 
"s/serverName=\"taosd\"/serverName=\"jh_taosd\"/g" ${top_dir}/packaging/tools/startPre.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/jh_taos\"/g" ${top_dir}/packaging/tools/startPre.sh - # packaging/tools/run_taosd.sh - sed -i "s/taosd/jh_taosd/g" ${top_dir}/packaging/tools/run_taosd.sh - # packaging/tools/install.sh - sed -i "s/clientName=\"taos\"/clientName=\"jh_taos\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/serverName=\"taosd\"/serverName=\"jh_taosd\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"jh_taos\.cfg\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/productName=\"TDengine\"/productName=\"jh_iot\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/emailName=\"taosdata\.com\"/emailName=\"\jhict\.com\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmjh\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/historyFile=\"taos_history\"/historyFile=\"jh_taos_history\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"jh_taos\.tar\.gz\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/jh_taos\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/jh_taos\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/jh_taos\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/jh_taos\"/g" ${top_dir}/packaging/tools/install.sh - - # packaging/tools/makeclient.sh - sed -i "s/productName=\"TDengine\"/productName=\"jh_iot\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/clientName=\"taos\"/clientName=\"jh_taos\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"jh_taos\.cfg\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i 
"s/tarName=\"taos\.tar\.gz\"/tarName=\"jh_taos\.tar\.gz\"/g" ${top_dir}/packaging/tools/makeclient.sh - # packaging/tools/remove_client.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/jh_taos\"/g" ${top_dir}/packaging/tools/remove_client.sh - sed -i "s/clientName=\"taos\"/clientName=\"jh_taos\"/g" ${top_dir}/packaging/tools/remove_client.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmjh\"/g" ${top_dir}/packaging/tools/remove_client.sh - # packaging/tools/install_client.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/jh_iot\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/jh_taos\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/productName=\"TDengine\"/productName=\"jh_iot\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/jh_taos\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/jh_taos\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/serverName=\"taosd\"/serverName=\"jh_taosd\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/clientName=\"taos\"/clientName=\"jh_taos\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmjh\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"jh_taos\.cfg\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"jh_taos\.tar\.gz\"/g" ${top_dir}/packaging/tools/install_client.sh - - # packaging/tools/makearbi.sh - sed -i "s/productName=\"TDengine\"/productName=\"jh_iot\"/g" ${top_dir}/packaging/tools/makearbi.sh - # packaging/tools/remove_arbi.sh - sed -i "s/TDengine/jh_iot/g" ${top_dir}/packaging/tools/remove_arbi.sh - # packaging/tools/install_arbi.sh - sed -i "s/TDengine/jh_iot/g" 
${top_dir}/packaging/tools/install_arbi.sh - sed -i "s/taosdata\.com/jhict\.com/g" ${top_dir}/packaging/tools/install_arbi.sh - - # packaging/tools/make_install.sh - sed -i "s/clientName=\"taos\"/clientName=\"jh_taos\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/serverName=\"taosd\"/serverName=\"jh_taosd\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/jh_taos\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/jh_taos\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/jh_taos\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"jh_taos\.cfg\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/jh_taos\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/productName=\"TDengine\"/productName=\"jh_iot\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/emailName=\"taosdata\.com\"/emailName=\"jhict\.com\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmjh\"/g" ${top_dir}/packaging/tools/make_install.sh - - # packaging/rpm/taosd - sed -i "s/TDengine/jh_iot/g" ${top_dir}/packaging/rpm/taosd - sed -i "s/usr\/local\/taos/usr\/local\/jh_taos/g" ${top_dir}/packaging/rpm/taosd - sed -i "s/taosd/jh_taosd/g" ${top_dir}/packaging/rpm/taosd - # packaging/deb/taosd - sed -i "s/TDengine/jh_iot/g" ${top_dir}/packaging/deb/taosd - sed -i "s/usr\/local\/taos/usr\/local\/jh_taos/g" ${top_dir}/packaging/deb/taosd - sed -i "s/taosd/jh_taosd/g" ${top_dir}/packaging/deb/taosd - -} - -function replace_enterprise_jh() { - # enterprise/src/kit/perfMonitor/perfMonitor.c - sed -i "s/\"taosdata\"/\"jhdata\"/g" ${top_dir}/../enterprise/src/kit/perfMonitor/perfMonitor.c - sed -i "s/TDengine/jh_iot/g" 
${top_dir}/../enterprise/src/kit/perfMonitor/perfMonitor.c - # enterprise/src/plugins/admin/src/httpAdminHandle.c - sed -i "s/taos\.cfg/jh_taos\.cfg/g" ${top_dir}/../enterprise/src/plugins/admin/src/httpAdminHandle.c - # enterprise/src/plugins/grant/src/grantMain.c - sed -i "s/taos\.cfg/jh_taos\.cfg/g" ${top_dir}/../enterprise/src/plugins/grant/src/grantMain.c - # enterprise/src/plugins/module/src/moduleMain.c - sed -i "s/taos\.cfg/jh_taos\.cfg/g" ${top_dir}/../enterprise/src/plugins/module/src/moduleMain.c - - # enterprise/src/plugins/web - sed -i -e "s/www\.taosdata\.com/www\.jhict\.com\.cn/g" $(grep -r "www.taosdata.com" ${top_dir}/../enterprise/src/plugins/web | sed -r "s/(.*\.html):\s*(.*)/\1/g") - sed -i -e "s/2017, TAOS Data/2021, Jinheng Technology/g" $(grep -r "TAOS Data" ${top_dir}/../enterprise/src/plugins/web | sed -r "s/(.*\.html):\s*(.*)/\1/g") - sed -i -e "s/taosd/jh_taosd/g" $(grep -r "taosd" ${top_dir}/../enterprise/src/plugins/web | grep -E "*\.js\s*.*" | sed -r -e "s/(.*\.js):\s*(.*)/\1/g" | sort | uniq) - # enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/taosd<\/th>/jh_taosd<\/th>/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/data:\['taosd', 'system'\],/data:\['jh_taosd', 'system'\],/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/name: 'taosd',/name: 'jh_taosd',/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - # enterprise/src/plugins/web/admin/*.html - sed -i "s/TDengine/jh_iot/g" ${top_dir}/../enterprise/src/plugins/web/admin/*.html - # enterprise/src/plugins/web/admin/js/*.js - sed -i "s/TDengine/jh_iot/g" ${top_dir}/../enterprise/src/plugins/web/admin/js/*.js -} diff --git a/packaging/sed_kh.bat b/packaging/sed_kh.bat deleted file mode 100644 index 975bdbbcc03d78f21b8b7532031d60f97a687d0a..0000000000000000000000000000000000000000 --- a/packaging/sed_kh.bat +++ /dev/null @@ -1,76 +0,0 @@ -set sed="C:\Program Files\Git\usr\bin\sed.exe" -set 
community_dir=%1 - -::cmake\install.inc -%sed% -i "s/C:\/TDengine/C:\/KingHistorian/g" %community_dir%\cmake\install.inc -%sed% -i "s/taos\.cfg/kinghistorian\.cfg/g" %community_dir%\cmake\install.inc -%sed% -i "s/taos\.exe/khclient\.exe/g" %community_dir%\cmake\install.inc -%sed% -i "s/taosdemo\.exe/khdemo\.exe/g" %community_dir%\cmake\install.inc -%sed% -i "/src\/connector/d" %community_dir%\cmake\install.inc -%sed% -i "/tests\/examples/d" %community_dir%\cmake\install.inc -::src\kit\shell\CMakeLists.txt -%sed% -i "s/OUTPUT_NAME taos/OUTPUT_NAME khclient/g" %community_dir%\src\kit\shell\CMakeLists.txt -::src\kit\shell\inc\shell.h -%sed% -i "s/taos_history/kh_history/g" %community_dir%\src\kit\shell\inc\shell.h -::src\inc\taosdef.h -%sed% -i "s/\"taosdata\"/\"khroot\"/g" %community_dir%\src\inc\taosdef.h -::src\util\src\tconfig.c -%sed% -i "s/taos\.cfg/kinghistorian\.cfg/g" %community_dir%\src\util\src\tconfig.c -%sed% -i "s/etc\/taos/etc\/kinghistorian/g" %community_dir%\src\util\src\tconfig.c -::src\kit\taosdemo\CMakeLists.txt -%sed% -i "s/ADD_EXECUTABLE(taosdemo/ADD_EXECUTABLE(khdemo/g" %community_dir%\src\kit\taosdemo\CMakeLists.txt -%sed% -i "s/TARGET_LINK_LIBRARIES(taosdemo/TARGET_LINK_LIBRARIES(khdemo/g" %community_dir%\src\kit\taosdemo\CMakeLists.txt -::src\kit\taosdemo\taosdemo.c -%sed% -i "s/taosdemo --help/khdemo --help/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/taosdemo --usage/khdemo --usage/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/Usage: taosdemo/Usage: khdemo/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/taosdemo is simulating/khdemo is simulating/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/taosdemo version/khdemo version/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/\"taosdata\"/\"khroot\"/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/support@taosdata\.com/support@wellintech\.com/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/taosc, rest, and 
stmt/khclient, rest, and stmt/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/taosdemo uses/khdemo uses/g" %community_dir%\src\kit\taosdemo\taosdemo.c -%sed% -i "s/use 'taosc'/use 'khclient'/g" %community_dir%\src\kit\taosdemo\taosdemo.c -::src\util\src\tlog.c -%sed% -i "s/log\/taos/log\/kinghistorian/g" %community_dir%\src\util\src\tlog.c -::src\dnode\src\dnodeSystem.c -%sed% -i "s/TDengine/KingHistorian/g" %community_dir%\src\dnode\src\dnodeSystem.c -::src\dnode\src\dnodeMain.c -%sed% -i "s/TDengine/KingHistorian/g" %community_dir%\src\dnode\src\dnodeMain.c -%sed% -i "s/taosdlog/khserverlog/g" %community_dir%\src\dnode\src\dnodeMain.c -::src\client\src\tscSystem.c -%sed% -i "s/taoslog/khclientlog/g" %community_dir%\src\client\src\tscSystem.c -::src\util\src\tnote.c -%sed% -i "s/taosinfo/khinfo/g" %community_dir%\src\util\src\tnote.c -::src\dnode\CMakeLists.txt -%sed% -i "s/taos\.cfg/kinghistorian\.cfg/g" %community_dir%\src\dnode\CMakeLists.txt -::src\kit\taosdump\taosdump.c -%sed% -i "s/support@taosdata\.com/support@wellintech\.com/g" %community_dir%\src\kit\taosdump\taosdump.c -%sed% -i "s/Default is taosdata/Default is khroot/g" %community_dir%\src\kit\taosdump\taosdump.c -%sed% -i "s/\"taosdata\"/\"khroot\"/g" %community_dir%\src\kit\taosdump\taosdump.c -%sed% -i "s/TDengine/KingHistorian/g" %community_dir%\src\kit\taosdump\taosdump.c -%sed% -i "s/taos\/taos\.cfg/kinghistorian\/kinghistorian\.cfg/g" %community_dir%\src\kit\taosdump\taosdump.c -::src\os\src\linux\linuxEnv.c -%sed% -i "s/etc\/taos/etc\/kinghistorian/g" %community_dir%\src\os\src\linux\linuxEnv.c -%sed% -i "s/lib\/taos/lib\/kinghistorian/g" %community_dir%\src\os\src\linux\linuxEnv.c -%sed% -i "s/log\/taos/log\/kinghistorian/g" %community_dir%\src\os\src\linux\linuxEnv.c -::src\kit\shell\src\shellDarwin.c -%sed% -i "s/TDengine shell/KingHistorian shell/g" %community_dir%\src\kit\shell\src\shellDarwin.c -%sed% -i "s/2020 by TAOS Data/2021 by Wellintech/g" 
%community_dir%\src\kit\shell\src\shellDarwin.c -::src\kit\shell\src\shellLinux.c -%sed% -i "s/support@taosdata\.com/support@wellintech\.com/g" %community_dir%\src\kit\shell\src\shellLinux.c -%sed% -i "s/TDengine shell/KingHistorian shell/g" %community_dir%\src\kit\shell\src\shellLinux.c -%sed% -i "s/2020 by TAOS Data/2021 by Wellintech/g" %community_dir%\src\kit\shell\src\shellLinux.c -::src\os\src\windows\wEnv.c -%sed% -i "s/TDengine/KingHistorian/g" %community_dir%\src\os\src\windows\wEnv.c -::src\kit\shell\src\shellEngine.c -%sed% -i "s/TDengine shell/KingHistorian shell/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/2020 by TAOS Data, Inc/2021 by Wellintech, Inc/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/taos connect failed/kh connect failed/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/\"taos^> \"/\"khclient^> \"/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/\" -^> \"/\" -^> \"/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/prompt_size = 6/prompt_size = 10/g" %community_dir%\src\kit\shell\src\shellEngine.c -::src\rpc\src\rpcMain.c -%sed% -i "s/taos connections/kh connections/g" %community_dir%\src\rpc\src\rpcMain.c -::src\plugins\monitor\src\monMain.c -%sed% -i "s/taosd is quiting/khserver is quiting/g" %community_dir%\src\plugins\monitor\src\monMain.c \ No newline at end of file diff --git a/packaging/sed_kh.sh b/packaging/sed_kh.sh deleted file mode 100755 index 3041dc9ffa82a0e9fa0e1a2a5dd859c80a6c311c..0000000000000000000000000000000000000000 --- a/packaging/sed_kh.sh +++ /dev/null @@ -1,162 +0,0 @@ -#!/bin/bash - -function replace_community_kh() { - # cmake/install.inc - sed -i "s/C:\/TDengine/C:\/KingHistorian/g" ${top_dir}/cmake/install.inc - sed -i "s/taos\.cfg/kinghistorian\.cfg/g" ${top_dir}/cmake/install.inc - sed -i "s/taos\.exe/khclient\.exe/g" ${top_dir}/cmake/install.inc - # src/kit/shell/CMakeLists.txt - sed -i "s/OUTPUT_NAME taos/OUTPUT_NAME khclient/g" 
${top_dir}/src/kit/shell/CMakeLists.txt - # src/kit/shell/inc/shell.h - sed -i "s/taos_history/kh_history/g" ${top_dir}/src/kit/shell/inc/shell.h - # src/inc/taosdef.h - sed -i "s/\"taosdata\"/\"khroot\"/g" ${top_dir}/src/inc/taosdef.h - # src/util/src/tconfig.c - sed -i "s/taos\.cfg/kinghistorian\.cfg/g" ${top_dir}/src/util/src/tconfig.c - sed -i "s/etc\/taos/etc\/kinghistorian/g" ${top_dir}/src/util/src/tconfig.c - sed -i "s/taos config/kinghistorian config/g" ${top_dir}/src/util/src/tconfig.c - # src/util/src/tlog.c - sed -i "s/log\/taos/log\/kinghistorian/g" ${top_dir}/src/util/src/tlog.c - # src/dnode/src/dnodeSystem.c - sed -i "s/TDengine/KingHistorian/g" ${top_dir}/src/dnode/src/dnodeSystem.c - sed -i "s/TDengine/KingHistorian/g" ${top_dir}/src/dnode/src/dnodeMain.c - sed -i "s/taosdlog/khserverlog/g" ${top_dir}/src/dnode/src/dnodeMain.c - # src/client/src/tscSystem.c - sed -i "s/taoslog/khclientlog/g" ${top_dir}/src/client/src/tscSystem.c - # src/util/src/tnote.c - sed -i "s/taosinfo/khinfo/g" ${top_dir}/src/util/src/tnote.c - # src/dnode/CMakeLists.txt - sed -i "s/taos\.cfg/kinghistorian\.cfg/g" ${top_dir}/src/dnode/CMakeLists.txt - echo "SET_TARGET_PROPERTIES(taosd PROPERTIES OUTPUT_NAME khserver)" >>${top_dir}/src/dnode/CMakeLists.txt - # src/os/src/linux/linuxEnv.c - sed -i "s/etc\/taos/etc\/kinghistorian/g" ${top_dir}/src/os/src/linux/linuxEnv.c - sed -i "s/lib\/taos/lib\/kinghistorian/g" ${top_dir}/src/os/src/linux/linuxEnv.c - sed -i "s/log\/taos/log\/kinghistorian/g" ${top_dir}/src/os/src/linux/linuxEnv.c - # src/kit/shell/src/shellDarwin.c - sed -i "s/TDengine shell/KingHistorian shell/g" ${top_dir}/src/kit/shell/src/shellDarwin.c - sed -i "s/2020 by TAOS Data/2021 by Wellintech/g" ${top_dir}/src/kit/shell/src/shellDarwin.c - # src/kit/shell/src/shellLinux.c - sed -i "s/support@taosdata\.com/support@wellintech\.com/g" ${top_dir}/src/kit/shell/src/shellLinux.c - sed -i "s/TDengine shell/KingHistorian shell/g" 
${top_dir}/src/kit/shell/src/shellLinux.c - sed -i "s/2020 by TAOS Data/2021 by Wellintech/g" ${top_dir}/src/kit/shell/src/shellLinux.c - # src/os/src/windows/wEnv.c - sed -i "s/C:\/TDengine/C:\/KingHistorian/g" ${top_dir}/src/os/src/windows/wEnv.c - # src/kit/shell/src/shellEngine.c - sed -i "s/TDengine shell/KingHistorian shell/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/2020 by TAOS Data, Inc/2021 by Wellintech, Inc/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/taos connect failed/khclient connect failed/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/\"taos> \"/\"khclient> \"/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/\" -> \"/\" -> \"/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/prompt_size = 6/prompt_size = 10/g" ${top_dir}/src/kit/shell/src/shellEngine.c - # src/rpc/src/rpcMain.c - sed -i "s/taos connections/kh connections/g" ${top_dir}/src/rpc/src/rpcMain.c - # src/plugins/monitor/src/monMain.c - sed -i "s/taosd is quiting/khserver is quiting/g" ${top_dir}/src/plugins/monitor/src/monMain.c - - # packaging/tools/makepkg.sh - sed -i "s/productName=\"TDengine\"/productName=\"KingHistorian\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/serverName=\"taosd\"/serverName=\"khserver\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/clientName=\"taos\"/clientName=\"khclient\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"kinghistorian\.cfg\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"kinghistorian\.tar\.gz\"/g" ${top_dir}/packaging/tools/makepkg.sh - # packaging/tools/remove.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/kinghistorian\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/serverName=\"taosd\"/serverName=\"khserver\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/clientName=\"taos\"/clientName=\"khclient\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i 
"s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmkh\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/productName=\"TDengine\"/productName=\"KingHistorian\"/g" ${top_dir}/packaging/tools/remove.sh - # packaging/tools/startPre.sh - sed -i "s/serverName=\"taosd\"/serverName=\"khserver\"/g" ${top_dir}/packaging/tools/startPre.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/kinghistorian\"/g" ${top_dir}/packaging/tools/startPre.sh - # packaging/tools/run_taosd.sh - sed -i "s/taosd/khserver/g" ${top_dir}/packaging/tools/run_taosd.sh - # packaging/tools/install.sh - sed -i "s/clientName=\"taos\"/clientName=\"khclient\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/serverName=\"taosd\"/serverName=\"khserver\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"kinghistorian\.cfg\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/productName=\"TDengine\"/productName=\"KingHistorian\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/emailName=\"taosdata\.com\"/emailName=\"wellintech\.com\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmkh\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/historyFile=\"taos_history\"/historyFile=\"kh_history\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"kinghistorian\.tar\.gz\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/kinghistorian\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/kinghistorian\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/kinghistorian\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/kinghistorian\"/g" ${top_dir}/packaging/tools/install.sh - - # packaging/tools/makeclient.sh - sed -i 
"s/productName=\"TDengine\"/productName=\"KingHistorian\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/clientName=\"taos\"/clientName=\"khclient\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"kinghistorian\.cfg\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"kinghistorian\.tar\.gz\"/g" ${top_dir}/packaging/tools/makeclient.sh - # packaging/tools/remove_client.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/kinghistorian\"/g" ${top_dir}/packaging/tools/remove_client.sh - sed -i "s/clientName=\"taos\"/clientName=\"khclient\"/g" ${top_dir}/packaging/tools/remove_client.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmkh\"/g" ${top_dir}/packaging/tools/remove_client.sh - # packaging/tools/install_client.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/kinghistorian\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/kinghistorian\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/productName=\"TDengine\"/productName=\"KingHistorian\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/kinghistorian\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/kinghistorian\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/serverName=\"taosd\"/serverName=\"khserver\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/clientName=\"taos\"/clientName=\"khclient\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmkh\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"kinghistorian\.cfg\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"kinghistorian\.tar\.gz\"/g" 
${top_dir}/packaging/tools/install_client.sh - - # packaging/tools/makearbi.sh - sed -i "s/productName=\"TDengine\"/productName=\"KingHistorian\"/g" ${top_dir}/packaging/tools/makearbi.sh - # packaging/tools/remove_arbi.sh - sed -i "s/TDengine/KingHistorian/g" ${top_dir}/packaging/tools/remove_arbi.sh - # packaging/tools/install_arbi.sh - sed -i "s/TDengine/KingHistorian/g" ${top_dir}/packaging/tools/install_arbi.sh - sed -i "s/taosdata\.com/wellintech\.com/g" ${top_dir}/packaging/tools/install_arbi.sh - - # packaging/tools/make_install.sh - sed -i "s/clientName=\"taos\"/clientName=\"khclient\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/serverName=\"taosd\"/serverName=\"khserver\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/kinghistorian\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/kinghistorian\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/kinghistorian\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"kinghistorian\.cfg\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/kinghistorian\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/productName=\"TDengine\"/productName=\"KingHistorian\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/emailName=\"taosdata\.com\"/emailName=\"wellintech\.com\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmkh\"/g" ${top_dir}/packaging/tools/make_install.sh - - # packaging/rpm/taosd - sed -i "s/TDengine/KingHistorian/g" ${top_dir}/packaging/rpm/taosd - sed -i "s/usr\/local\/taos/usr\/local\/kinghistorian/g" ${top_dir}/packaging/rpm/taosd - sed -i "s/taosd/khserver/g" ${top_dir}/packaging/rpm/taosd - # packaging/deb/taosd - sed -i 
"s/TDengine/KingHistorian/g" ${top_dir}/packaging/deb/taosd - sed -i "s/usr\/local\/taos/usr\/local\/kinghistorian/g" ${top_dir}/packaging/deb/taosd - sed -i "s/taosd/khserver/g" ${top_dir}/packaging/deb/taosd -} - -function replace_enterprise_kh() { - # enterprise/src/kit/perfMonitor/perfMonitor.c - sed -i "s/\"taosdata\"/\"khroot\"/g" ${top_dir}/../enterprise/src/kit/perfMonitor/perfMonitor.c - sed -i "s/TDengine/KingHistorian/g" ${top_dir}/../enterprise/src/kit/perfMonitor/perfMonitor.c - # enterprise/src/plugins/admin/src/httpAdminHandle.c - sed -i "s/taos\.cfg/kinghistorian\.cfg/g" ${top_dir}/../enterprise/src/plugins/admin/src/httpAdminHandle.c - # enterprise/src/plugins/grant/src/grantMain.c - sed -i "s/taos\.cfg/kinghistorian\.cfg/g" ${top_dir}/../enterprise/src/plugins/grant/src/grantMain.c - # enterprise/src/plugins/module/src/moduleMain.c - sed -i "s/taos\.cfg/kinghistorian\.cfg/g" ${top_dir}/../enterprise/src/plugins/module/src/moduleMain.c - - # enterprise/src/plugins/web - sed -i -e "s/www\.taosdata\.com/www\.kingview\.com/g" $(grep -r "www.taosdata.com" ${top_dir}/../enterprise/src/plugins/web | sed -r "s/(.*\.html):\s*(.*)/\1/g") - sed -i -e "s/2017, TAOS Data/2021, Wellintech/g" $(grep -r "TAOS Data" ${top_dir}/../enterprise/src/plugins/web | sed -r "s/(.*\.html):\s*(.*)/\1/g") - sed -i -e "s/taosd/khserver/g" $(grep -r "taosd" ${top_dir}/../enterprise/src/plugins/web | grep -E "*\.js\s*.*" | sed -r -e "s/(.*\.js):\s*(.*)/\1/g" | sort | uniq) - # enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/taosd<\/th>/khserver<\/th>/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/data:\['taosd', 'system'\],/data:\['khserver', 'system'\],/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/name: 'taosd',/name: 'khserver',/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - # enterprise/src/plugins/web/admin/*.html - sed -i "s/TDengine/KingHistorian/g" 
${top_dir}/../enterprise/src/plugins/web/admin/*.html - # enterprise/src/plugins/web/admin/js/*.js - sed -i "s/TDengine/KingHistorian/g" ${top_dir}/../enterprise/src/plugins/web/admin/js/*.js - -} diff --git a/packaging/sed_power.bat b/packaging/sed_power.bat deleted file mode 100644 index 2b02504408e0f78335c1df1f15bf6fb25c97fc57..0000000000000000000000000000000000000000 --- a/packaging/sed_power.bat +++ /dev/null @@ -1,48 +0,0 @@ -set sed="C:\Program Files\Git\usr\bin\sed.exe" -set community_dir=%1 - -::cmake\install.inc -%sed% -i "s/C:\/TDengine/C:\/Power/g" %community_dir%\cmake\install.inc -%sed% -i "s/taos\.cfg/power\.cfg/g" %community_dir%\cmake\install.inc -%sed% -i "s/taos\.exe/power\.exe/g" %community_dir%\cmake\install.inc -%sed% -i "/src\/connector/d" %community_dir%\cmake\install.inc -%sed% -i "/tests\/examples/d" %community_dir%\cmake\install.inc -::src\kit\shell\CMakeLists.txt -%sed% -i "s/OUTPUT_NAME taos/OUTPUT_NAME power/g" %community_dir%\src\kit\shell\CMakeLists.txt -::src\kit\shell\inc\shell.h -%sed% -i "s/taos_history/power_history/g" %community_dir%\src\kit\shell\inc\shell.h -::src\inc\taosdef.h -%sed% -i "s/\"taosdata\"/\"powerdb\"/g" %community_dir%\src\inc\taosdef.h -::src\util\src\tconfig.c -%sed% -i "s/taos\.cfg/power\.cfg/g" %community_dir%\src\util\src\tconfig.c -%sed% -i "s/etc\/taos/etc\/power/g" %community_dir%\src\util\src\tconfig.c -::src\util\src\tlog.c -%sed% -i "s/log\/taos/log\/power/g" %community_dir%\src\util\src\tlog.c -::src\dnode\src\dnodeSystem.c -%sed% -i "s/TDengine/Power/g" %community_dir%\src\dnode\src\dnodeSystem.c -::src\dnode\src\dnodeMain.c -%sed% -i "s/TDengine/Power/g" %community_dir%\src\dnode\src\dnodeMain.c -%sed% -i "s/taosdlog/powerdlog/g" %community_dir%\src\dnode\src\dnodeMain.c -::src\client\src\tscSystem.c -%sed% -i "s/taoslog/powerlog/g" %community_dir%\src\client\src\tscSystem.c -::src\util\src\tnote.c -%sed% -i "s/taosinfo/powerinfo/g" %community_dir%\src\util\src\tnote.c -::src\dnode\CMakeLists.txt 
-%sed% -i "s/taos\.cfg/power\.cfg/g" %community_dir%\src\dnode\CMakeLists.txt -::src\os\src\linux\linuxEnv.c -%sed% -i "s/etc\/taos/etc\/power/g" %community_dir%\src\os\src\linux\linuxEnv.c -%sed% -i "s/lib\/taos/lib\/power/g" %community_dir%\src\os\src\linux\linuxEnv.c -%sed% -i "s/log\/taos/log\/power/g" %community_dir%\src\os\src\linux\linuxEnv.c -::src\os\src\windows\wEnv.c -%sed% -i "s/TDengine/Power/g" %community_dir%\src\os\src\windows\wEnv.c -::src\kit\shell\src\shellEngine.c -%sed% -i "s/TDengine shell/Power shell/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/2020 by TAOS Data, Inc/2020 by PowerDB, Inc/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/taos connect failed/power connect failed/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/\"taos^> \"/\"power^> \"/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/\" -^> \"/\" -^> \"/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/prompt_size = 6/prompt_size = 7/g" %community_dir%\src\kit\shell\src\shellEngine.c -::src\rpc\src\rpcMain.c -%sed% -i "s/taos connections/power connections/g" %community_dir%\src\rpc\src\rpcMain.c -::src\plugins\monitor\src\monMain.c -%sed% -i "s/taosd is quiting/powerd is quiting/g" %community_dir%\src\plugins\monitor\src\monMain.c diff --git a/packaging/sed_power.sh b/packaging/sed_power.sh deleted file mode 100755 index 8955476591410b6efac3aa410aab2cf257c1ac41..0000000000000000000000000000000000000000 --- a/packaging/sed_power.sh +++ /dev/null @@ -1,202 +0,0 @@ -#!/bin/bash - -function replace_community_power() { - # cmake/install.inc - sed -i "s/C:\/TDengine/C:\/Power/g" ${top_dir}/cmake/install.inc - sed -i "s/taos\.cfg/power\.cfg/g" ${top_dir}/cmake/install.inc - sed -i "s/taos\.exe/power\.exe/g" ${top_dir}/cmake/install.inc - sed -i "s/taosdemo\.exe/powerdemo\.exe/g" ${top_dir}/cmake/install.inc - # src/kit/shell/inc/shell.h - sed -i "s/taos_history/power_history/g" 
${top_dir}/src/kit/shell/inc/shell.h - # src/inc/taosdef.h - sed -i "s/\"taosdata\"/\"powerdb\"/g" ${top_dir}/src/inc/taosdef.h - # src/util/src/tconfig.c - sed -i "s/taos\.cfg/power\.cfg/g" ${top_dir}/src/util/src/tconfig.c - sed -i "s/etc\/taos/etc\/power/g" ${top_dir}/src/util/src/tconfig.c - - # src/util/src/tlog.c - sed -i "s/log\/taos/log\/power/g" ${top_dir}/src/util/src/tlog.c - # src/dnode/src/dnodeSystem.c - sed -i "s/TDengine/Power/g" ${top_dir}/src/dnode/src/dnodeSystem.c - sed -i "s/TDengine/Power/g" ${top_dir}/src/dnode/src/dnodeMain.c - sed -i "s/taosdlog/powerdlog/g" ${top_dir}/src/dnode/src/dnodeMain.c - # src/client/src/tscSystem.c - sed -i "s/taoslog/powerlog/g" ${top_dir}/src/client/src/tscSystem.c - # src/util/src/tnote.c - sed -i "s/taosinfo/powerinfo/g" ${top_dir}/src/util/src/tnote.c - # src/dnode/CMakeLists.txt - sed -i "s/taos\.cfg/power\.cfg/g" ${top_dir}/src/dnode/CMakeLists.txt - - # src/os/src/linux/linuxEnv.c - sed -i "s/etc\/taos/etc\/power/g" ${top_dir}/src/os/src/linux/linuxEnv.c - sed -i "s/lib\/taos/lib\/power/g" ${top_dir}/src/os/src/linux/linuxEnv.c - sed -i "s/log\/taos/log\/power/g" ${top_dir}/src/os/src/linux/linuxEnv.c - - # src/kit/shell/src/shellLinux.c - sed -i "s/TDengine shell/Power shell/g" ${top_dir}/src/kit/shell/src/shellLinux.c - - # src/os/src/windows/wEnv.c - sed -i "s/C:\/TDengine/C:\/Power/g" ${top_dir}/src/os/src/windows/wEnv.c - # src/kit/shell/src/shellEngine.c - sed -i "s/TDengine shell/PowerDB shell/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/2020 by TAOS Data, Inc/2020 by PowerDB, Inc/g" ${top_dir}/src/kit/shell/src/shellEngine.c - - sed -i "s/\"taos> \"/\"power> \"/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/\" -> \"/\" -> \"/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/prompt_size = 6/prompt_size = 7/g" ${top_dir}/src/kit/shell/src/shellEngine.c - # src/rpc/src/rpcMain.c - sed -i "s/taos connections/power connections/g" ${top_dir}/src/rpc/src/rpcMain.c - # 
src/plugins/monitor/src/monMain.c - sed -i "s/taosd is quiting/powerd is quiting/g" ${top_dir}/src/plugins/monitor/src/monMain.c - - ############ - # cmake/install.inc - sed -i "s/C:\/TDengine/C:\/Power/g" ${top_dir}/cmake/install.inc - sed -i "s/taos\.cfg/power\.cfg/g" ${top_dir}/cmake/install.inc - sed -i "s/taos\.exe/power\.exe/g" ${top_dir}/cmake/install.inc - # src/kit/shell/CMakeLists.txt - sed -i "s/OUTPUT_NAME taos/OUTPUT_NAME power/g" ${top_dir}/src/kit/shell/CMakeLists.txt - # src/kit/shell/inc/shell.h - sed -i "s/taos_history/power_history/g" ${top_dir}/src/kit/shell/inc/shell.h - # src/inc/taosdef.h - sed -i "s/\"taosdata\"/\"power\"/g" ${top_dir}/src/inc/taosdef.h - # src/util/src/tconfig.c - sed -i "s/taos\.cfg/power\.cfg/g" ${top_dir}/src/util/src/tconfig.c - sed -i "s/etc\/taos/etc\/power/g" ${top_dir}/src/util/src/tconfig.c - sed -i "s/taos config/power config/g" ${top_dir}/src/util/src/tconfig.c - # src/util/src/tlog.c - sed -i "s/log\/taos/log\/power/g" ${top_dir}/src/util/src/tlog.c - # src/dnode/src/dnodeSystem.c - sed -i "s/TDengine/Power/g" ${top_dir}/src/dnode/src/dnodeSystem.c - sed -i "s/TDengine/Power/g" ${top_dir}/src/dnode/src/dnodeMain.c - sed -i "s/taosdlog/powerdlog/g" ${top_dir}/src/dnode/src/dnodeMain.c - # src/client/src/tscSystem.c - sed -i "s/taoslog/powerlog/g" ${top_dir}/src/client/src/tscSystem.c - # src/util/src/tnote.c - sed -i "s/taosinfo/powerinfo/g" ${top_dir}/src/util/src/tnote.c - # src/dnode/CMakeLists.txt - sed -i "s/taos\.cfg/power\.cfg/g" ${top_dir}/src/dnode/CMakeLists.txt - echo "SET_TARGET_PROPERTIES(taosd PROPERTIES OUTPUT_NAME powerd)" >>${top_dir}/src/dnode/CMakeLists.txt - # src/os/src/linux/linuxEnv.c - sed -i "s/etc\/taos/etc\/power/g" ${top_dir}/src/os/src/linux/linuxEnv.c - sed -i "s/lib\/taos/lib\/power/g" ${top_dir}/src/os/src/linux/linuxEnv.c - sed -i "s/log\/taos/log\/power/g" ${top_dir}/src/os/src/linux/linuxEnv.c - # src/kit/shell/src/shellDarwin.c - sed -i "s/TDengine shell/Power shell/g" 
${top_dir}/src/kit/shell/src/shellDarwin.c - # src/kit/shell/src/shellLinux.c - sed -i "s/TDengine shell/Power shell/g" ${top_dir}/src/kit/shell/src/shellLinux.c - # src/os/src/windows/wEnv.c - sed -i "s/C:\/TDengine/C:\/Power/g" ${top_dir}/src/os/src/windows/wEnv.c - # src/kit/shell/src/shellEngine.c - sed -i "s/TDengine shell/Power shell/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/taos connect failed/power connect failed/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/\"taos> \"/\"power> \"/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/\" -> \"/\" -> \"/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/prompt_size = 6/prompt_size = 7/g" ${top_dir}/src/kit/shell/src/shellEngine.c - # src/rpc/src/rpcMain.c - sed -i "s/taos connections/power connections/g" ${top_dir}/src/rpc/src/rpcMain.c - # src/plugins/monitor/src/monMain.c - sed -i "s/taosd is quiting/powerd is quiting/g" ${top_dir}/src/plugins/monitor/src/monMain.c - - # packaging/tools/makepkg.sh - sed -i "s/productName=\"TDengine\"/productName=\"Power\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/serverName=\"taosd\"/serverName=\"powerd\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/clientName=\"taos\"/clientName=\"power\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"power\.cfg\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"power\.tar\.gz\"/g" ${top_dir}/packaging/tools/makepkg.sh - # packaging/tools/remove.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/power\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/serverName=\"taosd\"/serverName=\"powerd\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/clientName=\"taos\"/clientName=\"power\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmpower\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i 
"s/productName=\"TDengine\"/productName=\"Power\"/g" ${top_dir}/packaging/tools/remove.sh - # packaging/tools/startPre.sh - sed -i "s/serverName=\"taosd\"/serverName=\"powerd\"/g" ${top_dir}/packaging/tools/startPre.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/Power\"/g" ${top_dir}/packaging/tools/startPre.sh - # packaging/tools/run_taosd.sh - sed -i "s/taosd/powerd/g" ${top_dir}/packaging/tools/run_taosd.sh - # packaging/tools/install.sh - sed -i "s/clientName=\"taos\"/clientName=\"power\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/serverName=\"taosd\"/serverName=\"powerd\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"power\.cfg\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/productName=\"TDengine\"/productName=\"Power\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmpower\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/historyFile=\"taos_history\"/historyFile=\"power_history\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"power\.tar\.gz\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/power\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/power\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/power\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/power\"/g" ${top_dir}/packaging/tools/install.sh - - # packaging/tools/makeclient.sh - sed -i "s/productName=\"TDengine\"/productName=\"Power\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/clientName=\"taos\"/clientName=\"power\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"power\.cfg\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i 
"s/tarName=\"taos\.tar\.gz\"/tarName=\"power\.tar\.gz\"/g" ${top_dir}/packaging/tools/makeclient.sh - # packaging/tools/remove_client.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/power\"/g" ${top_dir}/packaging/tools/remove_client.sh - sed -i "s/clientName=\"taos\"/clientName=\"power\"/g" ${top_dir}/packaging/tools/remove_client.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmpower\"/g" ${top_dir}/packaging/tools/remove_client.sh - # packaging/tools/install_client.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/power\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/power\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/productName=\"TDengine\"/productName=\"Power\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/power\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/power\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/serverName=\"taosd\"/serverName=\"powerd\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/clientName=\"taos\"/clientName=\"power\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmpower\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"power\.cfg\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"power\.tar\.gz\"/g" ${top_dir}/packaging/tools/install_client.sh - - # packaging/tools/makearbi.sh - sed -i "s/productName=\"TDengine\"/productName=\"Power\"/g" ${top_dir}/packaging/tools/makearbi.sh - # packaging/tools/remove_arbi.sh - sed -i "s/TDengine/Power/g" ${top_dir}/packaging/tools/remove_arbi.sh - # packaging/tools/install_arbi.sh - sed -i "s/TDengine/Power/g" ${top_dir}/packaging/tools/install_arbi.sh - - 
# packaging/tools/make_install.sh - sed -i "s/clientName=\"taos\"/clientName=\"power\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/serverName=\"taosd\"/serverName=\"powerd\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/power\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/power\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/power\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"power\.cfg\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/power\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/productName=\"TDengine\"/productName=\"Power\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmpower\"/g" ${top_dir}/packaging/tools/make_install.sh - - # packaging/rpm/taosd - sed -i "s/TDengine/Power/g" ${top_dir}/packaging/rpm/taosd - sed -i "s/usr\/local\/taos/usr\/local\/power/g" ${top_dir}/packaging/rpm/taosd - sed -i "s/taosd/powerd/g" ${top_dir}/packaging/rpm/taosd - # packaging/deb/taosd - sed -i "s/TDengine/Power/g" ${top_dir}/packaging/deb/taosd - sed -i "s/usr\/local\/taos/usr\/local\/power/g" ${top_dir}/packaging/deb/taosd - sed -i "s/taosd/powerd/g" ${top_dir}/packaging/deb/taosd -} - -function replace_enterprise_power() { - # enterprise/src/kit/perfMonitor/perfMonitor.c - sed -i "s/\"taosdata\"/\"powerdb\"/g" ${top_dir}/../enterprise/src/kit/perfMonitor/perfMonitor.c - sed -i "s/TDengine/Power/g" ${top_dir}/../enterprise/src/kit/perfMonitor/perfMonitor.c - # enterprise/src/plugins/admin/src/httpAdminHandle.c - sed -i "s/taos\.cfg/power\.cfg/g" ${top_dir}/../enterprise/src/plugins/admin/src/httpAdminHandle.c - # enterprise/src/plugins/grant/src/grantMain.c - sed -i 
"s/taos\.cfg/power\.cfg/g" ${top_dir}/../enterprise/src/plugins/grant/src/grantMain.c - # enterprise/src/plugins/module/src/moduleMain.c - sed -i "s/taos\.cfg/power\.cfg/g" ${top_dir}/../enterprise/src/plugins/module/src/moduleMain.c - - # enterprise/src/plugins/web - sed -i -e "s/taosd/powerd/g" $(grep -r "taosd" ${top_dir}/../enterprise/src/plugins/web | grep -E "*\.js\s*.*" | sed -r -e "s/(.*\.js):\s*(.*)/\1/g" | sort | uniq) - # enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/taosd<\/th>/powerd<\/th>/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/data:\['taosd', 'system'\],/data:\['powerd', 'system'\],/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/name: 'taosd',/name: 'powerd',/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - # enterprise/src/plugins/web/admin/*.html - sed -i "s/TDengine/Power/g" ${top_dir}/../enterprise/src/plugins/web/admin/*.html - # enterprise/src/plugins/web/admin/js/*.js - sed -i "s/TDengine/Power/g" ${top_dir}/../enterprise/src/plugins/web/admin/js/*.js - -} diff --git a/packaging/sed_pro.bat b/packaging/sed_pro.bat deleted file mode 100644 index fe4447dc77670d12f7c11553e57c6161a7df640e..0000000000000000000000000000000000000000 --- a/packaging/sed_pro.bat +++ /dev/null @@ -1,55 +0,0 @@ -set sed="C:\Program Files\Git\usr\bin\sed.exe" -set community_dir=%1 - -::cmake\install.inc -%sed% -i "s/C:\/TDengine/C:\/ProDB/g" %community_dir%\cmake\install.inc -%sed% -i "s/taos\.cfg/prodb\.cfg/g" %community_dir%\cmake\install.inc -%sed% -i "s/taos\.exe/prodbc\.exe/g" %community_dir%\cmake\install.inc -%sed% -i "/src\/connector/d" %community_dir%\cmake\install.inc -%sed% -i "/tests\/examples/d" %community_dir%\cmake\install.inc -::src\kit\shell\CMakeLists.txt -%sed% -i "s/OUTPUT_NAME taos/OUTPUT_NAME prodbc/g" %community_dir%\src\kit\shell\CMakeLists.txt -::src\kit\shell\inc\shell.h -%sed% -i "s/taos_history/prodb_history/g" 
%community_dir%\src\kit\shell\inc\shell.h -::src\inc\taosdef.h -%sed% -i "s/\"taosdata\"/\"prodb\"/g" %community_dir%\src\inc\taosdef.h -::src\util\src\tconfig.c -%sed% -i "s/taos\.cfg/prodb\.cfg/g" %community_dir%\src\util\src\tconfig.c -%sed% -i "s/etc\/taos/etc\/ProDB/g" %community_dir%\src\util\src\tconfig.c -::src\util\src\tlog.c -%sed% -i "s/log\/taos/log\/ProDB/g" %community_dir%\src\util\src\tlog.c -::src\dnode\src\dnodeSystem.c -%sed% -i "s/TDengine/ProDB/g" %community_dir%\src\dnode\src\dnodeSystem.c -::src\dnode\src\dnodeMain.c -%sed% -i "s/TDengine/ProDB/g" %community_dir%\src\dnode\src\dnodeMain.c -%sed% -i "s/taosdlog/prodlog/g" %community_dir%\src\dnode\src\dnodeMain.c -::src\client\src\tscSystem.c -%sed% -i "s/taoslog/prolog/g" %community_dir%\src\client\src\tscSystem.c -::src\util\src\tnote.c -%sed% -i "s/taosinfo/proinfo/g" %community_dir%\src\util\src\tnote.c -::src\dnode\CMakeLists.txt -%sed% -i "s/taos\.cfg/prodb\.cfg/g" %community_dir%\src\dnode\CMakeLists.txt -::src\os\src\linux\linuxEnv.c -%sed% -i "s/etc\/taos/etc\/ProDB/g" %community_dir%\src\os\src\linux\linuxEnv.c -%sed% -i "s/lib\/taos/lib\/ProDB/g" %community_dir%\src\os\src\linux\linuxEnv.c -%sed% -i "s/log\/taos/log\/ProDB/g" %community_dir%\src\os\src\linux\linuxEnv.c -::src\kit\shell\src\shellDarwin.c -%sed% -i "s/TDengine shell/ProDB shell/g" %community_dir%\src\kit\shell\src\shellDarwin.c -%sed% -i "s/2020 by TAOS Data/2021 by HanaTech/g" %community_dir%\src\kit\shell\src\shellDarwin.c -::src\kit\shell\src\shellLinux.c -%sed% -i "s/support@taosdata\.com/support@hanatech\.com\.cn/g" %community_dir%\src\kit\shell\src\shellLinux.c -%sed% -i "s/TDengine shell/ProDB shell/g" %community_dir%\src\kit\shell\src\shellLinux.c -%sed% -i "s/2020 by TAOS Data/2021 by HanaTech/g" %community_dir%\src\kit\shell\src\shellLinux.c -::src\os\src\windows\wEnv.c -%sed% -i "s/TDengine/ProDB/g" %community_dir%\src\os\src\windows\wEnv.c -::src\kit\shell\src\shellEngine.c -%sed% -i "s/TDengine shell/ProDB 
shell/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/2020 by TAOS Data, Inc/2021 by HanaTech, Inc/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/taos connect failed/prodbc connect failed/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/\"taos^> \"/\"ProDB^> \"/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/\" -^> \"/\" -^> \"/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/prompt_size = 6/prompt_size = 7/g" %community_dir%\src\kit\shell\src\shellEngine.c -::src\rpc\src\rpcMain.c -%sed% -i "s/taos connections/prodbc connections/g" %community_dir%\src\rpc\src\rpcMain.c -::src\plugins\monitor\src\monMain.c -%sed% -i "s/taosd is quiting/prodbs is quiting/g" %community_dir%\src\plugins\monitor\src\monMain.c diff --git a/packaging/sed_pro.sh b/packaging/sed_pro.sh deleted file mode 100755 index e7fdaeda4c68f4dfc76d4d879f20f83c123238c1..0000000000000000000000000000000000000000 --- a/packaging/sed_pro.sh +++ /dev/null @@ -1,162 +0,0 @@ -#!/bin/bash - -function replace_community_pro() { - # cmake/install.inc - sed -i "s/C:\/TDengine/C:\/ProDB/g" ${top_dir}/cmake/install.inc - sed -i "s/taos\.cfg/prodb\.cfg/g" ${top_dir}/cmake/install.inc - sed -i "s/taos\.exe/prodbc\.exe/g" ${top_dir}/cmake/install.inc - # src/kit/shell/CMakeLists.txt - sed -i "s/OUTPUT_NAME taos/OUTPUT_NAME prodbc/g" ${top_dir}/src/kit/shell/CMakeLists.txt - # src/kit/shell/inc/shell.h - sed -i "s/taos_history/prodb_history/g" ${top_dir}/src/kit/shell/inc/shell.h - # src/inc/taosdef.h - sed -i "s/\"taosdata\"/\"prodb\"/g" ${top_dir}/src/inc/taosdef.h - # src/util/src/tconfig.c - sed -i "s/taos\.cfg/prodb\.cfg/g" ${top_dir}/src/util/src/tconfig.c - sed -i "s/etc\/taos/etc\/ProDB/g" ${top_dir}/src/util/src/tconfig.c - sed -i "s/taos config/prodb config/g" ${top_dir}/src/util/src/tconfig.c - # src/util/src/tlog.c - sed -i "s/log\/taos/log\/ProDB/g" ${top_dir}/src/util/src/tlog.c - # src/dnode/src/dnodeSystem.c - sed -i 
"s/TDengine/ProDB/g" ${top_dir}/src/dnode/src/dnodeSystem.c - sed -i "s/TDengine/ProDB/g" ${top_dir}/src/dnode/src/dnodeMain.c - sed -i "s/taosdlog/prodlog/g" ${top_dir}/src/dnode/src/dnodeMain.c - # src/client/src/tscSystem.c - sed -i "s/taoslog/prolog/g" ${top_dir}/src/client/src/tscSystem.c - # src/util/src/tnote.c - sed -i "s/taosinfo/proinfo/g" ${top_dir}/src/util/src/tnote.c - # src/dnode/CMakeLists.txt - sed -i "s/taos\.cfg/prodb\.cfg/g" ${top_dir}/src/dnode/CMakeLists.txt - echo "SET_TARGET_PROPERTIES(taosd PROPERTIES OUTPUT_NAME prodbs)" >>${top_dir}/src/dnode/CMakeLists.txt - # src/os/src/linux/linuxEnv.c - sed -i "s/etc\/taos/etc\/ProDB/g" ${top_dir}/src/os/src/linux/linuxEnv.c - sed -i "s/lib\/taos/lib\/ProDB/g" ${top_dir}/src/os/src/linux/linuxEnv.c - sed -i "s/log\/taos/log\/ProDB/g" ${top_dir}/src/os/src/linux/linuxEnv.c - # src/kit/shell/src/shellDarwin.c - sed -i "s/TDengine shell/ProDB shell/g" ${top_dir}/src/kit/shell/src/shellDarwin.c - sed -i "s/2020 by TAOS Data/2021 by HanaTech/g" ${top_dir}/src/kit/shell/src/shellDarwin.c - # src/kit/shell/src/shellLinux.c - sed -i "s/support@taosdata\.com/support@hanatech\.com\.cn/g" ${top_dir}/src/kit/shell/src/shellLinux.c - sed -i "s/TDengine shell/ProDB shell/g" ${top_dir}/src/kit/shell/src/shellLinux.c - sed -i "s/2020 by TAOS Data/2021 by HanaTech/g" ${top_dir}/src/kit/shell/src/shellLinux.c - # src/os/src/windows/wEnv.c - sed -i "s/C:\/TDengine/C:\/ProDB/g" ${top_dir}/src/os/src/windows/wEnv.c - # src/kit/shell/src/shellEngine.c - sed -i "s/TDengine shell/ProDB shell/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/2020 by TAOS Data, Inc/2021 by Hanatech, Inc/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/taos connect failed/prodbc connect failed/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/\"taos> \"/\"ProDB> \"/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/\" -> \"/\" -> \"/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/prompt_size = 
6/prompt_size = 7/g" ${top_dir}/src/kit/shell/src/shellEngine.c - # src/rpc/src/rpcMain.c - sed -i "s/taos connections/prodbc connections/g" ${top_dir}/src/rpc/src/rpcMain.c - # src/plugins/monitor/src/monMain.c - sed -i "s/taosd is quiting/prodbs is quiting/g" ${top_dir}/src/plugins/monitor/src/monMain.c - - # packaging/tools/makepkg.sh - sed -i "s/productName=\"TDengine\"/productName=\"ProDB\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/serverName=\"taosd\"/serverName=\"prodbs\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/clientName=\"taos\"/clientName=\"prodbc\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"prodb\.cfg\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"prodb\.tar\.gz\"/g" ${top_dir}/packaging/tools/makepkg.sh - # packaging/tools/remove.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/ProDB\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/serverName=\"taosd\"/serverName=\"prodbs\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/clientName=\"taos\"/clientName=\"prodbc\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmpro\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/productName=\"TDengine\"/productName=\"ProDB\"/g" ${top_dir}/packaging/tools/remove.sh - # packaging/tools/startPre.sh - sed -i "s/serverName=\"taosd\"/serverName=\"prodbs\"/g" ${top_dir}/packaging/tools/startPre.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/ProDB\"/g" ${top_dir}/packaging/tools/startPre.sh - # packaging/tools/run_taosd.sh - sed -i "s/taosd/prodbs/g" ${top_dir}/packaging/tools/run_taosd.sh - # packaging/tools/install.sh - sed -i "s/clientName=\"taos\"/clientName=\"prodbc\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/serverName=\"taosd\"/serverName=\"prodbs\"/g" ${top_dir}/packaging/tools/install.sh - sed -i 
"s/configFile=\"taos\.cfg\"/configFile=\"prodb\.cfg\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/productName=\"TDengine\"/productName=\"ProDB\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/emailName=\"taosdata\.com\"/emailName=\"\hanatech\.com\.cn\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmpro\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/historyFile=\"taos_history\"/historyFile=\"prodb_history\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"prodb\.tar\.gz\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/ProDB\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/ProDB\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/ProDB\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/ProDB\"/g" ${top_dir}/packaging/tools/install.sh - - # packaging/tools/makeclient.sh - sed -i "s/productName=\"TDengine\"/productName=\"ProDB\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/clientName=\"taos\"/clientName=\"prodbc\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"prodb\.cfg\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"prodb\.tar\.gz\"/g" ${top_dir}/packaging/tools/makeclient.sh - # packaging/tools/remove_client.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/ProDB\"/g" ${top_dir}/packaging/tools/remove_client.sh - sed -i "s/clientName=\"taos\"/clientName=\"prodbc\"/g" ${top_dir}/packaging/tools/remove_client.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmpro\"/g" ${top_dir}/packaging/tools/remove_client.sh - # packaging/tools/install_client.sh - sed -i 
"s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/ProDB\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/ProDB\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/productName=\"TDengine\"/productName=\"ProDB\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/ProDB\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/ProDB\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/serverName=\"taosd\"/serverName=\"prodbs\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/clientName=\"taos\"/clientName=\"prodbc\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmpro\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"prodb\.cfg\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"prodb\.tar\.gz\"/g" ${top_dir}/packaging/tools/install_client.sh - - # packaging/tools/makearbi.sh - sed -i "s/productName=\"TDengine\"/productName=\"ProDB\"/g" ${top_dir}/packaging/tools/makearbi.sh - # packaging/tools/remove_arbi.sh - sed -i "s/TDengine/ProDB/g" ${top_dir}/packaging/tools/remove_arbi.sh - # packaging/tools/install_arbi.sh - sed -i "s/TDengine/ProDB/g" ${top_dir}/packaging/tools/install_arbi.sh - sed -i "s/taosdata\.com/hanatech\.com\.cn/g" ${top_dir}/packaging/tools/install_arbi.sh - - # packaging/tools/make_install.sh - sed -i "s/clientName=\"taos\"/clientName=\"prodbc\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/serverName=\"taosd\"/serverName=\"prodbs\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/ProDB\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/ProDB\"/g" 
${top_dir}/packaging/tools/make_install.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/ProDB\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"prodb\.cfg\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/ProDB\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/productName=\"TDengine\"/productName=\"ProDB\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/emailName=\"taosdata\.com\"/emailName=\"hanatech\.com\.cn\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmpro\"/g" ${top_dir}/packaging/tools/make_install.sh - - # packaging/rpm/taosd - sed -i "s/TDengine/ProDB/g" ${top_dir}/packaging/rpm/taosd - sed -i "s/usr\/local\/taos/usr\/local\/ProDB/g" ${top_dir}/packaging/rpm/taosd - sed -i "s/taosd/prodbs/g" ${top_dir}/packaging/rpm/taosd - # packaging/deb/taosd - sed -i "s/TDengine/ProDB/g" ${top_dir}/packaging/deb/taosd - sed -i "s/usr\/local\/taos/usr\/local\/ProDB/g" ${top_dir}/packaging/deb/taosd - sed -i "s/taosd/prodbs/g" ${top_dir}/packaging/deb/taosd - -} - -function replace_enterprise_pro() { - # enterprise/src/kit/perfMonitor/perfMonitor.c - sed -i "s/\"taosdata\"/\"prodb\"/g" ${top_dir}/../enterprise/src/kit/perfMonitor/perfMonitor.c - sed -i "s/TDengine/ProDB/g" ${top_dir}/../enterprise/src/kit/perfMonitor/perfMonitor.c - # enterprise/src/plugins/admin/src/httpAdminHandle.c - sed -i "s/taos\.cfg/prodb\.cfg/g" ${top_dir}/../enterprise/src/plugins/admin/src/httpAdminHandle.c - # enterprise/src/plugins/grant/src/grantMain.c - sed -i "s/taos\.cfg/prodb\.cfg/g" ${top_dir}/../enterprise/src/plugins/grant/src/grantMain.c - # enterprise/src/plugins/module/src/moduleMain.c - sed -i "s/taos\.cfg/prodb\.cfg/g" ${top_dir}/../enterprise/src/plugins/module/src/moduleMain.c - - # enterprise/src/plugins/web - sed -i -e "s/www\.taosdata\.com/www\.hanatech\.com\.cn/g" 
$(grep -r "www.taosdata.com" ${top_dir}/../enterprise/src/plugins/web | sed -r "s/(.*\.html):\s*(.*)/\1/g") - sed -i -e "s/2017, TAOS Data/2021, Hanatech/g" $(grep -r "TAOS Data" ${top_dir}/../enterprise/src/plugins/web | sed -r "s/(.*\.html):\s*(.*)/\1/g") - sed -i -e "s/taosd/prodbs/g" $(grep -r "taosd" ${top_dir}/../enterprise/src/plugins/web | grep -E "*\.js\s*.*" | sed -r -e "s/(.*\.js):\s*(.*)/\1/g" | sort | uniq) - # enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/taosd<\/th>/prodbs<\/th>/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/data:\['taosd', 'system'\],/data:\['prodbs', 'system'\],/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/name: 'taosd',/name: 'prodbs',/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - # enterprise/src/plugins/web/admin/*.html - sed -i "s/TDengine/ProDB/g" ${top_dir}/../enterprise/src/plugins/web/admin/*.html - # enterprise/src/plugins/web/admin/js/*.js - sed -i "s/TDengine/ProDB/g" ${top_dir}/../enterprise/src/plugins/web/admin/js/*.js -} diff --git a/packaging/sed_tq.bat b/packaging/sed_tq.bat deleted file mode 100644 index f8131eac3055e65dfe5289b58f2ac044cd79bd99..0000000000000000000000000000000000000000 --- a/packaging/sed_tq.bat +++ /dev/null @@ -1,46 +0,0 @@ -set sed="C:\Program Files\Git\usr\bin\sed.exe" -set community_dir=%1 - -::cmake\install.inc -%sed% -i "s/C:\/TDengine/C:\/TQ/g" %community_dir%\cmake\install.inc -%sed% -i "s/taos\.cfg/tq\.cfg/g" %community_dir%\cmake\install.inc -%sed% -i "s/taos\.exe/tq\.exe/g" %community_dir%\cmake\install.inc -%sed% -i "/src\/connector/d" %community_dir%\cmake\install.inc -%sed% -i "/tests\/examples/d" %community_dir%\cmake\install.inc -::src\kit\shell\CMakeLists.txt -%sed% -i "s/OUTPUT_NAME taos/OUTPUT_NAME tq/g" %community_dir%\src\kit\shell\CMakeLists.txt -::src\kit\shell\inc\shell.h -%sed% -i "s/taos_history/tq_history/g" %community_dir%\src\kit\shell\inc\shell.h -::src\inc\taosdef.h 
-%sed% -i "s/\"taosdata\"/\"tqueue\"/g" %community_dir%\src\inc\taosdef.h -::src\util\src\tconfig.c -%sed% -i "s/taos\.cfg/tq\.cfg/g" %community_dir%\src\util\src\tconfig.c -%sed% -i "s/etc\/taos/etc\/tq/g" %community_dir%\src\util\src\tconfig.c -::src\util\src\tlog.c -%sed% -i "s/log\/taos/log\/tq/g" %community_dir%\src\util\src\tlog.c -::src\dnode\src\dnodeSystem.c -%sed% -i "s/TDengine/TQ/g" %community_dir%\src\dnode\src\dnodeSystem.c -::src\dnode\src\dnodeMain.c -%sed% -i "s/TDengine/TQ/g" %community_dir%\src\dnode\src\dnodeMain.c -%sed% -i "s/taosdlog/tqdlog/g" %community_dir%\src\dnode\src\dnodeMain.c -::src\client\src\tscSystem.c -%sed% -i "s/taoslog/tqlog/g" %community_dir%\src\client\src\tscSystem.c -::src\util\src\tnote.c -%sed% -i "s/taosinfo/tqinfo/g" %community_dir%\src\util\src\tnote.c -::src\dnode\CMakeLists.txt -%sed% -i "s/taos\.cfg/tq\.cfg/g" %community_dir%\src\dnode\CMakeLists.txt -::src\os\src\linux\linuxEnv.c -%sed% -i "s/etc\/taos/etc\/tq/g" %community_dir%\src\os\src\linux\linuxEnv.c -%sed% -i "s/lib\/taos/lib\/tq/g" %community_dir%\src\os\src\linux\linuxEnv.c -%sed% -i "s/log\/taos/log\/tq/g" %community_dir%\src\os\src\linux\linuxEnv.c -::src\os\src\windows\wEnv.c -%sed% -i "s/TDengine/TQ/g" %community_dir%\src\os\src\windows\wEnv.c -::src\kit\shell\src\shellEngine.c -%sed% -i "s/TDengine shell/TQ shell/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/\"taos^> \"/\"tq^> \"/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/\" -^> \"/\" -^> \"/g" %community_dir%\src\kit\shell\src\shellEngine.c -%sed% -i "s/prompt_size = 6/prompt_size = 4/g" %community_dir%\src\kit\shell\src\shellEngine.c -::src\rpc\src\rpcMain.c -%sed% -i "s/taos connections/tq connections/g" %community_dir%\src\rpc\src\rpcMain.c -::src\plugins\monitor\src\monMain.c -%sed% -i "s/taosd is quiting/tqd is quiting/g" %community_dir%\src\plugins\monitor\src\monMain.c diff --git a/packaging/sed_tq.sh b/packaging/sed_tq.sh deleted file mode 100755 index 
412abb1fa702839a8d9a789c7860155a120419c6..0000000000000000000000000000000000000000 --- a/packaging/sed_tq.sh +++ /dev/null @@ -1,152 +0,0 @@ -#!/bin/bash - -function replace_community_tq() { - # cmake/install.inc - sed -i "s/C:\/TDengine/C:\/TQ/g" ${top_dir}/cmake/install.inc - sed -i "s/taos\.cfg/tq\.cfg/g" ${top_dir}/cmake/install.inc - sed -i "s/taos\.exe/tq\.exe/g" ${top_dir}/cmake/install.inc - # src/kit/shell/CMakeLists.txt - sed -i "s/OUTPUT_NAME taos/OUTPUT_NAME tq/g" ${top_dir}/src/kit/shell/CMakeLists.txt - # src/kit/shell/inc/shell.h - sed -i "s/taos_history/tq_history/g" ${top_dir}/src/kit/shell/inc/shell.h - # src/inc/taosdef.h - sed -i "s/\"taosdata\"/\"tqueue\"/g" ${top_dir}/src/inc/taosdef.h - # src/util/src/tconfig.c - sed -i "s/taos\.cfg/tq\.cfg/g" ${top_dir}/src/util/src/tconfig.c - sed -i "s/etc\/taos/etc\/tq/g" ${top_dir}/src/util/src/tconfig.c - sed -i "s/taos config/tq config/g" ${top_dir}/src/util/src/tconfig.c - # src/util/src/tlog.c - sed -i "s/log\/taos/log\/tq/g" ${top_dir}/src/util/src/tlog.c - # src/dnode/src/dnodeSystem.c - sed -i "s/TDengine/TQ/g" ${top_dir}/src/dnode/src/dnodeSystem.c - sed -i "s/TDengine/TQ/g" ${top_dir}/src/dnode/src/dnodeMain.c - sed -i "s/taosdlog/tqdlog/g" ${top_dir}/src/dnode/src/dnodeMain.c - # src/client/src/tscSystem.c - sed -i "s/taoslog/tqlog/g" ${top_dir}/src/client/src/tscSystem.c - # src/util/src/tnote.c - sed -i "s/taosinfo/tqinfo/g" ${top_dir}/src/util/src/tnote.c - # src/dnode/CMakeLists.txt - sed -i "s/taos\.cfg/tq\.cfg/g" ${top_dir}/src/dnode/CMakeLists.txt - echo "SET_TARGET_PROPERTIES(taosd PROPERTIES OUTPUT_NAME tqd)" >>${top_dir}/src/dnode/CMakeLists.txt - # src/os/src/linux/linuxEnv.c - sed -i "s/etc\/taos/etc\/tq/g" ${top_dir}/src/os/src/linux/linuxEnv.c - sed -i "s/lib\/taos/lib\/tq/g" ${top_dir}/src/os/src/linux/linuxEnv.c - sed -i "s/log\/taos/log\/tq/g" ${top_dir}/src/os/src/linux/linuxEnv.c - # src/kit/shell/src/shellDarwin.c - sed -i "s/TDengine shell/TQ shell/g" 
${top_dir}/src/kit/shell/src/shellDarwin.c - # src/kit/shell/src/shellLinux.c - sed -i "s/TDengine shell/TQ shell/g" ${top_dir}/src/kit/shell/src/shellLinux.c - # src/os/src/windows/wEnv.c - sed -i "s/C:\/TDengine/C:\/TQ/g" ${top_dir}/src/os/src/windows/wEnv.c - # src/kit/shell/src/shellEngine.c - sed -i "s/TDengine shell/TQ shell/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/taos connect failed/tq connect failed/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/\"taos> \"/\"tq> \"/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/\" -> \"/\" -> \"/g" ${top_dir}/src/kit/shell/src/shellEngine.c - sed -i "s/prompt_size = 6/prompt_size = 4/g" ${top_dir}/src/kit/shell/src/shellEngine.c - # src/rpc/src/rpcMain.c - sed -i "s/taos connections/tq connections/g" ${top_dir}/src/rpc/src/rpcMain.c - # src/plugins/monitor/src/monMain.c - sed -i "s/taosd is quiting/tqd is quiting/g" ${top_dir}/src/plugins/monitor/src/monMain.c - - # packaging/tools/makepkg.sh - sed -i "s/productName=\"TDengine\"/productName=\"TQ\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/serverName=\"taosd\"/serverName=\"tqd\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/clientName=\"taos\"/clientName=\"tq\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"tq\.cfg\"/g" ${top_dir}/packaging/tools/makepkg.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"tq\.tar\.gz\"/g" ${top_dir}/packaging/tools/makepkg.sh - # packaging/tools/remove.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/tq\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/serverName=\"taosd\"/serverName=\"tqd\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/clientName=\"taos\"/clientName=\"tq\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmtq\"/g" ${top_dir}/packaging/tools/remove.sh - sed -i "s/productName=\"TDengine\"/productName=\"TQ\"/g" 
${top_dir}/packaging/tools/remove.sh - # packaging/tools/startPre.sh - sed -i "s/serverName=\"taosd\"/serverName=\"tqd\"/g" ${top_dir}/packaging/tools/startPre.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/tq\"/g" ${top_dir}/packaging/tools/startPre.sh - # packaging/tools/run_taosd.sh - sed -i "s/taosd/tqd/g" ${top_dir}/packaging/tools/run_taosd.sh - # packaging/tools/install.sh - sed -i "s/clientName=\"taos\"/clientName=\"tq\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/serverName=\"taosd\"/serverName=\"tqd\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"tq\.cfg\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/productName=\"TDengine\"/productName=\"TQ\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmtq\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/historyFile=\"taos_history\"/historyFile=\"tq_history\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"tq\.tar\.gz\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/tq\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/tq\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/tq\"/g" ${top_dir}/packaging/tools/install.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/tq\"/g" ${top_dir}/packaging/tools/install.sh - - # packaging/tools/makeclient.sh - sed -i "s/productName=\"TDengine\"/productName=\"TQ\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/clientName=\"taos\"/clientName=\"tq\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"tq\.cfg\"/g" ${top_dir}/packaging/tools/makeclient.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"tq\.tar\.gz\"/g" ${top_dir}/packaging/tools/makeclient.sh - # 
packaging/tools/remove_client.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/tq\"/g" ${top_dir}/packaging/tools/remove_client.sh - sed -i "s/clientName=\"taos\"/clientName=\"tq\"/g" ${top_dir}/packaging/tools/remove_client.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmtq\"/g" ${top_dir}/packaging/tools/remove_client.sh - # packaging/tools/install_client.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/tq\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/tq\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/productName=\"TDengine\"/productName=\"TQ\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/tq\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/tq\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/serverName=\"taosd\"/serverName=\"tqd\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/clientName=\"taos\"/clientName=\"tq\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmtq\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"tq\.cfg\"/g" ${top_dir}/packaging/tools/install_client.sh - sed -i "s/tarName=\"taos\.tar\.gz\"/tarName=\"tq\.tar\.gz\"/g" ${top_dir}/packaging/tools/install_client.sh - - # packaging/tools/makearbi.sh - sed -i "s/productName=\"TDengine\"/productName=\"TQ\"/g" ${top_dir}/packaging/tools/makearbi.sh - # packaging/tools/remove_arbi.sh - sed -i "s/TDengine/TQ/g" ${top_dir}/packaging/tools/remove_arbi.sh - # packaging/tools/install_arbi.sh - sed -i "s/TDengine/TQ/g" ${top_dir}/packaging/tools/install_arbi.sh - - # packaging/tools/make_install.sh - sed -i "s/clientName=\"taos\"/clientName=\"tq\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i 
"s/serverName=\"taosd\"/serverName=\"tqd\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/logDir=\"\/var\/log\/taos\"/logDir=\"\/var\/log\/tq\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/dataDir=\"\/var\/lib\/taos\"/dataDir=\"\/var\/lib\/tq\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/configDir=\"\/etc\/taos\"/configDir=\"\/etc\/tq\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/configFile=\"taos\.cfg\"/configFile=\"tq\.cfg\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/installDir=\"\/usr\/local\/taos\"/installDir=\"\/usr\/local\/tq\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/productName=\"TDengine\"/productName=\"TQ\"/g" ${top_dir}/packaging/tools/make_install.sh - sed -i "s/uninstallScript=\"rmtaos\"/uninstallScript=\"rmtq\"/g" ${top_dir}/packaging/tools/make_install.sh - - # packaging/rpm/taosd - sed -i "s/TDengine/TQ/g" ${top_dir}/packaging/rpm/taosd - sed -i "s/usr\/local\/taos/usr\/local\/tq/g" ${top_dir}/packaging/rpm/taosd - sed -i "s/taosd/tqd/g" ${top_dir}/packaging/rpm/taosd - # packaging/deb/taosd - sed -i "s/TDengine/TQ/g" ${top_dir}/packaging/deb/taosd - sed -i "s/usr\/local\/taos/usr\/local\/tq/g" ${top_dir}/packaging/deb/taosd - sed -i "s/taosd/tqd/g" ${top_dir}/packaging/deb/taosd -} - -function replace_enterprise_tq() { - # enterprise/src/kit/perfMonitor/perfMonitor.c - sed -i "s/\"taosdata\"/\"tqueue\"/g" ${top_dir}/../enterprise/src/kit/perfMonitor/perfMonitor.c - sed -i "s/TDengine/TQ/g" ${top_dir}/../enterprise/src/kit/perfMonitor/perfMonitor.c - # enterprise/src/plugins/admin/src/httpAdminHandle.c - sed -i "s/taos\.cfg/tq\.cfg/g" ${top_dir}/../enterprise/src/plugins/admin/src/httpAdminHandle.c - # enterprise/src/plugins/grant/src/grantMain.c - sed -i "s/taos\.cfg/tq\.cfg/g" ${top_dir}/../enterprise/src/plugins/grant/src/grantMain.c - # enterprise/src/plugins/module/src/moduleMain.c - sed -i "s/taos\.cfg/tq\.cfg/g" 
${top_dir}/../enterprise/src/plugins/module/src/moduleMain.c - - # enterprise/src/plugins/web - sed -i -e "s/taosd/tqd/g" $(grep -r "taosd" ${top_dir}/../enterprise/src/plugins/web | grep -E "*\.js\s*.*" | sed -r -e "s/(.*\.js):\s*(.*)/\1/g" | sort | uniq) - # enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/taosd<\/th>/tqd<\/th>/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/data:\['taosd', 'system'\],/data:\['tqd', 'system'\],/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - sed -i -e "s/name: 'taosd',/name: 'tqd',/g" ${top_dir}/../enterprise/src/plugins/web/admin/monitor.html - # enterprise/src/plugins/web/admin/*.html - sed -i "s/TDengine/TQ/g" ${top_dir}/../enterprise/src/plugins/web/admin/*.html - # enterprise/src/plugins/web/admin/js/*.js - sed -i "s/TDengine/TQ/g" ${top_dir}/../enterprise/src/plugins/web/admin/js/*.js -} diff --git a/packaging/tools/install.sh b/packaging/tools/install.sh index b9a5ad35947a117e2c673701ce244e0d74cefcde..ee01252b5ff6ade06ea6526315ab960657a5705b 100755 --- a/packaging/tools/install.sh +++ b/packaging/tools/install.sh @@ -201,6 +201,7 @@ function install_bin() { [ -x ${install_main_dir}/bin/${serverName} ] && ${csudo}ln -s ${install_main_dir}/bin/${serverName} ${bin_link_dir}/${serverName} || : [ -x ${install_main_dir}/bin/taosadapter ] && ${csudo}ln -s ${install_main_dir}/bin/taosadapter ${bin_link_dir}/taosadapter || : [ -x ${install_main_dir}/bin/taosBenchmark ] && ${csudo}ln -s ${install_main_dir}/bin/taosBenchmark ${bin_link_dir}/taosdemo || : + [ -x ${install_main_dir}/bin/taosBenchmark ] && ${csudo}ln -s ${install_main_dir}/bin/taosBenchmark ${bin_link_dir}/taosBenchmark || : [ -x ${install_main_dir}/bin/taosdump ] && ${csudo}ln -s ${install_main_dir}/bin/taosdump ${bin_link_dir}/taosdump || : [ -x ${install_main_dir}/bin/TDinsight.sh ] && ${csudo}ln -s ${install_main_dir}/bin/TDinsight.sh ${bin_link_dir}/TDinsight.sh || : [ -x ${install_main_dir}/bin/remove.sh ] 
&& ${csudo}ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript} || : diff --git a/packaging/tools/makeclient.sh b/packaging/tools/makeclient.sh index b168f89851e5d9912895d86fe749dba4552c91f1..1e15a916cfa569508aeea2c4ab3aa9b57a93d61d 100755 --- a/packaging/tools/makeclient.sh +++ b/packaging/tools/makeclient.sh @@ -13,6 +13,7 @@ osType=$5 verMode=$6 verType=$7 pagMode=$8 +dbName=$9 productName="TDengine" clientName="taos" @@ -62,7 +63,7 @@ else fi header_files="${code_dir}/inc/taos.h ${code_dir}/inc/taosdef.h ${code_dir}/inc/taoserror.h" -if [ "$verMode" == "cluster" ]; then +if [ "$dbName" != "taos" ]; then cfg_dir="${top_dir}/../enterprise/packaging/cfg" else cfg_dir="${top_dir}/packaging/cfg" diff --git a/packaging/tools/makepkg.sh b/packaging/tools/makepkg.sh index f9f9d8b68bcf5061c3c3c76efbb706750c27ca33..9f3d67f386cb967c0cb203b70e7c6d4afe85b5a2 100755 --- a/packaging/tools/makepkg.sh +++ b/packaging/tools/makepkg.sh @@ -3,7 +3,7 @@ # Generate tar.gz package for all os system set -e -set -x +#set -x curr_dir=$(pwd) compile_dir=$1 @@ -15,6 +15,7 @@ verMode=$6 verType=$7 pagMode=$8 versionComp=$9 +dbName=${10} script_dir="$(dirname $(readlink -f $0))" top_dir="$(readlink -f ${script_dir}/../..)" @@ -24,6 +25,11 @@ serverName="taosd" clientName="taos" configFile="taos.cfg" tarName="taos.tar.gz" +dumpName="taosdump" +benchmarkName="taosBenchmark" +toolsName="taostools" +adapterName="taosadapter" +defaultPasswd="taosdata" # create compressed install file. 
build_dir="${compile_dir}/build" @@ -32,21 +38,21 @@ release_dir="${top_dir}/release" #package_name='linux' if [ "$verMode" == "cluster" ]; then - install_dir="${release_dir}/${productName}-enterprise-server-${version}" + install_dir="${release_dir}/${productName}-enterprise-server-${version}" else - install_dir="${release_dir}/${productName}-server-${version}" + install_dir="${release_dir}/${productName}-server-${version}" fi if [ -d ${top_dir}/src/kit/taos-tools/packaging/deb ]; then - cd ${top_dir}/src/kit/taos-tools/packaging/deb - [ -z "$taos_tools_ver" ] && taos_tools_ver="0.1.0" + cd ${top_dir}/src/kit/taos-tools/packaging/deb + [ -z "$taos_tools_ver" ] && taos_tools_ver="0.1.0" - taostools_ver=$(git describe --tags|sed -e 's/ver-//g'|awk -F '-' '{print $1}') - taostools_install_dir="${release_dir}/taosTools-${taostools_ver}" + taostools_ver=$(git describe --tags | sed -e 's/ver-//g' | awk -F '-' '{print $1}') + taostools_install_dir="${release_dir}/${clientName}Tools-${taostools_ver}" - cd ${curr_dir} + cd ${curr_dir} else - taostools_install_dir="${release_dir}/taosTools-${version}" + taostools_install_dir="${release_dir}/${clientName}Tools-${version}" fi # Directories and files @@ -63,11 +69,11 @@ else || echo "failed to download TDinsight.sh" taostools_bin_files=" ${build_dir}/bin/taosdump \ + ${build_dir}/bin/taosBenchmark \ ${build_dir}/bin/TDinsight.sh " bin_files="${build_dir}/bin/${serverName} \ ${build_dir}/bin/${clientName} \ - ${build_dir}/bin/taosBenchmark \ ${taostools_bin_files} \ ${build_dir}/bin/taosadapter \ ${build_dir}/bin/tarbitrator\ @@ -80,7 +86,8 @@ fi lib_files="${build_dir}/lib/libtaos.so.${version}" header_files="${code_dir}/inc/taos.h ${code_dir}/inc/taosdef.h ${code_dir}/inc/taoserror.h" -if [ "$verMode" == "cluster" ]; then + +if [ "$dbName" != "taos" ]; then cfg_dir="${top_dir}/../enterprise/packaging/cfg" else cfg_dir="${top_dir}/packaging/cfg" @@ -99,25 +106,24 @@ mkdir -p ${install_dir} mkdir -p ${install_dir}/inc && cp 
${header_files} ${install_dir}/inc mkdir -p ${install_dir}/cfg && cp ${cfg_dir}/${configFile} ${install_dir}/cfg/${configFile} - if [ -f "${compile_dir}/test/cfg/taosadapter.toml" ]; then - cp ${compile_dir}/test/cfg/taosadapter.toml ${install_dir}/cfg || : + cp ${compile_dir}/test/cfg/taosadapter.toml ${install_dir}/cfg || : fi if [ -f "${compile_dir}/test/cfg/taosadapter.service" ]; then - cp ${compile_dir}/test/cfg/taosadapter.service ${install_dir}/cfg || : + cp ${compile_dir}/test/cfg/taosadapter.service ${install_dir}/cfg || : fi if [ -f "${cfg_dir}/${serverName}.service" ]; then - cp ${cfg_dir}/${serverName}.service ${install_dir}/cfg || : + cp ${cfg_dir}/${serverName}.service ${install_dir}/cfg || : fi -if [ -f "${cfg_dir}/tarbitratord.service" ]; then - cp ${cfg_dir}/tarbitratord.service ${install_dir}/cfg || : +if [ -f "${top_dir}/packaging/cfg/tarbitratord.service" ]; then + cp ${top_dir}/packaging/cfg/tarbitratord.service ${install_dir}/cfg || : fi -if [ -f "${cfg_dir}/nginxd.service" ]; then - cp ${cfg_dir}/nginxd.service ${install_dir}/cfg || : +if [ -f "${top_dir}/packaging/cfg/nginxd.service" ]; then + cp ${top_dir}/packaging/cfg/nginxd.service ${install_dir}/cfg || : fi mkdir -p ${install_dir}/bin && cp ${bin_files} ${install_dir}/bin && chmod a+x ${install_dir}/bin/* || : @@ -126,153 +132,171 @@ mkdir -p ${install_dir}/init.d && cp ${init_file_rpm} ${install_dir}/init.d/${se mkdir -p ${install_dir}/init.d && cp ${init_file_tarbitrator_deb} ${install_dir}/init.d/tarbitratord.deb || : mkdir -p ${install_dir}/init.d && cp ${init_file_tarbitrator_rpm} ${install_dir}/init.d/tarbitratord.rpm || : -#if [ -n "${taostools_bin_files}" ]; then -# mkdir -p ${taostools_install_dir} || echo -e "failed to create ${taostools_install_dir}" -# mkdir -p ${taostools_install_dir}/bin \ -# && cp ${taostools_bin_files} ${taostools_install_dir}/bin \ -# && chmod a+x ${taostools_install_dir}/bin/* || : - -# if [ -f 
${top_dir}/src/kit/taos-tools/packaging/tools/install-taostools.sh ]; then -# cp ${top_dir}/src/kit/taos-tools/packaging/tools/install-taostools.sh \ -# ${taostools_install_dir}/ > /dev/null \ -# && chmod a+x ${taostools_install_dir}/install-taostools.sh \ -# || echo -e "failed to copy install-taostools.sh" -# else -# echo -e "install-taostools.sh not found" -# fi - -# if [ -f ${top_dir}/src/kit/taos-tools/packaging/tools/uninstall-taostools.sh ]; then -# cp ${top_dir}/src/kit/taos-tools/packaging/tools/uninstall-taostools.sh \ -# ${taostools_install_dir}/ > /dev/null \ -# && chmod a+x ${taostools_install_dir}/uninstall-taostools.sh \ -# || echo -e "failed to copy uninstall-taostools.sh" -# else -# echo -e "uninstall-taostools.sh not found" -# fi - -# if [ -f ${build_dir}/lib/libavro.so.23.0.0 ]; then -# mkdir -p ${taostools_install_dir}/avro/{lib,lib/pkgconfig} || echo -e "failed to create ${taostools_install_dir}/avro" -# cp ${build_dir}/lib/libavro.* ${taostools_install_dir}/avro/lib -# cp ${build_dir}/lib/pkgconfig/avro-c.pc ${taostools_install_dir}/avro/lib/pkgconfig -# fi -#fi +if [ $adapterName != "taosadapter" ]; then + mv ${install_dir}/cfg/taosadapter.toml ${install_dir}/cfg/$adapterName.toml + sed -i "s/path = \"\/var\/log\/taos\"/path = \"\/var\/log\/${productName}\"/g" ${install_dir}/cfg/$adapterName.toml + sed -i "s/password = \"taosdata\"/password = \"${defaultPasswd}\"/g" ${install_dir}/cfg/$adapterName.toml -if [ -f ${build_dir}/bin/jemalloc-config ]; then - mkdir -p ${install_dir}/jemalloc/{bin,lib,lib/pkgconfig,include/jemalloc,share/doc/jemalloc,share/man/man3} - cp ${build_dir}/bin/jemalloc-config ${install_dir}/jemalloc/bin - if [ -f ${build_dir}/bin/jemalloc.sh ]; then - cp ${build_dir}/bin/jemalloc.sh ${install_dir}/jemalloc/bin - fi - if [ -f ${build_dir}/bin/jeprof ]; then - cp ${build_dir}/bin/jeprof ${install_dir}/jemalloc/bin - fi - if [ -f ${build_dir}/include/jemalloc/jemalloc.h ]; then - cp ${build_dir}/include/jemalloc/jemalloc.h 
${install_dir}/jemalloc/include/jemalloc - fi - if [ -f ${build_dir}/lib/libjemalloc.so.2 ]; then - cp ${build_dir}/lib/libjemalloc.so.2 ${install_dir}/jemalloc/lib - ln -sf libjemalloc.so.2 ${install_dir}/jemalloc/lib/libjemalloc.so - fi - if [ -f ${build_dir}/lib/libjemalloc.a ]; then - cp ${build_dir}/lib/libjemalloc.a ${install_dir}/jemalloc/lib - fi - if [ -f ${build_dir}/lib/libjemalloc_pic.a ]; then - cp ${build_dir}/lib/libjemalloc_pic.a ${install_dir}/jemalloc/lib - fi - if [ -f ${build_dir}/lib/pkgconfig/jemalloc.pc ]; then - cp ${build_dir}/lib/pkgconfig/jemalloc.pc ${install_dir}/jemalloc/lib/pkgconfig + mv ${install_dir}/cfg/taosadapter.service ${install_dir}/cfg/$adapterName.service + sed -i "s/TDengine/${productName}/g" ${install_dir}/cfg/$adapterName.service + sed -i "s/taosAdapter/${adapterName}/g" ${install_dir}/cfg/$adapterName.service + sed -i "s/taosadapter/${adapterName}/g" ${install_dir}/cfg/$adapterName.service + + mv ${install_dir}/bin/taosadapter ${install_dir}/bin/${adapterName} + mv ${install_dir}/bin/run_taosd_and_taosadapter.sh ${install_dir}/bin/run_${serverName}_and_${adapterName}.sh + mv ${install_dir}/bin/taosd-dump-cfg.gdb ${install_dir}/bin/${serverName}-dump-cfg.gdb +fi + +if [ -n "${taostools_bin_files}" ]; then + mkdir -p ${taostools_install_dir} || echo -e "failed to create ${taostools_install_dir}" + mkdir -p ${taostools_install_dir}/bin \ + && cp ${taostools_bin_files} ${taostools_install_dir}/bin \ + && chmod a+x ${taostools_install_dir}/bin/* || : + + if [ -f ${top_dir}/src/kit/taos-tools/packaging/tools/install-taostools.sh ]; then + cp ${top_dir}/src/kit/taos-tools/packaging/tools/install-taostools.sh \ + ${taostools_install_dir}/ > /dev/null \ + && chmod a+x ${taostools_install_dir}/install-taostools.sh \ + || echo -e "failed to copy install-taostools.sh" + else + echo -e "install-taostools.sh not found" fi - if [ -f ${build_dir}/share/doc/jemalloc/jemalloc.html ]; then - cp 
${build_dir}/share/doc/jemalloc/jemalloc.html ${install_dir}/jemalloc/share/doc/jemalloc + + if [ -f ${top_dir}/src/kit/taos-tools/packaging/tools/uninstall-taostools.sh ]; then + cp ${top_dir}/src/kit/taos-tools/packaging/tools/uninstall-taostools.sh \ + ${taostools_install_dir}/ > /dev/null \ + && chmod a+x ${taostools_install_dir}/uninstall-taostools.sh \ + || echo -e "failed to copy uninstall-taostools.sh" + else + echo -e "uninstall-taostools.sh not found" fi - if [ -f ${build_dir}/share/man/man3/jemalloc.3 ]; then - cp ${build_dir}/share/man/man3/jemalloc.3 ${install_dir}/jemalloc/share/man/man3 + + if [ -f ${build_dir}/lib/libavro.so.23.0.0 ]; then + mkdir -p ${taostools_install_dir}/avro/{lib,lib/pkgconfig} || echo -e "failed to create ${taostools_install_dir}/avro" + cp ${build_dir}/lib/libavro.* ${taostools_install_dir}/avro/lib + cp ${build_dir}/lib/pkgconfig/avro-c.pc ${taostools_install_dir}/avro/lib/pkgconfig fi fi +if [ -f ${build_dir}/bin/jemalloc-config ]; then + mkdir -p ${install_dir}/jemalloc/{bin,lib,lib/pkgconfig,include/jemalloc,share/doc/jemalloc,share/man/man3} + cp ${build_dir}/bin/jemalloc-config ${install_dir}/jemalloc/bin + if [ -f ${build_dir}/bin/jemalloc.sh ]; then + cp ${build_dir}/bin/jemalloc.sh ${install_dir}/jemalloc/bin + fi + if [ -f ${build_dir}/bin/jeprof ]; then + cp ${build_dir}/bin/jeprof ${install_dir}/jemalloc/bin + fi + if [ -f ${build_dir}/include/jemalloc/jemalloc.h ]; then + cp ${build_dir}/include/jemalloc/jemalloc.h ${install_dir}/jemalloc/include/jemalloc + fi + if [ -f ${build_dir}/lib/libjemalloc.so.2 ]; then + cp ${build_dir}/lib/libjemalloc.so.2 ${install_dir}/jemalloc/lib + ln -sf libjemalloc.so.2 ${install_dir}/jemalloc/lib/libjemalloc.so + fi + if [ -f ${build_dir}/lib/libjemalloc.a ]; then + cp ${build_dir}/lib/libjemalloc.a ${install_dir}/jemalloc/lib + fi + if [ -f ${build_dir}/lib/libjemalloc_pic.a ]; then + cp ${build_dir}/lib/libjemalloc_pic.a ${install_dir}/jemalloc/lib + fi + if [ -f 
${build_dir}/lib/pkgconfig/jemalloc.pc ]; then + cp ${build_dir}/lib/pkgconfig/jemalloc.pc ${install_dir}/jemalloc/lib/pkgconfig + fi + if [ -f ${build_dir}/share/doc/jemalloc/jemalloc.html ]; then + cp ${build_dir}/share/doc/jemalloc/jemalloc.html ${install_dir}/jemalloc/share/doc/jemalloc + fi + if [ -f ${build_dir}/share/man/man3/jemalloc.3 ]; then + cp ${build_dir}/share/man/man3/jemalloc.3 ${install_dir}/jemalloc/share/man/man3 + fi +fi + if [ "$verMode" == "cluster" ]; then - sed 's/verMode=edge/verMode=cluster/g' ${install_dir}/bin/remove.sh >> remove_temp.sh - mv remove_temp.sh ${install_dir}/bin/remove.sh + sed 's/verMode=edge/verMode=cluster/g' ${install_dir}/bin/remove.sh >>remove_temp.sh + mv remove_temp.sh ${install_dir}/bin/remove.sh - mkdir -p ${install_dir}/nginxd && cp -r ${nginx_dir}/* ${install_dir}/nginxd - cp ${nginx_dir}/png/taos.png ${install_dir}/nginxd/admin/images/taos.png - rm -rf ${install_dir}/nginxd/png + mkdir -p ${install_dir}/nginxd && cp -r ${nginx_dir}/* ${install_dir}/nginxd + cp ${nginx_dir}/png/taos.png ${install_dir}/nginxd/admin/images/taos.png + rm -rf ${install_dir}/nginxd/png - if [ "$cpuType" == "aarch64" ]; then - cp -f ${install_dir}/nginxd/sbin/arm/64bit/nginx ${install_dir}/nginxd/sbin/ - elif [ "$cpuType" == "aarch32" ]; then - cp -f ${install_dir}/nginxd/sbin/arm/32bit/nginx ${install_dir}/nginxd/sbin/ - fi - rm -rf ${install_dir}/nginxd/sbin/arm + if [ "$cpuType" == "aarch64" ]; then + cp -f ${install_dir}/nginxd/sbin/arm/64bit/nginx ${install_dir}/nginxd/sbin/ + elif [ "$cpuType" == "aarch32" ]; then + cp -f ${install_dir}/nginxd/sbin/arm/32bit/nginx ${install_dir}/nginxd/sbin/ + fi + rm -rf ${install_dir}/nginxd/sbin/arm fi cd ${install_dir} -tar -zcv -f ${tarName} * --remove-files || : +tar -zcv -f ${tarName} * --remove-files || : exitcode=$? if [ "$exitcode" != "0" ]; then - echo "tar ${tarName} error !!!" - exit $exitcode + echo "tar ${tarName} error !!!" 
+ exit $exitcode fi cd ${curr_dir} cp ${install_files} ${install_dir} if [ "$verMode" == "cluster" ]; then - sed 's/verMode=edge/verMode=cluster/g' ${install_dir}/install.sh >> install_temp.sh - mv install_temp.sh ${install_dir}/install.sh + sed 's/verMode=edge/verMode=cluster/g' ${install_dir}/install.sh >>install_temp.sh + mv install_temp.sh ${install_dir}/install.sh fi if [ "$pagMode" == "lite" ]; then - sed 's/pagMode=full/pagMode=lite/g' ${install_dir}/install.sh >> install_temp.sh - mv install_temp.sh ${install_dir}/install.sh + sed 's/pagMode=full/pagMode=lite/g' ${install_dir}/install.sh >>install_temp.sh + mv install_temp.sh ${install_dir}/install.sh fi chmod a+x ${install_dir}/install.sh -# Copy example code -mkdir -p ${install_dir}/examples -examples_dir="${top_dir}/examples" - cp -r ${examples_dir}/c ${install_dir}/examples -if [[ "$pagMode" != "lite" ]] && [[ "$cpuType" != "aarch32" ]]; then - if [ -d ${examples_dir}/JDBC/connectionPools/target ]; then - rm -rf ${examples_dir}/JDBC/connectionPools/target - fi - if [ -d ${examples_dir}/JDBC/JDBCDemo/target ]; then - rm -rf ${examples_dir}/JDBC/JDBCDemo/target - fi - if [ -d ${examples_dir}/JDBC/mybatisplus-demo/target ]; then - rm -rf ${examples_dir}/JDBC/mybatisplus-demo/target - fi - if [ -d ${examples_dir}/JDBC/springbootdemo/target ]; then - rm -rf ${examples_dir}/JDBC/springbootdemo/target - fi - if [ -d ${examples_dir}/JDBC/SpringJdbcTemplate/target ]; then - rm -rf ${examples_dir}/JDBC/SpringJdbcTemplate/target - fi - if [ -d ${examples_dir}/JDBC/taosdemo/target ]; then - rm -rf ${examples_dir}/JDBC/taosdemo/target - fi +if [[ $dbName == "taos" ]]; then + # Copy example code + mkdir -p ${install_dir}/examples + examples_dir="${top_dir}/examples" + cp -r ${examples_dir}/c ${install_dir}/examples + if [[ "$pagMode" != "lite" ]] && [[ "$cpuType" != "aarch32" ]]; then + if [ -d ${examples_dir}/JDBC/connectionPools/target ]; then + rm -rf ${examples_dir}/JDBC/connectionPools/target + fi + if [ -d 
${examples_dir}/JDBC/JDBCDemo/target ]; then + rm -rf ${examples_dir}/JDBC/JDBCDemo/target + fi + if [ -d ${examples_dir}/JDBC/mybatisplus-demo/target ]; then + rm -rf ${examples_dir}/JDBC/mybatisplus-demo/target + fi + if [ -d ${examples_dir}/JDBC/springbootdemo/target ]; then + rm -rf ${examples_dir}/JDBC/springbootdemo/target + fi + if [ -d ${examples_dir}/JDBC/SpringJdbcTemplate/target ]; then + rm -rf ${examples_dir}/JDBC/SpringJdbcTemplate/target + fi + if [ -d ${examples_dir}/JDBC/taosdemo/target ]; then + rm -rf ${examples_dir}/JDBC/taosdemo/target + fi - cp -r ${examples_dir}/JDBC ${install_dir}/examples - cp -r ${examples_dir}/matlab ${install_dir}/examples - cp -r ${examples_dir}/python ${install_dir}/examples - cp -r ${examples_dir}/R ${install_dir}/examples - cp -r ${examples_dir}/go ${install_dir}/examples - cp -r ${examples_dir}/nodejs ${install_dir}/examples - cp -r ${examples_dir}/C# ${install_dir}/examples + cp -r ${examples_dir}/JDBC ${install_dir}/examples + cp -r ${examples_dir}/matlab ${install_dir}/examples + cp -r ${examples_dir}/python ${install_dir}/examples + cp -r ${examples_dir}/R ${install_dir}/examples + cp -r ${examples_dir}/go ${install_dir}/examples + cp -r ${examples_dir}/nodejs ${install_dir}/examples + cp -r ${examples_dir}/C# ${install_dir}/examples + fi fi + # Copy driver -mkdir -p ${install_dir}/driver && cp ${lib_files} ${install_dir}/driver && echo "${versionComp}" > ${install_dir}/driver/vercomp.txt +mkdir -p ${install_dir}/driver && cp ${lib_files} ${install_dir}/driver && echo "${versionComp}" >${install_dir}/driver/vercomp.txt # Copy connector #connector_dir="${code_dir}/connector" #mkdir -p ${install_dir}/connector #if [[ "$pagMode" != "lite" ]] && [[ "$cpuType" != "aarch32" ]]; then -# cp ${build_dir}/lib/*.jar ${install_dir}/connector ||: +# cp ${build_dir}/lib/*.jar ${install_dir}/connector || : # if find ${connector_dir}/go -mindepth 1 -maxdepth 1 | read; then # cp -r ${connector_dir}/go ${install_dir}/connector # 
else # echo "WARNING: go connector not found, please check if want to use it!" # fi -# cp -r ${connector_dir}/python ${install_dir}/connector -# cp -r ${connector_dir}/nodejs ${install_dir}/connector +# cp -r ${connector_dir}/python ${install_dir}/connector +# cp -r ${connector_dir}/nodejs ${install_dir}/connector #fi # Copy release note # cp ${script_dir}/release_note ${install_dir} @@ -313,18 +337,18 @@ fi tar -zcv -f "$(basename ${pkg_name}).tar.gz" "$(basename ${install_dir})" --remove-files || : exitcode=$? if [ "$exitcode" != "0" ]; then - echo "tar ${pkg_name}.tar.gz error !!!" - exit $exitcode + echo "tar ${pkg_name}.tar.gz error !!!" + exit $exitcode fi -#if [ -n "${taostools_bin_files}" ]; then -# wget https://github.com/taosdata/grafanaplugin/releases/latest/download/TDinsight.sh -O ${taostools_install_dir}/bin/TDinsight.sh && echo "TDinsight.sh downloaded!"|| echo "failed to download TDinsight.sh" -# tar -zcv -f "$(basename ${taostools_pkg_name}).tar.gz" "$(basename ${taostools_install_dir})" --remove-files || : -# exitcode=$? -# if [ "$exitcode" != "0" ]; then -# echo "tar ${taostools_pkg_name}.tar.gz error !!!" -# exit $exitcode -# fi -#fi +if [ -n "${taostools_bin_files}" ]; then + wget https://github.com/taosdata/grafanaplugin/releases/latest/download/TDinsight.sh -O ${taostools_install_dir}/bin/TDinsight.sh && echo "TDinsight.sh downloaded!"|| echo "failed to download TDinsight.sh" + tar -zcv -f "$(basename ${taostools_pkg_name}).tar.gz" "$(basename ${taostools_install_dir})" --remove-files || : + exitcode=$? + if [ "$exitcode" != "0" ]; then + echo "tar ${taostools_pkg_name}.tar.gz error !!!" 
+ exit $exitcode + fi +fi cd ${curr_dir} diff --git a/packaging/tools/remove.sh b/packaging/tools/remove.sh index 14b9688eb4b42bfecd2fbc78afba66f1118f5d45..b129d12840ee6a85807eb5f2e1f1e6d13814a94e 100755 --- a/packaging/tools/remove.sh +++ b/packaging/tools/remove.sh @@ -1,6 +1,6 @@ #!/bin/bash # -# Script to stop the service and uninstall TDengine, but retain the config, data and log files. +# Script to stop and uninstall the service, but retain the config, data and log files. set -e #set -x diff --git a/src/client/inc/tscUtil.h b/src/client/inc/tscUtil.h index 1f84fa27d7ccfc32337365295b80da873c1053f9..e380dc77768a97a267ce80f8fb20273acb389fa7 100644 --- a/src/client/inc/tscUtil.h +++ b/src/client/inc/tscUtil.h @@ -231,15 +231,15 @@ void addExprParams(SSqlExpr* pExpr, char* argument, int32_t type, int32_t bytes) int32_t tscGetResRowLength(SArray* pExprList); SExprInfo* tscExprInsert(SQueryInfo* pQueryInfo, int32_t index, int16_t functionId, SColumnIndex* pColIndex, int16_t type, - int16_t size, int16_t resColId, int16_t interSize, bool isTagCol); + int16_t size, int16_t resColId, int32_t interSize, bool isTagCol); SExprInfo* tscExprCreate(STableMetaInfo* pTableMetaInfo, int16_t functionId, SColumnIndex* pColIndex, int16_t type, - int16_t size, int16_t resColId, int16_t interSize, int32_t colType); + int16_t size, int16_t resColId, int32_t interSize, int32_t colType); void tscExprAddParams(SSqlExpr* pExpr, char* argument, int32_t type, int32_t bytes); SExprInfo* tscExprAppend(SQueryInfo* pQueryInfo, int16_t functionId, SColumnIndex* pColIndex, int16_t type, - int16_t size, int16_t resColId, int16_t interSize, bool isTagCol); + int16_t size, int16_t resColId, int32_t interSize, bool isTagCol); SExprInfo* tscExprUpdate(SQueryInfo* pQueryInfo, int32_t index, int16_t functionId, int16_t srcColumnIndex, int16_t type, int32_t size); diff --git a/src/client/inc/tsclient.h b/src/client/inc/tsclient.h index 
cd19dae3a88b3dfbb54c0c28d6e6908ec98a82c2..6bb124847bae2a21d6696d3cf2abb4961d29718f 100644 --- a/src/client/inc/tsclient.h +++ b/src/client/inc/tsclient.h @@ -330,7 +330,7 @@ typedef struct STscObj { void * signature; void * pTimer; char user[TSDB_USER_LEN]; - char pass[TSDB_KEY_LEN]; + char pass[TSDB_PASS_LEN]; char acctId[TSDB_ACCT_ID_LEN]; char db[TSDB_ACCT_ID_LEN + TSDB_DB_NAME_LEN]; char sversion[TSDB_VERSION_LEN]; diff --git a/src/client/src/tscGlobalmerge.c b/src/client/src/tscGlobalmerge.c index d01e1fcae3b4824959dced85f31b3cc252cda6c5..f6a9b8e257ce2c4a4dac4b0026030abf64d1ac6b 100644 --- a/src/client/src/tscGlobalmerge.c +++ b/src/client/src/tscGlobalmerge.c @@ -615,11 +615,9 @@ static void doMergeResultImpl(SOperatorInfo* pInfo, SQLFunctionCtx *pCtx, int32_ aAggs[functionId].mergeFunc(&pCtx[j]); } - if (functionId == TSDB_FUNC_UNIQUE && - (GET_RES_INFO(&(pCtx[j]))->numOfRes > MAX_UNIQUE_RESULT_ROWS || GET_RES_INFO(&(pCtx[j]))->numOfRes == -1)){ - tscError("Unique result num is too large. 
num: %d, limit: %d", - GET_RES_INFO(&(pCtx[j]))->numOfRes, MAX_UNIQUE_RESULT_ROWS); - longjmp(pInfo->pRuntimeEnv->env, TSDB_CODE_QRY_UNIQUE_RESULT_TOO_LARGE); + if (GET_RES_INFO(&(pCtx[j]))->numOfRes == -1){ + tscError("result num is too large."); + longjmp(pInfo->pRuntimeEnv->env, TSDB_CODE_QRY_RESULT_TOO_LARGE); } } } diff --git a/src/client/src/tscSQLParser.c b/src/client/src/tscSQLParser.c index 196cfaae20eb8b9a2850f073c761a845c51f9bb8..c53d55cc4da5c63633c5a1b4a91b87f19eadb3df 100644 --- a/src/client/src/tscSQLParser.c +++ b/src/client/src/tscSQLParser.c @@ -366,7 +366,7 @@ static int32_t handlePassword(SSqlCmd* pCmd, SStrToken* pPwd) { return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1); } - if (pPwd->n >= TSDB_KEY_LEN) { + if (pPwd->n > TSDB_PASS_LEN - 1) { return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg2); } @@ -2601,14 +2601,14 @@ static int32_t setExprInfoForFunctions(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, SS void setResultColName(char* name, tSqlExprItem* pItem, int32_t functionId, SStrToken* pToken, bool multiCols) { if (pItem->aliasName != NULL) { tstrncpy(name, pItem->aliasName, TSDB_COL_NAME_LEN); - } else if (multiCols) { + } else { char uname[TSDB_COL_NAME_LEN] = {0}; int32_t len = MIN(pToken->n + 1, TSDB_COL_NAME_LEN); tstrncpy(uname, pToken->z, len); if (tsKeepOriginalColumnName) { // keep the original column name tstrncpy(name, uname, TSDB_COL_NAME_LEN); - } else { + } else if (multiCols) { if (!TSDB_FUNC_IS_SCALAR(functionId)) { int32_t size = TSDB_COL_NAME_LEN + tListLen(aAggs[functionId].name) + 2 + 1; char tmp[TSDB_COL_NAME_LEN + tListLen(aAggs[functionId].name) + 2 + 1] = {0}; @@ -2623,10 +2623,10 @@ void setResultColName(char* name, tSqlExprItem* pItem, int32_t functionId, SStrT tstrncpy(name, tmp, TSDB_COL_NAME_LEN); } + } else { // use the user-input result column name + len = MIN(pItem->pNode->exprToken.n + 1, TSDB_COL_NAME_LEN); + tstrncpy(name, pItem->pNode->exprToken.z, len); } - } else { // use the user-input 
result column name - int32_t len = MIN(pItem->pNode->exprToken.n + 1, TSDB_COL_NAME_LEN); - tstrncpy(name, pItem->pNode->exprToken.z, len); } } @@ -2693,7 +2693,7 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col const char* msg26 = "start param cannot be 0 with 'log_bin'"; const char* msg27 = "factor param cannot be negative or equal to 0/1"; const char* msg28 = "the second paramter of diff should be 0 or 1"; - const char* msg29 = "key timestamp column cannot be used to unique function"; + const char* msg29 = "key timestamp column cannot be used in unique/mode function"; switch (functionId) { case TSDB_FUNC_COUNT: { @@ -2791,7 +2791,8 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col case TSDB_FUNC_CSUM: case TSDB_FUNC_STDDEV: case TSDB_FUNC_LEASTSQR: - case TSDB_FUNC_ELAPSED: { + case TSDB_FUNC_ELAPSED: + case TSDB_FUNC_MODE: { // 1. valid the number of parameters int32_t numOfParams = (pItem->pNode->Expr.paramList == NULL) ? 0 : (int32_t)taosArrayGetSize(pItem->pNode->Expr.paramList); @@ -2852,7 +2853,9 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col // 2. 
check if sql function can be applied on this column data type SSchema* pSchema = tscGetTableColumnSchema(pTableMetaInfo->pTableMeta, index.columnIndex); - if (!IS_NUMERIC_TYPE(pSchema->type) && (functionId != TSDB_FUNC_ELAPSED)) { + if (functionId == TSDB_FUNC_MODE && pColumnSchema->colId == PRIMARYKEY_TIMESTAMP_COL_INDEX ){ + return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg29); + } else if (!IS_NUMERIC_TYPE(pSchema->type) && (functionId != TSDB_FUNC_ELAPSED) && (functionId != TSDB_FUNC_MODE)) { return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1); } else if (IS_UNSIGNED_NUMERIC_TYPE(pSchema->type) && (functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE)) { @@ -4010,7 +4013,8 @@ int32_t tscTansformFuncForSTableQuery(SQueryInfo* pQueryInfo) { (functionId == TSDB_FUNC_SAMPLE) || (functionId == TSDB_FUNC_ELAPSED) || (functionId == TSDB_FUNC_HISTOGRAM) || - (functionId == TSDB_FUNC_UNIQUE)) { + (functionId == TSDB_FUNC_UNIQUE) || + (functionId == TSDB_FUNC_MODE)) { if (getResultDataInfo(pSrcSchema->type, pSrcSchema->bytes, functionId, (int32_t)pExpr->base.param[0].i64, &type, &bytes, &interBytes, 0, true, NULL) != TSDB_CODE_SUCCESS) { return TSDB_CODE_TSC_INVALID_OPERATION; diff --git a/src/client/src/tscServer.c b/src/client/src/tscServer.c index b59f7cc4db0c4e0cf6c5d2d04d5aeb1d5aa44154..4ce44fad76568063c8d8ef4184fa2002b36de0f2 100644 --- a/src/client/src/tscServer.c +++ b/src/client/src/tscServer.c @@ -1254,6 +1254,17 @@ int32_t tscBuildCreateDnodeMsg(SSqlObj *pSql, SSqlInfo *pInfo) { return TSDB_CODE_SUCCESS; } +static bool tscIsAlterCommand(char* sqlstr) { + int32_t index = 0; + + do { + SStrToken t0 = tStrGetToken(sqlstr, &index, false); + if (t0.type != TK_LP) { + return t0.type == TK_ALTER; + } + } while (1); +} + int32_t tscBuildAcctMsg(SSqlObj *pSql, SSqlInfo *pInfo) { SSqlCmd *pCmd = &pSql->cmd; pCmd->payloadLen = sizeof(SCreateAcctMsg); @@ -1295,7 +1306,12 @@ int32_t tscBuildAcctMsg(SSqlObj *pSql, SSqlInfo *pInfo) { } } - 
pCmd->msgType = TSDB_MSG_TYPE_CM_CREATE_ACCT; + if (tscIsAlterCommand(pSql->sqlstr)) { + pCmd->msgType = TSDB_MSG_TYPE_CM_ALTER_ACCT; + } else { + pCmd->msgType = TSDB_MSG_TYPE_CM_CREATE_ACCT; + } + return TSDB_CODE_SUCCESS; } diff --git a/src/client/src/tscSql.c b/src/client/src/tscSql.c index 2b9dda4f0b1e165bab5bcd1f482efe81fe38e590..d5eaa65761a128b05bdaf076ad726e13a54cc661 100644 --- a/src/client/src/tscSql.c +++ b/src/client/src/tscSql.c @@ -49,7 +49,7 @@ static bool validUserName(const char* user) { } static bool validPassword(const char* passwd) { - return validImpl(passwd, TSDB_KEY_LEN - 1); + return validImpl(passwd, TSDB_PASS_LEN - 1); } static SSqlObj *taosConnectImpl(const char *ip, const char *user, const char *pass, const char *auth, const char *db, @@ -64,7 +64,7 @@ static SSqlObj *taosConnectImpl(const char *ip, const char *user, const char *pa } SRpcCorEpSet corMgmtEpSet; - char secretEncrypt[32] = {0}; + char secretEncrypt[TSDB_PASS_LEN] = {0}; int secretEncryptLen = 0; if (auth == NULL) { if (!validPassword(pass)) { @@ -82,6 +82,11 @@ static SSqlObj *taosConnectImpl(const char *ip, const char *user, const char *pa terrno = TSDB_CODE_TSC_INVALID_PASS_LENGTH; return NULL; } else { + if (outlen >= TSDB_PASS_LEN) { + terrno = TSDB_CODE_TSC_INVALID_PASS_LENGTH; + tscError("failed to connect DB, authentication string is too long: %s", base64); + return NULL; + } memcpy(secretEncrypt, base64, outlen); free(base64); } @@ -240,11 +245,11 @@ TAOS *taos_connect_c(const char *ip, uint8_t ipLen, const char *user, uint8_t us uint8_t passLen, const char *db, uint8_t dbLen, uint16_t port) { char ipBuf[TSDB_EP_LEN] = {0}; char userBuf[TSDB_USER_LEN] = {0}; - char passBuf[TSDB_KEY_LEN] = {0}; + char passBuf[TSDB_PASS_LEN] = {0}; char dbBuf[TSDB_DB_NAME_LEN] = {0}; strncpy(ipBuf, ip, MIN(TSDB_EP_LEN - 1, ipLen)); strncpy(userBuf, user, MIN(TSDB_USER_LEN - 1, userLen)); - strncpy(passBuf, pass, MIN(TSDB_KEY_LEN - 1, passLen)); + strncpy(passBuf, pass, 
MIN(TSDB_PASS_LEN - 1, passLen)); strncpy(dbBuf, db, MIN(TSDB_DB_NAME_LEN - 1, dbLen)); return taos_connect(ipBuf, userBuf, passBuf, dbBuf, port); } diff --git a/src/client/src/tscStream.c b/src/client/src/tscStream.c index 1fa6b2d78d6bbc5506104e4b18687adb3a62dea0..e2779854e096fcea454db76c26ff365a9f6e1309 100644 --- a/src/client/src/tscStream.c +++ b/src/client/src/tscStream.c @@ -431,9 +431,8 @@ bool toAnotherTable(STscObj *pTscObj, char *superName, TAOS_FIELD *fields, int32 SArray *arr = *(SArray**)pIter; if(arr) { // get key as tableName - SHashNode *pNode = (SHashNode *)GET_HASH_PNODE(pIter); - char *data = (char *)GET_HASH_NODE_KEY(pNode); - uint32_t len = pNode->keyLen; + char *data = (char *)taosHashGetDataKey(tbHash, pIter); + uint32_t len = taosHashGetDataKeyLen(tbHash, pIter); char *key = tmalloc(len + 1); memcpy(key, data, len); key[len] = 0; // string end '\0' diff --git a/src/client/src/tscUtil.c b/src/client/src/tscUtil.c index dfbe4441463a3a3e18c50955110bcc368549217d..d8fc838858d2c4fdd81aad741dd26c76d743eb5c 100644 --- a/src/client/src/tscUtil.c +++ b/src/client/src/tscUtil.c @@ -2535,7 +2535,7 @@ void tscFieldInfoCopy(SFieldInfo* pFieldInfo, const SFieldInfo* pSrc, const SArr SExprInfo* tscExprCreate(STableMetaInfo* pTableMetaInfo, int16_t functionId, SColumnIndex* pColIndex, int16_t type, - int16_t size, int16_t resColId, int16_t interSize, int32_t colType) { + int16_t size, int16_t resColId, int32_t interSize, int32_t colType) { SExprInfo* pExpr = calloc(1, sizeof(SExprInfo)); if (pExpr == NULL) { return NULL; @@ -2592,7 +2592,7 @@ SExprInfo* tscExprCreate(STableMetaInfo* pTableMetaInfo, int16_t functionId, SCo } SExprInfo* tscExprInsert(SQueryInfo* pQueryInfo, int32_t index, int16_t functionId, SColumnIndex* pColIndex, int16_t type, - int16_t size, int16_t resColId, int16_t interSize, bool isTagCol) { + int16_t size, int16_t resColId, int32_t interSize, bool isTagCol) { int32_t num = (int32_t)taosArrayGetSize(pQueryInfo->exprList); if (index == 
num) { return tscExprAppend(pQueryInfo, functionId, pColIndex, type, size, resColId, interSize, isTagCol); @@ -2605,7 +2605,7 @@ SExprInfo* tscExprInsert(SQueryInfo* pQueryInfo, int32_t index, int16_t function } SExprInfo* tscExprAppend(SQueryInfo* pQueryInfo, int16_t functionId, SColumnIndex* pColIndex, int16_t type, - int16_t size, int16_t resColId, int16_t interSize, bool isTagCol) { + int16_t size, int16_t resColId, int32_t interSize, bool isTagCol) { STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, pColIndex->tableIndex); SExprInfo* pExpr = tscExprCreate(pTableMetaInfo, functionId, pColIndex, type, size, resColId, interSize, isTagCol); taosArrayPush(pQueryInfo->exprList, &pExpr); @@ -4937,7 +4937,8 @@ static int32_t createGlobalAggregateExpr(SQueryAttr* pQueryAttr, SQueryInfo* pQu pse->colInfo.colIndex = i; pse->colType = pExpr->base.resType; - if(pExpr->base.resBytes > INT16_MAX && pExpr->base.functionId == TSDB_FUNC_UNIQUE){ + if(pExpr->base.resBytes > INT16_MAX && + (pExpr->base.functionId == TSDB_FUNC_UNIQUE || pExpr->base.functionId == TSDB_FUNC_MODE)){ pQueryAttr->interBytesForGlobal = pExpr->base.resBytes; }else{ pse->colBytes = pExpr->base.resBytes; diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulDriver.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulDriver.java index 888f58856a8f858f25f7ee5317f10864c00bac0e..7af6bc607eeac7f2605cda9f9a55587b535d73cc 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulDriver.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/rs/RestfulDriver.java @@ -65,8 +65,7 @@ public class RestfulDriver extends AbstractDriver { } String loginUrl; String batchLoad = info.getProperty(TSDBDriver.PROPERTY_KEY_BATCH_LOAD); -// if (Boolean.parseBoolean(batchLoad)) { - if (false) { + if (Boolean.parseBoolean(batchLoad)) { loginUrl = "ws://" + props.getProperty(TSDBDriver.PROPERTY_KEY_HOST) + ":" + props.getProperty(TSDBDriver.PROPERTY_KEY_PORT) + 
"/rest/ws"; WSClient client; @@ -99,7 +98,6 @@ public class RestfulDriver extends AbstractDriver { } catch (InterruptedException e) { throw new SQLException("creat websocket connection has been Interrupted ", e); } - // TODO fetch Type from config props.setProperty(TSDBDriver.PROPERTY_KEY_TIMESTAMP_FORMAT, String.valueOf(TimestampFormat.TIMESTAMP)); return new WSConnection(url, props, transport, database); } diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/utils/NullType.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/utils/NullType.java index 0e05aeeee7ae0eeb7728910cb5e77a5084d0aa2f..4ab1bc419f964e4ade7b816926cbfff6b358d3ef 100755 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/utils/NullType.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/utils/NullType.java @@ -11,81 +11,97 @@ public class NullType { public static boolean isBooleanNull(byte val) { return val == NullType.NULL_BOOL_VAL; } - + public static boolean isTinyIntNull(byte val) { return val == Byte.MIN_VALUE; } - + + public static boolean isUnsignedTinyIntNull(byte val) { + return val == (byte) 0xFF; + } + public static boolean isSmallIntNull(short val) { return val == Short.MIN_VALUE; } + public static boolean isUnsignedSmallIntNull(short val) { + return val == (short) 0xFFFF; + } + public static boolean isIntNull(int val) { return val == Integer.MIN_VALUE; } - + + public static boolean isUnsignedIntNull(int val) { + return val == 0xFFFFFFFF; + } + public static boolean isBigIntNull(long val) { return val == Long.MIN_VALUE; } - + + public static boolean isUnsignedBigIntNull(long val) { + return val == 0xFFFFFFFFFFFFFFFFL; + } + public static boolean isFloatNull(float val) { return Float.isNaN(val); } - + public static boolean isDoubleNull(double val) { return Double.isNaN(val); } - + public static boolean isBinaryNull(byte[] val, int length) { if (length != Byte.BYTES) { return false; } - return val[0] == 0xFF; + return val[0] == (byte) 0xFF; } - + public 
static boolean isNcharNull(byte[] val, int length) { if (length != Integer.BYTES) { return false; } - return (val[0] & val[1] & val[2] & val[3]) == 0xFF; + return (val[0] & val[1] & val[2] & val[3] & 0xFF) == 0xFF; } - + public static byte getBooleanNull() { - return NullType.NULL_BOOL_VAL; + return NullType.NULL_BOOL_VAL; } - + public static byte getTinyintNull() { - return Byte.MIN_VALUE; + return Byte.MIN_VALUE; } - + public static int getIntNull() { - return Integer.MIN_VALUE; + return Integer.MIN_VALUE; } - + public static short getSmallIntNull() { - return Short.MIN_VALUE; + return Short.MIN_VALUE; } public static long getBigIntNull() { - return Long.MIN_VALUE; + return Long.MIN_VALUE; } - + public static int getFloatNull() { - return 0x7FF00000; + return 0x7FF00000; } public static long getDoubleNull() { - return 0x7FFFFF0000000000L; + return 0x7FFFFF0000000000L; } public static byte getBinaryNull() { - return (byte) 0xFF; + return (byte) 0xFF; } - + public static byte[] getNcharNull() { - return new byte[] {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF}; + return new byte[]{(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF}; } } diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/AbstractWSResultSet.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/AbstractWSResultSet.java index 2325161d689d6acdf91bd7469b3c820f7716229d..7df84c740c9742941810ab760c08d8a171740f11 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/AbstractWSResultSet.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/AbstractWSResultSet.java @@ -20,32 +20,12 @@ import java.util.concurrent.CompletableFuture; import java.util.concurrent.ExecutionException; public abstract class AbstractWSResultSet extends AbstractResultSet { - public static DateTimeFormatter rfc3339Parser = new DateTimeFormatterBuilder() - .parseCaseInsensitive() - .appendValue(ChronoField.YEAR, 4) - .appendLiteral('-') - .appendValue(ChronoField.MONTH_OF_YEAR, 2) - 
.appendLiteral('-') - .appendValue(ChronoField.DAY_OF_MONTH, 2) - .appendLiteral('T') - .appendValue(ChronoField.HOUR_OF_DAY, 2) - .appendLiteral(':') - .appendValue(ChronoField.MINUTE_OF_HOUR, 2) - .appendLiteral(':') - .appendValue(ChronoField.SECOND_OF_MINUTE, 2) - .optionalStart() - .appendFraction(ChronoField.NANO_OF_SECOND, 2, 9, true) - .optionalEnd() - .appendOffset("+HH:MM", "Z").toFormatter() - .withResolverStyle(ResolverStyle.STRICT) - .withChronology(IsoChronology.INSTANCE); - protected final Statement statement; protected final Transport transport; protected final RequestFactory factory; protected final long queryId; - protected boolean isClosed; + protected volatile boolean isClosed; // meta protected final ResultSetMetaData metaData; protected final List fields = new ArrayList<>(); @@ -108,7 +88,7 @@ public abstract class AbstractWSResultSet extends AbstractResultSet { throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNKNOWN, fetchResp.getMessage()); } this.reset(); - if (fetchResp.isCompleted()) { + if (fetchResp.isCompleted() || fetchResp.getRows() == 0) { this.isCompleted = true; return false; } @@ -125,10 +105,14 @@ public abstract class AbstractWSResultSet extends AbstractResultSet { @Override public void close() throws SQLException { - this.isClosed = true; - if (result != null && !result.isEmpty() && !isCompleted) { - FetchReq fetchReq = new FetchReq(queryId, queryId); - transport.sendWithoutRep(new Request(Action.FREE_RESULT.getAction(), fetchReq)); + synchronized (this) { + if (!this.isClosed) { + this.isClosed = true; + if (result != null && !result.isEmpty() && !isCompleted) { + FetchReq fetchReq = new FetchReq(queryId, queryId); + transport.sendWithoutRep(new Request(Action.FREE_RESULT.getAction(), fetchReq)); + } + } } } diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/BlockResultSet.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/BlockResultSet.java index 
8371b9e7c4727c5a014c43faead2f4864df6afa8..709f740e486e8bcfe06a44d14f67cf3cf5b32264 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/BlockResultSet.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/BlockResultSet.java @@ -3,24 +3,17 @@ package com.taosdata.jdbc.ws; import com.google.common.primitives.Ints; import com.google.common.primitives.Longs; import com.google.common.primitives.Shorts; -import com.taosdata.jdbc.TSDBConstants; -import com.taosdata.jdbc.TSDBDriver; -import com.taosdata.jdbc.TSDBError; -import com.taosdata.jdbc.TSDBErrorNumbers; -import com.taosdata.jdbc.enums.TimestampFormat; +import com.taosdata.jdbc.*; import com.taosdata.jdbc.enums.TimestampPrecision; +import com.taosdata.jdbc.utils.NullType; import com.taosdata.jdbc.utils.Utils; import com.taosdata.jdbc.ws.entity.*; +import java.io.UnsupportedEncodingException; import java.math.BigDecimal; -import java.nio.Buffer; import java.nio.ByteBuffer; -import java.nio.ByteOrder; -import java.nio.charset.StandardCharsets; import java.sql.*; import java.time.Instant; -import java.time.ZoneOffset; -import java.time.ZonedDateTime; import java.time.format.DateTimeParseException; import java.util.ArrayList; import java.util.Arrays; @@ -39,7 +32,7 @@ public class BlockResultSet extends AbstractWSResultSet { } @Override - public List> fetchJsonData() throws SQLException, ExecutionException, InterruptedException { + public List> fetchJsonData() throws ExecutionException, InterruptedException { Request blockRequest = factory.generateFetchBlock(queryId); CompletableFuture fetchFuture = transport.send(blockRequest); FetchBlockResp resp = (FetchBlockResp) fetchFuture.get(); @@ -51,30 +44,27 @@ public class BlockResultSet extends AbstractWSResultSet { int type = fields.get(i).getTaosType(); switch (type) { case TSDB_DATA_TYPE_BOOL: - for (int j = 0; j < numOfRows; j++) { - col.add(buffer.get() == 1); - } - break; - case TSDB_DATA_TYPE_UTINYINT: case TSDB_DATA_TYPE_TINYINT: + case 
TSDB_DATA_TYPE_UTINYINT: for (int j = 0; j < numOfRows; j++) { col.add(buffer.get()); } break; - case TSDB_DATA_TYPE_USMALLINT: case TSDB_DATA_TYPE_SMALLINT: + case TSDB_DATA_TYPE_USMALLINT: for (int j = 0; j < numOfRows; j++) { col.add(buffer.getShort()); } break; - case TSDB_DATA_TYPE_UINT: case TSDB_DATA_TYPE_INT: + case TSDB_DATA_TYPE_UINT: for (int j = 0; j < numOfRows; j++) { col.add(buffer.getInt()); } break; - case TSDB_DATA_TYPE_UBIGINT: case TSDB_DATA_TYPE_BIGINT: + case TSDB_DATA_TYPE_UBIGINT: + case TSDB_DATA_TYPE_TIMESTAMP: for (int j = 0; j < numOfRows; j++) { col.add(buffer.getLong()); } @@ -94,7 +84,13 @@ public class BlockResultSet extends AbstractWSResultSet { for (int j = 0; j < numOfRows; j++) { short s = buffer.getShort(); buffer.get(bytes); - col.add(Arrays.copyOf(bytes, s)); + + byte[] tmp = Arrays.copyOf(bytes, s); + if (NullType.isBinaryNull(tmp, s)) { + col.add(null); + continue; + } + col.add(tmp); } break; } @@ -104,19 +100,19 @@ public class BlockResultSet extends AbstractWSResultSet { for (int j = 0; j < numOfRows; j++) { short s = buffer.getShort(); buffer.get(bytes); - col.add(new String(Arrays.copyOf(bytes, s), StandardCharsets.UTF_8)); - } - break; - } - case TSDB_DATA_TYPE_TIMESTAMP: { - byte[] bytes = new byte[fieldLength.get(i)]; - for (int j = 0; j < numOfRows; j++) { - buffer.get(bytes); - col.add(parseTimestampColumnData(bytes)); + + byte[] tmp = Arrays.copyOf(bytes, s); + if (NullType.isNcharNull(tmp, s)) { + col.add(null); + continue; + } + col.add(tmp); } break; } default: + // unknown type, do nothing + col.add(null); break; } list.add(col); @@ -125,81 +121,130 @@ public class BlockResultSet extends AbstractWSResultSet { return list; } - public static long bytesToLong(byte[] bytes) { - ByteBuffer buffer = ByteBuffer.allocate(8); - buffer.put(bytes, 0, bytes.length); - buffer.flip();//need flip - buffer.order(ByteOrder.LITTLE_ENDIAN); - return buffer.getLong(); + private Timestamp parseTimestampColumnData(long value) { + 
if (TimestampPrecision.MS == this.timestampPrecision) + return new Timestamp(value); + + if (TimestampPrecision.US == this.timestampPrecision) { + long epochSec = value / 1000_000L; + long nanoAdjustment = value % 1000_000L * 1000L; + return Timestamp.from(Instant.ofEpochSecond(epochSec, nanoAdjustment)); + } + if (TimestampPrecision.NS == this.timestampPrecision) { + long epochSec = value / 1000_000_000L; + long nanoAdjustment = value % 1000_000_000L; + return Timestamp.from(Instant.ofEpochSecond(epochSec, nanoAdjustment)); + } + return null; } - private Timestamp parseTimestampColumnData(byte[] bytes) throws SQLException { - if (bytes == null || bytes.length < 1) + public Object parseValue(int columnIndex) { + Object source = result.get(columnIndex - 1).get(rowIndex); + if (null == source) return null; - String tsFormatUpperCase = this.statement.getConnection().getClientInfo(TSDBDriver.PROPERTY_KEY_TIMESTAMP_FORMAT).toUpperCase(); - TimestampFormat timestampFormat = TimestampFormat.valueOf(tsFormatUpperCase); - switch (timestampFormat) { - case TIMESTAMP: { - long value = bytesToLong(bytes); - if (TimestampPrecision.MS == this.timestampPrecision) - return new Timestamp(value); - - if (TimestampPrecision.US == this.timestampPrecision) { - long epochSec = value / 1000_000L; - long nanoAdjustment = value % 1000_000L * 1000L; - return Timestamp.from(Instant.ofEpochSecond(epochSec, nanoAdjustment)); + + int type = fields.get(columnIndex - 1).getTaosType(); + switch (type) { + case TSDB_DATA_TYPE_BOOL: { + byte val = (byte) source; + if (NullType.isBooleanNull(val)) { + return null; } - if (TimestampPrecision.NS == this.timestampPrecision) { - long epochSec = value / 1000_000_000L; - long nanoAdjustment = value % 1000_000_000L; - return Timestamp.from(Instant.ofEpochSecond(epochSec, nanoAdjustment)); + return (val == 0x0) ? 
Boolean.FALSE : Boolean.TRUE; + } + case TSDB_DATA_TYPE_TINYINT: { + byte val = (byte) source; + if (NullType.isTinyIntNull(val)) { + return null; + } + return val; } - case UTC: { - String value = new String(bytes); - if (value.lastIndexOf(":") > 19) { - ZonedDateTime parse = ZonedDateTime.parse(value, rfc3339Parser); - return Timestamp.from(parse.toInstant()); - } else { - long epochSec = Timestamp.valueOf(value.substring(0, 19).replace("T", " ")).getTime() / 1000; - int fractionalSec = Integer.parseInt(value.substring(20, value.length() - 5)); - long nanoAdjustment; - if (TimestampPrecision.NS == this.timestampPrecision) { - // ns timestamp: yyyy-MM-ddTHH:mm:ss.SSSSSSSSS+0x00 - nanoAdjustment = fractionalSec; - } else if (TimestampPrecision.US == this.timestampPrecision) { - // ms timestamp: yyyy-MM-ddTHH:mm:ss.SSSSSS+0x00 - nanoAdjustment = fractionalSec * 1000L; - } else { - // ms timestamp: yyyy-MM-ddTHH:mm:ss.SSS+0x00 - nanoAdjustment = fractionalSec * 1000_000L; - } - ZoneOffset zoneOffset = ZoneOffset.of(value.substring(value.length() - 5)); - Instant instant = Instant.ofEpochSecond(epochSec, nanoAdjustment).atOffset(zoneOffset).toInstant(); - return Timestamp.from(instant); + case TSDB_DATA_TYPE_UTINYINT: { + byte val = (byte) source; + if (NullType.isUnsignedTinyIntNull(val)) { + return null; + } + // parenthesize: the cast binds tighter than &, so "(short) val & 0xFF" yields an int + return (short) (val & 0xFF); + } + case TSDB_DATA_TYPE_SMALLINT: { + short val = (short) source; + if (NullType.isSmallIntNull(val)) { + return null; } + return val; } - case STRING: - default: { - String value = new String(bytes, StandardCharsets.UTF_8); - if (TimestampPrecision.MS == this.timestampPrecision) { - // ms timestamp: yyyy-MM-dd HH:mm:ss.SSS - return Timestamp.valueOf(value); + case TSDB_DATA_TYPE_USMALLINT: { + short val = (short) source; + if (NullType.isUnsignedSmallIntNull(val)) { + return null; + } - if (TimestampPrecision.US == this.timestampPrecision) { - // us timestamp: yyyy-MM-dd HH:mm:ss.SSSSSS - long epochSec = 
Timestamp.valueOf(value.substring(0, 19)).getTime() / 1000; - long nanoAdjustment = Integer.parseInt(value.substring(20)) * 1000L; - return Timestamp.from(Instant.ofEpochSecond(epochSec, nanoAdjustment)); + return val & 0xFFFF; + } + case TSDB_DATA_TYPE_INT: { + int val = (int) source; + if (NullType.isIntNull(val)) { + return null; + } + return val; + } + case TSDB_DATA_TYPE_UINT: { + int val = (int) source; + if (NullType.isUnsignedIntNull(val)) { + return null; } - if (TimestampPrecision.NS == this.timestampPrecision) { - // ms timestamp: yyyy-MM-dd HH:mm:ss.SSSSSSSSS - long epochSec = Timestamp.valueOf(value.substring(0, 19)).getTime() / 1000; - long nanoAdjustment = Integer.parseInt(value.substring(20)); - return Timestamp.from(Instant.ofEpochSecond(epochSec, nanoAdjustment)); + return val & 0xFFFFFFFFL; + } + case TSDB_DATA_TYPE_BIGINT: { + long val = (long) source; + if (NullType.isBigIntNull(val)) { + return null; + } + return val; + } + case TSDB_DATA_TYPE_TIMESTAMP: { + long val = (long) source; + if (NullType.isBigIntNull(val)) { + return null; } - throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNKNOWN_TIMESTAMP_PRECISION); + return parseTimestampColumnData(val); } + case TSDB_DATA_TYPE_UBIGINT: { + long val = (long) source; + if (NullType.isUnsignedBigIntNull(val)) { + return null; + } + BigDecimal tmp = new BigDecimal(val >>> 1).multiply(new BigDecimal(2)); + return (val & 0x1) == 0x1 ? 
tmp.add(new BigDecimal(1)) : tmp; + } + case TSDB_DATA_TYPE_FLOAT: { + float val = (float) source; + if (NullType.isFloatNull(val)) { + return null; + } + return val; + } + case TSDB_DATA_TYPE_DOUBLE: { + double val = (double) source; + if (NullType.isDoubleNull(val)) { + return null; + } + return val; + } + case TSDB_DATA_TYPE_BINARY: { + return source; + } + case TSDB_DATA_TYPE_NCHAR: + case TSDB_DATA_TYPE_JSON: { + String charset = TaosGlobalConfig.getCharset(); + try { + return new String((byte[]) source, charset); + } catch (UnsupportedEncodingException e) { + throw new RuntimeException(e.getMessage()); + } + } + default: + // unknown type, do nothing + return null; } } @@ -207,14 +252,23 @@ public class BlockResultSet extends AbstractWSResultSet { public String getString(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; - if (value == null) + Object value = parseValue(columnIndex); + if (value == null) { + wasNull = true; +// return new NullType().toString(); return null; + } if (value instanceof String) return (String) value; - if (value instanceof byte[]) - return new String((byte[]) value); + + if (value instanceof byte[]) { + String charset = TaosGlobalConfig.getCharset(); + try { + return new String((byte[]) value, charset); + } catch (UnsupportedEncodingException e) { + throw new RuntimeException(e.getMessage()); + } + } return value.toString(); } @@ -222,83 +276,295 @@ public class BlockResultSet extends AbstractWSResultSet { public boolean getBoolean(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; - if (value == null) + Object value = parseValue(columnIndex); + if (value == null) { + wasNull = true; return false; + } if (value instanceof Boolean) return (boolean) value; - return 
Boolean.parseBoolean(value.toString()); + + int taosType = fields.get(columnIndex - 1).getTaosType(); + switch (taosType) { + case TSDB_DATA_TYPE_TINYINT: + return ((byte) value == 0) ? Boolean.FALSE : Boolean.TRUE; + case TSDB_DATA_TYPE_UTINYINT: + case TSDB_DATA_TYPE_SMALLINT: + return ((short) value == 0) ? Boolean.FALSE : Boolean.TRUE; + case TSDB_DATA_TYPE_USMALLINT: + case TSDB_DATA_TYPE_INT: + return ((int) value == 0) ? Boolean.FALSE : Boolean.TRUE; + case TSDB_DATA_TYPE_UINT: + case TSDB_DATA_TYPE_BIGINT: + return (((long) value) == 0L) ? Boolean.FALSE : Boolean.TRUE; + case TSDB_DATA_TYPE_TIMESTAMP: + // parseValue returns a Timestamp here, so cast to Timestamp, not long + return (((Timestamp) value).getTime() == 0L) ? Boolean.FALSE : Boolean.TRUE; + case TSDB_DATA_TYPE_UBIGINT: + return value.equals(new BigDecimal(0)) ? Boolean.FALSE : Boolean.TRUE; + + case TSDB_DATA_TYPE_FLOAT: + return (((float) value) == 0) ? Boolean.FALSE : Boolean.TRUE; + case TSDB_DATA_TYPE_DOUBLE: { + return (((double) value) == 0) ? Boolean.FALSE : Boolean.TRUE; + } + + case TSDB_DATA_TYPE_NCHAR: + case TSDB_DATA_TYPE_JSON: { + if ("TRUE".compareToIgnoreCase((String) value) == 0) { + return Boolean.TRUE; + } else if ("FALSE".compareToIgnoreCase((String) value) == 0) { + return Boolean.FALSE; + } else { + throw new SQLDataException(); + } + } + case TSDB_DATA_TYPE_BINARY: { + String charset = TaosGlobalConfig.getCharset(); + String tmp; + try { + tmp = new String((byte[]) value, charset); + } catch (UnsupportedEncodingException e) { + throw new RuntimeException(e.getMessage()); + } + if ("TRUE".compareToIgnoreCase(tmp) == 0) { + return Boolean.TRUE; + } else if ("FALSE".compareToIgnoreCase(tmp) == 0) { + return Boolean.FALSE; + } else { + throw new SQLDataException(); + } + } + } + + return Boolean.FALSE; } @Override public byte getByte(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; - if (value == null) + Object value = parseValue(columnIndex); + if (value == null) { + wasNull = true; return 0; + } if 
(value instanceof Byte) return (byte) value; - long valueAsLong = Long.parseLong(value.toString()); - if (valueAsLong == Byte.MIN_VALUE) - return 0; - if (valueAsLong < Byte.MIN_VALUE || valueAsLong > Byte.MAX_VALUE) - throwRangeException(value.toString(), columnIndex, Types.TINYINT); - return (byte) valueAsLong; + int taosType = fields.get(columnIndex - 1).getTaosType(); + switch (taosType) { + case TSDB_DATA_TYPE_BOOL: + return (boolean) value ? (byte) 1 : (byte) 0; + case TSDB_DATA_TYPE_UTINYINT: + case TSDB_DATA_TYPE_SMALLINT: { + short tmp = (short) value; + if (tmp < Byte.MIN_VALUE || tmp > Byte.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.TINYINT); + return (byte) tmp; + } + case TSDB_DATA_TYPE_USMALLINT: + case TSDB_DATA_TYPE_INT: { + int tmp = (int) value; + if (tmp < Byte.MIN_VALUE || tmp > Byte.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.TINYINT); + return (byte) tmp; + } + + case TSDB_DATA_TYPE_BIGINT: + case TSDB_DATA_TYPE_UINT: { + long tmp = (long) value; + if (tmp < Byte.MIN_VALUE || tmp > Byte.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.TINYINT); + return (byte) tmp; + } + case TSDB_DATA_TYPE_UBIGINT: { + BigDecimal tmp = (BigDecimal) value; + if (tmp.compareTo(new BigDecimal(Byte.MIN_VALUE)) < 0 || tmp.compareTo(new BigDecimal(Byte.MAX_VALUE)) > 0) + throwRangeException(value.toString(), columnIndex, Types.TINYINT); + + return tmp.byteValue(); + } + case TSDB_DATA_TYPE_FLOAT: { + float tmp = (float) value; + if (tmp < Byte.MIN_VALUE || tmp > Byte.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.TINYINT); + return (byte) tmp; + } + case TSDB_DATA_TYPE_DOUBLE: { + double tmp = (double) value; + if (tmp < Byte.MIN_VALUE || tmp > Byte.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.TINYINT); + return (byte) tmp; + } + + case TSDB_DATA_TYPE_NCHAR: + case TSDB_DATA_TYPE_JSON: + return Byte.parseByte((String) value); + case 
TSDB_DATA_TYPE_BINARY: { + String charset = TaosGlobalConfig.getCharset(); + String tmp; + try { + tmp = new String((byte[]) value, charset); + } catch (UnsupportedEncodingException e) { + throw new RuntimeException(e.getMessage()); + } + return Byte.parseByte(tmp); + } + } + + return 0; } private void throwRangeException(String valueAsString, int columnIndex, int jdbcType) throws SQLException { throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_NUMERIC_VALUE_OUT_OF_RANGE, - "'" + valueAsString + "' in column '" + columnIndex + "' is outside valid range for the jdbcType " + TSDBConstants.jdbcType2TaosTypeName(jdbcType)); + "'" + valueAsString + "' in column '" + columnIndex + "' is outside valid range for the jdbcType " + jdbcType2TaosTypeName(jdbcType)); } @Override public short getShort(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; - if (value == null) + Object value = parseValue(columnIndex); + if (value == null) { + wasNull = true; return 0; + } if (value instanceof Short) return (short) value; - long valueAsLong = Long.parseLong(value.toString()); - if (valueAsLong == Short.MIN_VALUE) - return 0; - if (valueAsLong < Short.MIN_VALUE || valueAsLong > Short.MAX_VALUE) - throwRangeException(value.toString(), columnIndex, Types.SMALLINT); - return (short) valueAsLong; + + int taosType = fields.get(columnIndex - 1).getTaosType(); + switch (taosType) { + case TSDB_DATA_TYPE_BOOL: + return (boolean) value ? 
(short) 1 : (short) 0; + case TSDB_DATA_TYPE_TINYINT: + return (byte) value; + case TSDB_DATA_TYPE_USMALLINT: + case TSDB_DATA_TYPE_INT: { + int tmp = (int) value; + if (tmp < Short.MIN_VALUE || tmp > Short.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.SMALLINT); + return (short) tmp; + } + + case TSDB_DATA_TYPE_BIGINT: + case TSDB_DATA_TYPE_UINT: { + long tmp = (long) value; + if (tmp < Short.MIN_VALUE || tmp > Short.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.SMALLINT); + return (short) tmp; + } + case TSDB_DATA_TYPE_UBIGINT: { + BigDecimal tmp = (BigDecimal) value; + if (tmp.compareTo(new BigDecimal(Short.MIN_VALUE)) < 0 || tmp.compareTo(new BigDecimal(Short.MAX_VALUE)) > 0) + throwRangeException(value.toString(), columnIndex, Types.SMALLINT); + return tmp.shortValue(); + } + case TSDB_DATA_TYPE_FLOAT: { + float tmp = (float) value; + if (tmp < Short.MIN_VALUE || tmp > Short.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.SMALLINT); + return (short) tmp; + } + case TSDB_DATA_TYPE_DOUBLE: { + double tmp = (double) value; + if (tmp < Short.MIN_VALUE || tmp > Short.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.SMALLINT); + return (short) tmp; + } + + case TSDB_DATA_TYPE_NCHAR: + case TSDB_DATA_TYPE_JSON: + return Short.parseShort((String) value); + case TSDB_DATA_TYPE_BINARY: { + String charset = TaosGlobalConfig.getCharset(); + String tmp; + try { + tmp = new String((byte[]) value, charset); + } catch (UnsupportedEncodingException e) { + throw new RuntimeException(e.getMessage()); + } + return Short.parseShort(tmp); + } + } + return 0; } @Override public int getInt(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; - if (value == null) + Object value = parseValue(columnIndex); + if (value == null) { + wasNull = true; return 0; + } if (value 
instanceof Integer) return (int) value; - long valueAsLong = Long.parseLong(value.toString()); - if (valueAsLong == Integer.MIN_VALUE) - return 0; - if (valueAsLong < Integer.MIN_VALUE || valueAsLong > Integer.MAX_VALUE) - throwRangeException(value.toString(), columnIndex, Types.INTEGER); - return (int) valueAsLong; + + int taosType = fields.get(columnIndex - 1).getTaosType(); + switch (taosType) { + case TSDB_DATA_TYPE_BOOL: + return (boolean) value ? 1 : 0; + case TSDB_DATA_TYPE_TINYINT: + return (byte) value; + case TSDB_DATA_TYPE_UTINYINT: + case TSDB_DATA_TYPE_SMALLINT: + return (short) value; + case TSDB_DATA_TYPE_USMALLINT: + case TSDB_DATA_TYPE_INT: + return (int) value; + + case TSDB_DATA_TYPE_BIGINT: + case TSDB_DATA_TYPE_UINT: { + long tmp = (long) value; + if (tmp < Integer.MIN_VALUE || tmp > Integer.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.INTEGER); + return (int) tmp; + } + case TSDB_DATA_TYPE_UBIGINT: { + BigDecimal tmp = (BigDecimal) value; + if (tmp.compareTo(new BigDecimal(Integer.MIN_VALUE)) < 0 || tmp.compareTo(new BigDecimal(Integer.MAX_VALUE)) > 0) + throwRangeException(value.toString(), columnIndex, Types.INTEGER); + return tmp.intValue(); + } + case TSDB_DATA_TYPE_FLOAT: { + float tmp = (float) value; + if (tmp < Integer.MIN_VALUE || tmp > Integer.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.INTEGER); + return (int) tmp; + } + case TSDB_DATA_TYPE_DOUBLE: { + double tmp = (double) value; + if (tmp < Integer.MIN_VALUE || tmp > Integer.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.INTEGER); + return (int) tmp; + } + + case TSDB_DATA_TYPE_NCHAR: + case TSDB_DATA_TYPE_JSON: + return Integer.parseInt((String) value); + case TSDB_DATA_TYPE_BINARY: { + String charset = TaosGlobalConfig.getCharset(); + String tmp; + try { + tmp = new String((byte[]) value, charset); + } catch (UnsupportedEncodingException e) { + throw new RuntimeException(e.getMessage()); + } + return 
Integer.parseInt(tmp); + } + } + return 0; } @Override public long getLong(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; - if (value == null) + Object value = parseValue(columnIndex); + if (value == null) { + wasNull = true; return 0; + } if (value instanceof Long) return (long) value; if (value instanceof Timestamp) { @@ -313,54 +579,178 @@ public class BlockResultSet extends AbstractWSResultSet { return ts.getTime() * 1000_000 + ts.getNanos() % 1000_000; } } - long valueAsLong = 0; - try { - valueAsLong = Long.parseLong(value.toString()); - if (valueAsLong == Long.MIN_VALUE) - return 0; - } catch (NumberFormatException e) { - throwRangeException(value.toString(), columnIndex, Types.BIGINT); + + int taosType = fields.get(columnIndex - 1).getTaosType(); + switch (taosType) { + case TSDB_DATA_TYPE_BOOL: + return (boolean) value ? 1 : 0; + case TSDB_DATA_TYPE_TINYINT: + return (byte) value; + case TSDB_DATA_TYPE_UTINYINT: + case TSDB_DATA_TYPE_SMALLINT: + return (short) value; + case TSDB_DATA_TYPE_USMALLINT: + case TSDB_DATA_TYPE_INT: + return (int) value; + case TSDB_DATA_TYPE_BIGINT: + case TSDB_DATA_TYPE_UINT: + return (long) value; + + case TSDB_DATA_TYPE_UBIGINT: { + BigDecimal tmp = (BigDecimal) value; + if (tmp.compareTo(new BigDecimal(Long.MIN_VALUE)) < 0 || tmp.compareTo(new BigDecimal(Long.MAX_VALUE)) > 0) + throwRangeException(value.toString(), columnIndex, Types.BIGINT); + return tmp.longValue(); + } + case TSDB_DATA_TYPE_TIMESTAMP: + return ((Timestamp) value).getTime(); + case TSDB_DATA_TYPE_FLOAT: { + float tmp = (float) value; + if (tmp < Long.MIN_VALUE || tmp > Long.MAX_VALUE) + throwRangeException(value.toString(), columnIndex, Types.BIGINT); + return (long) tmp; + } + case TSDB_DATA_TYPE_DOUBLE: { + double tmp = (Double) value; + if (tmp < Long.MIN_VALUE || tmp > Long.MAX_VALUE) + throwRangeException(value.toString(), 
columnIndex, Types.BIGINT); + return (long) tmp; + } + + case TSDB_DATA_TYPE_NCHAR: + case TSDB_DATA_TYPE_JSON: + return Long.parseLong((String) value); + case TSDB_DATA_TYPE_BINARY: { + String charset = TaosGlobalConfig.getCharset(); + String tmp; + try { + tmp = new String((byte[]) value, charset); + } catch (UnsupportedEncodingException e) { + throw new RuntimeException(e.getMessage()); + } + return Long.parseLong(tmp); + } } - return valueAsLong; + return 0; } @Override public float getFloat(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; - if (value == null) + Object value = parseValue(columnIndex); + if (value == null) { + wasNull = true; return 0; + } if (value instanceof Float) return (float) value; if (value instanceof Double) - return new Float((Double) value); - return Float.parseFloat(value.toString()); + return (float) (double) value; + + int taosType = fields.get(columnIndex - 1).getTaosType(); + switch (taosType) { + case TSDB_DATA_TYPE_BOOL: + return (boolean) value ? 
(float) 1 : (float) 0; + case TSDB_DATA_TYPE_TINYINT: + return (byte) value; + case TSDB_DATA_TYPE_UTINYINT: + case TSDB_DATA_TYPE_SMALLINT: + return (short) value; + case TSDB_DATA_TYPE_USMALLINT: + case TSDB_DATA_TYPE_INT: + return (int) value; + case TSDB_DATA_TYPE_BIGINT: + case TSDB_DATA_TYPE_UINT: + return (long) value; + + case TSDB_DATA_TYPE_UBIGINT: { + BigDecimal tmp = (BigDecimal) value; + // Float.MIN_VALUE is the smallest positive float and would reject 0; use -Float.MAX_VALUE as the lower bound + if (tmp.compareTo(new BigDecimal(-Float.MAX_VALUE)) < 0 || tmp.compareTo(new BigDecimal(Float.MAX_VALUE)) > 0) + throwRangeException(value.toString(), columnIndex, Types.FLOAT); + return tmp.floatValue(); + } + + case TSDB_DATA_TYPE_NCHAR: + case TSDB_DATA_TYPE_JSON: + return Float.parseFloat(value.toString()); + case TSDB_DATA_TYPE_BINARY: { + String charset = TaosGlobalConfig.getCharset(); + String tmp; + try { + tmp = new String((byte[]) value, charset); + } catch (UnsupportedEncodingException e) { + throw new RuntimeException(e.getMessage()); + } + return Float.parseFloat(tmp); + } + } + return 0; } @Override public double getDouble(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; + Object value = parseValue(columnIndex); if (value == null) { + wasNull = true; return 0; } - if (value instanceof Double || value instanceof Float) + if (value instanceof Double) return (double) value; - return Double.parseDouble(value.toString()); + if (value instanceof Float) + return (float) value; + + int taosType = fields.get(columnIndex - 1).getTaosType(); + switch (taosType) { + case TSDB_DATA_TYPE_BOOL: + return (boolean) value ? 
1 : 0; + case TSDB_DATA_TYPE_TINYINT: + return (byte) value; + case TSDB_DATA_TYPE_UTINYINT: + case TSDB_DATA_TYPE_SMALLINT: + return (short) value; + case TSDB_DATA_TYPE_USMALLINT: + case TSDB_DATA_TYPE_INT: + return (int) value; + case TSDB_DATA_TYPE_BIGINT: + case TSDB_DATA_TYPE_UINT: + return (long) value; + + case TSDB_DATA_TYPE_UBIGINT: { + BigDecimal tmp = (BigDecimal) value; + // Double.MIN_VALUE is the smallest positive double and would reject 0; use -Double.MAX_VALUE as the lower bound + if (tmp.compareTo(new BigDecimal(-Double.MAX_VALUE)) < 0 || tmp.compareTo(new BigDecimal(Double.MAX_VALUE)) > 0) + throwRangeException(value.toString(), columnIndex, Types.DOUBLE); + // return the full double, not a narrowed float + return tmp.doubleValue(); + } + + case TSDB_DATA_TYPE_NCHAR: + case TSDB_DATA_TYPE_JSON: + return Double.parseDouble(value.toString()); + case TSDB_DATA_TYPE_BINARY: { + String charset = TaosGlobalConfig.getCharset(); + String tmp; + try { + tmp = new String((byte[]) value, charset); + } catch (UnsupportedEncodingException e) { + throw new RuntimeException(e.getMessage()); + } + return Double.parseDouble(tmp); + } + } + return 0; } @Override public byte[] getBytes(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; - if (value == null) + Object value = parseValue(columnIndex); + if (value == null) { + wasNull = true; return null; + } if (value instanceof byte[]) return (byte[]) value; if (value instanceof String) @@ -384,10 +774,11 @@ public Date getDate(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; - if (value == null) + Object value = parseValue(columnIndex); + if (value == null) { + wasNull = true; return null; + } if (value instanceof Timestamp) return new Date(((Timestamp) value).getTime()); return Utils.parseDate(value.toString()); @@ -397,16 +788,18 @@ public class BlockResultSet extends 
AbstractWSResultSet { public Time getTime(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; - if (value == null) + Object value = parseValue(columnIndex); + if (value == null) { + wasNull = true; return null; + } if (value instanceof Timestamp) return new Time(((Timestamp) value).getTime()); Time time = null; try { time = Utils.parseTime(value.toString()); - } catch (DateTimeParseException ignored) { + } catch (DateTimeParseException e) { + throw new RuntimeException(e.getMessage()); } return time; } @@ -415,18 +808,15 @@ public class BlockResultSet extends AbstractWSResultSet { public Timestamp getTimestamp(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; - if (value == null) + Object value = parseValue(columnIndex); + if (value == null) { + wasNull = true; return null; + } if (value instanceof Timestamp) return (Timestamp) value; if (value instanceof Long) { - if (1_0000_0000_0000_0L > (long) value) - return Timestamp.from(Instant.ofEpochMilli((long) value)); - long epochSec = (long) value / 1000_000L; - long nanoAdjustment = (long) value % 1000_000L * 1000; - return Timestamp.from(Instant.ofEpochSecond(epochSec, nanoAdjustment)); + return parseTimestampColumnData((long) value); } Timestamp ret; try { @@ -438,18 +828,11 @@ public class BlockResultSet extends AbstractWSResultSet { return ret; } - @Override - public ResultSetMetaData getMetaData() throws SQLException { - if (isClosed()) - throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_RESULTSET_CLOSED); - return this.metaData; - } - @Override public Object getObject(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); + Object value = parseValue(columnIndex); wasNull = 
value == null; return value; } @@ -469,23 +852,54 @@ public class BlockResultSet extends AbstractWSResultSet { public BigDecimal getBigDecimal(int columnIndex) throws SQLException { checkAvailability(columnIndex, fields.size()); - Object value = result.get(columnIndex - 1).get(rowIndex); - wasNull = value == null; - if (value == null) + Object value = parseValue(columnIndex); + if (value == null) { + wasNull = true; return null; - if (value instanceof Long || value instanceof Integer || value instanceof Short || value instanceof Byte) - return new BigDecimal(Long.parseLong(value.toString())); - if (value instanceof Double || value instanceof Float) - return BigDecimal.valueOf(Double.parseDouble(value.toString())); - if (value instanceof Timestamp) - return new BigDecimal(((Timestamp) value).getTime()); - BigDecimal ret; - try { - ret = new BigDecimal(value.toString()); - } catch (Exception e) { - ret = null; } - return ret; + if (value instanceof BigDecimal) + return (BigDecimal) value; + + + int taosType = fields.get(columnIndex - 1).getTaosType(); + switch (taosType) { + case TSDB_DATA_TYPE_BOOL: + return (boolean) value ? 
new BigDecimal(1) : new BigDecimal(0); + case TSDB_DATA_TYPE_TINYINT: + return new BigDecimal((byte) value); + case TSDB_DATA_TYPE_UTINYINT: + case TSDB_DATA_TYPE_SMALLINT: + return new BigDecimal((short) value); + case TSDB_DATA_TYPE_USMALLINT: + case TSDB_DATA_TYPE_INT: + return new BigDecimal((int) value); + case TSDB_DATA_TYPE_BIGINT: + case TSDB_DATA_TYPE_UINT: + return new BigDecimal((long) value); + + case TSDB_DATA_TYPE_FLOAT: + return BigDecimal.valueOf((float) value); + case TSDB_DATA_TYPE_DOUBLE: + return BigDecimal.valueOf((double) value); + + case TSDB_DATA_TYPE_TIMESTAMP: + return new BigDecimal(((Timestamp) value).getTime()); + case TSDB_DATA_TYPE_NCHAR: + case TSDB_DATA_TYPE_JSON: + return new BigDecimal(value.toString()); + case TSDB_DATA_TYPE_BINARY: { + String charset = TaosGlobalConfig.getCharset(); + String tmp; + try { + tmp = new String((byte[]) value, charset); + } catch (UnsupportedEncodingException e) { + throw new RuntimeException(e.getMessage()); + } + return new BigDecimal(tmp); + } + } + + return new BigDecimal(0); } @Override @@ -620,7 +1034,6 @@ public class BlockResultSet extends AbstractWSResultSet { @Override public Timestamp getTimestamp(int columnIndex, Calendar cal) throws SQLException { - //TODO:did not use the specified timezone in cal return getTimestamp(columnIndex); } } diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSClient.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSClient.java index f66bbbe6b3a8391aca849a8236f29ca6083f172b..a24998cbf169e54a179ca9d78e68f4a80b7efdec 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSClient.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSClient.java @@ -95,7 +95,6 @@ public class WSClient extends WebSocketClient implements AutoCloseable { long id = bytes.getLong(); ResponseFuture remove = inFlightRequest.remove(Action.FETCH_BLOCK.getAction(), id); if (null != remove) { -// FetchBlockResp fetchBlockResp = new 
FetchBlockResp(id, bytes.slice()); FetchBlockResp fetchBlockResp = new FetchBlockResp(id, bytes); remove.getFuture().complete(fetchBlockResp); } diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSConnection.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSConnection.java index bdd56c03ce6cb0a736ce5fe1e6e98be787c2a62f..4b3c54d3a6c313d3dfb1975405413651eaa18e05 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSConnection.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSConnection.java @@ -39,9 +39,9 @@ public class WSConnection extends AbstractConnection { public PreparedStatement prepareStatement(String sql) throws SQLException { if (isClosed()) throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_CONNECTION_CLOSED); - -// return new WSPreparedStatement(); - return null; + //TODO + throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD); +// return new WSPreparedStatement(transport, database, this, factory, sql); } @Override diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSPreparedStatement.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSPreparedStatement.java new file mode 100644 index 0000000000000000000000000000000000000000..96c22f87aa4cde3f3a22e58dadf07ddc3f917495 --- /dev/null +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSPreparedStatement.java @@ -0,0 +1,575 @@ +package com.taosdata.jdbc.ws; + +import com.taosdata.jdbc.TSDBError; +import com.taosdata.jdbc.TSDBErrorNumbers; +import com.taosdata.jdbc.utils.Utils; +import com.taosdata.jdbc.ws.entity.RequestFactory; + +import java.io.InputStream; +import java.io.Reader; +import java.math.BigDecimal; +import java.net.URL; +import java.sql.*; +import java.util.ArrayList; +import java.util.Calendar; + +public class WSPreparedStatement extends WSStatement implements PreparedStatement { + private final String rawSql; + private Object[] parameters; + private ArrayList 
tableTags; + private ArrayList colData; + + public WSPreparedStatement(Transport transport, String database, Connection connection, RequestFactory factory, String rawSql) { + super(transport, database, connection, factory); + this.rawSql = rawSql; + int parameterCnt = 0; + if (rawSql.contains("?")) { + for (int i = 0; i < rawSql.length(); i++) { + if ('?' == rawSql.charAt(i)) { + parameterCnt++; + } + } + this.parameters = new Object[parameterCnt]; + this.colData = new ArrayList<>(); + this.tableTags = new ArrayList<>(); + } + } + + @Override + public ResultSet executeQuery() throws SQLException { + final String sql = Utils.getNativeSql(this.rawSql, this.parameters); + return executeQuery(sql); + } + + @Override + public int executeUpdate() throws SQLException { + return 0; + } + + @Override + public void setNull(int parameterIndex, int sqlType) throws SQLException { + + } + + @Override + public void setBoolean(int parameterIndex, boolean x) throws SQLException { + + } + + @Override + public void setByte(int parameterIndex, byte x) throws SQLException { + + } + + @Override + public void setShort(int parameterIndex, short x) throws SQLException { + + } + + @Override + public void setInt(int parameterIndex, int x) throws SQLException { + + } + + @Override + public void setLong(int parameterIndex, long x) throws SQLException { + + } + + @Override + public void setFloat(int parameterIndex, float x) throws SQLException { + + } + + @Override + public void setDouble(int parameterIndex, double x) throws SQLException { + + } + + @Override + public void setBigDecimal(int parameterIndex, BigDecimal x) throws SQLException { + + } + + @Override + public void setString(int parameterIndex, String x) throws SQLException { + + } + + @Override + public void setBytes(int parameterIndex, byte[] x) throws SQLException { + + } + + @Override + public void setDate(int parameterIndex, Date x) throws SQLException { + + } + + @Override + public void setTime(int parameterIndex, Time x) throws 
SQLException { + + } + + @Override + public void setTimestamp(int parameterIndex, Timestamp x) throws SQLException { + + } + + @Override + public void setAsciiStream(int parameterIndex, InputStream x, int length) throws SQLException { + + } + + @Override + public void setUnicodeStream(int parameterIndex, InputStream x, int length) throws SQLException { + + } + + @Override + public void setBinaryStream(int parameterIndex, InputStream x, int length) throws SQLException { + + } + + @Override + public void clearParameters() throws SQLException { + + } + + @Override + public void setObject(int parameterIndex, Object x, int targetSqlType) throws SQLException { + + } + + @Override + public void setObject(int parameterIndex, Object x) throws SQLException { + + } + + @Override + public boolean execute() throws SQLException { + return false; + } + + @Override + public void addBatch() throws SQLException { + + } + + @Override + public void setCharacterStream(int parameterIndex, Reader reader, int length) throws SQLException { + + } + + @Override + public void setRef(int parameterIndex, Ref x) throws SQLException { + + } + + @Override + public void setBlob(int parameterIndex, Blob x) throws SQLException { + + } + + @Override + public void setClob(int parameterIndex, Clob x) throws SQLException { + + } + + @Override + public void setArray(int parameterIndex, Array x) throws SQLException { + + } + + @Override + public ResultSetMetaData getMetaData() throws SQLException { + return null; + } + + @Override + public void setDate(int parameterIndex, Date x, Calendar cal) throws SQLException { + + } + + @Override + public void setTime(int parameterIndex, Time x, Calendar cal) throws SQLException { + + } + + @Override + public void setTimestamp(int parameterIndex, Timestamp x, Calendar cal) throws SQLException { + + } + + @Override + public void setNull(int parameterIndex, int sqlType, String typeName) throws SQLException { + + } + + @Override + public void setURL(int parameterIndex, 
URL x) throws SQLException { + + } + + @Override + public ParameterMetaData getParameterMetaData() throws SQLException { + return null; + } + + @Override + public void setRowId(int parameterIndex, RowId x) throws SQLException { + + } + + @Override + public void setNString(int parameterIndex, String value) throws SQLException { + + } + + @Override + public void setNCharacterStream(int parameterIndex, Reader value, long length) throws SQLException { + + } + + @Override + public void setNClob(int parameterIndex, NClob value) throws SQLException { + + } + + @Override + public void setClob(int parameterIndex, Reader reader, long length) throws SQLException { + + } + + @Override + public void setBlob(int parameterIndex, InputStream inputStream, long length) throws SQLException { + + } + + @Override + public void setNClob(int parameterIndex, Reader reader, long length) throws SQLException { + + } + + @Override + public void setSQLXML(int parameterIndex, SQLXML xmlObject) throws SQLException { + + } + + @Override + public void setObject(int parameterIndex, Object x, int targetSqlType, int scaleOrLength) throws SQLException { + + } + + @Override + public void setAsciiStream(int parameterIndex, InputStream x, long length) throws SQLException { + + } + + @Override + public void setBinaryStream(int parameterIndex, InputStream x, long length) throws SQLException { + + } + + @Override + public void setCharacterStream(int parameterIndex, Reader reader, long length) throws SQLException { + + } + + @Override + public void setAsciiStream(int parameterIndex, InputStream x) throws SQLException { + + } + + @Override + public void setBinaryStream(int parameterIndex, InputStream x) throws SQLException { + + } + + @Override + public void setCharacterStream(int parameterIndex, Reader reader) throws SQLException { + + } + + @Override + public void setNCharacterStream(int parameterIndex, Reader value) throws SQLException { + + } + + @Override + public void setClob(int parameterIndex, Reader 
reader) throws SQLException { + + } + + @Override + public void setBlob(int parameterIndex, InputStream inputStream) throws SQLException { + + } + + @Override + public void setNClob(int parameterIndex, Reader reader) throws SQLException { + + } + + @Override + public ResultSet executeQuery(String sql) throws SQLException { + return null; + } + + @Override + public int executeUpdate(String sql) throws SQLException { + return 0; + } + + @Override + public void close() throws SQLException { + + } + + @Override + public int getMaxFieldSize() throws SQLException { + return 0; + } + + @Override + public void setMaxFieldSize(int max) throws SQLException { + + } + + @Override + public int getMaxRows() throws SQLException { + return 0; + } + + @Override + public void setMaxRows(int max) throws SQLException { + + } + + @Override + public void setEscapeProcessing(boolean enable) throws SQLException { + + } + + @Override + public int getQueryTimeout() throws SQLException { + return 0; + } + + @Override + public void setQueryTimeout(int seconds) throws SQLException { + + } + + @Override + public void cancel() throws SQLException { + + } + + @Override + public SQLWarning getWarnings() throws SQLException { + return null; + } + + @Override + public void clearWarnings() throws SQLException { + + } + + @Override + public void setCursorName(String name) throws SQLException { + + } + + @Override + public boolean execute(String sql) throws SQLException { + return false; + } + + @Override + public ResultSet getResultSet() throws SQLException { + return null; + } + + @Override + public int getUpdateCount() throws SQLException { + return 0; + } + + @Override + public boolean getMoreResults() throws SQLException { + return false; + } + + @Override + public void setFetchDirection(int direction) throws SQLException { + + } + + @Override + public int getFetchDirection() throws SQLException { + return 0; + } + + @Override + public void setFetchSize(int rows) throws SQLException { + + } + + 
@Override + public int getFetchSize() throws SQLException { + return 0; + } + + @Override + public int getResultSetConcurrency() throws SQLException { + return 0; + } + + @Override + public int getResultSetType() throws SQLException { + return 0; + } + + @Override + public void addBatch(String sql) throws SQLException { + + } + + @Override + public void clearBatch() throws SQLException { + + } + + @Override + public int[] executeBatch() throws SQLException { + return new int[0]; + } + + @Override + public Connection getConnection() throws SQLException { + return null; + } + + @Override + public boolean getMoreResults(int current) throws SQLException { + return false; + } + + @Override + public ResultSet getGeneratedKeys() throws SQLException { + return null; + } + + @Override + public int executeUpdate(String sql, int autoGeneratedKeys) throws SQLException { + return 0; + } + + @Override + public int executeUpdate(String sql, int[] columnIndexes) throws SQLException { + return 0; + } + + @Override + public int executeUpdate(String sql, String[] columnNames) throws SQLException { + return 0; + } + + @Override + public boolean execute(String sql, int autoGeneratedKeys) throws SQLException { + return false; + } + + @Override + public boolean execute(String sql, int[] columnIndexes) throws SQLException { + return false; + } + + @Override + public boolean execute(String sql, String[] columnNames) throws SQLException { + return false; + } + + @Override + public int getResultSetHoldability() throws SQLException { + return 0; + } + + @Override + public boolean isClosed() throws SQLException { + return false; + } + + @Override + public void setPoolable(boolean poolable) throws SQLException { + + } + + @Override + public boolean isPoolable() throws SQLException { + return false; + } + + @Override + public void closeOnCompletion() throws SQLException { + + } + + @Override + public boolean isCloseOnCompletion() throws SQLException { + return false; + } + + @Override + public <T> T
unwrap(Class<T> iface) throws SQLException { + return null; + } + + @Override + public boolean isWrapperFor(Class<?> iface) throws SQLException { + return false; + } + + private static class ColumnInfo { + @SuppressWarnings("rawtypes") + private ArrayList data; + private int type; + private int bytes; + private boolean typeIsSet; + + public ColumnInfo() { + this.typeIsSet = false; + } + + public void setType(int type) throws SQLException { + if (this.isTypeSet()) { + throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNKNOWN, "column data type has been set"); + } + + this.typeIsSet = true; + this.type = type; + } + + public boolean isTypeSet() { + return this.typeIsSet; + } + } + + private static class TableTagInfo { + private boolean isNull; + private final Object value; + private final int type; + + public TableTagInfo(Object value, int type) { + this.value = value; + this.type = type; + } + + public static TableTagInfo createNullTag(int type) { + TableTagInfo info = new TableTagInfo(null, type); + info.isNull = true; + return info; + } + } +} diff --git a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSStatement.java b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSStatement.java index 58e6ad31930e985acf903d34d993ddc5bbfc1002..77b6fcaaddde5ba6f183e8212bf090b5765df987 100644 --- a/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSStatement.java +++ b/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ws/WSStatement.java @@ -1,6 +1,7 @@ package com.taosdata.jdbc.ws; import com.taosdata.jdbc.AbstractStatement; +import com.taosdata.jdbc.TSDBDriver; import com.taosdata.jdbc.TSDBError; import com.taosdata.jdbc.TSDBErrorNumbers; import com.taosdata.jdbc.utils.SqlSyntaxValidator; @@ -14,7 +15,7 @@ import java.util.concurrent.ExecutionException; public class WSStatement extends AbstractStatement { private final Transport transport; - private final String database; + private String database; private final Connection connection; private final
RequestFactory factory; @@ -52,8 +53,12 @@ public class WSStatement extends AbstractStatement { @Override public void close() throws SQLException { - if (!isClosed()) + if (!isClosed()) { this.closed = true; + if (resultSet != null && !resultSet.isClosed()) { + resultSet.close(); + } + } } @Override @@ -71,6 +76,11 @@ public class WSStatement extends AbstractStatement { if (Code.SUCCESS.getCode() != queryResp.getCode()) { throw TSDBError.createSQLException(queryResp.getCode(), queryResp.getMessage()); } + if (SqlSyntaxValidator.isUseSql(sql)) { + this.database = sql.trim().replace("use", "").trim(); + this.connection.setCatalog(this.database); + this.connection.setClientInfo(TSDBDriver.PROPERTY_KEY_DBNAME, this.database); + } if (queryResp.isUpdate()) { this.resultSet = null; this.affectedRows = queryResp.getAffectedRows(); diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulDatabaseMetaDataTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulDatabaseMetaDataTest.java index 50b0b97d90da255a8995e921d7e3ff22685e3bdb..8bdc269843d3c817f3790671715d0aa0567b7053 100644 --- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulDatabaseMetaDataTest.java +++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulDatabaseMetaDataTest.java @@ -15,6 +15,7 @@ public class RestfulDatabaseMetaDataTest { private static final String url = "jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata"; private static Connection connection; private static RestfulDatabaseMetaData metaData; + private static final String dbName = "test"; @Test public void unwrap() throws SQLException { @@ -1092,9 +1093,9 @@ public class RestfulDatabaseMetaDataTest { properties.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8"); connection = DriverManager.getConnection(url, properties); Statement stmt = connection.createStatement(); - stmt.execute("drop database if exists log"); - stmt.execute("create database if not exists log precision 
'us'"); - stmt.execute("use log"); + stmt.execute("drop database if exists " + dbName); + stmt.execute("create database if not exists " + dbName + " precision 'us'"); + stmt.execute("use " + dbName); stmt.execute("create table `dn` (ts TIMESTAMP,cpu_taosd FLOAT,cpu_system FLOAT,cpu_cores INT,mem_taosd FLOAT,mem_system FLOAT,mem_total INT,disk_used FLOAT,disk_total INT,band_speed FLOAT,io_read FLOAT,io_write FLOAT,req_http INT,req_select INT,req_insert INT) TAGS (dnodeid INT,fqdn BINARY(128))"); stmt.execute("insert into dn1 using dn tags(1,'a') (ts) values(now)"); diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ws/WSConnectionTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ws/WSConnectionTest.java index 916f8287de42a411a44025749f2b77130bc52198..1e531e65c1e1cc218f1b318445d3ec94bf588c99 100644 --- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ws/WSConnectionTest.java +++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ws/WSConnectionTest.java @@ -19,7 +19,7 @@ import java.util.concurrent.TimeUnit; @RunWith(CatalogRunner.class) @TestTarget(alias = "test connection with server", author = "huolibo", version = "2.0.37") public class WSConnectionTest { -// private static final String host = "192.168.1.98"; + // private static final String host = "192.168.1.98"; private static final String host = "127.0.0.1"; private static final int port = 6041; private Connection connection; @@ -27,7 +27,7 @@ public class WSConnectionTest { @Test @Description("normal test with websocket server") public void normalConection() throws SQLException { - String url = "jdbc:TAOS-RS://" + host + ":" + port + "/test?user=root&password=taosdata"; + String url = "jdbc:TAOS-RS://" + host + ":" + port + "/log?user=root&password=taosdata"; Properties properties = new Properties(); properties.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_LOAD, "true"); connection = DriverManager.getConnection(url, properties); @@ -56,7 +56,7 @@ public class WSConnectionTest { 
@Test(expected = SQLException.class) @Description("wrong password or user") public void wrongUserOrPasswordConection() throws SQLException { - String url = "jdbc:TAOS-RS://" + host + ":" + port + "/test?user=abc&password=taosdata"; + String url = "jdbc:TAOS-RS://" + host + ":" + port + "/log?user=abc&password=taosdata"; Properties properties = new Properties(); properties.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_LOAD, "true"); connection = DriverManager.getConnection(url, properties); @@ -69,13 +69,13 @@ public class WSConnectionTest { Properties properties = new Properties(); properties.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_LOAD, "true"); connection = DriverManager.getConnection(url, properties); - TimeUnit.MINUTES.sleep(1); + TimeUnit.SECONDS.sleep(20); Statement statement = connection.createStatement(); ResultSet resultSet = statement.executeQuery("show databases"); - TimeUnit.MINUTES.sleep(1); + TimeUnit.SECONDS.sleep(20); resultSet.next(); - System.out.println(resultSet.getTimestamp(1)); resultSet.close(); statement.close(); + connection.close(); } } diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ws/WSJsonTagTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ws/WSJsonTagTest.java index a106e57fbfbcb71c52e75b8872319dc870472368..666ac910e76d9d1f379d8dfa964a10705bee7097 100644 --- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ws/WSJsonTagTest.java +++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ws/WSJsonTagTest.java @@ -11,12 +11,6 @@ import org.junit.runners.MethodSorters; import java.sql.*; import java.util.Properties; -/** - * Most of the functionality is consistent with {@link com.taosdata.jdbc.JsonTagTest}, - * Except for batchInsert, which is not supported by restful API. - * Restful could not distinguish between empty and nonexistent of json value, the result is always null. 
- * The order of json results may change due to serialization and deserialization - */ @Ignore @FixMethodOrder(MethodSorters.NAME_ASCENDING) @RunWith(CatalogRunner.class) diff --git a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ws/WSQueryTest.java b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ws/WSQueryTest.java index 70ea3c4d88446a31273ce9f334f4d8c0a8a72285..f00f850dd46a1648fbeb16a0e6e59f715256e367 100644 --- a/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ws/WSQueryTest.java +++ b/src/connector/jdbc/src/test/java/com/taosdata/jdbc/ws/WSQueryTest.java @@ -9,7 +9,9 @@ import org.junit.runner.RunWith; import java.sql.*; import java.util.Properties; +import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; import java.util.stream.IntStream; @Ignore @@ -17,7 +19,8 @@ import java.util.stream.IntStream; @TestTarget(alias = "query test", author = "huolibo", version = "2.0.38") @FixMethodOrder public class WSQueryTest { - private static final String host = "192.168.1.98"; + // private static final String host = "192.168.1.98"; + private static final String host = "127.0.0.1"; private static final int port = 6041; private static final String databaseName = "ws_query"; private static final String tableName = "wq"; @@ -26,29 +29,27 @@ public class WSQueryTest { @Description("query") @Test - public void queryBlock() throws SQLException, InterruptedException { - IntStream.range(1, 100).limit(1000).parallel().forEach(x -> { - try { - Statement statement = connection.createStatement(); - + public void queryBlock() throws InterruptedException { + CountDownLatch latch = new CountDownLatch(1000); + IntStream.range(1, 10000).limit(1000).parallel().forEach(x -> { + try (Statement statement = connection.createStatement()) { statement.execute("insert into " + databaseName + "." 
+ tableName + " values(now+100s, 100)"); ResultSet resultSet = statement.executeQuery("select * from " + databaseName + "." + tableName); resultSet.next(); Assert.assertEquals(100, resultSet.getInt(2)); statement.close(); - TimeUnit.SECONDS.sleep(10); + latch.countDown(); } catch (SQLException e) { e.printStackTrace(); - } catch (InterruptedException e) { - e.printStackTrace(); } }); + latch.await(); } @Before public void before() throws SQLException { - String url = "jdbc:TAOS-RS://" + host + ":" + port + "/test?user=root&password=taosdata"; + String url = "jdbc:TAOS-RS://" + host + ":" + port + "/log?user=root&password=taosdata"; Properties properties = new Properties(); properties.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_LOAD, "true"); connection = DriverManager.getConnection(url, properties); diff --git a/src/connector/nodejs/examples/stmtBindParamBatchSample.js b/src/connector/nodejs/examples/stmtBindParamBatchSample.js index 030958bfd16faf88f79c6d4476defb76c0a3e990..c7748790875a5d7e99482dbf1266917fd2882231 100755 --- a/src/connector/nodejs/examples/stmtBindParamBatchSample.js +++ b/src/connector/nodejs/examples/stmtBindParamBatchSample.js @@ -32,7 +32,7 @@ function stmtBindParamBatchSample() { `f32 float,` + `d64 double,` + `bnr binary(20),` + - `blob nchar(20),` + + `nchr nchar(20),` + `u8 tinyint unsigned,` + `u16 smallint unsigned,` + `u32 int unsigned,` + @@ -46,7 +46,7 @@ function stmtBindParamBatchSample() { `t_f32 float,` + `t_d64 double,` + `t_bnr binary(20),` + - `t_blob nchar(20),` + + `t_nchr nchar(20),` + `t_u8 tinyint unsigned,` + `t_u16 smallint unsigned,` + `t_u32 int unsigned,` + @@ -89,7 +89,7 @@ function stmtBindParamBatchSample() { tags.bindNchar('TDengine数据'); tags.bindUTinyInt(254); tags.bindUSmallInt(65534); - tags.bindUInt(4294967290 / 2); + tags.bindUInt(4294967290); tags.bindUBigInt(164243520000011111n); cursor.stmtInit(); diff --git a/src/connector/nodejs/examples/stmtBindParamSample.js 
b/src/connector/nodejs/examples/stmtBindParamSample.js index ee1354aff0a1052a67d961de39c147c6cbe616dd..57c097d8cf1215b60a3ce386eb4b60f2005e7c87 100644 --- a/src/connector/nodejs/examples/stmtBindParamSample.js +++ b/src/connector/nodejs/examples/stmtBindParamSample.js @@ -34,7 +34,7 @@ function stmtBindParamSample() { `f32 float,` + `d64 double,` + `bnr binary(20),` + - `blob nchar(20),` + + `nchr nchar(20),` + `u8 tinyint unsigned,` + `u16 smallint unsigned,` + `u32 int unsigned,` + diff --git a/src/connector/nodejs/examples/stmtBindSingleParamBatchSample.js b/src/connector/nodejs/examples/stmtBindSingleParamBatchSample.js index 3b424b8d0cdc0d18997c2224fdac499e42c0c57d..938de75b7f3314ce3db32294de9d30b609e480b8 100755 --- a/src/connector/nodejs/examples/stmtBindSingleParamBatchSample.js +++ b/src/connector/nodejs/examples/stmtBindSingleParamBatchSample.js @@ -33,7 +33,7 @@ function stmtSingleParaBatchSample() { `f32 float,` + `d64 double,` + `bnr binary(20),` + - `blob nchar(20),` + + `nchr nchar(20),` + `u8 tinyint unsigned,` + `u16 smallint unsigned,` + `u32 int unsigned,` + diff --git a/src/connector/nodejs/examples/stmtUseResultSample.js b/src/connector/nodejs/examples/stmtUseResultSample.js index b9f55545b0892d575c952308febfa9055a4f570a..21e2fbf378e763c1e6f26692f31b1c85010c86fb 100755 --- a/src/connector/nodejs/examples/stmtUseResultSample.js +++ b/src/connector/nodejs/examples/stmtUseResultSample.js @@ -32,7 +32,7 @@ function stmtUseResultSample() { `f32 float,` + `d64 double,` + `bnr binary(20),` + - `blob nchar(20),` + + `nchr nchar(20),` + `u8 tinyint unsigned,` + `u16 smallint unsigned,` + `u32 int unsigned,` + diff --git a/src/connector/nodejs/examples/taosBindParamSample.js b/src/connector/nodejs/examples/taosBindParamSample.js index 3913b741d060549810957526bb9963958f3d48e4..cf8f0a263555844535916f91b1492afb36ef536e 100644 --- a/src/connector/nodejs/examples/taosBindParamSample.js +++ b/src/connector/nodejs/examples/taosBindParamSample.js @@ -34,7 +34,7 
@@ function stmtBindParamSample(){ `f32 float,`+ `d64 double,`+ `bnr binary(20),`+ - `blob nchar(20),`+ + `nchr nchar(20),`+ `u8 tinyint unsigned,`+ `u16 smallint unsigned,`+ `u32 int unsigned,`+ diff --git a/src/connector/nodejs/nodetaos/taosMultiBind.js b/src/connector/nodejs/nodetaos/taosMultiBind.js index deead6d2f702ea2d40584a2c57574da5f75f4de7..d4134a9ef6df29f196e9b573a5f8e2e524de21ed 100755 --- a/src/connector/nodejs/nodetaos/taosMultiBind.js +++ b/src/connector/nodejs/nodetaos/taosMultiBind.js @@ -452,7 +452,8 @@ class TaosMultiBind { if (element == null || element == undefined) { ref.set(mbindIsNullBuf, index * ref.types.char.size, 1, ref.types.char); } else { - ref.writeInt64LE(mbindBufferBuf, index * ref.types.uint64.size, element.toString()) + + ref.writeUInt64LE(mbindBufferBuf, index * ref.types.uint64.size, element.toString()) ref.set(mbindIsNullBuf, index * ref.types.char.size, 0, ref.types.char); } diff --git a/src/connector/nodejs/readme.md b/src/connector/nodejs/readme.md index 0077c557f15806e66bb9ed0685c75f2e61382442..e0157fd221f1e90f8ad70f5d0ee8e20e2716f898 100644 --- a/src/connector/nodejs/readme.md +++ b/src/connector/nodejs/readme.md @@ -69,7 +69,7 @@ To target native ARM64 Node.js on Windows 10 on ARM, add the components "Visual ## Usage -The following is a short summary of the basic usage of the connector, the full api and documentation can be found [here](http://docs.taosdata.com/node) +The following is a short summary of the basic usage of the connector, the full api and documentation can be found [here](https://www.taosdata.com/docs/cn/v2.0/connector#nodejs) ### Connection @@ -152,9 +152,9 @@ promise2.then(function(result) { ## Example -An example of using the NodeJS connector to create a table with weather data and create and execute queries can be found [here](https://github.com/taosdata/TDengine/tree/master/tests/examples/nodejs/node-example.js) (The preferred method for using the connector) +An example of using the NodeJS connector 
to create a table with weather data and create and execute queries can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/nodejs/node-example.js) (The preferred method for using the connector) -An example of using the NodeJS connector to achieve the same things but without all the object wrappers that wrap around the data returned to achieve higher functionality can be found [here](https://github.com/taosdata/TDengine/tree/master/tests/examples/nodejs/node-example-raw.js) +An example of using the NodeJS connector to achieve the same things but without all the object wrappers that wrap around the data returned to achieve higher functionality can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/nodejs/node-example-raw.js) ## Contributing to TDengine diff --git a/src/connector/nodejs/test/cases/test.cases.js b/src/connector/nodejs/test/cases/test.cases.js index 3aa7ab96e5575f6b9c36566b69823e1d246efa66..6cf723331ef4330d56ee5d0db38e0a30b57eadb3 100644 --- a/src/connector/nodejs/test/cases/test.cases.js +++ b/src/connector/nodejs/test/cases/test.cases.js @@ -222,7 +222,7 @@ describe("test unsigned type", () => { ",us2 smallint unsigned" + ",ui4 int unsigned" + ",ubi8 bigint unsigned" + - ",desc_blob nchar(200)" + + ",desc_nchr nchar(200)" + ");"; executeUpdate(createSql); let expectResField = getFieldArr(getFeildsFromDll(createSql)); @@ -268,7 +268,7 @@ describe("test unsigned type", () => { ",us2 smallint unsigned" + ",ui4 int unsigned" + ",ubi8 bigint unsigned" + - ",desc_blob nchar(200)" + + ",desc_nchr nchar(200)" + ");"; executeUpdate(createSql); let expectResField = getFieldArr(getFeildsFromDll(createSql)); @@ -315,7 +315,7 @@ describe("test unsigned type", () => { ",us2 smallint unsigned" + ",ui4 int unsigned" + ",ubi8 bigint unsigned" + - ",desc_blob nchar(200)" + + ",desc_nchr nchar(200)" + ");"; executeUpdate(createSql); let expectResField = getFieldArr(getFeildsFromDll(createSql)); @@ -384,7 +384,7 @@ 
describe("test cn character", () => { `desc:create,insert,query with cn characters;` + `filename:${fileName};` + `result:${result}`, () => { - createSql = "create table if not exists nchartest_s(ts timestamp,value int,text binary(200),detail nchar(200))tags(tag_bi binary(50),tag_blob nchar(50));" + createSql = "create table if not exists nchartest_s(ts timestamp,value int,text binary(200),detail nchar(200))tags(tag_bi binary(50),tag_nchr nchar(50));" executeUpdate(createSql); let expectResField = getFieldArr(getFeildsFromDll(createSql)); let colData = [1641827743305, 1, 'taosdata', 'tdengine' diff --git a/src/connector/nodejs/test/cases/test.stmt.js b/src/connector/nodejs/test/cases/test.stmt.js new file mode 100644 index 0000000000000000000000000000000000000000..235178bb7c59ad5b6f15d179a55979f71a9e268a --- /dev/null +++ b/src/connector/nodejs/test/cases/test.stmt.js @@ -0,0 +1,915 @@ +const taos = require('../../tdengine'); +const { getFeildsFromDll, buildInsertSql, getFieldArr, getResData } = require('../utils/utilTools') + +const author = 'xiaolei'; +const result = 'passed'; +const fileName = __filename.slice(__dirname.length + 1); + +// This is a taos connection +let conn; +// This is a Cursor +let c1; + +// prepare data +let dbName = 'node_test_stmt_db'; +let tsArr = [1642435200000, 1642435300000, 1642435400000, 1642435500000, 1642435600000]; +let boolArr = [true, false, true, false, null]; +let tinyIntArr = [-127, 3, 127, 0, null]; +let smallIntArr = [-32767, 16, 32767, 0, null]; +let intArr = [-2147483647, 17, 2147483647, 0, null]; +let bigIntArr = [-9223372036854775807n, 9223372036854775807n, 18n, 0n, null]; +let floatArr = [3.4028234663852886e+38, -3.4028234663852886e+38, 19, 0, null]; +let doubleArr = [1.7976931348623157e+308, -1.7976931348623157e+308, 20, 0, null]; +let binaryArr = ['TDengine_Binary', 'taosdata涛思数据', '~!@#$%^&*()', '', null]; +let ncharArr = ['TDengine_Nchar', 'taosdata涛思数据', '~!@#$$%^&*()', '', null]; +let uTinyIntArr = [0, 127, 254, 
23, null]; +let uSmallIntArr = [0, 256, 65534, 24, null]; +let uIntArr = [0, 1233, 4294967294, 25, null]; +let uBigIntArr = [0n, 36424354000001111n, 18446744073709551614n, 26n, null]; + +//prepare tag data. +let tagData1 = [true, 1, 32767, 1234555, -164243520000011111n, 214.02, 2.01, 'taosdata涛思数据', 'TDengine数据', 254, 65534, 4294967290 / 2, 164243520000011111n]; +let tagData2 = [true, 2, 32767, 1234555, -164243520000011111n, 214.02, 2.01, 'taosdata涛思数据', 'TDengine数据', 254, 65534, 4294967290 / 2, 164243520000011111n]; +let tagData3 = [true, 3, 32767, 1234555, -164243520000011111n, 214.02, 2.01, 'taosdata涛思数据', 'TDengine数据', 254, 65534, 4294967290 / 2, 164243520000011111n]; + +/** + * Combine individual array of every tdengine type that + * has been declared and then return a new array. + * @returns return data array. + */ +function getBindData() { + let bindDataArr = []; + for (let i = 0; i < 5; i++) { + bindDataArr.push(tsArr[i]); + bindDataArr.push(boolArr[i]); + bindDataArr.push(tinyIntArr[i]); + bindDataArr.push(smallIntArr[i]); + bindDataArr.push(intArr[i]); + bindDataArr.push(bigIntArr[i]); + bindDataArr.push(floatArr[i]); + bindDataArr.push(doubleArr[i]); + bindDataArr.push(binaryArr[i]); + bindDataArr.push(ncharArr[i]); + bindDataArr.push(uTinyIntArr[i]); + bindDataArr.push(uSmallIntArr[i]); + bindDataArr.push(uIntArr[i]); + bindDataArr.push(uBigIntArr[i]); + } + return bindDataArr; +} + +function executeUpdate(sql) { + console.log(sql); + c1.execute(sql); +} + +function executeQuery(sql) { + c1.execute(sql, { quiet: true }) + var data = c1.fetchall(); + let fields = c1.fields; + let resArr = []; + + data.forEach(row => { + row.forEach(data => { + if (data instanceof Date) { + // console.log("date obejct:"+data.valueOf()); + resArr.push(data.taosTimestamp()); + } else { + // console.log("not date:"+data); + resArr.push(data); + } + // console.log(data instanceof Date) + }) + }) + return { resData: resArr, resFeilds: fields }; +} + +beforeAll(() => { + conn = 
taos.connect({ host: "127.0.0.1", user: "root", password: "taosdata", config: "/etc/taos", port: 10 }); + c1 = conn.cursor(); + executeUpdate(`create database if not exists ${dbName} keep 3650;`); + executeUpdate(`use ${dbName};`); +}); + +// Drop the test database and close the connection after all tests finish. +// Jest waits for this callback to complete before exiting. +afterAll(() => { + executeUpdate(`drop database if exists ${dbName};`); + conn.close(); +}); + +describe("stmt_bind_single_param", () => { + test(`name:bindSingleParamWithOneTable;` + + `author:${author};` + + `desc:Using stmtBindSingleParam() bind one table in a batch;` + + `filename:${fileName};` + + `result:${result}`, () => { + let table = 'bindsingleparambatch_121'; + let createSql = `create table if not exists ${table} ` + + `(ts timestamp,` + + `bl bool,` + + `i8 tinyint,` + + `i16 smallint,` + + `i32 int,` + + `i64 bigint,` + + `f32 float,` + + `d64 double,` + + `bnr binary(20),` + + `nchr nchar(20),` + + `u8 tinyint unsigned,` + + `u16 smallint unsigned,` + + `u32 int unsigned,` + + `u64 bigint unsigned` + + `)tags(` + + `t_bl bool,` + + `t_i8 tinyint,` + + `t_i16 smallint,` + + `t_i32 int,` + + `t_i64 bigint,` + + `t_f32 float,` + + `t_d64 double,` + + `t_bnr binary(20),` + + `t_nchr nchar(20),` + + `t_u8 tinyint unsigned,` + + `t_u16 smallint unsigned,` + + `t_u32 int unsigned,` + + `t_u64 bigint unsigned` + + `);`; + let insertSql = `insert into ? using ${table} tags(?,?,?,?,?,?,?,?,?,?,?,?,?)
values (?,?,?,?,?,?,?,?,?,?,?,?,?,?);`; + let querySql = `select * from ${table}`; + let expectResField = getFieldArr(getFeildsFromDll(createSql)); + let expectResData = getResData(getBindData(), tagData1, 14); + + // prepare tag TAOS_BIND + let tagBind1 = new taos.TaosBind(14); + tagBind1.bindBool(true); + tagBind1.bindTinyInt(1); + tagBind1.bindSmallInt(32767); + tagBind1.bindInt(1234555); + tagBind1.bindBigInt(-164243520000011111n); + tagBind1.bindFloat(214.02); + tagBind1.bindDouble(2.01); + tagBind1.bindBinary('taosdata涛思数据'); + tagBind1.bindNchar('TDengine数据'); + tagBind1.bindUTinyInt(254); + tagBind1.bindUSmallInt(65534); + tagBind1.bindUInt(4294967290 / 2); + tagBind1.bindUBigInt(164243520000011111n); + + //Prepare TAOS_MULTI_BIND data + let mBind1 = new taos.TaosMultiBind(); + + executeUpdate(createSql); + c1.stmtInit(); + c1.stmtPrepare(insertSql); + c1.stmtSetTbnameTags(`${table}_s01`, tagBind1.getBind()); + c1.stmtBindSingleParamBatch(mBind1.multiBindTimestamp(tsArr), 0); + c1.stmtBindSingleParamBatch(mBind1.multiBindBool(boolArr), 1); + c1.stmtBindSingleParamBatch(mBind1.multiBindTinyInt(tinyIntArr), 2); + c1.stmtBindSingleParamBatch(mBind1.multiBindSmallInt(smallIntArr), 3); + c1.stmtBindSingleParamBatch(mBind1.multiBindInt(intArr), 4); + c1.stmtBindSingleParamBatch(mBind1.multiBindBigInt(bigIntArr), 5); + c1.stmtBindSingleParamBatch(mBind1.multiBindFloat(floatArr), 6); + c1.stmtBindSingleParamBatch(mBind1.multiBindDouble(doubleArr), 7); + c1.stmtBindSingleParamBatch(mBind1.multiBindBinary(binaryArr), 8); + c1.stmtBindSingleParamBatch(mBind1.multiBindNchar(ncharArr), 9); + c1.stmtBindSingleParamBatch(mBind1.multiBindUTinyInt(uTinyIntArr), 10); + c1.stmtBindSingleParamBatch(mBind1.multiBindUSmallInt(uSmallIntArr), 11); + c1.stmtBindSingleParamBatch(mBind1.multiBindUInt(uIntArr), 12); + c1.stmtBindSingleParamBatch(mBind1.multiBindUBigInt(uBigIntArr), 13); + c1.stmtAddBatch(); + c1.stmtExecute(); + c1.stmtClose(); + + let result = executeQuery(querySql); 
+ let actualResData = result.resData; + let actualResFields = result.resFeilds; + + //assert result data length + expect(expectResData.length).toEqual(actualResData.length); + //assert result data + expectResData.forEach((item, index) => { + expect(item).toEqual(actualResData[index]); + }); + + //assert result meta data + expectResField.forEach((item, index) => { + expect(item).toEqual(actualResFields[index]) + }) + }); + + test(`name:bindSingleParamWithMultiTable;` + + `author:${author};` + + `desc:Using stmtBindSingleParamBatch() bind multiple tables in a batch;` + + `filename:${fileName};` + + `result:${result}`, () => { + let table = 'bindsingleparambatch_m21';//bind multiple tables to one batch + let createSql = `create table if not exists ${table} ` + + `(ts timestamp,` + + `bl bool,` + + `i8 tinyint,` + + `i16 smallint,` + + `i32 int,` + + `i64 bigint,` + + `f32 float,` + + `d64 double,` + + `bnr binary(20),` + + `nchr nchar(20),` + + `u8 tinyint unsigned,` + + `u16 smallint unsigned,` + + `u32 int unsigned,` + + `u64 bigint unsigned` + + `)tags(` + + `t_bl bool,` + + `t_i8 tinyint,` + + `t_i16 smallint,` + + `t_i32 int,` + + `t_i64 bigint,` + + `t_f32 float,` + + `t_d64 double,` + + `t_bnr binary(20),` + + `t_nchr nchar(20),` + + `t_u8 tinyint unsigned,` + + `t_u16 smallint unsigned,` + + `t_u32 int unsigned,` + + `t_u64 bigint unsigned` + + `);`; + let insertSql = `insert into ? using ${table} tags(?,?,?,?,?,?,?,?,?,?,?,?,?) 
values (?,?,?,?,?,?,?,?,?,?,?,?,?,?);`; + let querySql = `select * from ${table}`; + let expectResField = getFieldArr(getFeildsFromDll(createSql)); + let expectResData = getResData(getBindData(), tagData1, 14).concat(getResData(getBindData(), tagData2, 14)).concat(getResData(getBindData(), tagData3, 14)); + + // prepare tag TAOS_BIND + let tagBind1 = new taos.TaosBind(14); + tagBind1.bindBool(true); + tagBind1.bindTinyInt(1); + tagBind1.bindSmallInt(32767); + tagBind1.bindInt(1234555); + tagBind1.bindBigInt(-164243520000011111n); + tagBind1.bindFloat(214.02); + tagBind1.bindDouble(2.01); + tagBind1.bindBinary('taosdata涛思数据'); + tagBind1.bindNchar('TDengine数据'); + tagBind1.bindUTinyInt(254); + tagBind1.bindUSmallInt(65534); + tagBind1.bindUInt(4294967290 / 2); + tagBind1.bindUBigInt(164243520000011111n); + + let tagBind2 = new taos.TaosBind(14); + tagBind2.bindBool(true); + tagBind2.bindTinyInt(2); + tagBind2.bindSmallInt(32767); + tagBind2.bindInt(1234555); + tagBind2.bindBigInt(-164243520000011111n); + tagBind2.bindFloat(214.02); + tagBind2.bindDouble(2.01); + tagBind2.bindBinary('taosdata涛思数据'); + tagBind2.bindNchar('TDengine数据'); + tagBind2.bindUTinyInt(254); + tagBind2.bindUSmallInt(65534); + tagBind2.bindUInt(4294967290 / 2); + tagBind2.bindUBigInt(164243520000011111n); + + let tagBind3 = new taos.TaosBind(14); + tagBind3.bindBool(true); + tagBind3.bindTinyInt(3); + tagBind3.bindSmallInt(32767); + tagBind3.bindInt(1234555); + tagBind3.bindBigInt(-164243520000011111n); + tagBind3.bindFloat(214.02); + tagBind3.bindDouble(2.01); + tagBind3.bindBinary('taosdata涛思数据'); + tagBind3.bindNchar('TDengine数据'); + tagBind3.bindUTinyInt(254); + tagBind3.bindUSmallInt(65534); + tagBind3.bindUInt(4294967290 / 2); + tagBind3.bindUBigInt(164243520000011111n); + + //Prepare TAOS_MULTI_BIND data + let mBind = new taos.TaosMultiBind(); + + executeUpdate(createSql); + c1.stmtInit(); + c1.stmtPrepare(insertSql); + // ========bind for 1st table ============= + 
c1.stmtSetTbnameTags(`${table}_s01`, tagBind1.getBind()); + c1.stmtBindSingleParamBatch(mBind.multiBindTimestamp(tsArr), 0); + c1.stmtBindSingleParamBatch(mBind.multiBindBool(boolArr), 1); + c1.stmtBindSingleParamBatch(mBind.multiBindTinyInt(tinyIntArr), 2); + c1.stmtBindSingleParamBatch(mBind.multiBindSmallInt(smallIntArr), 3); + c1.stmtBindSingleParamBatch(mBind.multiBindInt(intArr), 4); + c1.stmtBindSingleParamBatch(mBind.multiBindBigInt(bigIntArr), 5); + c1.stmtBindSingleParamBatch(mBind.multiBindFloat(floatArr), 6); + c1.stmtBindSingleParamBatch(mBind.multiBindDouble(doubleArr), 7); + c1.stmtBindSingleParamBatch(mBind.multiBindBinary(binaryArr), 8); + c1.stmtBindSingleParamBatch(mBind.multiBindNchar(ncharArr), 9); + c1.stmtBindSingleParamBatch(mBind.multiBindUTinyInt(uTinyIntArr), 10); + c1.stmtBindSingleParamBatch(mBind.multiBindUSmallInt(uSmallIntArr), 11); + c1.stmtBindSingleParamBatch(mBind.multiBindUInt(uIntArr), 12); + c1.stmtBindSingleParamBatch(mBind.multiBindUBigInt(uBigIntArr), 13); + c1.stmtAddBatch(); + // c1.stmtExecute(); + + // ========bind for 2nd table ============= + c1.stmtSetTbnameTags(`${table}_s02`, tagBind2.getBind()); + c1.stmtBindSingleParamBatch(mBind.multiBindTimestamp(tsArr), 0); + c1.stmtBindSingleParamBatch(mBind.multiBindBool(boolArr), 1); + c1.stmtBindSingleParamBatch(mBind.multiBindTinyInt(tinyIntArr), 2); + c1.stmtBindSingleParamBatch(mBind.multiBindSmallInt(smallIntArr), 3); + c1.stmtBindSingleParamBatch(mBind.multiBindInt(intArr), 4); + c1.stmtBindSingleParamBatch(mBind.multiBindBigInt(bigIntArr), 5); + c1.stmtBindSingleParamBatch(mBind.multiBindFloat(floatArr), 6); + c1.stmtBindSingleParamBatch(mBind.multiBindDouble(doubleArr), 7); + c1.stmtBindSingleParamBatch(mBind.multiBindBinary(binaryArr), 8); + c1.stmtBindSingleParamBatch(mBind.multiBindNchar(ncharArr), 9); + c1.stmtBindSingleParamBatch(mBind.multiBindUTinyInt(uTinyIntArr), 10); + c1.stmtBindSingleParamBatch(mBind.multiBindUSmallInt(uSmallIntArr), 11); + 
c1.stmtBindSingleParamBatch(mBind.multiBindUInt(uIntArr), 12); + c1.stmtBindSingleParamBatch(mBind.multiBindUBigInt(uBigIntArr), 13); + c1.stmtAddBatch(); + // c1.stmtExecute(); + + // ========bind for 3rd table ============= + c1.stmtSetTbnameTags(`${table}_s03`, tagBind3.getBind()); + c1.stmtBindSingleParamBatch(mBind.multiBindTimestamp(tsArr), 0); + c1.stmtBindSingleParamBatch(mBind.multiBindBool(boolArr), 1); + c1.stmtBindSingleParamBatch(mBind.multiBindTinyInt(tinyIntArr), 2); + c1.stmtBindSingleParamBatch(mBind.multiBindSmallInt(smallIntArr), 3); + c1.stmtBindSingleParamBatch(mBind.multiBindInt(intArr), 4); + c1.stmtBindSingleParamBatch(mBind.multiBindBigInt(bigIntArr), 5); + c1.stmtBindSingleParamBatch(mBind.multiBindFloat(floatArr), 6); + c1.stmtBindSingleParamBatch(mBind.multiBindDouble(doubleArr), 7); + c1.stmtBindSingleParamBatch(mBind.multiBindBinary(binaryArr), 8); + c1.stmtBindSingleParamBatch(mBind.multiBindNchar(ncharArr), 9); + c1.stmtBindSingleParamBatch(mBind.multiBindUTinyInt(uTinyIntArr), 10); + c1.stmtBindSingleParamBatch(mBind.multiBindUSmallInt(uSmallIntArr), 11); + c1.stmtBindSingleParamBatch(mBind.multiBindUInt(uIntArr), 12); + c1.stmtBindSingleParamBatch(mBind.multiBindUBigInt(uBigIntArr), 13); + c1.stmtAddBatch(); + c1.stmtExecute(); + c1.stmtClose(); + + let result = executeQuery(querySql); + let actualResData = result.resData; + let actualResFields = result.resFeilds; + + //assert result data length + expect(expectResData.length).toEqual(actualResData.length); + //assert result data + expectResData.forEach((item, index) => { + expect(item).toEqual(actualResData[index]); + }); + //assert result meta data + expectResField.forEach((item, index) => { + expect(item).toEqual(actualResFields[index]) + }) + }); +}) + +describe("stmt_bind_para_batch", () => { + test(`name:bindParamBatchWithOneTable;` + + `author:${author};` + + `desc:Using stmtBindParamBatch() bind one table in a batch;` + + `filename:${fileName};` + + `result:${result}`, () => 
{ + let table = 'bindparambatch_121';//bind one table to one batch + let createSql = `create table if not exists ${table} ` + + `(ts timestamp,` + + `bl bool,` + + `i8 tinyint,` + + `i16 smallint,` + + `i32 int,` + + `i64 bigint,` + + `f32 float,` + + `d64 double,` + + `bnr binary(20),` + + `nchr nchar(20),` + + `u8 tinyint unsigned,` + + `u16 smallint unsigned,` + + `u32 int unsigned,` + + `u64 bigint unsigned` + + `)tags(` + + `t_bl bool,` + + `t_i8 tinyint,` + + `t_i16 smallint,` + + `t_i32 int,` + + `t_i64 bigint,` + + `t_f32 float,` + + `t_d64 double,` + + `t_bnr binary(20),` + + `t_nchr nchar(20),` + + `t_u8 tinyint unsigned,` + + `t_u16 smallint unsigned,` + + `t_u32 int unsigned,` + + `t_u64 bigint unsigned` + + `);`; + let insertSql = `insert into ? using ${table} tags(?,?,?,?,?,?,?,?,?,?,?,?,?) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?);`; + let querySql = `select * from ${table}`; + let expectResField = getFieldArr(getFeildsFromDll(createSql)); + let expectResData = getResData(getBindData(), tagData1, 14); + + //prepare tag TAOS_BIND + let tagBind = new taos.TaosBind(14); + tagBind.bindBool(true); + tagBind.bindTinyInt(1); + tagBind.bindSmallInt(32767); + tagBind.bindInt(1234555); + tagBind.bindBigInt(-164243520000011111n); + tagBind.bindFloat(214.02); + tagBind.bindDouble(2.01); + tagBind.bindBinary('taosdata涛思数据'); + tagBind.bindNchar('TDengine数据'); + tagBind.bindUTinyInt(254); + tagBind.bindUSmallInt(65534); + tagBind.bindUInt(4294967290 / 2); + tagBind.bindUBigInt(164243520000011111n); + + //Prepare TAOS_MULTI_BIND data array + let mBinds = new taos.TaosMultiBindArr(14); + mBinds.multiBindTimestamp(tsArr); + mBinds.multiBindBool(boolArr); + mBinds.multiBindTinyInt(tinyIntArr); + mBinds.multiBindSmallInt(smallIntArr); + mBinds.multiBindInt(intArr); + mBinds.multiBindBigInt(bigIntArr); + mBinds.multiBindFloat(floatArr); + mBinds.multiBindDouble(doubleArr); + mBinds.multiBindBinary(binaryArr); + mBinds.multiBindNchar(ncharArr); + 
mBinds.multiBindUTinyInt(uTinyIntArr); + mBinds.multiBindUSmallInt(uSmallIntArr); + mBinds.multiBindUInt(uIntArr); + mBinds.multiBindUBigInt(uBigIntArr); + + executeUpdate(createSql); + c1.stmtInit(); + c1.stmtPrepare(insertSql); + c1.stmtSetTbnameTags(`${table}_s01`, tagBind.getBind()); + c1.stmtBindParamBatch(mBinds.getMultiBindArr()); + c1.stmtAddBatch(); + c1.stmtExecute(); + c1.stmtClose(); + + let result = executeQuery(querySql); + let actualResData = result.resData; + let actualResFields = result.resFeilds; + + //assert result data length + expect(expectResData.length).toEqual(actualResData.length); + //assert result data + expectResData.forEach((item, index) => { + expect(item).toEqual(actualResData[index]); + }); + + //assert result meta data + expectResField.forEach((item, index) => { + expect(item).toEqual(actualResFields[index]) + }) + }); + + test(`name:bindParamBatchWithMultiTable;` + + `author:${author};` + + `desc:Using stmtBindParamBatch() bind multiple tables in a batch;` + + `filename:${fileName};` + + `result:${result}`, () => { + let table = 'bindparambatch_m21';//bind multiple tables to one batch + let createSql = `create table if not exists ${table} ` + + `(ts timestamp,` + + `bl bool,` + + `i8 tinyint,` + + `i16 smallint,` + + `i32 int,` + + `i64 bigint,` + + `f32 float,` + + `d64 double,` + + `bnr binary(20),` + + `nchr nchar(20),` + + `u8 tinyint unsigned,` + + `u16 smallint unsigned,` + + `u32 int unsigned,` + + `u64 bigint unsigned` + + `)tags(` + + `t_bl bool,` + + `t_i8 tinyint,` + + `t_i16 smallint,` + + `t_i32 int,` + + `t_i64 bigint,` + + `t_f32 float,` + + `t_d64 double,` + + `t_bnr binary(20),` + + `t_nchr nchar(20),` + + `t_u8 tinyint unsigned,` + + `t_u16 smallint unsigned,` + + `t_u32 int unsigned,` + + `t_u64 bigint unsigned` + + `);`; + let insertSql = `insert into ? using ${table} tags(?,?,?,?,?,?,?,?,?,?,?,?,?) 
values (?,?,?,?,?,?,?,?,?,?,?,?,?,?);`; + let querySql = `select * from ${table}`; + let expectResField = getFieldArr(getFeildsFromDll(createSql)); + let expectResData = getResData(getBindData(), tagData1, 14).concat(getResData(getBindData(), tagData2, 14)).concat(getResData(getBindData(), tagData3, 14)); + + + // prepare tag TAOS_BIND + let tagBind1 = new taos.TaosBind(14); + tagBind1.bindBool(true); + tagBind1.bindTinyInt(1); + tagBind1.bindSmallInt(32767); + tagBind1.bindInt(1234555); + tagBind1.bindBigInt(-164243520000011111n); + tagBind1.bindFloat(214.02); + tagBind1.bindDouble(2.01); + tagBind1.bindBinary('taosdata涛思数据'); + tagBind1.bindNchar('TDengine数据'); + tagBind1.bindUTinyInt(254); + tagBind1.bindUSmallInt(65534); + tagBind1.bindUInt(4294967290 / 2); + tagBind1.bindUBigInt(164243520000011111n); + + let tagBind2 = new taos.TaosBind(14); + tagBind2.bindBool(true); + tagBind2.bindTinyInt(2); + tagBind2.bindSmallInt(32767); + tagBind2.bindInt(1234555); + tagBind2.bindBigInt(-164243520000011111n); + tagBind2.bindFloat(214.02); + tagBind2.bindDouble(2.01); + tagBind2.bindBinary('taosdata涛思数据'); + tagBind2.bindNchar('TDengine数据'); + tagBind2.bindUTinyInt(254); + tagBind2.bindUSmallInt(65534); + tagBind2.bindUInt(4294967290 / 2); + tagBind2.bindUBigInt(164243520000011111n); + + let tagBind3 = new taos.TaosBind(14); + tagBind3.bindBool(true); + tagBind3.bindTinyInt(3); + tagBind3.bindSmallInt(32767); + tagBind3.bindInt(1234555); + tagBind3.bindBigInt(-164243520000011111n); + tagBind3.bindFloat(214.02); + tagBind3.bindDouble(2.01); + tagBind3.bindBinary('taosdata涛思数据'); + tagBind3.bindNchar('TDengine数据'); + tagBind3.bindUTinyInt(254); + tagBind3.bindUSmallInt(65534); + tagBind3.bindUInt(4294967290 / 2); + tagBind3.bindUBigInt(164243520000011111n); + + //Prepare TAOS_MULTI_BIND data array + let mBinds = new taos.TaosMultiBindArr(14); + mBinds.multiBindTimestamp(tsArr); + mBinds.multiBindBool(boolArr); + mBinds.multiBindTinyInt(tinyIntArr); + 
mBinds.multiBindSmallInt(smallIntArr); + mBinds.multiBindInt(intArr); + mBinds.multiBindBigInt(bigIntArr); + mBinds.multiBindFloat(floatArr); + mBinds.multiBindDouble(doubleArr); + mBinds.multiBindBinary(binaryArr); + mBinds.multiBindNchar(ncharArr); + mBinds.multiBindUTinyInt(uTinyIntArr); + mBinds.multiBindUSmallInt(uSmallIntArr); + mBinds.multiBindUInt(uIntArr); + mBinds.multiBindUBigInt(uBigIntArr); + + executeUpdate(createSql); + c1.stmtInit(); + c1.stmtPrepare(insertSql); + // ===========bind for 1st table ========== + c1.stmtSetTbnameTags(`${table}_s01`, tagBind1.getBind()); + c1.stmtBindParamBatch(mBinds.getMultiBindArr()); + c1.stmtAddBatch(); + // c1.stmtExecute(); + + // ===========bind for 2nd table ========== + c1.stmtSetTbnameTags(`${table}_s02`, tagBind2.getBind()); + c1.stmtBindParamBatch(mBinds.getMultiBindArr()); + c1.stmtAddBatch(); + // c1.stmtExecute(); + + // ===========bind for 3rd table ========== + c1.stmtSetTbnameTags(`${table}_s03`, tagBind3.getBind()); + c1.stmtBindParamBatch(mBinds.getMultiBindArr()); + c1.stmtAddBatch(); + c1.stmtExecute(); + c1.stmtClose(); + + let result = executeQuery(querySql); + let actualResData = result.resData; + let actualResFields = result.resFeilds; + + //assert result data length + expect(expectResData.length).toEqual(actualResData.length); + //assert result data + expectResData.forEach((item, index) => { + expect(item).toEqual(actualResData[index]); + }); + + //assert result meta data + expectResField.forEach((item, index) => { + expect(item).toEqual(actualResFields[index]) + }) + + + }); +}) + +describe("stmt_bind_param", () => { + test(`name:bindParamWithOneTable;` + + `author:${author};` + + `desc:using stmtBindParam() bind one table in a batch;` + + `filename:${fileName};` + + `result:${result}`, () => { + let table = 'bindparam_121';//bind one table to one batch + let createSql = `create table if not exists ${table} ` + + `(ts timestamp,` + + `bl bool,` + + `i8 tinyint,` + + `i16 smallint,` + + `i32 
int,` + + `i64 bigint,` + + `f32 float,` + + `d64 double,` + + `bnr binary(20),` + + `nchr nchar(20),` + + `u8 tinyint unsigned,` + + `u16 smallint unsigned,` + + `u32 int unsigned,` + + `u64 bigint unsigned` + + `)tags(` + + `t_bl bool,` + + `t_i8 tinyint,` + + `t_i16 smallint,` + + `t_i32 int,` + + `t_i64 bigint,` + + `t_f32 float,` + + `t_d64 double,` + + `t_bnr binary(20),` + + `t_nchr nchar(20),` + + `t_u8 tinyint unsigned,` + + `t_u16 smallint unsigned,` + + `t_u32 int unsigned,` + + `t_u64 bigint unsigned` + + `);`; + let insertSql = `insert into ? using ${table} tags(?,?,?,?,?,?,?,?,?,?,?,?,?) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?);`; + let querySql = `select * from ${table}`; + let expectResField = getFieldArr(getFeildsFromDll(createSql)); + let data = getBindData(); + let expectResData = getResData(data, tagData1, 14); + + //prepare tag data + let tags = new taos.TaosBind(13); + tags.bindBool(true); + tags.bindTinyInt(1); + tags.bindSmallInt(32767); + tags.bindInt(1234555); + tags.bindBigInt(-164243520000011111n); + tags.bindFloat(214.02); + tags.bindDouble(2.01); + tags.bindBinary('taosdata涛思数据'); + tags.bindNchar('TDengine数据'); + tags.bindUTinyInt(254); + tags.bindUSmallInt(65534); + tags.bindUInt(4294967290 / 2); + tags.bindUBigInt(164243520000011111n); + executeUpdate(createSql); + + c1.stmtInit(); + c1.stmtPrepare(insertSql); + c1.stmtSetTbnameTags(`${table}_s01`, tags.getBind()); + for (let i = 0; i < data.length - 14; i += 14) { + let bind = new taos.TaosBind(14); + bind.bindTimestamp(data[i]); + bind.bindBool(data[i + 1]); + bind.bindTinyInt(data[i + 2]); + bind.bindSmallInt(data[i + 3]); + bind.bindInt(data[i + 4]); + bind.bindBigInt(data[i + 5]); + bind.bindFloat(data[i + 6]); + bind.bindDouble(data[i + 7]); + bind.bindBinary(data[i + 8]); + bind.bindNchar(data[i + 9]); + bind.bindUTinyInt(data[i + 10]); + bind.bindUSmallInt(data[i + 11]); + bind.bindUInt(data[i + 12]); + bind.bindUBigInt(data[i + 13]); + c1.stmtBindParam(bind.getBind()); + 
c1.stmtAddBatch(); + } + let bind2 = new taos.TaosBind(14); + bind2.bindTimestamp(data[14 * 4]); + for (let j = 0; j < 13; j++) { + bind2.bindNil(); + } + c1.stmtBindParam(bind2.getBind()); + c1.stmtAddBatch(); + c1.stmtExecute(); + c1.stmtClose(); + + let result = executeQuery(querySql); + let actualResData = result.resData; + let actualResFields = result.resFeilds; + + //assert result data length + expect(expectResData.length).toEqual(actualResData.length); + //assert result data + expectResData.forEach((item, index) => { + expect(item).toEqual(actualResData[index]); + }); + + //assert result meta data + expectResField.forEach((item, index) => { + expect(item).toEqual(actualResFields[index]) + }) + }); + + test(`name:bindParamWithMultiTable;` + + `author:${author};` + + `desc:using stmtBindParam() bind multiple tables in a batch;` + + `filename:${fileName};` + + `result:${result}`, () => { + let table = 'bindparam_m21';//bind multiple tables to one batch + let createSql = `create table if not exists ${table} ` + + `(ts timestamp,` + + `bl bool,` + + `i8 tinyint,` + + `i16 smallint,` + + `i32 int,` + + `i64 bigint,` + + `f32 float,` + + `d64 double,` + + `bnr binary(20),` + + `nchr nchar(20),` + + `u8 tinyint unsigned,` + + `u16 smallint unsigned,` + + `u32 int unsigned,` + + `u64 bigint unsigned` + + `)tags(` + + `t_bl bool,` + + `t_i8 tinyint,` + + `t_i16 smallint,` + + `t_i32 int,` + + `t_i64 bigint,` + + `t_f32 float,` + + `t_d64 double,` + + `t_bnr binary(20),` + + `t_nchr nchar(20),` + + `t_u8 tinyint unsigned,` + + `t_u16 smallint unsigned,` + + `t_u32 int unsigned,` + + `t_u64 bigint unsigned` + + `);`; + let insertSql = `insert into ? using ${table} tags(?,?,?,?,?,?,?,?,?,?,?,?,?) 
values (?,?,?,?,?,?,?,?,?,?,?,?,?,?);`; + let querySql = `select * from ${table}`; + let expectResField = getFieldArr(getFeildsFromDll(createSql)); + let data = getBindData(); + let expectResData = getResData(data, tagData1, 14).concat(getResData(data, tagData2, 14)).concat(getResData(data, tagData3, 14)); + + // prepare tag TAOS_BIND + let tagBind1 = new taos.TaosBind(14); + tagBind1.bindBool(true); + tagBind1.bindTinyInt(1); + tagBind1.bindSmallInt(32767); + tagBind1.bindInt(1234555); + tagBind1.bindBigInt(-164243520000011111n); + tagBind1.bindFloat(214.02); + tagBind1.bindDouble(2.01); + tagBind1.bindBinary('taosdata涛思数据'); + tagBind1.bindNchar('TDengine数据'); + tagBind1.bindUTinyInt(254); + tagBind1.bindUSmallInt(65534); + tagBind1.bindUInt(4294967290 / 2); + tagBind1.bindUBigInt(164243520000011111n); + + let tagBind2 = new taos.TaosBind(14); + tagBind2.bindBool(true); + tagBind2.bindTinyInt(2); + tagBind2.bindSmallInt(32767); + tagBind2.bindInt(1234555); + tagBind2.bindBigInt(-164243520000011111n); + tagBind2.bindFloat(214.02); + tagBind2.bindDouble(2.01); + tagBind2.bindBinary('taosdata涛思数据'); + tagBind2.bindNchar('TDengine数据'); + tagBind2.bindUTinyInt(254); + tagBind2.bindUSmallInt(65534); + tagBind2.bindUInt(4294967290 / 2); + tagBind2.bindUBigInt(164243520000011111n); + + let tagBind3 = new taos.TaosBind(14); + tagBind3.bindBool(true); + tagBind3.bindTinyInt(3); + tagBind3.bindSmallInt(32767); + tagBind3.bindInt(1234555); + tagBind3.bindBigInt(-164243520000011111n); + tagBind3.bindFloat(214.02); + tagBind3.bindDouble(2.01); + tagBind3.bindBinary('taosdata涛思数据'); + tagBind3.bindNchar('TDengine数据'); + tagBind3.bindUTinyInt(254); + tagBind3.bindUSmallInt(65534); + tagBind3.bindUInt(4294967290 / 2); + tagBind3.bindUBigInt(164243520000011111n); + + executeUpdate(createSql); + c1.stmtInit(); + c1.stmtPrepare(insertSql); + // ========= bind for 1st table ================= + c1.stmtSetTbnameTags(`${table}_s01`, tagBind1.getBind()); + for (let i = 0; i < data.length 
- 14; i += 14) { + let bind = new taos.TaosBind(14); + bind.bindTimestamp(data[i]); + bind.bindBool(data[i + 1]); + bind.bindTinyInt(data[i + 2]); + bind.bindSmallInt(data[i + 3]); + bind.bindInt(data[i + 4]); + bind.bindBigInt(data[i + 5]); + bind.bindFloat(data[i + 6]); + bind.bindDouble(data[i + 7]); + bind.bindBinary(data[i + 8]); + bind.bindNchar(data[i + 9]); + bind.bindUTinyInt(data[i + 10]); + bind.bindUSmallInt(data[i + 11]); + bind.bindUInt(data[i + 12]); + bind.bindUBigInt(data[i + 13]); + c1.stmtBindParam(bind.getBind()); + c1.stmtAddBatch(); + } + let bind2 = new taos.TaosBind(14); + bind2.bindTimestamp(data[14 * 4]); + for (let j = 0; j < 13; j++) { + bind2.bindNil(); + } + c1.stmtBindParam(bind2.getBind()); + c1.stmtAddBatch(); + // c1.stmtExecute(); + + // ========= bind for 2nd table ================= + c1.stmtSetTbnameTags(`${table}_s02`, tagBind2.getBind()); + for (let i = 0; i < data.length - 14; i += 14) { + let bind = new taos.TaosBind(14); + bind.bindTimestamp(data[i]); + bind.bindBool(data[i + 1]); + bind.bindTinyInt(data[i + 2]); + bind.bindSmallInt(data[i + 3]); + bind.bindInt(data[i + 4]); + bind.bindBigInt(data[i + 5]); + bind.bindFloat(data[i + 6]); + bind.bindDouble(data[i + 7]); + bind.bindBinary(data[i + 8]); + bind.bindNchar(data[i + 9]); + bind.bindUTinyInt(data[i + 10]); + bind.bindUSmallInt(data[i + 11]); + bind.bindUInt(data[i + 12]); + bind.bindUBigInt(data[i + 13]); + c1.stmtBindParam(bind.getBind()); + c1.stmtAddBatch(); + } + c1.stmtBindParam(bind2.getBind()); + c1.stmtAddBatch(); + // c1.stmtExecute(); + + // ========= bind for 3rd table ================= + c1.stmtSetTbnameTags(`${table}_s03`, tagBind3.getBind()); + for (let i = 0; i < data.length - 14; i += 14) { + let bind = new taos.TaosBind(14); + bind.bindTimestamp(data[i]); + bind.bindBool(data[i + 1]); + bind.bindTinyInt(data[i + 2]); + bind.bindSmallInt(data[i + 3]); + bind.bindInt(data[i + 4]); + bind.bindBigInt(data[i + 5]); + bind.bindFloat(data[i + 6]); + 
bind.bindDouble(data[i + 7]); + bind.bindBinary(data[i + 8]); + bind.bindNchar(data[i + 9]); + bind.bindUTinyInt(data[i + 10]); + bind.bindUSmallInt(data[i + 11]); + bind.bindUInt(data[i + 12]); + bind.bindUBigInt(data[i + 13]); + c1.stmtBindParam(bind.getBind()); + c1.stmtAddBatch(); + } + c1.stmtBindParam(bind2.getBind()); + c1.stmtAddBatch(); + c1.stmtExecute(); + c1.stmtClose(); + + let result = executeQuery(querySql); + let actualResData = result.resData; + let actualResFields = result.resFeilds; + + //assert result data length + expect(expectResData.length).toEqual(actualResData.length); + //assert result data + expectResData.forEach((item, index) => { + expect(item).toEqual(actualResData[index]); + }); + + //assert result meta data + expectResField.forEach((item, index) => { + expect(item).toEqual(actualResFields[index]) + }) + }); +}) + diff --git a/src/inc/taosdef.h b/src/inc/taosdef.h index 16607d27024763b1c650828198d0d7faa67c421e..8f31e1860fe2a1fa08b3eb65466a0f23b656f95d 100644 --- a/src/inc/taosdef.h +++ b/src/inc/taosdef.h @@ -86,7 +86,9 @@ extern const int32_t TYPE_BYTES[16]; #define TSDB_DEFAULT_USER "root" #define TSDB_DEFAULT_PASS "taosdata" -#define SHELL_MAX_PASSWORD_LEN 20 +#define TSDB_PASS_LEN 129 + +#define SHELL_MAX_PASSWORD_LEN TSDB_PASS_LEN #define TSDB_TRUE 1 #define TSDB_FALSE 0 #define TSDB_OK 0 diff --git a/src/inc/taoserror.h b/src/inc/taoserror.h index 64065d0b4672a36c0510242cf9d52830aeccc67b..54c34f5d8abde6575c0aa1e41df910ef7043c4b5 100644 --- a/src/inc/taoserror.h +++ b/src/inc/taoserror.h @@ -293,7 +293,7 @@ int32_t* taosGetErrno(); #define TSDB_CODE_QRY_SYS_ERROR TAOS_DEF_ERROR_CODE(0, 0x070D) //"System error") #define TSDB_CODE_QRY_INVALID_TIME_CONDITION TAOS_DEF_ERROR_CODE(0, 0x070E) //"invalid time condition") #define TSDB_CODE_QRY_INVALID_SCHEMA_VERSION TAOS_DEF_ERROR_CODE(0, 0x0710) //"invalid schema version") -#define TSDB_CODE_QRY_UNIQUE_RESULT_TOO_LARGE TAOS_DEF_ERROR_CODE(0, 0x0711) //"unique result num is too large") 
+#define TSDB_CODE_QRY_RESULT_TOO_LARGE TAOS_DEF_ERROR_CODE(0, 0x0711) //"result num is too large") // grant #define TSDB_CODE_GRANT_EXPIRED TAOS_DEF_ERROR_CODE(0, 0x0800) //"License expired" diff --git a/src/inc/taosmsg.h b/src/inc/taosmsg.h index e5c390f9191f1aad622a9b8787d4643791c2a870..2ef646d734caa3a17fb6b61417e7658afb8b7db8 100644 --- a/src/inc/taosmsg.h +++ b/src/inc/taosmsg.h @@ -372,7 +372,7 @@ typedef struct { typedef struct { int8_t extend; char user[TSDB_USER_LEN]; - char pass[TSDB_KEY_LEN]; + char pass[TSDB_PASS_LEN]; SAcctCfg cfg; } SCreateAcctMsg, SAlterAcctMsg; @@ -384,7 +384,7 @@ typedef struct { typedef struct { int8_t extend; char user[TSDB_USER_LEN]; - char pass[TSDB_KEY_LEN]; + char pass[TSDB_PASS_LEN]; int8_t privilege; int8_t flag; } SCreateUserMsg, SAlterUserMsg; diff --git a/src/kit/shell/src/shellDarwin.c b/src/kit/shell/src/shellDarwin.c index a1413be1ce4ce6f67516fc09121115f30bbc56f0..5a33e182c8473f92f6ac0271b78a9f9c00768a50 100644 --- a/src/kit/shell/src/shellDarwin.c +++ b/src/kit/shell/src/shellDarwin.c @@ -89,7 +89,7 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) { || (strncmp(argv[i], "--password", 10) == 0)) { printf("Enter password: "); taosSetConsoleEcho(false); - if (scanf("%s", g_password) > 1) { + if (scanf("%128s", g_password) > 1) { fprintf(stderr, "password read error\n"); } taosSetConsoleEcho(true); diff --git a/src/kit/shell/src/shellLinux.c b/src/kit/shell/src/shellLinux.c index 93783b205560604c9d25c9f5dc2e73a239a67b8e..a78590aab8351451585204e0540998082a54a9a7 100644 --- a/src/kit/shell/src/shellLinux.c +++ b/src/kit/shell/src/shellLinux.c @@ -186,7 +186,7 @@ static void parse_args( || (strncmp(argv[i], "--password", 10) == 0)) { printf("Enter password: "); taosSetConsoleEcho(false); - if (scanf("%20s", g_password) > 1) { + if (scanf("%128s", g_password) > 1) { fprintf(stderr, "password reading error\n"); } taosSetConsoleEcho(true); diff --git a/src/kit/shell/src/shellWindows.c 
b/src/kit/shell/src/shellWindows.c index 131bce04a797f7c1ad7173b6655f1e41dac8d4ec..a685498617530322e2e5f7de182c5329c5c66614 100644 --- a/src/kit/shell/src/shellWindows.c +++ b/src/kit/shell/src/shellWindows.c @@ -93,7 +93,7 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) { || (strncmp(argv[i], "--password", 10) == 0)) { printf("Enter password: "); taosSetConsoleEcho(false); - if (scanf("%s", g_password) > 1) { + if (scanf("%128s", g_password) > 1) { fprintf(stderr, "password read error!\n"); } taosSetConsoleEcho(true); diff --git a/src/kit/taos-tools b/src/kit/taos-tools index ca4a90027ddfd5faa858a676e695ddcdd56ef2b5..470ac8486f8887c2fb2570181f3d70d869eecbb7 160000 --- a/src/kit/taos-tools +++ b/src/kit/taos-tools @@ -1 +1 @@ -Subproject commit ca4a90027ddfd5faa858a676e695ddcdd56ef2b5 +Subproject commit 470ac8486f8887c2fb2570181f3d70d869eecbb7 diff --git a/src/mnode/inc/mnodeDnode.h b/src/mnode/inc/mnodeDnode.h index 10f79582d4a492622e1cc425d69aaa95966846d7..9d4fc4bb1ffe6f7e9c393de265fc0304de2cf24e 100644 --- a/src/mnode/inc/mnodeDnode.h +++ b/src/mnode/inc/mnodeDnode.h @@ -67,6 +67,7 @@ void mnodeCleanupDnodes(); int32_t mnodeGetDnodesNum(); int32_t mnodeGetOnlinDnodesCpuCoreNum(); int32_t mnodeGetOnlineDnodesNum(); +int32_t mnodeGetVnodeDnodesNum(); void mnodeGetOnlineAndTotalDnodesNum(int32_t *onlineNum, int32_t *totalNum); void * mnodeGetNextDnode(void *pIter, SDnodeObj **pDnode); void mnodeCancelGetNextDnode(void *pIter); diff --git a/src/mnode/src/mnodeDb.c b/src/mnode/src/mnodeDb.c index 13afd1c29894fea07e5c269eee53b36a386ee442..cc0b18c9b7047a718f6244cbe20ce76789a21a4c 100644 --- a/src/mnode/src/mnodeDb.c +++ b/src/mnode/src/mnodeDb.c @@ -339,8 +339,8 @@ static int32_t mnodeCheckDbCfg(SDbCfg *pCfg) { return TSDB_CODE_MND_INVALID_DB_OPTION; } - if (pCfg->replications > mnodeGetDnodesNum()) { - mError("no enough dnode to config replica: %d, #dnodes: %d", pCfg->replications, mnodeGetDnodesNum()); + if (pCfg->replications > 
mnodeGetVnodeDnodesNum()) { + mError("no enough dnode to config replica: %d, #dnodes: %d", pCfg->replications, mnodeGetVnodeDnodesNum()); return TSDB_CODE_MND_INVALID_DB_OPTION; } @@ -1057,7 +1057,7 @@ static SDbCfg mnodeGetAlterDbOption(SDbObj *pDb, SAlterDbMsg *pAlter) { terrno = TSDB_CODE_MND_INVALID_DB_OPTION; } - if (replications > mnodeGetDnodesNum()) { + if (replications > mnodeGetVnodeDnodesNum()) { mError("db:%s, no enough dnode to change replica:%d", pDb->name, replications); terrno = TSDB_CODE_MND_NO_ENOUGH_DNODES; } diff --git a/src/mnode/src/mnodeDnode.c b/src/mnode/src/mnodeDnode.c index 58e9f8b749b3df1f58fbd3e67f29dacb379ca0bc..c9ea2f1af4be64d5301e77e44f30d2f7f7833dd3 100644 --- a/src/mnode/src/mnodeDnode.c +++ b/src/mnode/src/mnodeDnode.c @@ -271,6 +271,21 @@ int32_t mnodeGetOnlineDnodesNum() { return onlineDnodes; } +int32_t mnodeGetVnodeDnodesNum() { + SDnodeObj *pDnode = NULL; + void * pIter = NULL; + int32_t numOfDnodes = 0; + + while (1) { + pIter = mnodeGetNextDnode(pIter, &pDnode); + if (pDnode == NULL) break; + if (pDnode->alternativeRole != TAOS_DN_ALTERNATIVE_ROLE_MNODE) numOfDnodes++; + mnodeDecDnodeRef(pDnode); + } + + return numOfDnodes; +} + void mnodeGetOnlineAndTotalDnodesNum(int32_t *onlineNum, int32_t *totalNum) { SDnodeObj *pDnode = NULL; void * pIter = NULL; diff --git a/src/mnode/src/mnodeUser.c b/src/mnode/src/mnodeUser.c index b3e3ba6cd9698b08aceb86841bd858a7c6f05220..2ebe644de189ec459383122837cd82a4d37643bb 100644 --- a/src/mnode/src/mnodeUser.c +++ b/src/mnode/src/mnodeUser.c @@ -625,11 +625,18 @@ int32_t mnodeRetriveAuth(char *user, char *spi, char *encrypt, char *secret, cha mError("user:%s, failed to auth user, reason:%s", user, tstrerror(TSDB_CODE_MND_INVALID_USER)); return TSDB_CODE_MND_INVALID_USER; } else { + if (pUser->superAuth) { + SAcctObj *pAcct = mnodeGetAcct(user); + memcpy(secret, pAcct->pass, TSDB_KEY_LEN); + mnodeDecAcctRef(pAcct); + } else { + memcpy(secret, pUser->pass, TSDB_KEY_LEN); + } + *spi = 1; 
*encrypt = 0; *ckey = 0; - memcpy(secret, pUser->pass, TSDB_KEY_LEN); mnodeDecUserRef(pUser); mDebug("user:%s, auth info is returned", user); return TSDB_CODE_SUCCESS; diff --git a/src/query/inc/qAggMain.h b/src/query/inc/qAggMain.h index aa5e2abd803d611be005115ba387a45e1138ed56..ab506b7061a5091b642ce5d2598df3afbd81a578 100644 --- a/src/query/inc/qAggMain.h +++ b/src/query/inc/qAggMain.h @@ -79,8 +79,9 @@ extern "C" { #define TSDB_FUNC_ELAPSED 37 #define TSDB_FUNC_HISTOGRAM 38 #define TSDB_FUNC_UNIQUE 39 +#define TSDB_FUNC_MODE 40 -#define TSDB_FUNC_MAX_NUM 40 +#define TSDB_FUNC_MAX_NUM 41 #define TSDB_FUNCSTATE_SO 0x1u // single output #define TSDB_FUNCSTATE_MO 0x2u // dynamic number of output, not multinumber of output e.g., TOP/BOTTOM @@ -148,7 +149,7 @@ typedef struct SResultRowCellInfo { int8_t hasResult; // result generated, not NULL value bool initialized; // output buffer has been initialized bool complete; // query has completed - uint32_t numOfRes; // num of output result in current buffer + int32_t numOfRes; // num of output result in current buffer } SResultRowCellInfo; typedef struct SPoint1 { @@ -203,6 +204,7 @@ typedef struct SQLFunctionCtx { SPoint1 end; SHashObj **pUniqueSet; // for unique function + SHashObj **pModeSet; // for mode function } SQLFunctionCtx; typedef struct SAggFunctionInfo { diff --git a/src/query/inc/qExecutor.h b/src/query/inc/qExecutor.h index c4aebc07b15749da343b4d0175812ca6e4211021..23c67793fe237e8070672856d9c195b62f2f9683 100644 --- a/src/query/inc/qExecutor.h +++ b/src/query/inc/qExecutor.h @@ -91,6 +91,7 @@ typedef struct SResultRow { STimeWindow win; char *key; // start key of current result row SHashObj *uniqueHash; // for unique function + SHashObj *modeHash; // for mode function } SResultRow; typedef struct SResultRowCell { diff --git a/src/query/inc/qResultbuf.h b/src/query/inc/qResultbuf.h index d4194168e565fd8e1202985d3597ace56326e92e..c8779f8130ff9bfb63ae60fd823d2ae01529c092 100644 --- 
a/src/query/inc/qResultbuf.h +++ b/src/query/inc/qResultbuf.h @@ -80,6 +80,8 @@ typedef struct SDiskbasedResultBuf { #define PAGE_INFO_INITIALIZER (SPageDiskInfo){-1, -1} #define MAX_UNIQUE_RESULT_ROWS (1000) #define MAX_UNIQUE_RESULT_SIZE (1024*1024*1) +#define MAX_MODE_INNER_RESULT_ROWS (1000000) +#define MAX_MODE_INNER_RESULT_SIZE (1024*1024*10) /** * create disk-based result buffer * @param pResultBuf diff --git a/src/query/src/qAggMain.c b/src/query/src/qAggMain.c index b294c0482f0d2002cca7255f572d527ec21b543b..d85896f1a59b004a8cd93d9b28a9cb99161c123f 100644 --- a/src/query/src/qAggMain.c +++ b/src/query/src/qAggMain.c @@ -233,6 +233,16 @@ typedef struct { char res[]; } SUniqueFuncInfo; +typedef struct { + int64_t count; + char data[]; +} ModeUnit; + +typedef struct { + int32_t num; + char res[]; +} SModeFuncInfo; + int32_t getResultDataInfo(int32_t dataType, int32_t dataBytes, int32_t functionId, int32_t param, int16_t *type, int32_t *bytes, int32_t *interBytes, int16_t extLength, bool isSuperTable, SUdfInfo* pUdfInfo) { if (!isValidDataType(dataType)) { @@ -369,13 +379,25 @@ int32_t getResultDataInfo(int32_t dataType, int32_t dataBytes, int32_t functionI int64_t size = sizeof(UniqueUnit) + dataBytes + extLength; size *= param; size += sizeof(SUniqueFuncInfo); - if (size > MAX_UNIQUE_RESULT_SIZE){ + if (size > MAX_UNIQUE_RESULT_SIZE) { size = MAX_UNIQUE_RESULT_SIZE; } - *bytes = size; + *bytes = (int32_t)size; *interBytes = *bytes; return TSDB_CODE_SUCCESS; + } else if (functionId == TSDB_FUNC_MODE) { + *type = TSDB_DATA_TYPE_BINARY; + int64_t size = sizeof(ModeUnit) + dataBytes; + size *= MAX_MODE_INNER_RESULT_ROWS; + size += sizeof(SModeFuncInfo); + if (size > MAX_MODE_INNER_RESULT_SIZE){ + size = MAX_MODE_INNER_RESULT_SIZE; + } + *bytes = (int32_t)size; + *interBytes = *bytes; + + return TSDB_CODE_SUCCESS; } else if (functionId == TSDB_FUNC_SAMPLE) { *type = TSDB_DATA_TYPE_BINARY; *bytes = (sizeof(SSampleFuncInfo) + dataBytes*param + sizeof(int64_t)*param 
+ extLength*param); @@ -513,7 +535,18 @@ int32_t getResultDataInfo(int32_t dataType, int32_t dataBytes, int32_t functionI size = MAX_UNIQUE_RESULT_SIZE; } *interBytes = (int32_t)size; - } else if (functionId == TSDB_FUNC_SAMPLE) { + } else if(functionId == TSDB_FUNC_MODE) { + *type = (int16_t)dataType; + *bytes = dataBytes; + int64_t size = sizeof(ModeUnit) + dataBytes; + size *= MAX_MODE_INNER_RESULT_ROWS; + size += sizeof(SModeFuncInfo); + if (size > MAX_MODE_INNER_RESULT_SIZE){ + size = MAX_MODE_INNER_RESULT_SIZE; + } + *interBytes = (int32_t)size; + return TSDB_CODE_SUCCESS; + }else if (functionId == TSDB_FUNC_SAMPLE) { *type = (int16_t)dataType; *bytes = dataBytes; size_t size = sizeof(SSampleFuncInfo) + dataBytes*param + sizeof(int64_t)*param + extLength*param; @@ -2245,20 +2278,12 @@ static void copyTopBotRes(SQLFunctionCtx *pCtx, int32_t type) { tfree(pData); } -/* - * Parameters values: - * 1. param[0]: maximum allowable results - * 2. param[1]: order by type (time or value) - * 3. 
param[2]: asc/desc order - * - * top/bottom use the intermediate result buffer to keep the intermediate result - */ -static STopBotInfo *getTopBotOutputInfo(SQLFunctionCtx *pCtx) { +static void *getOutputInfo(SQLFunctionCtx *pCtx) { SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx); // only the first_stage_merge is directly written data into final output buffer if (pCtx->stableQuery && pCtx->currentStage != MERGE_STAGE) { - return (STopBotInfo*) pCtx->pOutput; + return pCtx->pOutput; } else { // during normal table query and super table at the secondary_stage, result is written to intermediate buffer return GET_ROWCELL_INTERBUF(pResInfo); } @@ -2291,7 +2316,7 @@ bool topbot_datablock_filter(SQLFunctionCtx *pCtx, const char *minval, const cha return true; } - STopBotInfo *pTopBotInfo = getTopBotOutputInfo(pCtx); + STopBotInfo *pTopBotInfo = getOutputInfo(pCtx); // required number of results are not reached, continue load data block if (pTopBotInfo->num < pCtx->param[0].i64) { @@ -2346,7 +2371,7 @@ static bool top_bottom_function_setup(SQLFunctionCtx *pCtx, SResultRowCellInfo* return false; } - STopBotInfo *pInfo = getTopBotOutputInfo(pCtx); + STopBotInfo *pInfo = getOutputInfo(pCtx); buildTopBotStruct(pInfo, pCtx); return true; } @@ -2354,7 +2379,7 @@ static bool top_bottom_function_setup(SQLFunctionCtx *pCtx, SResultRowCellInfo* static void top_function(SQLFunctionCtx *pCtx) { int32_t notNullElems = 0; - STopBotInfo *pRes = getTopBotOutputInfo(pCtx); + STopBotInfo *pRes = getOutputInfo(pCtx); assert(pRes->num >= 0); if ((void *)pRes->res[0] != (void *)((char *)pRes + sizeof(STopBotInfo) + POINTER_BYTES * pCtx->param[0].i64)) { @@ -2393,7 +2418,7 @@ static void top_func_merge(SQLFunctionCtx *pCtx) { // construct the input data struct from binary data buildTopBotStruct(pInput, pCtx); - STopBotInfo *pOutput = getTopBotOutputInfo(pCtx); + STopBotInfo *pOutput = getOutputInfo(pCtx); // the intermediate result is binary, we only use the output data type for (int32_t i = 
0; i < pInput->num; ++i) { @@ -2413,7 +2438,7 @@ static void top_func_merge(SQLFunctionCtx *pCtx) { static void bottom_function(SQLFunctionCtx *pCtx) { int32_t notNullElems = 0; - STopBotInfo *pRes = getTopBotOutputInfo(pCtx); + STopBotInfo *pRes = getOutputInfo(pCtx); if ((void *)pRes->res[0] != (void *)((char *)pRes + sizeof(STopBotInfo) + POINTER_BYTES * pCtx->param[0].i64)) { buildTopBotStruct(pRes, pCtx); @@ -2450,7 +2475,7 @@ static void bottom_func_merge(SQLFunctionCtx *pCtx) { // construct the input data struct from binary data buildTopBotStruct(pInput, pCtx); - STopBotInfo *pOutput = getTopBotOutputInfo(pCtx); + STopBotInfo *pOutput = getOutputInfo(pCtx); // the intermediate result is binary, we only use the output data type for (int32_t i = 0; i < pInput->num; ++i) { @@ -2619,18 +2644,6 @@ static void buildHistogramInfo(SAPercentileInfo* pInfo) { pInfo->pHisto->elems = (SHistBin*) ((char*)pInfo->pHisto + sizeof(SHistogramInfo)); } -static SAPercentileInfo *getAPerctInfo(SQLFunctionCtx *pCtx) { - SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx); - SAPercentileInfo* pInfo = NULL; - - if (pCtx->stableQuery && pCtx->currentStage != MERGE_STAGE) { - pInfo = (SAPercentileInfo*) pCtx->pOutput; - } else { - pInfo = GET_ROWCELL_INTERBUF(pResInfo); - } - return pInfo; -} - // // ----------------- tdigest ------------------- // @@ -2642,7 +2655,7 @@ static bool tdigest_setup(SQLFunctionCtx *pCtx, SResultRowCellInfo *pResultInfo) } // new TDigest - SAPercentileInfo *pInfo = getAPerctInfo(pCtx); + SAPercentileInfo *pInfo = getOutputInfo(pCtx); char *tmp = (char *)pInfo + sizeof(SAPercentileInfo); pInfo->pTDigest = tdigestNewFrom(tmp, COMPRESSION); return true; @@ -2652,7 +2665,7 @@ static void tdigest_do(SQLFunctionCtx *pCtx) { int32_t notNullElems = 0; SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx); - SAPercentileInfo * pAPerc = getAPerctInfo(pCtx); + SAPercentileInfo * pAPerc = getOutputInfo(pCtx); assert(pAPerc->pTDigest != NULL); if(pAPerc->pTDigest == NULL) { 
@@ -2694,7 +2707,7 @@ static void tdigest_merge(SQLFunctionCtx *pCtx) { return ; } - SAPercentileInfo *pOutput = getAPerctInfo(pCtx); + SAPercentileInfo *pOutput = getOutputInfo(pCtx); if(pOutput->pTDigest->num_centroids == 0) { memcpy(pOutput->pTDigest, pInput->pTDigest, (size_t)TDIGEST_SIZE(COMPRESSION)); tdigestAutoFill(pOutput->pTDigest, COMPRESSION); @@ -2711,7 +2724,7 @@ static void tdigest_finalizer(SQLFunctionCtx *pCtx) { double q = (pCtx->param[0].nType == TSDB_DATA_TYPE_INT) ? pCtx->param[0].i64 : pCtx->param[0].dKey; SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx); - SAPercentileInfo * pAPerc = getAPerctInfo(pCtx); + SAPercentileInfo * pAPerc = getOutputInfo(pCtx); if (pCtx->currentStage == MERGE_STAGE) { if (pResInfo->hasResult == DATA_SET_FLAG) { // check for null @@ -2755,7 +2768,7 @@ static bool apercentile_function_setup(SQLFunctionCtx *pCtx, SResultRowCellInfo* return false; } - SAPercentileInfo *pInfo = getAPerctInfo(pCtx); + SAPercentileInfo *pInfo = getOutputInfo(pCtx); buildHistogramInfo(pInfo); char *tmp = (char *)pInfo + sizeof(SAPercentileInfo); @@ -2772,7 +2785,7 @@ static void apercentile_function(SQLFunctionCtx *pCtx) { int32_t notNullElems = 0; SResultRowCellInfo * pResInfo = GET_RES_INFO(pCtx); - SAPercentileInfo *pInfo = getAPerctInfo(pCtx); + SAPercentileInfo *pInfo = getOutputInfo(pCtx); buildHistogramInfo(pInfo); assert(pInfo->pHisto->elems != NULL); @@ -2816,7 +2829,7 @@ static void apercentile_func_merge(SQLFunctionCtx *pCtx) { return; } - SAPercentileInfo *pOutput = getAPerctInfo(pCtx); + SAPercentileInfo *pOutput = getOutputInfo(pCtx); buildHistogramInfo(pOutput); SHistogramInfo *pHisto = pOutput->pHisto; @@ -3045,12 +3058,18 @@ static void col_project_function(SQLFunctionCtx *pCtx) { char *pData = GET_INPUT_DATA_LIST(pCtx); if (pCtx->order == TSDB_ORDER_ASC) { + // ASC int32_t numOfRows = (pCtx->param[0].i64 == 1)? 
1:pCtx->size; memcpy(pCtx->pOutput, pData, (size_t) numOfRows * pCtx->inputBytes); } else { + // DESC for(int32_t i = 0; i < pCtx->size; ++i) { - memcpy(pCtx->pOutput + (pCtx->size - 1 - i) * pCtx->inputBytes, pData + i * pCtx->inputBytes, - pCtx->inputBytes); + char* dst = pCtx->pOutput + (pCtx->size - 1 - i) * pCtx->inputBytes; + char* src = pData + i * pCtx->inputBytes; + if (IS_VAR_DATA_TYPE(pCtx->inputType)) + varDataCopy(dst, src); + else + memcpy(dst, src, pCtx->inputBytes); } } } @@ -4710,17 +4729,6 @@ static void mavg_function(SQLFunctionCtx *pCtx) { ////////////////////////////////////////////////////////////////////////////////// // Sample function with reservoir sampling algorithm -static SSampleFuncInfo* getSampleFuncOutputInfo(SQLFunctionCtx *pCtx) { - SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx); - - // only the first_stage stable is directly written data into final output buffer - if (pCtx->stableQuery && pCtx->currentStage != MERGE_STAGE) { - return (SSampleFuncInfo *) pCtx->pOutput; - } else { // during normal table query and super table at the secondary_stage, result is written to intermediate buffer - return GET_ROWCELL_INTERBUF(pResInfo); - } -} - static void assignResultSample(SQLFunctionCtx *pCtx, SSampleFuncInfo *pInfo, int32_t index, int64_t ts, void *pData, uint16_t type, int16_t bytes, char *inputTags) { assignVal(pInfo->values + index*bytes, pData, bytes, type); *(pInfo->timeStamps + index) = ts; @@ -4800,7 +4808,7 @@ static bool sample_function_setup(SQLFunctionCtx *pCtx, SResultRowCellInfo* pRes srand(taosSafeRand()); - SSampleFuncInfo *pRes = getSampleFuncOutputInfo(pCtx); + SSampleFuncInfo *pRes = getOutputInfo(pCtx); pRes->totalPoints = 0; pRes->numSampled = 0; pRes->values = ((char*)pRes + sizeof(SSampleFuncInfo)); @@ -4814,7 +4822,7 @@ static void sample_function(SQLFunctionCtx *pCtx) { int32_t notNullElems = 0; SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx); - SSampleFuncInfo *pRes = getSampleFuncOutputInfo(pCtx); + 
SSampleFuncInfo *pRes = getOutputInfo(pCtx); if (pRes->values != ((char*)pRes + sizeof(SSampleFuncInfo))) { pRes->values = ((char*)pRes + sizeof(SSampleFuncInfo)); @@ -4852,7 +4860,7 @@ static void sample_func_merge(SQLFunctionCtx *pCtx) { pInput->timeStamps = (int64_t*)((char*)pInput->values + pInput->colBytes * pCtx->param[0].i64); pInput->taglists = (char*)pInput->timeStamps + sizeof(int64_t)*pCtx->param[0].i64; - SSampleFuncInfo *pOutput = getSampleFuncOutputInfo(pCtx); + SSampleFuncInfo *pOutput = getOutputInfo(pCtx); pOutput->totalPoints = pInput->totalPoints; pOutput->numSampled = pInput->numSampled; for (int32_t i = 0; i < pInput->numSampled; ++i) { @@ -4886,20 +4894,12 @@ static void sample_func_finalizer(SQLFunctionCtx *pCtx) { ////////////////////////////////////////////////////////////////////////////////// // elapsed function -static SElapsedInfo * getSElapsedInfo(SQLFunctionCtx *pCtx) { - if (pCtx->stableQuery && pCtx->currentStage != MERGE_STAGE) { - return (SElapsedInfo *)pCtx->pOutput; - } else { - return GET_ROWCELL_INTERBUF(GET_RES_INFO(pCtx)); - } -} - static bool elapsedSetup(SQLFunctionCtx *pCtx, SResultRowCellInfo* pResInfo) { if (!function_setup(pCtx, pResInfo)) { return false; } - SElapsedInfo *pInfo = getSElapsedInfo(pCtx); + SElapsedInfo *pInfo = getOutputInfo(pCtx); pInfo->min = MAX_TS_KEY; pInfo->max = 0; pInfo->hasResult = 0; @@ -4912,7 +4912,7 @@ static int32_t elapsedRequired(SQLFunctionCtx *pCtx, STimeWindow* w, int32_t col } static void elapsedFunction(SQLFunctionCtx *pCtx) { - SElapsedInfo *pInfo = getSElapsedInfo(pCtx); + SElapsedInfo *pInfo = getOutputInfo(pCtx); if (pCtx->preAggVals.isSet) { if (pInfo->min == MAX_TS_KEY) { pInfo->min = pCtx->preAggVals.statis.min; @@ -4979,7 +4979,7 @@ elapsedOver: } static void elapsedMerge(SQLFunctionCtx *pCtx) { - SElapsedInfo *pInfo = getSElapsedInfo(pCtx); + SElapsedInfo *pInfo = getOutputInfo(pCtx); memcpy(pInfo, pCtx->pInput, (size_t)pCtx->inputBytes); GET_RES_INFO(pCtx)->hasResult = 
pInfo->hasResult; } @@ -5002,25 +5002,12 @@ static void elapsedFinalizer(SQLFunctionCtx *pCtx) { } ////////////////////////////////////////////////////////////////////////////////// -// histogram function -static SHistogramFuncInfo* getHistogramFuncOutputInfo(SQLFunctionCtx *pCtx) { - SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx); - - // only the first_stage stable is directly written data into final output buffer - if (pCtx->stableQuery && pCtx->currentStage != MERGE_STAGE) { - return (SHistogramFuncInfo *) pCtx->pOutput; - } else { // during normal table query and super table at the secondary_stage, result is written to intermediate buffer - return GET_ROWCELL_INTERBUF(pResInfo); - } -} - - static bool histogram_function_setup(SQLFunctionCtx *pCtx, SResultRowCellInfo* pResInfo) { if (!function_setup(pCtx, pResInfo)) { return false; } - SHistogramFuncInfo *pRes = getHistogramFuncOutputInfo(pCtx); + SHistogramFuncInfo *pRes = getOutputInfo(pCtx); if (!pRes) { return false; } @@ -5044,7 +5031,7 @@ static bool histogram_function_setup(SQLFunctionCtx *pCtx, SResultRowCellInfo* p static void histogram_function(SQLFunctionCtx *pCtx) { SResultRowCellInfo* pResInfo = GET_RES_INFO(pCtx); - SHistogramFuncInfo* pRes = getHistogramFuncOutputInfo(pCtx); + SHistogramFuncInfo* pRes = getOutputInfo(pCtx); if (pRes->orderedBins != (SHistogramFuncBin*)((char*)pRes + sizeof(SHistogramFuncInfo))) { pRes->orderedBins = (SHistogramFuncBin*)((char*)pRes + sizeof(SHistogramFuncInfo)); @@ -5092,7 +5079,7 @@ static void histogram_func_merge(SQLFunctionCtx *pCtx) { SHistogramFuncInfo* pInput = (SHistogramFuncInfo*) GET_INPUT_DATA_LIST(pCtx); pInput->orderedBins = (SHistogramFuncBin*)((char*)pInput + sizeof(SHistogramFuncInfo)); - SHistogramFuncInfo* pRes = getHistogramFuncOutputInfo(pCtx); + SHistogramFuncInfo* pRes = getOutputInfo(pCtx); for (int32_t i = 0; i < pInput->numOfBins; ++i) { pRes->orderedBins[i].count += pInput->orderedBins[i].count; } @@ -5129,18 +5116,6 @@ static void 
histogram_func_finalizer(SQLFunctionCtx *pCtx) { doFinalizer(pCtx); } -// unique use the intermediate result buffer to keep the intermediate result -static SUniqueFuncInfo *getUniqueOutputInfo(SQLFunctionCtx *pCtx) { - SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx); - - // only the first_stage_merge is directly written data into final output buffer - if (pCtx->stableQuery && pCtx->currentStage != MERGE_STAGE) { - return (SUniqueFuncInfo*) pCtx->pOutput; - } else { // during normal table query and super table at the secondary_stage, result is written to intermediate buffer - return GET_ROWCELL_INTERBUF(pResInfo); - } -} - // unique static void copyUniqueRes(SQLFunctionCtx *pCtx, int32_t bytes) { SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx); @@ -5174,7 +5149,7 @@ static void copyUniqueRes(SQLFunctionCtx *pCtx, int32_t bytes) { tvp = pRes->res + (size * ((pCtx->param[2].i64 == TSDB_ORDER_ASC) ? 0 : len -1)); for (int32_t i = 0; i < len; ++i) { - int16_t offset = sizeof(UniqueUnit) + bytes; + int32_t offset = (int32_t)sizeof(UniqueUnit) + bytes; for (int32_t j = 0; j < pCtx->tagInfo.numOfTagCols; ++j) { memcpy(pData[j], tvp + offset, (size_t)pCtx->tagInfo.pTagCtxList[j]->outputBytes); offset += pCtx->tagInfo.pTagCtxList[j]->outputBytes; @@ -5238,7 +5213,7 @@ static void do_unique_function(SQLFunctionCtx *pCtx, SUniqueFuncInfo *pInfo, TSK } static void unique_function(SQLFunctionCtx *pCtx) { - SUniqueFuncInfo *pInfo = getUniqueOutputInfo(pCtx); + SUniqueFuncInfo *pInfo = getOutputInfo(pCtx); for (int32_t i = 0; i < pCtx->size; i++) { char *pData = GET_INPUT_DATA(pCtx, i); @@ -5248,7 +5223,8 @@ static void unique_function(SQLFunctionCtx *pCtx) { } do_unique_function(pCtx, pInfo, k, pData, NULL, pCtx->inputBytes, pCtx->inputType); - if (sizeof(SUniqueFuncInfo) + pInfo->num * (sizeof(UniqueUnit) + pCtx->inputBytes + pCtx->tagInfo.tagsLen) >= MAX_UNIQUE_RESULT_SIZE){ + if (sizeof(SUniqueFuncInfo) + pInfo->num * (sizeof(UniqueUnit) + pCtx->inputBytes + 
pCtx->tagInfo.tagsLen) >= MAX_UNIQUE_RESULT_SIZE + || (pInfo->num > MAX_UNIQUE_RESULT_ROWS)){ GET_RES_INFO(pCtx)->numOfRes = -1; // mark out of memory return; } @@ -5259,7 +5235,7 @@ static void unique_function(SQLFunctionCtx *pCtx) { static void unique_function_merge(SQLFunctionCtx *pCtx) { SUniqueFuncInfo *pInput = (SUniqueFuncInfo *)GET_INPUT_DATA_LIST(pCtx); - SUniqueFuncInfo *pOutput = getUniqueOutputInfo(pCtx); + SUniqueFuncInfo *pOutput = getOutputInfo(pCtx); size_t size = sizeof(UniqueUnit) + pCtx->outputBytes + pCtx->tagInfo.tagsLen; for (int32_t i = 0; i < pInput->num; ++i) { char *tmp = pInput->res + i* size; @@ -5268,13 +5244,14 @@ static void unique_function_merge(SQLFunctionCtx *pCtx) { char *tags = tmp + sizeof(UniqueUnit) + pCtx->outputBytes; do_unique_function(pCtx, pOutput, timestamp, data, tags, pCtx->outputBytes, pCtx->outputType); - if (sizeof(SUniqueFuncInfo) + pOutput->num * (sizeof(UniqueUnit) + pCtx->outputBytes + pCtx->tagInfo.tagsLen) >= MAX_UNIQUE_RESULT_SIZE){ + if (sizeof(SUniqueFuncInfo) + pOutput->num * (sizeof(UniqueUnit) + pCtx->outputBytes + pCtx->tagInfo.tagsLen) >= MAX_UNIQUE_RESULT_SIZE + || (pOutput->num > MAX_UNIQUE_RESULT_ROWS)){ GET_RES_INFO(pCtx)->numOfRes = -1; // mark out of memory return; } } - GET_RES_INFO(pCtx)->numOfRes = pOutput->num; +// GET_RES_INFO(pCtx)->numOfRes = pOutput->num; } typedef struct{ @@ -5284,11 +5261,11 @@ typedef struct{ static int32_t uniqueCompareFn(const void *p1, const void *p2, const void *param) { UiqueSupporter *support = (UiqueSupporter *)param; - return support->comparFn(p1 + support->dataOffset, p2 + support->dataOffset); + return support->comparFn((const char*)p1 + support->dataOffset, (const char*)p2 + support->dataOffset); } static void unique_func_finalizer(SQLFunctionCtx *pCtx) { - SUniqueFuncInfo *pInfo = getUniqueOutputInfo(pCtx); + SUniqueFuncInfo *pInfo = GET_ROWCELL_INTERBUF(GET_RES_INFO(pCtx)); GET_RES_INFO(pCtx)->numOfRes = pInfo->num; int32_t bytes = 0; @@ -5317,6 +5294,114 
@@ static void unique_func_finalizer(SQLFunctionCtx *pCtx) { doFinalizer(pCtx); } +static bool mode_function_setup(SQLFunctionCtx *pCtx, SResultRowCellInfo* pResInfo) { + if (!function_setup(pCtx, pResInfo)) { + return false; + } + if(*pCtx->pModeSet != NULL){ + taosHashClear(*pCtx->pModeSet); + }else{ + *pCtx->pModeSet = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK); + } + + return true; +} + +static void do_mode_function(SQLFunctionCtx *pCtx, SModeFuncInfo *pInfo, char *pData, int64_t count, int32_t bytes, int16_t type){ + int32_t hashKeyBytes = bytes; + if(IS_VAR_DATA_TYPE(type)){ // for var data, we can not use bytes, because there are dirty data in the back of var data + hashKeyBytes = varDataTLen(pData); + } + ModeUnit **mode = taosHashGet(*pCtx->pModeSet, pData, hashKeyBytes); + if (mode == NULL) { + size_t size = sizeof(ModeUnit) + bytes; + char *tmp = pInfo->res + pInfo->num * size; + ((ModeUnit*)tmp)->count = count; + char *data = tmp + sizeof(ModeUnit); + memcpy(data, pData, bytes); + + taosHashPut(*pCtx->pModeSet, pData, hashKeyBytes, &tmp, sizeof(ModeUnit*)); + pInfo->num++; + }else{ + (*mode)->count += count; + } +} + +static void mode_function(SQLFunctionCtx *pCtx) { + SModeFuncInfo *pInfo = getOutputInfo(pCtx); + + for (int32_t i = 0; i < pCtx->size; i++) { + char *pData = GET_INPUT_DATA(pCtx, i); + if (pCtx->hasNull && isNull(pData, pCtx->inputType)) { + continue; + } + + do_mode_function(pCtx, pInfo, pData, 1, pCtx->inputBytes, pCtx->inputType); + + if (sizeof(SModeFuncInfo) + pInfo->num * (sizeof(ModeUnit) + pCtx->inputBytes) >= MAX_MODE_INNER_RESULT_SIZE){ + GET_RES_INFO(pCtx)->numOfRes = -1; // mark out of memory + return; + } + } + GET_RES_INFO(pCtx)->numOfRes = 1; +} + +static void mode_function_merge(SQLFunctionCtx *pCtx) { + SModeFuncInfo *pInput = (SModeFuncInfo *)GET_INPUT_DATA_LIST(pCtx); + SModeFuncInfo *pOutput = getOutputInfo(pCtx); + size_t size = sizeof(ModeUnit) + pCtx->outputBytes; + for 
(int32_t i = 0; i < pInput->num; ++i) { + char *tmp = pInput->res + i* size; + char *data = tmp + sizeof(ModeUnit); + do_mode_function(pCtx, pOutput, data, ((ModeUnit*)tmp)->count, pCtx->outputBytes, pCtx->outputType); + + if (sizeof(SModeFuncInfo) + pOutput->num * (sizeof(ModeUnit) + pCtx->outputBytes) >= MAX_MODE_INNER_RESULT_SIZE){ + GET_RES_INFO(pCtx)->numOfRes = -1; // mark out of memory + return; + } + } +} + +static void mode_func_finalizer(SQLFunctionCtx *pCtx) { + int32_t bytes = 0; + int32_t type = 0; + if (pCtx->currentStage == MERGE_STAGE) { + bytes = pCtx->outputBytes; + type = pCtx->outputType; + assert(pCtx->inputType == TSDB_DATA_TYPE_BINARY); + } else { + bytes = pCtx->inputBytes; + type = pCtx->inputType; + } + + SResultRowCellInfo *pResInfo = GET_RES_INFO(pCtx); + SModeFuncInfo *pRes = GET_ROWCELL_INTERBUF(pResInfo); + + size_t size = sizeof(ModeUnit) + bytes; + + char *tvp = pRes->res; + char *result = NULL; + int64_t maxCount = 0; + for (int32_t i = 0; i < pRes->num; ++i) { + int64_t count = ((ModeUnit*)tvp)->count; + if (count > maxCount){ + maxCount = count; + result = tvp; + }else if(count == maxCount){ + result = NULL; + } + tvp += size; + } + + if (result){ + memcpy(pCtx->pOutput, result + sizeof(ModeUnit), bytes); + }else{ + setNull(pCtx->pOutput, type, 0); + } + pResInfo->numOfRes = 1; + doFinalizer(pCtx); +} + ///////////////////////////////////////////////////////////////////////////////////////////// /* * function compatible list. 
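The finalizer above scans the packed `ModeUnit` entries once, keeps the value with the highest count, and deliberately returns no result when the maximum count is shared (the `setNull` branch). A self-contained sketch of just that selection loop, assuming the same `ModeUnit` layout as the patch (`pick_mode` is a hypothetical helper, not part of the codebase):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Mirrors the patch's ModeUnit: an occurrence count followed by the raw
 * value bytes (flexible array member). */
typedef struct {
  int64_t count;
  char    data[];
} ModeUnit;

/* Scan a packed array of ModeUnit entries, each sizeof(ModeUnit)+bytes wide,
 * and return a pointer to the most frequent value's bytes. A tie for the
 * maximum count yields NULL, matching mode_func_finalizer's NULL result. */
static const char *pick_mode(const char *res, int32_t num, int32_t bytes) {
  size_t      unit     = sizeof(ModeUnit) + bytes;
  const char *best     = NULL;
  int64_t     maxCount = 0;
  for (int32_t i = 0; i < num; ++i) {
    const ModeUnit *u = (const ModeUnit *)(res + i * unit);
    if (u->count > maxCount) {
      maxCount = u->count;
      best = u->data;
    } else if (u->count == maxCount) {
      best = NULL; /* tie: no unique mode */
    }
  }
  return best;
}
```

The single pass works because the merge step has already collapsed duplicates through the `pModeSet` hash, so each entry carries the total count for one distinct value.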
@@ -5337,8 +5422,8 @@ int32_t functionCompatList[] = { 1, 1, 1, 1, -1, 1, 1, 1, 5, 1, 1, // tid_tag, deriv, csum, mavg, sample, 6, 8, -1, -1, -1, - // block_info,elapsed,histogram,unique - 7, 1, -1, -1 + // block_info,elapsed,histogram,unique,mode + 7, 1, -1, -1, 1 }; SAggFunctionInfo aAggs[TSDB_FUNC_MAX_NUM] = {{ @@ -5823,5 +5908,17 @@ SAggFunctionInfo aAggs[TSDB_FUNC_MAX_NUM] = {{ unique_func_finalizer, unique_function_merge, dataBlockRequired, + }, + { + // 40 + "mode", + TSDB_FUNC_MODE, + TSDB_FUNC_MODE, + TSDB_BASE_FUNC_SO, + mode_function_setup, + mode_function, + mode_func_finalizer, + mode_function_merge, + dataBlockRequired, } }; diff --git a/src/query/src/qExecutor.c b/src/query/src/qExecutor.c index 0e1cf3a8830d12ad33d994bf421d70c7eaeec274..79a46d37bf791733bde960be74a9b67a8bc2bc2d 100644 --- a/src/query/src/qExecutor.c +++ b/src/query/src/qExecutor.c @@ -117,7 +117,7 @@ do { \ a = a + b; \ } \ } \ -} while(0) +} while(0) #define TSKEY_MIN_SUB(a,b) \ do { \ @@ -362,7 +362,7 @@ SSDataBlock* createOutputBuf(SExprInfo* pExpr, int32_t numOfOutput, int32_t numO qError("size is too large, failed to allocate column buffer for output buffer"); tmp = 128*1024*1024; } - int32_t size = MAX(tmp, minSize); + int32_t size = (int32_t)MAX(tmp, minSize); idata.pData = calloc(1, size); // at least to hold a pointer on x64 platform if (idata.pData == NULL) { qError("failed to allocate column buffer for output buffer"); @@ -1009,11 +1009,9 @@ static void doApplyFunctions(SQueryRuntimeEnv* pRuntimeEnv, SQLFunctionCtx* pCtx } } - if (functionId == TSDB_FUNC_UNIQUE && - (GET_RES_INFO(&(pCtx[k]))->numOfRes > MAX_UNIQUE_RESULT_ROWS || GET_RES_INFO(&(pCtx[k]))->numOfRes == -1)){ - qError("Unique result num is too large. 
num: %d, limit: %d", GET_RES_INFO(&(pCtx[k]))->numOfRes, MAX_UNIQUE_RESULT_ROWS); - longjmp(pRuntimeEnv->env, TSDB_CODE_QRY_UNIQUE_RESULT_TOO_LARGE); + if (GET_RES_INFO(&(pCtx[k]))->numOfRes == -1){ + qError("result num is too large."); + longjmp(pRuntimeEnv->env, TSDB_CODE_QRY_RESULT_TOO_LARGE); } // restore it @@ -1276,11 +1274,9 @@ static void doAggregateImpl(SOperatorInfo* pOperator, TSKEY startTs, SQLFunction assert(0); } - if (functionId == TSDB_FUNC_UNIQUE && - (GET_RES_INFO(&(pCtx[k]))->numOfRes > MAX_UNIQUE_RESULT_ROWS || GET_RES_INFO(&(pCtx[k]))->numOfRes == -1)){ - qError("Unique result num is too large. num: %d, limit: %d", - GET_RES_INFO(&(pCtx[k]))->numOfRes, MAX_UNIQUE_RESULT_ROWS); - longjmp(pRuntimeEnv->env, TSDB_CODE_QRY_UNIQUE_RESULT_TOO_LARGE); + if (GET_RES_INFO(&(pCtx[k]))->numOfRes == -1){ + qError("result num is too large."); + longjmp(pRuntimeEnv->env, TSDB_CODE_QRY_RESULT_TOO_LARGE); } } } @@ -1336,7 +1332,7 @@ void doTimeWindowInterpolation(SOperatorInfo* pOperator, SOptrBasicInfo* pInfo, } else { COPY_DATA(&pCtx[k].start.val, (char *)pColInfo->pData + prevRowIndex * pColInfo->info.bytes); } - + pCtx[k].start.key = prevTs; if (pColInfo->info.type == TSDB_DATA_TYPE_BINARY || pColInfo->info.type == TSDB_DATA_TYPE_NCHAR) { @@ -1353,7 +1349,7 @@ void doTimeWindowInterpolation(SOperatorInfo* pOperator, SOptrBasicInfo* pInfo, } else { COPY_DATA(&pCtx[k].end.val, (char *)pColInfo->pData + curRowIndex * pColInfo->info.bytes); } - + pCtx[k].end.key = curTs; if (pColInfo->info.type == TSDB_DATA_TYPE_BINARY || pColInfo->info.type == TSDB_DATA_TYPE_NCHAR) { @@ -1368,9 +1364,9 @@ void doTimeWindowInterpolation(SOperatorInfo* pOperator, SOptrBasicInfo* pInfo, } else { GET_TYPED_DATA(v1, double, pColInfo->info.type, (char *)pColInfo->pData + prevRowIndex * pColInfo->info.bytes); } - + GET_TYPED_DATA(v2, double, pColInfo->info.type, (char *)pColInfo->pData + curRowIndex * pColInfo->info.bytes); - + SPoint point1 = (SPoint){.key = prevTs, .val 
= &v1}; SPoint point2 = (SPoint){.key = curTs, .val = &v2}; SPoint point = (SPoint){.key = windowKey, .val = &v }; @@ -1965,7 +1961,8 @@ static SQLFunctionCtx* createSQLFunctionCtx(SQueryRuntimeEnv* pRuntimeEnv, SExpr } pCtx->inputType = pSqlExpr->colType; - if (pRuntimeEnv->pQueryAttr->interBytesForGlobal > INT16_MAX && pSqlExpr->functionId == TSDB_FUNC_UNIQUE){ + if (pRuntimeEnv->pQueryAttr->interBytesForGlobal > INT16_MAX && + (pSqlExpr->functionId == TSDB_FUNC_UNIQUE || pSqlExpr->functionId == TSDB_FUNC_MODE)){ pCtx->inputBytes = pRuntimeEnv->pQueryAttr->interBytesForGlobal; }else{ pCtx->inputBytes = pSqlExpr->colBytes; @@ -2238,6 +2235,7 @@ static int32_t setupQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv, int32_t numOf if (pRuntimeEnv->proot == NULL) { goto _clean; } + int32_t opType = pRuntimeEnv->proot->upstream[0]->operatorType; if (opType != OP_DummyInput) { setTableScanFilterOperatorInfo(pRuntimeEnv->proot->upstream[0]->info, pRuntimeEnv->proot); @@ -2400,7 +2398,7 @@ static void teardownQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv) { if (!pRuntimeEnv->udfIsCopy) { destroyUdfInfo(pRuntimeEnv->pUdfInfo); } - + destroyResultBuf(pRuntimeEnv->pResultBuf); doFreeQueryHandle(pRuntimeEnv); @@ -3010,7 +3008,7 @@ void filterRowsInDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SSingleColumnFilterInf if (i < (numOfRows - 1)) { all = false; } - + break; } } @@ -3035,9 +3033,9 @@ void filterColRowsInDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SSDataBlock* pBlock bool all = true; if (pRuntimeEnv->pTsBuf != NULL) { - SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, 0); + SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, 0); p = calloc(numOfRows, sizeof(int8_t)); - + TSKEY* k = (TSKEY*) pColInfoData->pData; for (int32_t i = 0; i < numOfRows; ++i) { int32_t offset = ascQuery? 
i:(numOfRows - i - 1); @@ -3052,7 +3050,7 @@ void filterColRowsInDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SSDataBlock* pBlock p[offset] = true; } - if (!tsBufNextPos(pRuntimeEnv->pTsBuf)) { + if (!tsBufNextPos(pRuntimeEnv->pTsBuf)) { if (i < (numOfRows - 1)) { all = false; } @@ -3060,7 +3058,7 @@ void filterColRowsInDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SSDataBlock* pBlock break; } } - + // save the cursor status pRuntimeEnv->current->cur = tsBufGetCursor(pRuntimeEnv->pTsBuf); } else { @@ -3079,7 +3077,7 @@ void filterColRowsInDataBlock(SQueryRuntimeEnv* pRuntimeEnv, SSDataBlock* pBlock tfree(p); } - + static SColumnInfo* doGetTagColumnInfoById(SColumnInfo* pTagColList, int32_t numOfTags, int16_t colId); static void doSetTagValueInParam(void* pTable, char* param, int32_t paraLen, int32_t tagColId, tVariant *tag, int16_t type, int16_t bytes); @@ -3125,7 +3123,7 @@ void doSetFilterColumnInfo(SSingleColumnFilterInfo* pFilterInfo, int32_t numOfFi FORCE_INLINE int32_t getColumnDataFromId(void *param, int32_t id, void **data) { int32_t numOfCols = ((SColumnDataParam *)param)->numOfCols; SArray* pDataBlock = ((SColumnDataParam *)param)->pDataBlock; - + for (int32_t j = 0; j < numOfCols; ++j) { SColumnInfoData* pColInfo = taosArrayGet(pDataBlock, j); if (id == pColInfo->info.colId) { @@ -3286,7 +3284,7 @@ int32_t loadDataBlockOnDemand(SQueryRuntimeEnv* pRuntimeEnv, STableScanInfo* pTa SColumnDataParam param = {.numOfCols = pBlock->info.numOfCols, .pDataBlock = pBlock->pDataBlock}; filterSetColFieldData(pQueryAttr->pFilters, ¶m, getColumnDataFromId); } - + if (pQueryAttr->pFilters != NULL || pRuntimeEnv->pTsBuf != NULL) { filterColRowsInDataBlock(pRuntimeEnv, pBlock, ascQuery); } @@ -3448,7 +3446,7 @@ void setTagValue(SOperatorInfo* pOperatorInfo, void *pTable, SQLFunctionCtx* pCt } else { if (pCtx[idx].tag.pz != NULL) { memcpy(pRuntimeEnv->tagVal + offset, pCtx[idx].tag.pz, pCtx[idx].tag.nLen); - } + } } offset += pLocalExprInfo->base.resBytes; @@ -3690,6 +3688,8 @@ 
void setDefaultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SOptrBasicInfo *pInfo, i pCtx[i].resultInfo = pCellInfo; if (pCtx[i].functionId == TSDB_FUNC_UNIQUE) { pCtx[i].pUniqueSet = &pRow->uniqueHash; + }else if (pCtx[i].functionId == TSDB_FUNC_MODE) { + pCtx[i].pModeSet = &pRow->modeHash; } pCtx[i].pOutput = pData->pData; pCtx[i].currentStage = stage; @@ -4027,6 +4027,8 @@ void setResultRowOutputBufInitCtx(SQueryRuntimeEnv *pRuntimeEnv, SResultRow *pRe pCtx[i].resultInfo = getResultCell(pResult, i, rowCellInfoOffset); if (pCtx[i].functionId == TSDB_FUNC_UNIQUE){ pCtx[i].pUniqueSet = &pResult->uniqueHash; + }else if (pCtx[i].functionId == TSDB_FUNC_MODE){ + pCtx[i].pModeSet = &pResult->modeHash; } SResultRowCellInfo* pResInfo = pCtx[i].resultInfo; @@ -4123,6 +4125,8 @@ void setResultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SResultRow *pResult, SQLF pCtx[i].resultInfo = getResultCell(pResult, i, rowCellInfoOffset); if (pCtx[i].functionId == TSDB_FUNC_UNIQUE) { pCtx[i].pUniqueSet = &pResult->uniqueHash; + }else if (pCtx[i].functionId == TSDB_FUNC_MODE) { + pCtx[i].pModeSet = &pResult->modeHash; } } } @@ -4280,13 +4284,13 @@ void setIntervalQueryRange(SQueryRuntimeEnv *pRuntimeEnv, STimeWindow* winx, int } TSKEY key = QUERY_IS_ASC_QUERY(pQueryAttr)? winx->skey:winx->ekey; - + qDebug("0x%"PRIx64" update query window, tid:%d, %"PRId64" - %"PRId64", old:%"PRId64" - %"PRId64, GET_QID(pRuntimeEnv), tid, key, pTableQueryInfo->win.ekey, pTableQueryInfo->win.skey, pTableQueryInfo->win.ekey); pTableQueryInfo->win.skey = key; STimeWindow win = {.skey = key, .ekey = pQueryAttr->window.ekey}; - + /** * In handling the both ascending and descending order super table query, we need to find the first qualified * timestamp of this table, and then set the first qualified start timestamp. 
@@ -4902,7 +4906,7 @@ static int32_t setupQueryHandle(void* tsdb, SQueryRuntimeEnv* pRuntimeEnv, int64 STsdbQueryCond cond = createTsdbQueryCond(pQueryAttr, &pQueryAttr->window); if (pQueryAttr->tsCompQuery || pQueryAttr->pointInterpQuery) { - cond.type = BLOCK_LOAD_TABLE_SEQ_ORDER; + cond.type = BLOCK_LOAD_TABLE_SEQ_ORDER; } if (!isSTableQuery @@ -5043,7 +5047,7 @@ int32_t doInitQInfo(SQInfo* pQInfo, STSBuf* pTsBuf, void* tsdb, void* sourceOptr int16_t order = (pQueryAttr->order.order == pRuntimeEnv->pTsBuf->tsOrder) ? TSDB_ORDER_ASC : TSDB_ORDER_DESC; tsBufResetPos(pRuntimeEnv->pTsBuf); tsBufSetTraverseOrder(pRuntimeEnv->pTsBuf, order); - tsBufNextPos(pTsBuf); + tsBufNextPos(pTsBuf); } int32_t ps = DEFAULT_PAGE_SIZE; @@ -5464,7 +5468,7 @@ void setTableScanFilterOperatorInfo(STableScanInfo* pTableScanInfo, SOperatorInf pTableScanInfo->rowCellInfoOffset = pIntervalInfo->rowCellInfoOffset; } else if (pDownstream->operatorType == OP_TimeEvery) { STimeEveryOperatorInfo *pEveryInfo = pDownstream->info; - + pTableScanInfo->pCtx = pEveryInfo->binfo.pCtx; pTableScanInfo->pResultRowInfo = &pEveryInfo->binfo.resultRowInfo; pTableScanInfo->rowCellInfoOffset = pEveryInfo->binfo.rowCellInfoOffset; @@ -5522,7 +5526,7 @@ SOperatorInfo* createDataBlocksOptScanInfo(void* pTsdbQueryHandle, SQueryRuntime if (pRuntimeEnv->pQueryAttr->pointInterpQuery) { pRuntimeEnv->enableGroupData = true; } - + SOperatorInfo* pOptr = calloc(1, sizeof(SOperatorInfo)); if (pOptr == NULL) { tfree(pInfo); @@ -5870,7 +5874,7 @@ static SSDataBlock* doSort(void* param, bool* newgroup) { if (pInfo->pDataBlock->info.rows) { taoscQSort(pCols, pSchema, numOfCols, pInfo->pDataBlock->info.rows, pInfo->colIndex, comp); } - + tfree(pCols); tfree(pSchema); return (pInfo->pDataBlock->info.rows > 0)? 
pInfo->pDataBlock:NULL; @@ -6054,7 +6058,7 @@ static SSDataBlock* doSTableAggregate(void* param, bool* newgroup) { key = pBlock->info.window.skey; TSKEY_MIN_SUB(key, -1); } - + setExecutionContext(pRuntimeEnv, pInfo, pOperator->numOfOutput, pRuntimeEnv->current->groupIndex, key); doAggregateImpl(pOperator, pQueryAttr->window.skey, pInfo->pCtx, pBlock); } @@ -6351,7 +6355,7 @@ static SSDataBlock* doIntervalAgg(void* param, bool* newgroup) { if (pIntervalInfo->resultRowInfo.size > 0 && pQueryAttr->needSort) { qsort(pIntervalInfo->resultRowInfo.pResult, pIntervalInfo->resultRowInfo.size, POINTER_BYTES, resRowCompare); } - + closeAllResultRows(&pIntervalInfo->resultRowInfo); setQueryStatus(pRuntimeEnv, QUERY_COMPLETED); finalizeQueryResult(pOperator, pIntervalInfo->pCtx, &pIntervalInfo->resultRowInfo, pIntervalInfo->rowCellInfoOffset); @@ -6380,7 +6384,7 @@ static void everyApplyFunctions(SQueryRuntimeEnv *pRuntimeEnv, SQLFunctionCtx *p static int64_t getEveryStartTs(bool ascQuery, STimeWindow *range, STimeWindow *blockWin, SQueryAttr *pQueryAttr) { int64_t startTs = range->skey, ekey = 0; - + assert(range->skey != INT64_MIN); if (ascQuery) { @@ -6418,15 +6422,15 @@ static bool doEveryInterpolation(SOperatorInfo* pOperatorInfo, SSDataBlock* pBlo SQLFunctionCtx* pCtx = NULL; *needApply = false; - + if (!pQueryAttr->pointInterpQuery) { goto group_finished_exit; } assert(pOperatorInfo->numOfOutput > 1); - + for (int32_t i = 1; i < pOperatorInfo->numOfOutput; ++i) { - assert(pEveryInfo->binfo.pCtx[i].functionId == TSDB_FUNC_INTERP + assert(pEveryInfo->binfo.pCtx[i].functionId == TSDB_FUNC_INTERP || pEveryInfo->binfo.pCtx[i].functionId == TSDB_FUNC_TS_DUMMY || pEveryInfo->binfo.pCtx[i].functionId == TSDB_FUNC_TAG || pEveryInfo->binfo.pCtx[i].functionId == TSDB_FUNC_TAG_DUMMY); @@ -6436,7 +6440,7 @@ static bool doEveryInterpolation(SOperatorInfo* pOperatorInfo, SSDataBlock* pBlo break; } } - + TSKEY* tsCols = NULL; if (pBlock && pBlock->pDataBlock != NULL) { SColumnInfoData* 
pColDataInfo = taosArrayGet(pBlock->pDataBlock, 0); @@ -6446,7 +6450,7 @@ static bool doEveryInterpolation(SOperatorInfo* pOperatorInfo, SSDataBlock* pBlo if (pCtx->startTs == INT64_MIN) { if (pQueryAttr->range.skey == INT64_MIN) { - if (NULL == tsCols) { + if (NULL == tsCols) { goto group_finished_exit; } @@ -6485,12 +6489,12 @@ static bool doEveryInterpolation(SOperatorInfo* pOperatorInfo, SSDataBlock* pBlo } else { pCtx->startTs = ascQuery ? pCtx->startTs + pQueryAttr->interval.interval : pCtx->startTs - pQueryAttr->interval.interval; } - - if (ascQuery && pQueryAttr->range.ekey != INT64_MIN && pCtx->startTs > pQueryAttr->range.ekey) { + + if (ascQuery && pQueryAttr->range.ekey != INT64_MIN && pCtx->startTs > pQueryAttr->range.ekey) { goto group_finished_exit; } - if ((!ascQuery) && pQueryAttr->range.skey != INT64_MIN && pCtx->startTs < pQueryAttr->range.skey) { + if ((!ascQuery) && pQueryAttr->range.skey != INT64_MIN && pCtx->startTs < pQueryAttr->range.skey) { goto group_finished_exit; } } else { @@ -6504,8 +6508,8 @@ static bool doEveryInterpolation(SOperatorInfo* pOperatorInfo, SSDataBlock* pBlo if ((ascQuery && pQueryAttr->range.ekey == INT64_MIN) || ((!ascQuery) && pQueryAttr->range.skey == INT64_MIN)) { goto group_finished_exit; } - - if (pQueryAttr->fillType == TSDB_FILL_NONE || pQueryAttr->fillType == TSDB_FILL_LINEAR + + if (pQueryAttr->fillType == TSDB_FILL_NONE || pQueryAttr->fillType == TSDB_FILL_LINEAR || ((ascQuery && pQueryAttr->fillType == TSDB_FILL_NEXT) || ((!ascQuery) && pQueryAttr->fillType == TSDB_FILL_PREV))) { goto group_finished_exit; } @@ -6527,11 +6531,11 @@ static bool doEveryInterpolation(SOperatorInfo* pOperatorInfo, SSDataBlock* pBlo } *needApply = true; - + for (int32_t i = 0; i < pOperatorInfo->numOfOutput; ++i) { pEveryInfo->binfo.pCtx[i].startTs = pCtx->startTs; } - + return false; } @@ -6554,14 +6558,14 @@ static bool doEveryInterpolation(SOperatorInfo* pOperatorInfo, SSDataBlock* pBlo } else { if (tsCols[startPos] == 
pCtx->startTs) { doTimeWindowInterpolation(pOperatorInfo, pOperatorInfo->info, pBlock->pDataBlock, pCtx->startTs, startPos, INT64_MIN, 0, 0, RESULT_ROW_START_INTERP); - } else { + } else { doTimeWindowInterpolation(pOperatorInfo, pOperatorInfo->info, pBlock->pDataBlock, tsCols[startPos - 1], startPos - 1, INT64_MIN, 0, 0, RESULT_ROW_START_INTERP); } } if (pQueryAttr->fillType != TSDB_FILL_LINEAR) { *needApply = true; - } + } } if ((!ascQuery) && (pQueryAttr->fillType == TSDB_FILL_LINEAR || pQueryAttr->fillType == TSDB_FILL_NEXT) && pCtx->end.key == INT64_MIN) { @@ -6571,7 +6575,7 @@ static bool doEveryInterpolation(SOperatorInfo* pOperatorInfo, SSDataBlock* pBlo } else if (startPos == (pBlock->info.rows - 1)) { if (tsCols[startPos] == pCtx->startTs) { doTimeWindowInterpolation(pOperatorInfo, pOperatorInfo->info, pBlock->pDataBlock, INT64_MIN, 0, pCtx->startTs, startPos, 0, RESULT_ROW_END_INTERP); - } else { + } else { TSKEY lastTs = *(TSKEY *) pRuntimeEnv->prevRow[0]; if (lastTs != INT64_MIN) { doTimeWindowInterpolation(pOperatorInfo, pOperatorInfo->info, pBlock->pDataBlock, INT64_MIN, 0, lastTs, -1, 0, RESULT_ROW_END_INTERP); @@ -6584,17 +6588,17 @@ static bool doEveryInterpolation(SOperatorInfo* pOperatorInfo, SSDataBlock* pBlo doTimeWindowInterpolation(pOperatorInfo, pOperatorInfo->info, pBlock->pDataBlock, INT64_MIN, 0, tsCols[startPos + 1], startPos + 1, 0, RESULT_ROW_END_INTERP); } } - + if (pQueryAttr->fillType != TSDB_FILL_LINEAR) { *needApply = true; - } + } } - + if (ascQuery && (pQueryAttr->fillType == TSDB_FILL_LINEAR || pQueryAttr->fillType == TSDB_FILL_NEXT) && pCtx->end.key == INT64_MIN) { if (startPos < 0) { return true; } - + doTimeWindowInterpolation(pOperatorInfo, pOperatorInfo->info, pBlock->pDataBlock, INT64_MIN, 0, tsCols[startPos], startPos, 0, RESULT_ROW_END_INTERP); *needApply = true; @@ -6604,7 +6608,7 @@ static bool doEveryInterpolation(SOperatorInfo* pOperatorInfo, SSDataBlock* pBlo if (startPos < 0) { return true; } - + 
doTimeWindowInterpolation(pOperatorInfo, pOperatorInfo->info, pBlock->pDataBlock, tsCols[startPos], startPos, INT64_MIN, 0, 0, RESULT_ROW_START_INTERP); *needApply = true; @@ -6625,7 +6629,7 @@ group_finished_exit: if (pQueryAttr->needReverseScan) { pQueryAttr->range.skey = INT64_MIN; } - + pEveryInfo->groupDone = true; if (pCtx) { @@ -6633,7 +6637,7 @@ group_finished_exit: pCtx->start.key = INT64_MIN; pCtx->end.key = INT64_MIN; } - + return true; } @@ -6668,7 +6672,7 @@ static void doTimeEveryImpl(SOperatorInfo* pOperator, SQLFunctionCtx *pCtx, SSDa setQueryStatus(pRuntimeEnv, QUERY_COMPLETED); return; } - + tsCols = (int64_t*) pColDataInfo->pData; assert(tsCols[0] == pBlock->info.window.skey && tsCols[pBlock->info.rows - 1] == pBlock->info.window.ekey); @@ -6682,7 +6686,7 @@ static void doTimeEveryImpl(SOperatorInfo* pOperator, SQLFunctionCtx *pCtx, SSDa if (needApply) { everyApplyFunctions(pRuntimeEnv, pEveryInfo->binfo.pCtx, numOfOutput); - + pRes->info.rows = getNumOfResult(pRuntimeEnv, pEveryInfo->binfo.pCtx, pOperator->numOfOutput); if (pRes->info.rows >= pRuntimeEnv->resultInfo.threshold) { pEveryInfo->lastBlock = pBlock; @@ -6716,8 +6720,8 @@ static SSDataBlock* doTimeEvery(void* param, bool* newgroup) { clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput); return pInfo->pRes; } - - if (pRes->info.rows > 0) { + + if (pRes->info.rows > 0) { copyTsColoum(pRes, pInfo->pCtx, pOperator->numOfOutput); clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput); return pInfo->pRes; @@ -6776,9 +6780,9 @@ static SSDataBlock* doTimeEvery(void* param, bool* newgroup) { if (pRes->info.rows >= pRuntimeEnv->resultInfo.threshold) { break; } - + assert(pEveryInfo->groupDone); - + if (pRes->info.rows > 0) { break; } @@ -7584,12 +7588,12 @@ SColumnInfo* extractColumnFilterInfo(SExprInfo* pExpr, int32_t numOfOutput, int3 pCols[i].colId = pExpr[i].base.resColId; pCols[i].flist.numOfFilters = pExpr[i].base.flist.numOfFilters; - if (pCols[i].flist.numOfFilters != 0) { + if 
(pCols[i].flist.numOfFilters != 0) { pCols[i].flist.filterInfo = calloc(pCols[i].flist.numOfFilters, sizeof(SColumnFilterInfo)); memcpy(pCols[i].flist.filterInfo, pExpr[i].base.flist.filterInfo, pCols[i].flist.numOfFilters * sizeof(SColumnFilterInfo)); } else { // avoid runtime error - pCols[i].flist.filterInfo = NULL; + pCols[i].flist.filterInfo = NULL; } } @@ -7723,7 +7727,7 @@ SOperatorInfo* createTimeEveryOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOpera if (pQueryAttr->needReverseScan) { pInfo->rangeStart = taosHashInit(256, taosGetDefaultHashFunction(TSDB_DATA_TYPE_TIMESTAMP), false, false); } - + initResultRowInfo(&pBInfo->resultRowInfo, 8, TSDB_DATA_TYPE_INT); if (pBInfo->pRes == NULL || pBInfo->pCtx == NULL || pBInfo->resultRowInfo.pResult == NULL || @@ -8259,11 +8263,11 @@ _clean: static bool initMultiDistinctInfo(SDistinctOperatorInfo *pInfo, SOperatorInfo* pOperator, SSDataBlock *pBlock) { if (taosArrayGetSize(pInfo->pDistinctDataInfo) == pOperator->numOfOutput) { - // distinct info already inited + // distinct info already inited return true; } for (int i = 0; i < pOperator->numOfOutput; i++) { - pInfo->totalBytes += pOperator->pExpr[i].base.colBytes; + pInfo->totalBytes += pOperator->pExpr[i].base.colBytes; } for (int i = 0; i < pOperator->numOfOutput; i++) { int numOfBlock = (int)(taosArrayGetSize(pBlock->pDataBlock)); @@ -8283,14 +8287,14 @@ static bool initMultiDistinctInfo(SDistinctOperatorInfo *pInfo, SOperatorInfo* p static void buildMultiDistinctKey(SDistinctOperatorInfo *pInfo, SSDataBlock *pBlock, int32_t rowId) { char *p = pInfo->buf; - memset(p, 0, pInfo->totalBytes); + memset(p, 0, pInfo->totalBytes); for (int i = 0; i < taosArrayGetSize(pInfo->pDistinctDataInfo); i++) { - SDistinctDataInfo* pDistDataInfo = (SDistinctDataInfo *)taosArrayGet(pInfo->pDistinctDataInfo, i); + SDistinctDataInfo* pDistDataInfo = (SDistinctDataInfo *)taosArrayGet(pInfo->pDistinctDataInfo, i); SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, 
pDistDataInfo->index); char *val = ((char *)pColDataInfo->pData) + pColDataInfo->info.bytes * rowId; - if (isNull(val, pDistDataInfo->type)) { - p += pDistDataInfo->bytes; + if (isNull(val, pDistDataInfo->type)) { + p += pDistDataInfo->bytes; continue; } if (IS_VAR_DATA_TYPE(pDistDataInfo->type)) { @@ -8316,7 +8320,7 @@ static SSDataBlock* hashDistinct(void* param, bool* newgroup) { pRes->info.rows = 0; SSDataBlock* pBlock = NULL; - + while(1) { publishOperatorProfEvent(pOperator->upstream[0], QUERY_PROF_BEFORE_OPERATOR_EXEC); pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup); @@ -8330,7 +8334,7 @@ static SSDataBlock* hashDistinct(void* param, bool* newgroup) { doSetOperatorCompleted(pOperator); break; } - // ensure result output buf + // ensure result output buf if (pRes->info.rows + pBlock->info.rows > pInfo->outputCapacity) { int32_t newSize = pRes->info.rows + pBlock->info.rows; for (int i = 0; i < taosArrayGetSize(pRes->pDataBlock); i++) { @@ -8354,14 +8358,14 @@ static SSDataBlock* hashDistinct(void* param, bool* newgroup) { for (int j = 0; j < taosArrayGetSize(pRes->pDataBlock); j++) { SDistinctDataInfo* pDistDataInfo = taosArrayGet(pInfo->pDistinctDataInfo, j); // distinct meta info SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, pDistDataInfo->index); //src - SColumnInfoData* pResultColInfoData = taosArrayGet(pRes->pDataBlock, j); // dist + SColumnInfoData* pResultColInfoData = taosArrayGet(pRes->pDataBlock, j); // dist char* val = ((char*)pColInfoData->pData) + pDistDataInfo->bytes * i; - char *start = pResultColInfoData->pData + pDistDataInfo->bytes * pInfo->pRes->info.rows; + char *start = pResultColInfoData->pData + pDistDataInfo->bytes * pInfo->pRes->info.rows; memcpy(start, val, pDistDataInfo->bytes); } pRes->info.rows += 1; - } + } } if (pRes->info.rows >= pInfo->threshold) { @@ -8381,10 +8385,10 @@ SOperatorInfo* createDistinctOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperat pInfo->buf = NULL; 
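`buildMultiDistinctKey` above flattens the selected columns of one row into a single contiguous buffer so the whole tuple can be deduplicated with one hash-set lookup; the leading `memset` guarantees that NULL columns leave a deterministic zeroed region while the cursor just skips over them. A hedged sketch of the same composite-key construction, assuming two fixed-width int32 columns (names are illustrative, not TDengine's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Build one flat key from two fixed-width columns. Zero-fill first so a
 * NULL column leaves a deterministic zeroed slot, mirroring the
 * memset + "p += bytes; continue" logic in buildMultiDistinctKey. */
static size_t buildKey(char *buf, const int32_t *colA, const int32_t *colB) {
    size_t total = 2 * sizeof(int32_t);
    memset(buf, 0, total);
    char *p = buf;
    if (colA != NULL) memcpy(p, colA, sizeof(int32_t));
    p += sizeof(int32_t);  /* NULL: skip over the zeroed slot */
    if (colB != NULL) memcpy(p, colB, sizeof(int32_t));
    return total;
}
```

Two rows with equal column values then produce byte-identical keys, so a plain `memcmp`-based hash set suffices for the distinct check.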
pInfo->threshold = tsMaxNumOfDistinctResults; // distinct result threshold pInfo->outputCapacity = 4096; - pInfo->pDistinctDataInfo = taosArrayInit(numOfOutput, sizeof(SDistinctDataInfo)); + pInfo->pDistinctDataInfo = taosArrayInit(numOfOutput, sizeof(SDistinctDataInfo)); pInfo->pSet = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), false, HASH_NO_LOCK); pInfo->pRes = createOutputBuf(pExpr, numOfOutput, (int32_t) pInfo->outputCapacity); - + if (pInfo->pDistinctDataInfo == NULL || pInfo->pSet == NULL || pInfo->pRes == NULL) { goto _clean; } @@ -8403,7 +8407,7 @@ SOperatorInfo* createDistinctOperatorInfo(SQueryRuntimeEnv* pRuntimeEnv, SOperat pOperator->info = pInfo; pOperator->pRuntimeEnv = pRuntimeEnv; pOperator->exec = hashDistinct; - pOperator->pExpr = pExpr; + pOperator->pExpr = pExpr; pOperator->cleanup = destroyDistinctOperatorInfo; appendUpstream(pOperator, upstream); @@ -8584,7 +8588,7 @@ int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SQueryParam* param) { pQueryMsg->window.ekey = htobe64(pQueryMsg->window.ekey); pQueryMsg->range.skey = htobe64(pQueryMsg->range.skey); pQueryMsg->range.ekey = htobe64(pQueryMsg->range.ekey); - + pQueryMsg->interval.interval = htobe64(pQueryMsg->interval.interval); pQueryMsg->interval.sliding = htobe64(pQueryMsg->interval.sliding); pQueryMsg->interval.offset = htobe64(pQueryMsg->interval.offset); @@ -8655,7 +8659,7 @@ int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SQueryParam* param) { if (code != TSDB_CODE_SUCCESS) { goto _cleanup; } -*/ +*/ } if (pQueryMsg->colCondLen > 0) { @@ -9113,14 +9117,14 @@ int32_t initUdfInfo(SUdfInfo* pUdfInfo) { if (pUdfInfo->path) { unlink(pUdfInfo->path); } - + tfree(pUdfInfo->path); pUdfInfo->path = strdup(path); if (pUdfInfo->handle) { taosCloseDll(pUdfInfo->handle); } - + pUdfInfo->handle = taosLoadDll(path); if (NULL == pUdfInfo->handle) { @@ -9278,7 +9282,7 @@ int32_t createQueryFunc(SQueriedTableInfo* pTableInfo, int32_t numOfOutput, SExp int32_t 
createQueryFilter(char *data, int32_t len, void** pFilters) { tExprNode* expr = NULL; - + TRY(TSDB_MAX_TAG_CONDITIONS) { expr = exprTreeFromBinary(data, len); } CATCH( code ) { @@ -10038,6 +10042,9 @@ static void doSetTagValueToResultBuf(char* output, const char* val, int16_t type static int64_t getQuerySupportBufSize(size_t numOfTables) { size_t s1 = sizeof(STableQueryInfo); + + // TODO: struct SHashNode is an internal implementation of + // hash table. The implementation should not leak here. size_t s2 = sizeof(SHashNode); // size_t s3 = sizeof(STableCheckInfo); buffer consumption in tsdb diff --git a/src/query/src/qUtil.c b/src/query/src/qUtil.c index 22bdefd59ef8844a560bb2944f8e61ad15f5f27f..6af5de813fe957d0c504f74f462e63e5a2984afc 100644 --- a/src/query/src/qUtil.c +++ b/src/query/src/qUtil.c @@ -92,6 +92,10 @@ void cleanupResultRowInfo(SResultRowInfo *pResultRowInfo) { taosHashCleanup(pResultRowInfo->pResult[i]->uniqueHash); pResultRowInfo->pResult[i]->uniqueHash = NULL; } + if (pResultRowInfo->pResult[i]->modeHash){ + taosHashCleanup(pResultRowInfo->pResult[i]->modeHash); + pResultRowInfo->pResult[i]->modeHash = NULL; + } } } @@ -205,7 +209,7 @@ SResultRowPool* initResultRowPool(size_t size) { qError("ResultRow blockSize is too large:%" PRId64, tmp); tmp = 128*1024*1024; } - p->blockSize = tmp; + p->blockSize = (int32_t)tmp; p->position.pos = 0; p->pData = taosArrayInit(8, POINTER_BYTES); diff --git a/src/query/tests/apercentileTest.cpp b/src/query/tests/apercentileTest.cpp index 65bbbe85b0c9d65cbea33baa90d608bed63a3ae6..12450846f39788019c560e1f726af4ab21f236bd 100644 --- a/src/query/tests/apercentileTest.cpp +++ b/src/query/tests/apercentileTest.cpp @@ -205,7 +205,7 @@ void tdigestTest() { double res = tdigestQuantile(pTDigest, ratio); free(pTDigest); useTime[0][i][j][m][p] = ((double)(testGetTimestampUs() - startu))/1000; - printf("DMode:%d,Type:%d,Num:%"PRId64",randP:%d,Used:%fms\tRES:%f\n", dataMode[i], dataTypes[j], totalNum[m], randPers[p], 
useTime[0][i][j][m][p], res); + printf("DMode:%d,Type:%d,Num:%" PRId64 ",randP:%d,Used:%fms\tRES:%f\n", dataMode[i], dataTypes[j], totalNum[m], randPers[p], useTime[0][i][j][m][p], res); startu = testGetTimestampUs(); thistogram_init(&pHisto); @@ -215,7 +215,7 @@ void tdigestTest() { double *res2 = thistogram_end(pHisto, &ratio, 1); free(pHisto); useTime[1][i][j][m][p] = ((double)(testGetTimestampUs() - startu))/1000; - printf("HMode:%d,Type:%d,Num:%"PRId64",randP:%d,Used:%fms\tRES:%f\n", dataMode[i], dataTypes[j], totalNum[m], randPers[p], useTime[1][i][j][m][p], *res2); + printf("HMode:%d,Type:%d,Num:%" PRId64 ",randP:%d,Used:%fms\tRES:%f\n", dataMode[i], dataTypes[j], totalNum[m], randPers[p], useTime[1][i][j][m][p], *res2); } free(data); @@ -234,7 +234,7 @@ void tdigestTest() { double res = tdigestQuantile(pTDigest, ratio); free(pTDigest); useTime[0][i][j][m][p] = ((double)(testGetTimestampUs() - startu))/1000; - printf("DMode:%d,Type:%d,Num:%"PRId64",randL:%d,Used:%fms\tRES:%f\n", dataMode[i], dataTypes[j], totalNum[m], randLimits[p], useTime[0][i][j][m][p], res); + printf("DMode:%d,Type:%d,Num:%" PRId64 ",randL:%d,Used:%fms\tRES:%f\n", dataMode[i], dataTypes[j], totalNum[m], randLimits[p], useTime[0][i][j][m][p], res); startu = testGetTimestampUs(); @@ -245,7 +245,7 @@ void tdigestTest() { double* res2 = thistogram_end(pHisto, &ratio, 1); free(pHisto); useTime[1][i][j][m][p] = ((double)(testGetTimestampUs() - startu))/1000; - printf("HMode:%d,Type:%d,Num:%"PRId64",randL:%d,Used:%fms\tRES:%f\n", dataMode[i], dataTypes[j], totalNum[m], randLimits[p], useTime[1][i][j][m][p], *res2); + printf("HMode:%d,Type:%d,Num:%" PRId64 ",randL:%d,Used:%fms\tRES:%f\n", dataMode[i], dataTypes[j], totalNum[m], randLimits[p], useTime[1][i][j][m][p], *res2); } free(data); } @@ -262,7 +262,7 @@ void tdigestTest() { double res = tdigestQuantile(pTDigest, ratio); free(pTDigest); useTime[0][i][j][m][0] = ((double)(testGetTimestampUs() - startu))/1000; - 
printf("DMode:%d,Type:%d,Num:%"PRId64",Used:%fms\tRES:%f\n", dataMode[i], dataTypes[j], totalNum[m], useTime[0][i][j][m][0], res); + printf("DMode:%d,Type:%d,Num:%" PRId64 ",Used:%fms\tRES:%f\n", dataMode[i], dataTypes[j], totalNum[m], useTime[0][i][j][m][0], res); startu = testGetTimestampUs(); @@ -273,7 +273,7 @@ void tdigestTest() { double* res2 = thistogram_end(pHisto, &ratio, 1); free(pHisto); useTime[1][i][j][m][0] = ((double)(testGetTimestampUs() - startu))/1000; - printf("HMode:%d,Type:%d,Num:%"PRId64",Used:%fms\tRES:%f\n", dataMode[i], dataTypes[j], totalNum[m], useTime[1][i][j][m][0], *res2); + printf("HMode:%d,Type:%d,Num:%" PRId64 ",Used:%fms\tRES:%f\n", dataMode[i], dataTypes[j], totalNum[m], useTime[1][i][j][m][0], *res2); } free(data); diff --git a/src/util/inc/hash.h b/src/util/inc/hash.h index d41c579a58dd149172ee42c94e97b72ab5687548..d6a1d0802c8a612ca155a52d8c691cbf9a191604 100644 --- a/src/util/inc/hash.h +++ b/src/util/inc/hash.h @@ -24,24 +24,18 @@ extern "C" { #include "hashfunc.h" #include "tlockfree.h" -#define HASH_MAX_CAPACITY (1024 * 1024 * 16) -#define HASH_DEFAULT_LOAD_FACTOR (0.75) -#define HASH_INDEX(v, c) ((v) & ((c)-1)) - -typedef void (*_hash_free_fn_t)(void *param); - +// TODO: SHashNode is an internal implementation and should not +// be in the public header file. 
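The TODO comment above is addressed later in this diff: the public header keeps only `typedef struct SHashObj SHashObj;` while the full struct definition moves into `hash.c`. The general opaque-handle pattern, shown here with a made-up `Counter` type rather than anything from TDengine (header declarations and private definition collapsed into one file for brevity):

```c
#include <assert.h>
#include <stdlib.h>

/* Public header part: callers only ever see an opaque handle. */
typedef struct Counter Counter;
Counter *counterCreate(void);
void     counterAdd(Counter *c, int v);
int      counterGet(const Counter *c);
void     counterDestroy(Counter *c);

/* Private part (would live in the .c file): the layout can change
 * without recompiling callers, and internals cannot be poked at. */
struct Counter {
    int value;
};

Counter *counterCreate(void) { return (Counter *)calloc(1, sizeof(Counter)); }
void counterAdd(Counter *c, int v) { c->value += v; }
int counterGet(const Counter *c) { return c->value; }
void counterDestroy(Counter *c) { free(c); }
```

The cost of hiding the layout is that callers can no longer embed or stack-allocate the struct, which is why this diff must also move macros like `GET_HASH_NODE_DATA` into `hash.c`.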
typedef struct SHashNode { struct SHashNode *next; uint32_t hashVal; // the hash value of key uint32_t dataLen; // length of data uint32_t keyLen; // length of the key int8_t removed; // flag to indicate removed - int32_t count; // reference count + int32_t refCount; // reference count char data[]; } SHashNode; - -#define GET_HASH_NODE_KEY(_n) ((char*)(_n) + sizeof(SHashNode) + (_n)->dataLen) -#define GET_HASH_NODE_DATA(_n) ((char*)(_n) + sizeof(SHashNode)) + #define GET_HASH_PNODE(_n) ((SHashNode *)((char*)(_n) - sizeof(SHashNode))) typedef enum SHashLockTypeE { @@ -49,41 +43,24 @@ typedef enum SHashLockTypeE { HASH_ENTRY_LOCK = 1, } SHashLockTypeE; -typedef struct SHashEntry { - int32_t num; // number of elements in current entry - SRWLatch latch; // entry latch - SHashNode *next; -} SHashEntry; - -typedef struct SHashObj { - SHashEntry **hashList; - size_t capacity; // number of slots - size_t size; // number of elements in hash table - _hash_fn_t hashFp; // hash function - _hash_free_fn_t freeFp; // hash node free callback function - _equal_fn_t equalFp; // equal function - - SRWLatch lock; // read-write spin lock - SHashLockTypeE type; // lock type - bool enableUpdate; // enable update - SArray *pMemBlock; // memory block allocated for SHashEntry -} SHashObj; +typedef struct SHashObj SHashObj; /** - * init the hash table + * initialize a hash table * - * @param capacity initial capacity of the hash table - * @param fn hash function to generate the hash value - * @param threadsafe thread safe or not - * @return + * @param capacity initial capacity of the hash table + * @param fn hash function + * @param update whether the hash table allows in place update + * @param type whether the hash table has per entry lock + * @return hash table object */ SHashObj *taosHashInit(size_t capacity, _hash_fn_t fn, bool update, SHashLockTypeE type); - /** - * set equal func of the hash table - * @param pHashObj - * @param equalFp + * set equal func of the hash table + * + * 
@param pHashObj + * @param equalFp + * @return */ void taosHashSetEqualFp(SHashObj *pHashObj, _equal_fn_t fp); @@ -92,6 +69,7 @@ void taosHashSetFreeFp(SHashObj *pHashObj, _hash_free_fn_t fp); /** * return the size of hash table + * * @param pHashObj * @return */ @@ -99,73 +77,114 @@ int32_t taosHashGetSize(const SHashObj *pHashObj); /** * put element into hash table, if the element with the same key exists, update it - * @param pHashObj - * @param key - * @param keyLen - * @param data - * @param size - * @return + * + * @param pHashObj hash table object + * @param key key + * @param keyLen length of key + * @param data data + * @param size size of data + * @return 0 if success, -1 otherwise */ int32_t taosHashPut(SHashObj *pHashObj, const void *key, size_t keyLen, void *data, size_t size); /** * return the payload data with the specified key * - * @param pHashObj - * @param key - * @param keyLen - * @return + * @param pHashObj hash table object + * @param key key + * @param keyLen length of key + * @return pointer to data */ void *taosHashGet(SHashObj *pHashObj, const void *key, size_t keyLen); /** - * apply the udf before return the result - * @param pHashObj - * @param key - * @param keyLen - * @param fp - * @param d - * @return + * Get the data associated with "key". Note that the caller needs to make sure + * "d" has enough capacity to accommodate the data. + * + * @param pHashObj hash table object + * @param key key + * @param keyLen length of key + * @param fp function to be called on hash node when the data is found + * @param d buffer + * @return pointer to data */ void* taosHashGetClone(SHashObj *pHashObj, const void *key, size_t keyLen, void (*fp)(void *), void* d); /** - * @param pHashObj - * @param key - * @param keyLen - * @param fp - * @param d - * @param sz - * @return + * Get the data associated with "key". Note that the caller needs to take ownership + * of the data "d" and make sure it is deallocated.
+ * + * @param pHashObj hash table object + * @param key key + * @param keyLen length of key + * @param fp function to be called on hash node when the data is found + * @param d buffer + * @param sz size of the data buffer + * @return pointer to data */ void* taosHashGetCloneExt(SHashObj *pHashObj, const void *key, size_t keyLen, void (*fp)(void *), void** d, size_t *sz); + /** * remove item with the specified key - * @param pHashObj - * @param key - * @param keyLen + * + * @param pHashObj hash table object + * @param key key + * @param keyLen length of key + * @return 0 if success, -1 otherwise */ int32_t taosHashRemove(SHashObj *pHashObj, const void *key, size_t keyLen); +/** + * remove item with the specified key + * + * @param pHashObj hash table object + * @param key key + * @param keyLen length of key + * @param data buffer for data + * @param dsize size of data buffer + * @return 0 if success, -1 otherwise + */ int32_t taosHashRemoveWithData(SHashObj *pHashObj, const void *key, size_t keyLen, void* data, size_t dsize); -int32_t taosHashCondTraverse(SHashObj *pHashObj, bool (*fp)(void *, void *), void *param); +/** + * traverse through all objects in the hash table and apply "fp" on each node. + * If "fp" returns false when applied to a node, the node will also be + * removed from the table.
+ * + * @param pHashObj hash table object + * @param fp function pointer applied on each node + * @param param parameter fed into "fp" + */ +void taosHashCondTraverse(SHashObj *pHashObj, bool (*fp)(void *, void *), void *param); +/** + * clear the contents of the hash table + * + * @param pHashObj hash table object + */ void taosHashClear(SHashObj *pHashObj); /** * clean up hash table - * @param handle + * + * @param pHashObj hash table object */ void taosHashCleanup(SHashObj *pHashObj); /** + * return the maximum overflow link (collision chain) length in the hash table * - * @param pHashObj - * @return + * @param pHashObj hash table object + * @return maximum overflow link length */ -int32_t taosHashGetMaxOverflowLinkLength(const SHashObj *pHashObj); +int32_t taosHashGetMaxOverflowLinkLength(SHashObj *pHashObj); +/** + * return the consumed memory of the hash table + * + * @param pHashObj hash table object + * @return consumed memory of the hash table + */ size_t taosHashGetMemSize(const SHashObj *pHashObj); void *taosHashIterate(SHashObj *pHashObj, void *p); diff --git a/src/util/inc/hashfunc.h b/src/util/inc/hashfunc.h index a9563d03941496a42e71a91fc7b7aab014ff4f50..529188849846088106f6576242ef76e2894ccb1b 100644 --- a/src/util/inc/hashfunc.h +++ b/src/util/inc/hashfunc.h @@ -22,6 +22,8 @@ typedef uint32_t (*_hash_fn_t)(const char *, uint32_t); typedef int32_t (*_equal_fn_t)(const void *a, const void *b, size_t sz); +typedef void (*_hash_free_fn_t)(void *param); + /** * murmur hash algorithm * @key usually string diff --git a/src/util/src/hash.c b/src/util/src/hash.c index fa33d436d4a7cea62e235c683f76ce1b9dc4d5b8..e2fd37fdc41479743d21e43f451c4fc4270b01d8 100644 --- a/src/util/src/hash.c +++ b/src/util/src/hash.c @@ -18,53 +18,117 @@ #include "tulog.h" #include "taosdef.h" -#define EXT_SIZE 1024 +/* + * Macro definition + */ -#define HASH_NEED_RESIZE(_h) ((_h)->size >= (_h)->capacity * HASH_DEFAULT_LOAD_FACTOR) +#define HASH_MAX_CAPACITY (1024 * 1024 * 16) +#define
HASH_DEFAULT_LOAD_FACTOR (0.75) +#define HASH_INDEX(v, c) ((v) & ((c)-1)) -#define DO_FREE_HASH_NODE(_n) \ - do { \ - tfree(_n); \ - } while (0) +#define HASH_NEED_RESIZE(_h) ((_h)->size >= (_h)->capacity * HASH_DEFAULT_LOAD_FACTOR) -#define FREE_HASH_NODE(_h, _n) \ +#define FREE_HASH_NODE(_n) \ do { \ - if ((_h)->freeFp) { \ - (_h)->freeFp(GET_HASH_NODE_DATA(_n)); \ - } \ - \ - DO_FREE_HASH_NODE(_n); \ + tfree(_n); \ } while (0); -static FORCE_INLINE void __wr_lock(void *lock, int32_t type) { - if (type == HASH_NO_LOCK) { +#define GET_HASH_NODE_KEY(_n) ((char*)(_n) + sizeof(SHashNode) + (_n)->dataLen) +#define GET_HASH_NODE_DATA(_n) ((char*)(_n) + sizeof(SHashNode)) +#define GET_HASH_PNODE(_n) ((SHashNode *)((char*)(_n) - sizeof(SHashNode))) + +/* + * typedef + */ + +typedef struct SHashEntry { + int32_t num; // number of elements in current entry + SRWLatch latch; // entry latch + SHashNode *next; +} SHashEntry; + +typedef struct SHashObj { + SHashEntry **hashList; + size_t capacity; // number of slots + size_t size; // number of elements in hash table + _hash_fn_t hashFp; // hash function + _equal_fn_t equalFp; // equal function + _hash_free_fn_t freeFp; // hash node free callback function + SRWLatch lock; // read-write spin lock + SHashLockTypeE type; // lock type + bool enableUpdate; // enable update + SArray *pMemBlock; // memory block allocated for SHashEntry +} SHashObj; + +/* + * Function definition + */ + +static FORCE_INLINE void taosHashWLock(SHashObj *pHashObj) { + if (pHashObj->type == HASH_NO_LOCK) { + return; + } + taosWLockLatch(&pHashObj->lock); +} + +static FORCE_INLINE void taosHashWUnlock(SHashObj *pHashObj) { + if (pHashObj->type == HASH_NO_LOCK) { + return; + } + + taosWUnLockLatch(&pHashObj->lock); +} + +static FORCE_INLINE void taosHashRLock(SHashObj *pHashObj) { + if (pHashObj->type == HASH_NO_LOCK) { + return; + } + + taosRLockLatch(&pHashObj->lock); +} + +static FORCE_INLINE void taosHashRUnlock(SHashObj *pHashObj) { + if (pHashObj->type 
== HASH_NO_LOCK) { + return; + } + + taosRUnLockLatch(&pHashObj->lock); +} + + +static FORCE_INLINE void +taosHashEntryWLock(const SHashObj *pHashObj, SHashEntry* pe) { + if (pHashObj->type == HASH_NO_LOCK) { return; } - taosWLockLatch(lock); + taosWLockLatch(&pe->latch); } -static FORCE_INLINE void __rd_lock(void *lock, int32_t type) { - if (type == HASH_NO_LOCK) { +static FORCE_INLINE void +taosHashEntryWUnlock(const SHashObj *pHashObj, SHashEntry* pe) { + if (pHashObj->type == HASH_NO_LOCK) { return; } - taosRLockLatch(lock); + taosWUnLockLatch(&pe->latch); } -static FORCE_INLINE void __rd_unlock(void *lock, int32_t type) { - if (type == HASH_NO_LOCK) { +static FORCE_INLINE void +taosHashEntryRLock(const SHashObj *pHashObj, SHashEntry* pe) { + if (pHashObj->type == HASH_NO_LOCK) { return; } - taosRUnLockLatch(lock); + taosRLockLatch(&pe->latch); } -static FORCE_INLINE void __wr_unlock(void *lock, int32_t type) { - if (type == HASH_NO_LOCK) { +static FORCE_INLINE void +taosHashEntryRUnlock(const SHashObj *pHashObj, SHashEntry* pe) { + if (pHashObj->type == HASH_NO_LOCK) { return; } - taosWUnLockLatch(lock); + taosRUnLockLatch(&pe->latch); } static FORCE_INLINE int32_t taosHashCapacity(int32_t length) { @@ -75,10 +139,13 @@ static FORCE_INLINE int32_t taosHashCapacity(int32_t length) { return i; } -static FORCE_INLINE SHashNode *doSearchInEntryList(SHashObj *pHashObj, SHashEntry *pe, const void *key, size_t keyLen, uint32_t hashVal) { +static FORCE_INLINE SHashNode * +doSearchInEntryList(SHashObj *pHashObj, SHashEntry *pe, const void *key, size_t keyLen, uint32_t hashVal) { SHashNode *pNode = pe->next; while (pNode) { - if ((pNode->keyLen == keyLen) && ((*(pHashObj->equalFp))(GET_HASH_NODE_KEY(pNode), key, keyLen) == 0) && pNode->removed == 0) { + if ((pNode->keyLen == keyLen) && + ((*(pHashObj->equalFp))(GET_HASH_NODE_KEY(pNode), key, keyLen) == 0) && + pNode->removed == 0) { assert(pNode->hashVal == hashVal); break; } @@ -90,59 +157,57 @@ static FORCE_INLINE 
SHashNode *doSearchInEntryList(SHashObj *pHashObj, SHashEntr } /** - * Resize the hash list if the threshold is reached + * resize the hash list if the threshold is reached * * @param pHashObj */ static void taosHashTableResize(SHashObj *pHashObj); /** + * allocate and initialize a hash node + * * @param key key of object for hash, usually a null-terminated string * @param keyLen length of key - * @param pData actually data. Requires a consecutive memory block, no pointer is allowed in pData. - * Pointer copy causes memory access error. + * @param pData data to be stored in hash node * @param dsize size of data * @return SHashNode */ static SHashNode *doCreateHashNode(const void *key, size_t keyLen, const void *pData, size_t dsize, uint32_t hashVal); /** - * Update the hash node + * update the hash node * - * @param pNode hash node - * @param key key for generate hash value - * @param keyLen key length - * @param pData actual data - * @param dsize size of actual data - * @return hash node + * @param pHashObj hash table object + * @param pe hash table entry to operate on + * @param prev previous node + * @param pNode the old node with requested key + * @param pNewNode the new node with requested key */ -static FORCE_INLINE SHashNode *doUpdateHashNode(SHashObj *pHashObj, SHashEntry* pe, SHashNode* prev, SHashNode *pNode, SHashNode *pNewNode) { +static FORCE_INLINE void doUpdateHashNode(SHashObj *pHashObj, SHashEntry* pe, SHashNode* prev, SHashNode *pNode, SHashNode *pNewNode) { assert(pNode->keyLen == pNewNode->keyLen); - atomic_sub_fetch_32(&pNode->count, 1); + atomic_sub_fetch_32(&pNode->refCount, 1); if (prev != NULL) { prev->next = pNewNode; } else { pe->next = pNewNode; } - if (pNode->count <= 0) { + if (pNode->refCount <= 0) { pNewNode->next = pNode->next; - DO_FREE_HASH_NODE(pNode); + FREE_HASH_NODE(pNode); } else { - pNewNode->next = pNode; + pNewNode->next = pNode; pe->num++; atomic_add_fetch_64(&pHashObj->size, 1); } - - return pNewNode; } /** * insert the 
hash node at the front of the linked list * - * @param pHashObj - * @param pNode + * @param pHashObj hash table object + * @param pNode the old node with requested key */ static void pushfrontNodeInEntryList(SHashEntry *pEntry, SHashNode *pNode); @@ -155,13 +220,21 @@ static void pushfrontNodeInEntryList(SHashEntry *pEntry, SHashNode *pNode); static FORCE_INLINE bool taosHashTableEmpty(const SHashObj *pHashObj); /** - * Get the next element in hash table for iterator - * @param pIter - * @return + * initialize a hash table + * + * @param capacity initial capacity of the hash table + * @param fn hash function + * @param update whether the hash table allows in place update + * @param type whether the hash table has per entry lock + * @return hash table object */ - SHashObj *taosHashInit(size_t capacity, _hash_fn_t fn, bool update, SHashLockTypeE type) { - assert(fn != NULL); + if (fn == NULL) { + uError("hash table must have a valid hash function"); + assert(0); + return NULL; + } + if (capacity == 0) { capacity = 4; } @@ -174,28 +247,43 @@ SHashObj *taosHashInit(size_t capacity, _hash_fn_t fn, bool update, SHashLockTyp // the max slots is not defined by user pHashObj->capacity = taosHashCapacity((int32_t)capacity); - assert((pHashObj->capacity & (pHashObj->capacity - 1)) == 0); pHashObj->equalFp = memcmp; pHashObj->hashFp = fn; pHashObj->type = type; pHashObj->enableUpdate = update; + assert((pHashObj->capacity & (pHashObj->capacity - 1)) == 0); + pHashObj->hashList = (SHashEntry **)calloc(pHashObj->capacity, sizeof(void *)); if (pHashObj->hashList == NULL) { free(pHashObj); uError("failed to allocate memory, reason:%s", strerror(errno)); return NULL; - } else { - pHashObj->pMemBlock = taosArrayInit(8, sizeof(void *)); + } - void *p = calloc(pHashObj->capacity, sizeof(SHashEntry)); - for (int32_t i = 0; i < pHashObj->capacity; ++i) { - pHashObj->hashList[i] = (void *)((char *)p + i * sizeof(SHashEntry)); - } + pHashObj->pMemBlock = taosArrayInit(8, sizeof(void *)); 
+ if (pHashObj->pMemBlock == NULL) { + free(pHashObj->hashList); + free(pHashObj); + uError("failed to allocate memory, reason:%s", strerror(errno)); + return NULL; + } - taosArrayPush(pHashObj->pMemBlock, &p); + void *p = calloc(pHashObj->capacity, sizeof(SHashEntry)); + if (p == NULL) { + taosArrayDestroy(&pHashObj->pMemBlock); + free(pHashObj->hashList); + free(pHashObj); + uError("failed to allocate memory, reason:%s", strerror(errno)); + return NULL; + } + + for (int32_t i = 0; i < pHashObj->capacity; ++i) { + pHashObj->hashList[i] = (void *)((char *)p + i * sizeof(SHashEntry)); } + taosArrayPush(pHashObj->pMemBlock, &p); + return pHashObj; } @@ -212,7 +300,7 @@ void taosHashSetFreeFp(SHashObj *pHashObj, _hash_free_fn_t fp) { } int32_t taosHashGetSize(const SHashObj *pHashObj) { - if (!pHashObj) { + if (pHashObj == NULL) { return 0; } return (int32_t)atomic_load_64(&pHashObj->size); @@ -223,6 +311,10 @@ static FORCE_INLINE bool taosHashTableEmpty(const SHashObj *pHashObj) { } int32_t taosHashPut(SHashObj *pHashObj, const void *key, size_t keyLen, void *data, size_t size) { + if (pHashObj == NULL || key == NULL || keyLen == 0 || data == NULL || size == 0) { + return -1; + } + uint32_t hashVal = (*pHashObj->hashFp)(key, (uint32_t)keyLen); SHashNode *pNewNode = doCreateHashNode(key, keyLen, data, size, hashVal); if (pNewNode == NULL) { @@ -231,19 +323,17 @@ int32_t taosHashPut(SHashObj *pHashObj, const void *key, size_t keyLen, void *da // need the resize process, write lock applied if (HASH_NEED_RESIZE(pHashObj)) { - __wr_lock(&pHashObj->lock, pHashObj->type); + taosHashWLock(pHashObj); taosHashTableResize(pHashObj); - __wr_unlock(&pHashObj->lock, pHashObj->type); + taosHashWUnlock(pHashObj); } - __rd_lock(&pHashObj->lock, pHashObj->type); + taosHashRLock(pHashObj); int32_t slot = HASH_INDEX(hashVal, pHashObj->capacity); SHashEntry *pe = pHashObj->hashList[slot]; - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosWLockLatch(&pe->latch); - } + 
taosHashEntryWLock(pHashObj, pe); SHashNode *pNode = pe->next; if (pe->num > 0) { @@ -254,7 +344,9 @@ int32_t taosHashPut(SHashObj *pHashObj, const void *key, size_t keyLen, void *da SHashNode* prev = NULL; while (pNode) { - if ((pNode->keyLen == keyLen) && ((*(pHashObj->equalFp))(GET_HASH_NODE_KEY(pNode), key, keyLen) == 0) && pNode->removed == 0) { + if ((pNode->keyLen == keyLen) && + (*(pHashObj->equalFp))(GET_HASH_NODE_KEY(pNode), key, keyLen) == 0 && + pNode->removed == 0) { assert(pNode->hashVal == hashVal); break; } @@ -267,18 +359,12 @@ int32_t taosHashPut(SHashObj *pHashObj, const void *key, size_t keyLen, void *da // no data in hash table with the specified key, add it into hash table pushfrontNodeInEntryList(pe, pNewNode); - if (pe->num == 0) { - assert(pe->next == NULL); - } else { - assert(pe->next != NULL); - } + assert(pe->next != NULL); - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosWUnLockLatch(&pe->latch); - } + taosHashEntryWUnlock(pHashObj, pe); // enable resize - __rd_unlock(&pHashObj->lock, pHashObj->type); + taosHashRUnlock(pHashObj); atomic_add_fetch_64(&pHashObj->size, 1); return 0; @@ -287,15 +373,13 @@ int32_t taosHashPut(SHashObj *pHashObj, const void *key, size_t keyLen, void *da if (pHashObj->enableUpdate) { doUpdateHashNode(pHashObj, pe, prev, pNode, pNewNode); } else { - DO_FREE_HASH_NODE(pNewNode); + FREE_HASH_NODE(pNewNode); } - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosWUnLockLatch(&pe->latch); - } + taosHashEntryWUnlock(pHashObj, pe); // enable resize - __rd_unlock(&pHashObj->lock, pHashObj->type); + taosHashRUnlock(pHashObj); return pHashObj->enableUpdate ? 
0 : -1; } @@ -306,30 +390,27 @@ void *taosHashGet(SHashObj *pHashObj, const void *key, size_t keyLen) { } //TODO(yihaoDeng), merge with taosHashGetClone void* taosHashGetCloneExt(SHashObj *pHashObj, const void *key, size_t keyLen, void (*fp)(void *), void** d, size_t *sz) { - if (taosHashTableEmpty(pHashObj) || keyLen == 0 || key == NULL) { + if (pHashObj == NULL || taosHashTableEmpty(pHashObj) || keyLen == 0 || key == NULL) { return NULL; } uint32_t hashVal = (*pHashObj->hashFp)(key, (uint32_t)keyLen); // only add the read lock to disable the resize process - __rd_lock(&pHashObj->lock, pHashObj->type); + taosHashRLock(pHashObj); int32_t slot = HASH_INDEX(hashVal, pHashObj->capacity); SHashEntry *pe = pHashObj->hashList[slot]; // no data, return directly if (atomic_load_32(&pe->num) == 0) { - __rd_unlock(&pHashObj->lock, pHashObj->type); + taosHashRUnlock(pHashObj); return NULL; } char *data = NULL; - // lock entry - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosRLockLatch(&pe->latch); - } + taosHashEntryRLock(pHashObj, pe); if (pe->num > 0) { assert(pe->next != NULL); @@ -342,56 +423,47 @@ void* taosHashGetCloneExt(SHashObj *pHashObj, const void *key, size_t keyLen, vo if (fp != NULL) { fp(GET_HASH_NODE_DATA(pNode)); } - + if (*d == NULL) { - *sz = pNode->dataLen + EXT_SIZE; - *d = calloc(1, *sz); + *sz = pNode->dataLen; + *d = calloc(1, *sz); } else if (*sz < pNode->dataLen){ - *sz = pNode->dataLen + EXT_SIZE; - *d = realloc(*d, *sz); + *sz = pNode->dataLen; + *d = realloc(*d, *sz); } memcpy((char *)(*d), GET_HASH_NODE_DATA(pNode), pNode->dataLen); - // just make runtime happy - if ((*sz) - pNode->dataLen > 0) { - memset((char *)(*d) + pNode->dataLen, 0, (*sz) - pNode->dataLen); - } data = GET_HASH_NODE_DATA(pNode); } - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosRUnLockLatch(&pe->latch); - } + taosHashEntryRUnlock(pHashObj, pe); + taosHashRUnlock(pHashObj); - __rd_unlock(&pHashObj->lock, pHashObj->type); return data; } void* taosHashGetClone(SHashObj 
*pHashObj, const void *key, size_t keyLen, void (*fp)(void *), void* d) { - if (taosHashTableEmpty(pHashObj) || keyLen == 0 || key == NULL) { + if (pHashObj == NULL || taosHashTableEmpty(pHashObj) || keyLen == 0 || key == NULL) { return NULL; } uint32_t hashVal = (*pHashObj->hashFp)(key, (uint32_t)keyLen); // only add the read lock to disable the resize process - __rd_lock(&pHashObj->lock, pHashObj->type); + taosHashRLock(pHashObj); int32_t slot = HASH_INDEX(hashVal, pHashObj->capacity); SHashEntry *pe = pHashObj->hashList[slot]; // no data, return directly if (atomic_load_32(&pe->num) == 0) { - __rd_unlock(&pHashObj->lock, pHashObj->type); + taosHashRUnlock(pHashObj); return NULL; } char *data = NULL; - // lock entry - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosRLockLatch(&pe->latch); - } + taosHashEntryRLock(pHashObj, pe); if (pe->num > 0) { assert(pe->next != NULL); @@ -412,11 +484,9 @@ void* taosHashGetClone(SHashObj *pHashObj, const void *key, size_t keyLen, void data = GET_HASH_NODE_DATA(pNode); } - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosRUnLockLatch(&pe->latch); - } + taosHashEntryRUnlock(pHashObj, pe); + taosHashRUnlock(pHashObj); - __rd_unlock(&pHashObj->lock, pHashObj->type); return data; } @@ -425,28 +495,26 @@ int32_t taosHashRemove(SHashObj *pHashObj, const void *key, size_t keyLen) { } int32_t taosHashRemoveWithData(SHashObj *pHashObj, const void *key, size_t keyLen, void *data, size_t dsize) { - if (pHashObj == NULL || taosHashTableEmpty(pHashObj)) { + if (pHashObj == NULL || taosHashTableEmpty(pHashObj) || key == NULL || keyLen == 0) { return -1; } uint32_t hashVal = (*pHashObj->hashFp)(key, (uint32_t)keyLen); // disable the resize process - __rd_lock(&pHashObj->lock, pHashObj->type); + taosHashRLock(pHashObj); int32_t slot = HASH_INDEX(hashVal, pHashObj->capacity); SHashEntry *pe = pHashObj->hashList[slot]; - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosWLockLatch(&pe->latch); - } + taosHashEntryWLock(pHashObj, pe); // double check 
after locked if (pe->num == 0) { assert(pe->next == NULL); - taosWUnLockLatch(&pe->latch); - __rd_unlock(&pHashObj->lock, pHashObj->type); + taosHashEntryWUnlock(pHashObj, pe); + taosHashRUnlock(pHashObj); return -1; } @@ -455,49 +523,46 @@ int32_t taosHashRemoveWithData(SHashObj *pHashObj, const void *key, size_t keyLe SHashNode *prevNode = NULL; while (pNode) { - if ((pNode->keyLen == keyLen) && ((*(pHashObj->equalFp))(GET_HASH_NODE_KEY(pNode), key, keyLen) == 0) && pNode->removed == 0) - break; - - prevNode = pNode; - pNode = pNode->next; - } + if ((pNode->keyLen == keyLen) && + ((*(pHashObj->equalFp))(GET_HASH_NODE_KEY(pNode), key, keyLen) == 0) && + pNode->removed == 0) { + code = 0; // it is found + + atomic_sub_fetch_32(&pNode->refCount, 1); + pNode->removed = 1; + if (pNode->refCount <= 0) { + if (prevNode == NULL) { + pe->next = pNode->next; + } else { + prevNode->next = pNode->next; + } - if (pNode) { - code = 0; // it is found + if (data) memcpy(data, GET_HASH_NODE_DATA(pNode), dsize); - atomic_sub_fetch_32(&pNode->count, 1); - pNode->removed = 1; - if (pNode->count <= 0) { - if (prevNode) { - prevNode->next = pNode->next; - } else { - pe->next = pNode->next; + pe->num--; + atomic_sub_fetch_64(&pHashObj->size, 1); + FREE_HASH_NODE(pNode); } - - if (data) memcpy(data, GET_HASH_NODE_DATA(pNode), dsize); - - pe->num--; - atomic_sub_fetch_64(&pHashObj->size, 1); - FREE_HASH_NODE(pHashObj, pNode); + } else { + prevNode = pNode; + pNode = pNode->next; } - } - - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosWUnLockLatch(&pe->latch); } - __rd_unlock(&pHashObj->lock, pHashObj->type); + taosHashEntryWUnlock(pHashObj, pe); + + taosHashRUnlock(pHashObj); return code; } -int32_t taosHashCondTraverse(SHashObj *pHashObj, bool (*fp)(void *, void *), void *param) { - if (pHashObj == NULL || taosHashTableEmpty(pHashObj)) { - return 0; +void taosHashCondTraverse(SHashObj *pHashObj, bool (*fp)(void *, void *), void *param) { + if (pHashObj == NULL || 
taosHashTableEmpty(pHashObj) || fp == NULL) { + return; } // disable the resize process - __rd_lock(&pHashObj->lock, pHashObj->type); + taosHashRLock(pHashObj); int32_t numOfEntries = (int32_t)pHashObj->capacity; for (int32_t i = 0; i < numOfEntries; ++i) { @@ -506,63 +571,32 @@ int32_t taosHashCondTraverse(SHashObj *pHashObj, bool (*fp)(void *, void *), voi continue; } - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosWLockLatch(&pEntry->latch); - } - - // todo remove the first node - SHashNode *pNode = NULL; - while((pNode = pEntry->next) != NULL) { - if (fp && (!fp(param, GET_HASH_NODE_DATA(pNode)))) { - pEntry->num -= 1; - atomic_sub_fetch_64(&pHashObj->size, 1); - - pEntry->next = pNode->next; - - if (pEntry->num == 0) { - assert(pEntry->next == NULL); - } else { - assert(pEntry->next != NULL); - } + taosHashEntryWLock(pHashObj, pEntry); - FREE_HASH_NODE(pHashObj, pNode); + SHashNode *pPrevNode = NULL; + SHashNode *pNode = pEntry->next; + while (pNode != NULL) { + if (fp(param, GET_HASH_NODE_DATA(pNode))) { + pPrevNode = pNode; + pNode = pNode->next; } else { - break; - } - } - - // handle the following node - if (pNode != NULL) { - assert(pNode == pEntry->next); - SHashNode *pNext = NULL; - - while ((pNext = pNode->next) != NULL) { - // not qualified, remove it - if (fp && (!fp(param, GET_HASH_NODE_DATA(pNext)))) { - pNode->next = pNext->next; - pEntry->num -= 1; - atomic_sub_fetch_64(&pHashObj->size, 1); - - if (pEntry->num == 0) { - assert(pEntry->next == NULL); - } else { - assert(pEntry->next != NULL); - } - - FREE_HASH_NODE(pHashObj, pNext); + if (pPrevNode == NULL) { + pEntry->next = pNode->next; } else { - pNode = pNext; + pPrevNode->next = pNode->next; } + pEntry->num -= 1; + atomic_sub_fetch_64(&pHashObj->size, 1); + SHashNode *next = pNode->next; + FREE_HASH_NODE(pNode); + pNode = next; } } - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosWUnLockLatch(&pEntry->latch); - } + taosHashEntryWUnlock(pHashObj, pEntry); } - __rd_unlock(&pHashObj->lock, 
pHashObj->type); - return 0; + taosHashRUnlock(pHashObj); } void taosHashClear(SHashObj *pHashObj) { @@ -572,12 +606,12 @@ void taosHashClear(SHashObj *pHashObj) { SHashNode *pNode, *pNext; - __wr_lock(&pHashObj->lock, pHashObj->type); + taosHashWLock(pHashObj); for (int32_t i = 0; i < pHashObj->capacity; ++i) { SHashEntry *pEntry = pHashObj->hashList[i]; if (pEntry->num == 0) { - assert(pEntry->next == 0); + assert(pEntry->next == NULL); continue; } @@ -586,7 +620,7 @@ void taosHashClear(SHashObj *pHashObj) { while (pNode) { pNext = pNode->next; - FREE_HASH_NODE(pHashObj, pNode); + FREE_HASH_NODE(pNode); pNode = pNext; } @@ -596,7 +630,7 @@ void taosHashClear(SHashObj *pHashObj) { } pHashObj->size = 0; - __wr_unlock(&pHashObj->lock, pHashObj->type); + taosHashWUnlock(pHashObj); } // the input paras should be SHashObj **, so the origin input will be set by tfree(*pHashObj) @@ -616,25 +650,28 @@ void taosHashCleanup(SHashObj *pHashObj) { } taosArrayDestroy(&pHashObj->pMemBlock); - - memset(pHashObj, 0, sizeof(SHashObj)); - tfree(pHashObj); + free(pHashObj); } // for profile only -int32_t taosHashGetMaxOverflowLinkLength(const SHashObj *pHashObj) { +int32_t taosHashGetMaxOverflowLinkLength(SHashObj *pHashObj) { if (pHashObj == NULL || taosHashTableEmpty(pHashObj)) { return 0; } int32_t num = 0; + taosHashRLock(pHashObj); for (int32_t i = 0; i < pHashObj->size; ++i) { SHashEntry *pEntry = pHashObj->hashList[i]; + + // fine grain per entry lock is not held since this is used + // for profiling only and doesn't need an accurate count. 
if (num < pEntry->num) { num = pEntry->num; } } + taosHashRUnlock(pHashObj); return num; } @@ -644,27 +681,23 @@ void taosHashTableResize(SHashObj *pHashObj) { return; } - // double the original capacity - SHashNode *pNode = NULL; - SHashNode *pNext = NULL; - - int32_t newSize = (int32_t)(pHashObj->capacity << 1u); - if (newSize > HASH_MAX_CAPACITY) { - // uDebug("current capacity:%d, maximum capacity:%d, no resize applied due to limitation is reached", - // pHashObj->capacity, HASH_MAX_CAPACITY); + int32_t newCapacity = (int32_t)(pHashObj->capacity << 1u); + if (newCapacity > HASH_MAX_CAPACITY) { + uDebug("current capacity:%zu, maximum capacity:%d, no resize applied due to limitation is reached", + pHashObj->capacity, HASH_MAX_CAPACITY); return; } int64_t st = taosGetTimestampUs(); - void *pNewEntryList = realloc(pHashObj->hashList, sizeof(void *) * newSize); - if (pNewEntryList == NULL) { // todo handle error - // uDebug("cache resize failed due to out of memory, capacity remain:%d", pHashObj->capacity); + void *pNewEntryList = realloc(pHashObj->hashList, sizeof(void *) * newCapacity); + if (pNewEntryList == NULL) { + uDebug("cache resize failed due to out of memory, capacity remain:%zu", pHashObj->capacity); return; } pHashObj->hashList = pNewEntryList; - size_t inc = newSize - pHashObj->capacity; + size_t inc = newCapacity - pHashObj->capacity; void * p = calloc(inc, sizeof(SHashEntry)); for (int32_t i = 0; i < inc; ++i) { @@ -673,78 +706,46 @@ void taosHashTableResize(SHashObj *pHashObj) { taosArrayPush(pHashObj->pMemBlock, &p); - pHashObj->capacity = newSize; - for (int32_t i = 0; i < pHashObj->capacity; ++i) { - SHashEntry *pe = pHashObj->hashList[i]; - - if (pe->num == 0) { - assert(pe->next == NULL); - } else { - assert(pe->next != NULL); - } + pHashObj->capacity = newCapacity; + for (int32_t idx = 0; idx < pHashObj->capacity; ++idx) { + SHashEntry *pe = pHashObj->hashList[idx]; + SHashNode *pNode; + SHashNode *pNext; + SHashNode *pPrev = NULL; if (pe->num 
== 0) { assert(pe->next == NULL); continue; } - while ((pNode = pe->next) != NULL) { - int32_t j = HASH_INDEX(pNode->hashVal, pHashObj->capacity); - if (j != i) { - pe->num -= 1; - pe->next = pNode->next; - - if (pe->num == 0) { - assert(pe->next == NULL); - } else { - assert(pe->next != NULL); - } - - SHashEntry *pNewEntry = pHashObj->hashList[j]; - pushfrontNodeInEntryList(pNewEntry, pNode); - } else { - break; - } - } - - if (pNode != NULL) { - while ((pNext = pNode->next) != NULL) { - int32_t j = HASH_INDEX(pNext->hashVal, pHashObj->capacity); - if (j != i) { - pe->num -= 1; - - pNode->next = pNext->next; - pNext->next = NULL; + pNode = pe->next; - // added into new slot - SHashEntry *pNewEntry = pHashObj->hashList[j]; - - if (pNewEntry->num == 0) { - assert(pNewEntry->next == NULL); - } else { - assert(pNewEntry->next != NULL); - } + assert(pNode != NULL); - pushfrontNodeInEntryList(pNewEntry, pNext); + while (pNode != NULL) { + int32_t newIdx = HASH_INDEX(pNode->hashVal, pHashObj->capacity); + pNext = pNode->next; + if (newIdx != idx) { + pe->num -= 1; + if (pPrev == NULL) { + pe->next = pNext; } else { - pNode = pNext; + pPrev->next = pNext; } - } - if (pe->num == 0) { - assert(pe->next == NULL); + SHashEntry *pNewEntry = pHashObj->hashList[newIdx]; + pushfrontNodeInEntryList(pNewEntry, pNode); } else { - assert(pe->next != NULL); + pPrev = pNode; } - + pNode = pNext; } - } int64_t et = taosGetTimestampUs(); uDebug("hash table resize completed, new capacity:%d, load factor:%f, elapsed time:%fms", (int32_t)pHashObj->capacity, - ((double)pHashObj->size) / pHashObj->capacity, (et - st) / 1000.0); + ((double)pHashObj->size) / pHashObj->capacity, (et - st) / 1000.0); } SHashNode *doCreateHashNode(const void *key, size_t keyLen, const void *pData, size_t dsize, uint32_t hashVal) { @@ -757,8 +758,8 @@ SHashNode *doCreateHashNode(const void *key, size_t keyLen, const void *pData, s pNewNode->keyLen = (uint32_t)keyLen; pNewNode->hashVal = hashVal; - pNewNode->dataLen 
= (uint32_t) dsize; - pNewNode->count = 1; + pNewNode->dataLen = (uint32_t)dsize; + pNewNode->refCount = 1; pNewNode->removed = 0; pNewNode->next = NULL; @@ -805,40 +806,37 @@ static void *taosHashReleaseNode(SHashObj *pHashObj, void *p, int *slot) { *slot = HASH_INDEX(pOld->hashVal, pHashObj->capacity); SHashEntry *pe = pHashObj->hashList[*slot]; - // lock entry - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosWLockLatch(&pe->latch); - } + taosHashEntryWLock(pHashObj, pe); SHashNode *pNode = pe->next; while (pNode) { - if (pNode == pOld) + if (pNode == pOld) break; prevNode = pNode; pNode = pNode->next; } - if (pNode) { + if (pNode) { pNode = pNode->next; while (pNode) { if (pNode->removed == 0) break; pNode = pNode->next; } - atomic_sub_fetch_32(&pOld->count, 1); - if (pOld->count <=0) { + atomic_sub_fetch_32(&pOld->refCount, 1); + if (pOld->refCount <=0) { if (prevNode) { prevNode->next = pOld->next; } else { pe->next = pOld->next; } - + pe->num--; atomic_sub_fetch_64(&pHashObj->size, 1); - FREE_HASH_NODE(pHashObj, pOld); - } + FREE_HASH_NODE(pOld); + } } else { uError("pNode:%p data:%p is not there!!!", pNode, p); } @@ -847,22 +845,20 @@ static void *taosHashReleaseNode(SHashObj *pHashObj, void *p, int *slot) { } void *taosHashIterate(SHashObj *pHashObj, void *p) { - if (pHashObj == NULL) return NULL; + if (pHashObj == NULL) return NULL; int slot = 0; char *data = NULL; // only add the read lock to disable the resize process - __rd_lock(&pHashObj->lock, pHashObj->type); + taosHashRLock(pHashObj); SHashNode *pNode = NULL; if (p) { pNode = taosHashReleaseNode(pHashObj, p, &slot); if (pNode == NULL) { SHashEntry *pe = pHashObj->hashList[slot]; - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosWUnLockLatch(&pe->latch); - } + taosHashEntryWUnlock(pHashObj, pe); slot = slot + 1; } @@ -872,10 +868,7 @@ void *taosHashIterate(SHashObj *pHashObj, void *p) { for (; slot < pHashObj->capacity; ++slot) { SHashEntry *pe = pHashObj->hashList[slot]; - // lock entry - if 
(pHashObj->type == HASH_ENTRY_LOCK) { - taosWLockLatch(&pe->latch); - } + taosHashEntryWLock(pHashObj, pe); pNode = pe->next; while (pNode) { @@ -885,22 +878,18 @@ void *taosHashIterate(SHashObj *pHashObj, void *p) { if (pNode) break; - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosWUnLockLatch(&pe->latch); - } + taosHashEntryWUnlock(pHashObj, pe); } } if (pNode) { SHashEntry *pe = pHashObj->hashList[slot]; - atomic_add_fetch_32(&pNode->count, 1); + atomic_add_fetch_32(&pNode->refCount, 1); data = GET_HASH_NODE_DATA(pNode); - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosWUnLockLatch(&pe->latch); - } + taosHashEntryWUnlock(pHashObj, pe); } - __rd_unlock(&pHashObj->lock, pHashObj->type); + taosHashRUnlock(pHashObj); return data; } @@ -909,15 +898,13 @@ void taosHashCancelIterate(SHashObj *pHashObj, void *p) { if (pHashObj == NULL || p == NULL) return; // only add the read lock to disable the resize process - __rd_lock(&pHashObj->lock, pHashObj->type); + taosHashRLock(pHashObj); int slot; taosHashReleaseNode(pHashObj, p, &slot); SHashEntry *pe = pHashObj->hashList[slot]; - if (pHashObj->type == HASH_ENTRY_LOCK) { - taosWUnLockLatch(&pe->latch); - } - __rd_unlock(&pHashObj->lock, pHashObj->type); + taosHashEntryWUnlock(pHashObj, pe); + taosHashRUnlock(pHashObj); } diff --git a/src/util/src/terror.c b/src/util/src/terror.c index e78d1d37ee900268be5cdc7c2883b74284c65639..159ae7cd1d336bdf4ff2b1d58598ab4fd164f35a 100644 --- a/src/util/src/terror.c +++ b/src/util/src/terror.c @@ -299,7 +299,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_QRY_NOT_ENOUGH_BUFFER, "Query buffer limit ha TAOS_DEFINE_ERROR(TSDB_CODE_QRY_INCONSISTAN, "File inconsistance in replica") TAOS_DEFINE_ERROR(TSDB_CODE_QRY_INVALID_TIME_CONDITION, "One valid time range condition expected") TAOS_DEFINE_ERROR(TSDB_CODE_QRY_SYS_ERROR, "System error") -TAOS_DEFINE_ERROR(TSDB_CODE_QRY_UNIQUE_RESULT_TOO_LARGE, "Unique result num is too large") +TAOS_DEFINE_ERROR(TSDB_CODE_QRY_RESULT_TOO_LARGE, "result num is too large") // 
grant TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_EXPIRED, "License expired") diff --git a/tests b/tests new file mode 160000 index 0000000000000000000000000000000000000000..1bec55ca9fb8ed129140a0118340ea8c670ef0bf --- /dev/null +++ b/tests @@ -0,0 +1 @@ +Subproject commit 1bec55ca9fb8ed129140a0118340ea8c670ef0bf
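The `count` → `refCount` rename in the hunks above pairs with a deferred-deletion protocol: `taosHashRemoveWithData` and `taosHashReleaseNode` both mark a node `removed`, drop one reference, and only unlink and free it once no iterator still holds it. A minimal single-threaded sketch of that protocol, with a stand-in `Node` type and a plain `int` where the patch uses `atomic_sub_fetch_32`:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for SHashNode: only the fields the
 * refCount/removed protocol needs. */
typedef struct Node {
    struct Node *next;
    int refCount;   /* renamed from `count` in the patch */
    int removed;    /* logically deleted but possibly still referenced */
} Node;

/* Logical removal: mark the node and drop the caller's reference.
 * The node is physically unlinked and freed only when no iterator
 * still holds it, matching taosHashRemoveWithData's structure.
 * Returns 1 when the node was actually freed, 0 otherwise. */
int node_release(Node **head, Node *prev, Node *node) {
    node->removed = 1;
    if (--node->refCount > 0) {
        return 0;                 /* an iterator still points here */
    }
    if (prev) prev->next = node->next;
    else      *head = node->next;
    free(node);
    return 1;                     /* last reference gone, freed */
}
```

In the real table the decrement must stay atomic because a releasing iterator and a remover can race; the sketch only shows the unlink-on-last-reference shape.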
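`taosHashInit` asserts `(pHashObj->capacity & (pHashObj->capacity - 1)) == 0`, which only pays off if `HASH_INDEX` reduces the hash with a mask rather than a modulo. A hedged sketch: `taosHashCapacity`'s body is not part of this diff, so `next_pow2` below is one common way it could round the requested capacity up, and `hash_index` shows the mask that the power-of-two assert makes valid:

```c
#include <assert.h>
#include <stdint.h>

/* Round up to the next power of two (>= 1). Assumed behavior for
 * taosHashCapacity, whose implementation is outside this diff. */
static int32_t next_pow2(int32_t v) {
    if (v <= 1) return 1;
    v--;
    v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
    v |= v >> 8;  v |= v >> 16;
    return v + 1;
}

/* With a power-of-two capacity, `hash % capacity` is equivalent to a
 * bitwise AND with capacity - 1, which is what HASH_INDEX can rely on. */
static int32_t hash_index(uint32_t hashVal, int32_t capacity) {
    return (int32_t)(hashVal & (uint32_t)(capacity - 1));
}
```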
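The rewritten `taosHashTableResize` loop replaces the old two-phase head/tail handling with a single pass that carries a `pPrev` pointer: a node whose index changes under the doubled capacity is unlinked in place and pushed onto the front of its new bucket. Isolated from the locks and entry bookkeeping, the relinking looks like this (stand-in types; `hashVal & (newCapacity - 1)` plays the role of `HASH_INDEX`):

```c
#include <assert.h>
#include <stddef.h>

typedef struct RNode {
    struct RNode *next;
    unsigned hashVal;
} RNode;

/* Redistribute one bucket after the capacity doubles, in a single pass,
 * mirroring the rewritten resize loop in the patch. */
void rehash_bucket(RNode **buckets, int idx, int newCapacity) {
    RNode *prev = NULL, *node = buckets[idx];
    while (node != NULL) {
        RNode *next = node->next;
        int newIdx = (int)(node->hashVal & (unsigned)(newCapacity - 1));
        if (newIdx != idx) {
            /* unlink from the old bucket via the prev pointer */
            if (prev) prev->next = next;
            else      buckets[idx] = next;
            /* push-front onto the new bucket, like pushfrontNodeInEntryList */
            node->next = buckets[newIdx];
            buckets[newIdx] = node;
        } else {
            prev = node;
        }
        node = next;
    }
}
```

Because the capacity stays a power of two, each node either keeps its index or moves to `idx + oldCapacity`, so only the old buckets ever need rescanning.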
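The `taosHashRLock`/`taosHashEntryWLock` wrappers introduced above encode a two-level scheme: mutators such as `taosHashPut` take the table-wide lock in *read* mode (it only has to exclude resize, which takes the write side) and then an exclusive per-entry latch for the actual list surgery. A sketch of that layering with POSIX primitives, under the assumption that the wrappers behave like a rwlock plus a per-entry mutex (the real code uses taos latches, and only when `type == HASH_ENTRY_LOCK`):

```c
#include <assert.h>
#include <pthread.h>

/* Illustrative names, not the TDengine primitives. */
typedef struct {
    pthread_rwlock_t tableLock;   /* write side = resize, read side = everyone else */
    pthread_mutex_t  entryLatch;  /* one per SHashEntry in the real table */
} TwoLevelLock;

int two_level_init(TwoLevelLock *l) {
    if (pthread_rwlock_init(&l->tableLock, NULL) != 0) return -1;
    if (pthread_mutex_init(&l->entryLatch, NULL) != 0) return -1;
    return 0;
}

/* put/remove path: shared table lock (many writers may proceed in
 * parallel on different entries), then the exclusive entry latch. */
void mutate_lock(TwoLevelLock *l) {
    pthread_rwlock_rdlock(&l->tableLock);
    pthread_mutex_lock(&l->entryLatch);
}

void mutate_unlock(TwoLevelLock *l) {
    pthread_mutex_unlock(&l->entryLatch);
    pthread_rwlock_unlock(&l->tableLock);
}

/* resize path: exclusive table lock, no entry latch needed since
 * every reader and mutator is excluded. */
void resize_lock(TwoLevelLock *l)   { pthread_rwlock_wrlock(&l->tableLock); }
void resize_unlock(TwoLevelLock *l) { pthread_rwlock_unlock(&l->tableLock); }
```

The point of the read-side acquisition in `taosHashPut` is throughput: inserts to different buckets never serialize on the table lock, only on their own entry latch.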