diff --git a/README-CN.md b/README-CN.md
index d7192c939780a272acdebc94baf474aeaf0d7a38..f851a906b88a0676abdc39150a2a93ae7fbe7f56 100644
--- a/README-CN.md
+++ b/README-CN.md
@@ -7,6 +7,7 @@
[![TDengine](TDenginelogo.png)](https://www.taosdata.com)
简体中文 | [English](./README.md)
+很多职位正在热招中,请看[这里](https://www.taosdata.com/cn/careers/)
# TDengine 简介
diff --git a/README.md b/README.md
index ab9e0348c8547c43bdbcb4df44a88c53429971e3..d5b6f1fa85b962253fe504fadff78e953d4da598 100644
--- a/README.md
+++ b/README.md
@@ -7,6 +7,7 @@
[![TDengine](TDenginelogo.png)](https://www.taosdata.com)
English | [简体中文](./README-CN.md)
+We are hiring! Check [here](https://www.taosdata.com/en/careers/) for open positions.
# What is TDengine?
diff --git a/cmake/define.inc b/cmake/define.inc
index 337a143e1f129d433f12d6772e9ed9c43d57c423..7894e6dab5d4ddd44e69f77702004183f431d3a6 100755
--- a/cmake/define.inc
+++ b/cmake/define.inc
@@ -45,6 +45,10 @@ IF (TD_TQ)
ADD_DEFINITIONS(-D_TD_TQ_)
ENDIF ()
+IF (TD_PRO)
+ ADD_DEFINITIONS(-D_TD_PRO_)
+ENDIF ()
+
IF (TD_MEM_CHECK)
ADD_DEFINITIONS(-DTAOS_MEM_CHECK)
ENDIF ()
diff --git a/cmake/input.inc b/cmake/input.inc
index 9d716e1e7345955f7b6b844c85ace7e7bd5c6080..d746cf52f6eb016795d6fa6d01f408925159c710 100755
--- a/cmake/input.inc
+++ b/cmake/input.inc
@@ -49,6 +49,9 @@ IF (${DBNAME} MATCHES "power")
ELSEIF (${DBNAME} MATCHES "tq")
SET(TD_TQ TRUE)
MESSAGE(STATUS "tq is true")
+ELSEIF (${DBNAME} MATCHES "pro")
+ SET(TD_PRO TRUE)
+ MESSAGE(STATUS "pro is true")
ENDIF ()
IF (${DLLTYPE} MATCHES "go")
diff --git a/documentation20/cn/01.evaluation/docs.md b/documentation20/cn/01.evaluation/docs.md
index 050046645c24e7db58ef2f39683433c3a4b53169..9ed9e2e7ebbcfdf63c9f8dddd8b6f716c4bb1a61 100644
--- a/documentation20/cn/01.evaluation/docs.md
+++ b/documentation20/cn/01.evaluation/docs.md
@@ -16,7 +16,7 @@ TDengine 的模块之一是时序数据库。但除此之外,为减少研发
采用 TDengine,可将典型的物联网、车联网、工业互联网大数据平台的总拥有成本大幅降低。但需要指出的是,因充分利用了物联网时序数据的特点,它无法用来处理网络爬虫、微博、微信、电商、ERP、CRM 等通用型数据。
-![TDengine技术生态图](page://images/eco_system.png)
+![TDengine技术生态图](../images/eco_system.png)
图 1. TDengine技术生态图
## TDengine 总体适用场景
diff --git a/documentation20/cn/03.architecture/01.taosd/docs.md b/documentation20/cn/03.architecture/01.taosd/docs.md
index 66d51ed2dc2ea1546ab167cad680c20b3fa9729c..c791d2c20d0daceec21949064f99289cf4994323 100644
--- a/documentation20/cn/03.architecture/01.taosd/docs.md
+++ b/documentation20/cn/03.architecture/01.taosd/docs.md
@@ -6,7 +6,7 @@
taosd包含rpc, dnode, vnode, tsdb, query, cq, sync, wal, mnode, http, monitor等模块,具体如下图:
-![modules.png](page://images/architecture/modules.png)
+![modules.png](../../images/architecture/modules.png)
taosd的启动入口是dnode模块,dnode然后启动其他模块,包括可选配置的http, monitor模块。taosc或dnode之间交互的消息都是通过rpc模块进行,dnode模块根据接收到的消息类型,将消息分发到vnode或mnode的消息队列,或由dnode模块自己消费。dnode的工作线程(worker)消费消息队列里的消息,交给mnode或vnode进行处理。下面对各个模块做简要说明。
@@ -41,13 +41,13 @@ RPC模块还提供数据压缩功能,如果数据包的字节数超过系统
taosd的消息消费由dnode通过读写线程池进行控制,是系统的中枢。该模块内的结构体图如下:
-![dnode.png](page://images/architecture/dnode.png)
+![dnode.png](../../images/architecture/dnode.png)
## VNODE模块
vnode是一独立的数据存储查询逻辑单元,但因为一个vnode只能容许一个DB,因此vnode内部没有account, DB, user等概念。为实现更好的模块化、封装以及未来的扩展,它有很多子模块,包括负责存储的TSDB,负责查询的Query, 负责数据复制的sync,负责数据库日志的的wal, 负责连续查询的cq(continuous query), 负责事件触发的流计算的event等模块,这些子模块只与vnode模块发生关系,与其他模块没有任何调用关系。模块图如下:
-![vnode.png](page://images/architecture/vnode.png)
+![vnode.png](../../images/architecture/vnode.png)
vnode模块向下,与dnodeVRead,dnodeVWrite发生互动,向上,与子模块发生互动。它主要的功能有:
diff --git a/documentation20/cn/03.architecture/02.replica/docs.md b/documentation20/cn/03.architecture/02.replica/docs.md
index 27ac7f123cdd2a56df9e65ae0fa13d1ff8faa23d..e80a03696b5321e327c19ac9445d3bf1dee8f28e 100644
--- a/documentation20/cn/03.architecture/02.replica/docs.md
+++ b/documentation20/cn/03.architecture/02.replica/docs.md
@@ -90,7 +90,7 @@ TDengine采取的是Master-Slave模式进行同步,与流行的RAFT一致性
具体的流程图如下:
-![replica-master.png](page://images/architecture/replica-master.png)
+![replica-master.png](../../images/architecture/replica-master.png)
选择Master的具体规则如下:
@@ -105,7 +105,7 @@ TDengine采取的是Master-Slave模式进行同步,与流行的RAFT一致性
如果vnode A是master, vnode B是slave, vnode A能接受客户端的写请求,而vnode B不能。当vnode A收到写的请求后,遵循下面的流程:
-![replica-forward.png](page://images/architecture/replica-forward.png)
+![replica-forward.png](../../images/architecture/replica-forward.png)
1. 应用对写请求做基本的合法性检查,通过,则给该请求包打上一个版本号(version, 单调递增)
2. 应用将打上版本号的写请求封装一个WAL Head, 写入WAL(Write Ahead Log)
@@ -140,7 +140,7 @@ TDengine采取的是Master-Slave模式进行同步,与流行的RAFT一致性
整个数据恢复流程分为两大步骤,第一步,先恢复archived data(file), 然后恢复wal。具体流程如下:
-![replica-restore.png](page://images/architecture/replica-restore.png)
+![replica-restore.png](../../images/architecture/replica-restore.png)
1. 通过已经建立的TCP连接,发送sync req给master节点
2. master收到sync req后,以client的身份,向vnode B主动建立一新的专用于同步的TCP连接(syncFd)
diff --git a/documentation20/cn/03.architecture/docs.md b/documentation20/cn/03.architecture/docs.md
index 3e9877b4465eac2ca05d99c88a620a0c6bf89689..a92382169c62c9b79de69a92249b681a69c02139 100644
--- a/documentation20/cn/03.architecture/docs.md
+++ b/documentation20/cn/03.architecture/docs.md
@@ -156,7 +156,7 @@ TDengine 的设计是基于单个硬件、软件系统不可靠,基于任何
TDengine 分布式架构的逻辑结构图如下:
-![TDengine架构示意图](page://images/architecture/structure.png)
+![TDengine架构示意图](../images/architecture/structure.png)
图 1 TDengine架构示意图
一个完整的 TDengine 系统是运行在一到多个物理节点上的,逻辑上,它包含数据节点(dnode)、TDengine应用驱动(taosc)以及应用(app)。系统中存在一到多个数据节点,这些数据节点组成一个集群(cluster)。应用通过taosc的API与TDengine集群进行互动。下面对每个逻辑单元进行简要介绍。
@@ -207,7 +207,7 @@ TDengine 分布式架构的逻辑结构图如下:
为解释vnode、mnode、taosc和应用之间的关系以及各自扮演的角色,下面对写入数据这个典型操作的流程进行剖析。
-![TDengine典型的操作流程](page://images/architecture/message.png)
+![TDengine典型的操作流程](../images/architecture/message.png)
图 2 TDengine典型的操作流程
1. 应用通过JDBC、ODBC或其他API接口发起插入数据的请求。
@@ -278,7 +278,7 @@ TDengine除vnode分片之外,还对时序数据按照时间段进行分区。
Master Vnode遵循下面的写入流程:
-![TDengine Master写入流程](page://images/architecture/write_master.png)
+![TDengine Master写入流程](../images/architecture/write_master.png)
图 3 TDengine Master写入流程
1. master vnode收到应用的数据插入请求,验证OK,进入下一步;
@@ -292,7 +292,7 @@ Master Vnode遵循下面的写入流程:
对于slave vnode,写入流程是:
-![TDengine Slave写入流程](page://images/architecture/write_slave.png)
+![TDengine Slave写入流程](../images/architecture/write_slave.png)
图 4 TDengine Slave写入流程
1. slave vnode收到Master vnode转发了的数据插入请求。检查last version是否与master一致,如果一致,进入下一步。如果不一致,需要进入同步状态。
@@ -434,7 +434,7 @@ SELECT COUNT(*) FROM d1001 WHERE ts >= '2017-7-14 00:00:00' AND ts < '2017-7-14
TDengine对每个数据采集点单独建表,但在实际应用中经常需要对不同的采集点数据进行聚合。为高效的进行聚合操作,TDengine引入超级表(STable)的概念。超级表用来代表一特定类型的数据采集点,它是包含多张表的表集合,集合里每张表的模式(schema)完全一致,但每张表都带有自己的静态标签,标签可以有多个,可以随时增加、删除和修改。应用可通过指定标签的过滤条件,对一个STable下的全部或部分表进行聚合或统计操作,这样大大简化应用的开发。其具体流程如下图所示:
-![多表聚合查询原理图](page://images/architecture/multi_tables.png)
+![多表聚合查询原理图](../images/architecture/multi_tables.png)
图 5 多表聚合查询原理图
1. 应用将一个查询条件发往系统;
diff --git a/documentation20/cn/08.connector/01.java/docs.md b/documentation20/cn/08.connector/01.java/docs.md
index b4537adad6f014712911d568a948b81f866b45f4..110b902b2051a88e14eaa73627780e56be158928 100644
--- a/documentation20/cn/08.connector/01.java/docs.md
+++ b/documentation20/cn/08.connector/01.java/docs.md
@@ -4,7 +4,7 @@
`taos-jdbcdriver` 的实现包括 2 种形式: JDBC-JNI 和 JDBC-RESTful(taos-jdbcdriver-2.0.18 开始支持 JDBC-RESTful)。 JDBC-JNI 通过调用客户端 libtaos.so(或 taos.dll )的本地方法实现, JDBC-RESTful 则在内部封装了 RESTful 接口实现。
-![tdengine-connector](page://images/tdengine-jdbc-connector.png)
+![tdengine-connector](../../images/tdengine-jdbc-connector.png)
上图显示了 3 种 Java 应用使用连接器访问 TDengine 的方式:
diff --git a/documentation20/cn/08.connector/docs.md b/documentation20/cn/08.connector/docs.md
index 3167404f8067610f0bf5f74fe41320decdcbcdf0..8cf3a889ceedb6cedcb5b7f1e581297b61986bcd 100644
--- a/documentation20/cn/08.connector/docs.md
+++ b/documentation20/cn/08.connector/docs.md
@@ -2,7 +2,7 @@
TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、Python、Go、Node.js、C# 、RESTful 等,便于用户快速开发应用。
-![image-connecotr](page://images/connector.png)
+![image-connecotr](../images/connector.png)
目前TDengine的连接器可支持的平台广泛,包括:X64/X86/ARM64/ARM32/MIPS/Alpha等硬件平台,以及Linux/Win64/Win32等开发环境。对照矩阵如下:
@@ -64,8 +64,7 @@ TDengine提供了丰富的应用程序开发接口,其中包括C/C++、Java、
编辑taos.cfg文件(默认路径/etc/taos/taos.cfg),将firstEP修改为TDengine服务器的End Point,例如:h1.taos.com:6030
-**提示: **
-
+**提示:**
1. **如本机没有部署TDengine服务,仅安装了应用驱动,则taos.cfg中仅需配置firstEP,无需配置FQDN。**
2. **为防止与服务器端连接时出现“unable to resolve FQDN”错误,建议确认客户端的hosts文件已经配置正确的FQDN值。**
diff --git a/documentation20/cn/09.connections/docs.md b/documentation20/cn/09.connections/docs.md
index d5a2f2763550e54a0c1829ff87c60b7bbca3defe..799cfc14a300d3f4c9fcbf8537f04984ae8e1df4 100644
--- a/documentation20/cn/09.connections/docs.md
+++ b/documentation20/cn/09.connections/docs.md
@@ -32,15 +32,15 @@ allow_loading_unsigned_plugins = taosdata-tdengine-datasource
用户可以直接通过 localhost:3000 的网址,登录 Grafana 服务器(用户名/密码:admin/admin),通过左侧 `Configuration -> Data Sources` 可以添加数据源,如下图所示:
-![img](page://images/connections/add_datasource1.jpg)
+![img](../images/connections/add_datasource1.jpg)
点击 `Add data source` 可进入新增数据源页面,在查询框中输入 TDengine 可选择添加,如下图所示:
-![img](page://images/connections/add_datasource2.jpg)
+![img](../images/connections/add_datasource2.jpg)
进入数据源配置页面,按照默认提示修改相应配置即可:
-![img](page://images/connections/add_datasource3.jpg)
+![img](../images/connections/add_datasource3.jpg)
* Host: TDengine 集群的中任意一台服务器的 IP 地址与 TDengine RESTful 接口的端口号(6041),默认 http://localhost:6041 。
* User:TDengine 用户名。
@@ -48,13 +48,13 @@ allow_loading_unsigned_plugins = taosdata-tdengine-datasource
点击 `Save & Test` 进行测试,成功会有如下提示:
-![img](page://images/connections/add_datasource4.jpg)
+![img](../images/connections/add_datasource4.jpg)
#### 创建 Dashboard
回到主界面创建 Dashboard,点击 Add Query 进入面板查询页面:
-![img](page://images/connections/create_dashboard1.jpg)
+![img](../images/connections/create_dashboard1.jpg)
如上图所示,在 Query 中选中 `TDengine` 数据源,在下方查询框可输入相应 sql 进行查询,具体说明如下:
@@ -65,7 +65,7 @@ allow_loading_unsigned_plugins = taosdata-tdengine-datasource
按照默认提示查询当前 TDengine 部署所在服务器指定间隔系统内存平均使用量如下:
-![img](page://images/connections/create_dashboard2.jpg)
+![img](../images/connections/create_dashboard2.jpg)
> 关于如何使用Grafana创建相应的监测界面以及更多有关使用Grafana的信息,请参考Grafana官方的[文档](https://grafana.com/docs/)。
@@ -75,11 +75,11 @@ allow_loading_unsigned_plugins = taosdata-tdengine-datasource
点击左侧 `Import` 按钮,并上传 `tdengine-grafana.json` 文件:
-![img](page://images/connections/import_dashboard1.jpg)
+![img](../images/connections/import_dashboard1.jpg)
导入完成之后可看到如下效果:
-![img](page://images/connections/import_dashboard2.jpg)
+![img](../images/connections/import_dashboard2.jpg)
## MATLAB
diff --git a/documentation20/en/01.evaluation/docs.md b/documentation20/en/01.evaluation/docs.md
index 5b2d0dd974203db1dafe8758e673a2f0970c3f17..b296ae999fbf63f65422993dde4586b6bec08497 100644
--- a/documentation20/en/01.evaluation/docs.md
+++ b/documentation20/en/01.evaluation/docs.md
@@ -15,7 +15,8 @@ One of the modules of TDengine is the time-series database. However, in addition
With TDengine, the total cost of ownership of typical IoT, Internet of Vehicles, and Industrial Internet Big Data platforms can be greatly reduced. However, since it makes full use of the characteristics of IoT time-series data, TDengine cannot be used to process general data from web crawlers, microblogs, WeChat, e-commerce, ERP, CRM, and other sources.
-![TDengine Technology Ecosystem](page://images/eco_system.png)
+![TDengine Technology Ecosystem](../images/eco_system.png)
+
Figure 1. TDengine Technology Ecosystem
## Overall Scenarios of TDengine
diff --git a/documentation20/en/03.architecture/docs.md b/documentation20/en/03.architecture/docs.md
index b9e21b1d4c775876c77b2c9ec999639f30bd0c00..ea73cd9f87f8cfacaced66ab79f00f2f3ab727cc 100644
--- a/documentation20/en/03.architecture/docs.md
+++ b/documentation20/en/03.architecture/docs.md
@@ -154,7 +154,7 @@ The design of TDengine is based on the assumption that one single node or softwa
Logical structure diagram of TDengine distributed architecture as following:
-![TDengine architecture diagram](page://images/architecture/structure.png)
+![TDengine architecture diagram](../images/architecture/structure.png)
Figure 1: TDengine architecture diagram
A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDEngine application driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
@@ -197,7 +197,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc
To explain the relationship between vnode, mnode, TAOSC and application and their respective roles, the following is an analysis of a typical data writing process.
-![typical process of TDengine](page://images/architecture/message.png)
+![typical process of TDengine](../images/architecture/message.png)
Figure 2: Typical process of TDengine
1. Application initiates a request to insert data through JDBC, ODBC, or other APIs.
@@ -266,7 +266,7 @@ If a database has N replicas, thus a virtual node group has N virtual nodes, but
Master Vnode uses a writing process as follows:
-![TDengine Master Writing Process](page://images/architecture/write_master.png)
+![TDengine Master Writing Process](../images/architecture/write_master.png)
Figure 3: TDengine Master writing process
1. Master vnode receives the application data insertion request, verifies, and moves to next step;
@@ -280,7 +280,7 @@ Master Vnode uses a writing process as follows:
For a slave vnode, the write process as follows:
-![TDengine Slave Writing Process](page://images/architecture/write_slave.png)
+![TDengine Slave Writing Process](../images/architecture/write_slave.png)
Figure 4: TDengine Slave Writing Process
1. Slave vnode receives a data insertion request forwarded by Master vnode;
@@ -412,7 +412,7 @@ For the data collected by device D1001, the number of records per hour is counte
TDengine creates a separate table for each data collection point, but in practical applications, it is often necessary to aggregate data from different data collection points. In order to perform aggregation operations efficiently, TDengine introduces the concept of STable. STable is used to represent a specific type of data collection point. It is a table set containing multiple tables. The schema of each table in the set is the same, but each table has its own static tag. The tags can be multiple and be added, deleted and modified at any time. Applications can aggregate or statistically operate all or a subset of tables under a STABLE by specifying tag filters, thus greatly simplifying the development of applications. The process is shown in the following figure:
-![Diagram of multi-table aggregation query](page://images/architecture/multi_tables.png)
+![Diagram of multi-table aggregation query](../images/architecture/multi_tables.png)
Figure 5: Diagram of multi-table aggregation query
1. Application sends a query condition to system;
diff --git a/documentation20/en/08.connector/01.java/docs.md b/documentation20/en/08.connector/01.java/docs.md
index 16adf906bea85d538ac408e1c40b18160aceed78..75cc380c141383cce0bc3c9790c91fa97563e3ca 100644
--- a/documentation20/en/08.connector/01.java/docs.md
+++ b/documentation20/en/08.connector/01.java/docs.md
@@ -4,7 +4,7 @@
The taos-jdbcdriver is implemented in two forms: JDBC-JNI and JDBC-RESTful (supported from taos-jdbcdriver-2.0.18). JDBC-JNI is implemented by calling the local methods of libtaos.so (or taos.dll) on the client, while JDBC-RESTful encapsulates the RESTful interface implementation internally.
-![tdengine-connector](page://images/tdengine-jdbc-connector.png)
+![tdengine-connector](../../images/tdengine-jdbc-connector.png)
The figure above shows the three ways Java applications can access the TDengine:
diff --git a/documentation20/en/08.connector/docs.md b/documentation20/en/08.connector/docs.md
index fd9d129e50fa4450aed2fbebe80eddb978ef1263..b4d2ee3a05850aa9b1a3c886ec26e4661b7f997b 100644
--- a/documentation20/en/08.connector/docs.md
+++ b/documentation20/en/08.connector/docs.md
@@ -2,7 +2,7 @@
TDengine provides many connectors for development, including C/C++, JAVA, Python, RESTful, Go, Node.JS, etc.
-![image-connector](page://images/connector.png)
+![image-connector](../images/connector.png)
At present, TDengine connectors support a wide range of platforms, including hardware platforms such as X64/X86/ARM64/ARM32/MIPS/Alpha, and development environments such as Linux/Win64/Win32. The comparison matrix is as follows:
diff --git a/documentation20/en/09.connections/docs.md b/documentation20/en/09.connections/docs.md
index 19544af0fa50af258f975532ad8399fcb8588b42..f1bbf0ff639719c7609f4a04685adf9c16a4e623 100644
--- a/documentation20/en/09.connections/docs.md
+++ b/documentation20/en/09.connections/docs.md
@@ -26,15 +26,15 @@ sudo cp -rf /usr/local/taos/connector/grafanaplugin /var/lib/grafana/plugins/tde
You can log in the Grafana server (username/password:admin/admin) through localhost:3000, and add data sources through `Configuration -> Data Sources` on the left panel, as shown in the following figure:
-![img](page://images/connections/add_datasource1.jpg)
+![img](../images/connections/add_datasource1.jpg)
Click `Add data source` to enter the Add Data Source page, and enter TDengine in the query box to select Add, as shown in the following figure:
-![img](page://images/connections/add_datasource2.jpg)
+![img](../images/connections/add_datasource2.jpg)
Enter the data source configuration page and modify the corresponding configuration according to the default prompt:
-![img](page://images/connections/add_datasource3.jpg)
+![img](../images/connections/add_datasource3.jpg)
- Host: IP address of any server in TDengine cluster and port number of TDengine RESTful interface (6041), default [http://localhost:6041](http://localhost:6041/)
- User: TDengine username.
@@ -42,13 +42,13 @@ Enter the data source configuration page and modify the corresponding configurat
Click `Save & Test` to test. Success will be prompted as follows:
-![img](page://images/connections/add_datasource4.jpg)
+![img](../images/connections/add_datasource4.jpg)
#### Create Dashboard
Go back to the home to create Dashboard, and click `Add Query` to enter the panel query page:
-![img](page://images/connections/create_dashboard1.jpg)
+![img](../images/connections/create_dashboard1.jpg)
As shown in the figure above, select the TDengine data source in Query, and enter the corresponding sql in the query box below to query. Details are as follows:
@@ -58,7 +58,7 @@ As shown in the figure above, select the TDengine data source in Query, and ente
According to the default prompt, query the average system memory usage at the specified interval of the server where the current TDengine deployed in as follows:
-![img](page://images/connections/create_dashboard2.jpg)
+![img](../images/connections/create_dashboard2.jpg)
> Please refer to Grafana [documents](https://grafana.com/docs/) for how to use Grafana to create the corresponding monitoring interface and for more about Grafana usage.
@@ -68,11 +68,11 @@ A `tdengine-grafana.json` importable dashboard is provided under the Grafana plu
Click the `Import` button on the left panel and upload the `tdengine-grafana.json` file:
-![img](page://images/connections/import_dashboard1.jpg)
+![img](../images/connections/import_dashboard1.jpg)
You can see as follows after Dashboard imported.
-![img](page://images/connections/import_dashboard2.jpg)
+![img](../images/connections/import_dashboard2.jpg)
## MATLAB
diff --git a/packaging/docker/dockerManifest.sh b/packaging/docker/dockerManifest.sh
index 98abe4e099d9bfe5b06d0a61d667391a9f667eb7..e4d3cda7f29fea96cabfe48f5b10ab668a085ea8 100755
--- a/packaging/docker/dockerManifest.sh
+++ b/packaging/docker/dockerManifest.sh
@@ -45,6 +45,7 @@ echo "version=${version}"
#docker manifest rm tdengine/tdengine:${version}
if [ "$verType" == "beta" ]; then
docker manifest inspect tdengine/tdengine-beta:latest
+ docker manifest create -a tdengine/tdengine-beta:latest tdengine/tdengine-amd64-beta:latest tdengine/tdengine-aarch64-beta:latest tdengine/tdengine-aarch32-beta:latest
docker manifest rm tdengine/tdengine-beta:latest
docker manifest create -a tdengine/tdengine-beta:${version} tdengine/tdengine-amd64-beta:${version} tdengine/tdengine-aarch64-beta:${version} tdengine/tdengine-aarch32-beta:${version}
docker manifest create -a tdengine/tdengine-beta:latest tdengine/tdengine-amd64-beta:latest tdengine/tdengine-aarch64-beta:latest tdengine/tdengine-aarch32-beta:latest
@@ -54,6 +55,7 @@ if [ "$verType" == "beta" ]; then
elif [ "$verType" == "stable" ]; then
docker manifest inspect tdengine/tdengine:latest
+ docker manifest create -a tdengine/tdengine:latest tdengine/tdengine-amd64:latest tdengine/tdengine-aarch64:latest tdengine/tdengine-aarch32:latest
docker manifest rm tdengine/tdengine:latest
docker manifest create -a tdengine/tdengine:${version} tdengine/tdengine-amd64:${version} tdengine/tdengine-aarch64:${version} tdengine/tdengine-aarch32:${version}
docker manifest create -a tdengine/tdengine:latest tdengine/tdengine-amd64:latest tdengine/tdengine-aarch64:latest tdengine/tdengine-aarch32:latest
diff --git a/packaging/release.sh b/packaging/release.sh
index 5ba6c01a0bd5689278bdb5c86b538b3c447f086a..44887c6cf749ecfecdef46799311de38dbbbed23 100755
--- a/packaging/release.sh
+++ b/packaging/release.sh
@@ -22,7 +22,7 @@ cpuType=x64 # [aarch32 | aarch64 | x64 | x86 | mips64 ...]
osType=Linux # [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | Ningsi60 | Ningsi80 |...]
pagMode=full # [full | lite]
soMode=dynamic # [static | dynamic]
-dbName=taos # [taos | power | tq]
+dbName=taos # [taos | power | tq | pro]
allocator=glibc # [glibc | jemalloc]
verNumber=""
verNumberComp="1.0.0.0"
@@ -78,7 +78,7 @@ do
echo " -l [full | lite] "
echo " -a [glibc | jemalloc] "
echo " -s [static | dynamic] "
- echo " -d [taos | power | tq ] "
+ echo " -d [taos | power | tq | pro] "
echo " -n [version number] "
echo " -m [compatible version number] "
exit 0
@@ -253,6 +253,10 @@ if [ "$osType" != "Darwin" ]; then
${csudo} ./makepkg_tq.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName} ${verNumberComp}
${csudo} ./makeclient_tq.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName}
${csudo} ./makearbi_tq.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode}
+ elif [[ "$dbName" == "pro" ]]; then
+ ${csudo} ./makepkg_pro.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName} ${verNumberComp}
+ ${csudo} ./makeclient_pro.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName}
+ ${csudo} ./makearbi_pro.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode}
else
${csudo} ./makepkg_power.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName} ${verNumberComp}
${csudo} ./makeclient_power.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${dbName}
@@ -262,4 +266,3 @@ else
cd ${script_dir}/tools
./makeclient.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${dbName}
fi
-
diff --git a/packaging/tools/install_arbi_pro.sh b/packaging/tools/install_arbi_pro.sh
new file mode 100755
index 0000000000000000000000000000000000000000..11165dbdd8bdf6afb4659250499cf1d9184c2395
--- /dev/null
+++ b/packaging/tools/install_arbi_pro.sh
@@ -0,0 +1,293 @@
+#!/bin/bash
+#
+# This file is used to install the ProDB arbitrator on linux systems. The operating system
+# is required to use systemd to manage services at boot
+
+set -e
+#set -x
+
+# -----------------------Variables definition---------------------
+script_dir=$(dirname $(readlink -f "$0"))
+
+bin_link_dir="/usr/bin"
+#inc_link_dir="/usr/include"
+
+#install main path
+install_main_dir="/usr/local/tarbitrator"
+
+# old bin dir
+bin_dir="/usr/local/tarbitrator/bin"
+
+service_config_dir="/etc/systemd/system"
+
+# Color setting
+RED='\033[0;31m'
+GREEN='\033[1;32m'
+GREEN_DARK='\033[0;32m'
+GREEN_UNDERLINE='\033[4;32m'
+NC='\033[0m'
+
+csudo=""
+if command -v sudo > /dev/null; then
+ csudo="sudo"
+fi
+
+update_flag=0
+
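+# Detect how services are managed: service_mod 0 = systemd, 1 = SysV init scripts, 2 = none (start/stop manually); initd_mod selects chkconfig, insserv or update-rc.d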
+initd_mod=0
+service_mod=2
+if pidof systemd &> /dev/null; then
+ service_mod=0
+elif $(which service &> /dev/null); then
+ service_mod=1
+ service_config_dir="/etc/init.d"
+ if $(which chkconfig &> /dev/null); then
+ initd_mod=1
+ elif $(which insserv &> /dev/null); then
+ initd_mod=2
+ elif $(which update-rc.d &> /dev/null); then
+ initd_mod=3
+ else
+ service_mod=2
+ fi
+else
+ service_mod=2
+fi
+
+
+# get the operating system type for using the corresponding init file
+# ubuntu/debian(deb), centos/fedora(rpm), others: opensuse, redhat, ..., no verification
+#osinfo=$(awk -F= '/^NAME/{print $2}' /etc/os-release)
+if [[ -e /etc/os-release ]]; then
+ osinfo=$(cat /etc/os-release | grep "NAME" | cut -d '"' -f2) ||:
+else
+ osinfo=""
+fi
+#echo "osinfo: ${osinfo}"
+os_type=0
+if echo $osinfo | grep -qwi "ubuntu" ; then
+# echo "This is ubuntu system"
+ os_type=1
+elif echo $osinfo | grep -qwi "debian" ; then
+# echo "This is debian system"
+ os_type=1
+elif echo $osinfo | grep -qwi "Kylin" ; then
+# echo "This is Kylin system"
+ os_type=1
+elif echo $osinfo | grep -qwi "centos" ; then
+# echo "This is centos system"
+ os_type=2
+elif echo $osinfo | grep -qwi "fedora" ; then
+# echo "This is fedora system"
+ os_type=2
+else
+ echo " osinfo: ${osinfo}"
+ echo " This is an officially unverified linux system,"
+ echo " if there are any problems with the installation and operation, "
+ echo " please feel free to contact hanatech.com.cn for support."
+ os_type=1
+fi
+
+function kill_tarbitrator() {
+ pid=$(ps -ef | grep "tarbitrator" | grep -v "grep" | awk '{print $2}')
+ if [ -n "$pid" ]; then
+ ${csudo} kill -9 $pid || :
+ fi
+}
+
+function install_main_path() {
+ #create install main dir and all sub dir
+ ${csudo} rm -rf ${install_main_dir} || :
+ ${csudo} mkdir -p ${install_main_dir}
+ ${csudo} mkdir -p ${install_main_dir}/bin
+ #${csudo} mkdir -p ${install_main_dir}/include
+ ${csudo} mkdir -p ${install_main_dir}/init.d
+}
+
+function install_bin() {
+ # Remove links
+ ${csudo} rm -f ${bin_link_dir}/rmtarbitrator || :
+ ${csudo} rm -f ${bin_link_dir}/tarbitrator || :
+ ${csudo} cp -r ${script_dir}/bin/* ${install_main_dir}/bin && ${csudo} chmod 0555 ${install_main_dir}/bin/*
+
+ #Make link
+ [ -x ${install_main_dir}/bin/remove_arbi_prodb.sh ] && ${csudo} ln -s ${install_main_dir}/bin/remove_arbi_prodb.sh ${bin_link_dir}/rmtarbitrator || :
+ [ -x ${install_main_dir}/bin/tarbitrator ] && ${csudo} ln -s ${install_main_dir}/bin/tarbitrator ${bin_link_dir}/tarbitrator || :
+}
+
+function install_header() {
+ ${csudo} rm -f ${inc_link_dir}/taos.h ${inc_link_dir}/taoserror.h || :
+ ${csudo} cp -f ${script_dir}/inc/* ${install_main_dir}/include && ${csudo} chmod 644 ${install_main_dir}/include/*
+ ${csudo} ln -s ${install_main_dir}/include/taos.h ${inc_link_dir}/taos.h
+ ${csudo} ln -s ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h
+}
+
+function clean_service_on_sysvinit() {
+ #restart_config_str="taos:2345:respawn:${service_config_dir}/taosd start"
+ #${csudo} sed -i "\|${restart_config_str}|d" /etc/inittab || :
+
+ if pidof tarbitrator &> /dev/null; then
+ ${csudo} service tarbitratord stop || :
+ fi
+
+ if ((${initd_mod}==1)); then
+ if [ -e ${service_config_dir}/tarbitratord ]; then
+ ${csudo} chkconfig --del tarbitratord || :
+ fi
+ elif ((${initd_mod}==2)); then
+ if [ -e ${service_config_dir}/tarbitratord ]; then
+ ${csudo} insserv -r tarbitratord || :
+ fi
+ elif ((${initd_mod}==3)); then
+ if [ -e ${service_config_dir}/tarbitratord ]; then
+ ${csudo} update-rc.d -f tarbitratord remove || :
+ fi
+ fi
+
+ ${csudo} rm -f ${service_config_dir}/tarbitratord || :
+
+ if $(which init &> /dev/null); then
+ ${csudo} init q || :
+ fi
+}
+
+function install_service_on_sysvinit() {
+ clean_service_on_sysvinit
+ sleep 1
+
+ # Install prodbs service
+
+ if ((${os_type}==1)); then
+ ${csudo} cp -f ${script_dir}/init.d/tarbitratord.deb ${install_main_dir}/init.d/tarbitratord
+ ${csudo} cp ${script_dir}/init.d/tarbitratord.deb ${service_config_dir}/tarbitratord && ${csudo} chmod a+x ${service_config_dir}/tarbitratord
+ elif ((${os_type}==2)); then
+ ${csudo} cp -f ${script_dir}/init.d/tarbitratord.rpm ${install_main_dir}/init.d/tarbitratord
+ ${csudo} cp ${script_dir}/init.d/tarbitratord.rpm ${service_config_dir}/tarbitratord && ${csudo} chmod a+x ${service_config_dir}/tarbitratord
+ fi
+
+ if ((${initd_mod}==1)); then
+ ${csudo} chkconfig --add tarbitratord || :
+ ${csudo} chkconfig --level 2345 tarbitratord on || :
+ elif ((${initd_mod}==2)); then
+ ${csudo} insserv tarbitratord || :
+ ${csudo} insserv -d tarbitratord || :
+ elif ((${initd_mod}==3)); then
+ ${csudo} update-rc.d tarbitratord defaults || :
+ fi
+}
+
+function clean_service_on_systemd() {
+ tarbitratord_service_config="${service_config_dir}/tarbitratord.service"
+ if systemctl is-active --quiet tarbitratord; then
+ echo "tarbitrator is running, stopping it..."
+ ${csudo} systemctl stop tarbitratord &> /dev/null || echo &> /dev/null
+ fi
+ ${csudo} systemctl disable tarbitratord &> /dev/null || echo &> /dev/null
+
+ ${csudo} rm -f ${tarbitratord_service_config}
+}
+
+function install_service_on_systemd() {
+ clean_service_on_systemd
+
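+ # Write the tarbitratord systemd unit file line by line, then enable the service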
+ tarbitratord_service_config="${service_config_dir}/tarbitratord.service"
+
+ ${csudo} bash -c "echo '[Unit]' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'Description=ProDB arbitrator service' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'After=network-online.target' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'Wants=network-online.target' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo '[Service]' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'Type=simple' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'ExecStart=/usr/bin/tarbitrator' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'TimeoutStopSec=1000000s' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'LimitNOFILE=infinity' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'LimitNPROC=infinity' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'LimitCORE=infinity' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'TimeoutStartSec=0' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'StandardOutput=null' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'Restart=always' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'StartLimitBurst=3' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'StartLimitInterval=60s' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo '[Install]' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'WantedBy=multi-user.target' >> ${tarbitratord_service_config}"
+ ${csudo} systemctl enable tarbitratord
+}
+
+function install_service() {
+ if ((${service_mod}==0)); then
+ install_service_on_systemd
+ elif ((${service_mod}==1)); then
+ install_service_on_sysvinit
+ else
+ # must manual stop taosd
+ kill_tarbitrator
+ fi
+}
+
+function update_prodb() {
+ # Start to update
+ echo -e "${GREEN}Start to update ProDB's arbitrator ...${NC}"
+ # Stop the service if running
+ if pidof tarbitrator &> /dev/null; then
+ if ((${service_mod}==0)); then
+ ${csudo} systemctl stop tarbitratord || :
+ elif ((${service_mod}==1)); then
+ ${csudo} service tarbitratord stop || :
+ else
+ kill_tarbitrator
+ fi
+ sleep 1
+ fi
+
+ install_main_path
+ #install_header
+ install_bin
+ install_service
+
+ echo
+ #echo -e "${GREEN_DARK}To configure ProDB ${NC}: edit /etc/taos/taos.cfg"
+ if ((${service_mod}==0)); then
+ echo -e "${GREEN_DARK}To start arbitrator ${NC}: ${csudo} systemctl start tarbitratord${NC}"
+ elif ((${service_mod}==1)); then
+ echo -e "${GREEN_DARK}To start arbitrator ${NC}: ${csudo} service tarbitratord start${NC}"
+ else
+ echo -e "${GREEN_DARK}To start arbitrator ${NC}: ./tarbitrator${NC}"
+ fi
+ echo
+ echo -e "\033[44;32;1mProDB's arbitrator is updated successfully!${NC}"
+}
+
+function install_prodb() {
+ # Start to install
+ echo -e "${GREEN}Start to install ProDB's arbitrator ...${NC}"
+
+ install_main_path
+ #install_header
+ install_bin
+ install_service
+ echo
+ #echo -e "${GREEN_DARK}To configure ProDB ${NC}: edit /etc/taos/taos.cfg"
+ if ((${service_mod}==0)); then
+ echo -e "${GREEN_DARK}To start arbitrator ${NC}: ${csudo} systemctl start tarbitratord${NC}"
+ elif ((${service_mod}==1)); then
+ echo -e "${GREEN_DARK}To start arbitrator ${NC}: ${csudo} service tarbitratord start${NC}"
+ else
+ echo -e "${GREEN_DARK}To start arbitrator ${NC}: tarbitrator${NC}"
+ fi
+
+ echo -e "\033[44;32;1mProDB's arbitrator is installed successfully!${NC}"
+ echo
+}
+
+
+## ==============================Main program starts from here============================
+# Install server and client
+if [ -x ${bin_dir}/tarbitrator ]; then
+ update_flag=1
+ update_prodb
+else
+ install_prodb
+fi
+
diff --git a/packaging/tools/install_client_pro.sh b/packaging/tools/install_client_pro.sh
new file mode 100755
index 0000000000000000000000000000000000000000..fff8ae31200669ee3ab918a873e33fc32ece37c8
--- /dev/null
+++ b/packaging/tools/install_client_pro.sh
@@ -0,0 +1,248 @@
+#!/bin/bash
+#
+# This file is used to install ProDB client on linux systems. The operating system
+# is required to use systemd to manage services at boot
+
+set -e
+#set -x
+
+# -----------------------Variables definition---------------------
+
+osType=Linux
+pagMode=full
+
+if [ "$osType" != "Darwin" ]; then
+ script_dir=$(dirname $(readlink -f "$0"))
+ # Dynamic directory
+ data_dir="/var/lib/ProDB"
+ log_dir="/var/log/ProDB"
+else
+ script_dir=`dirname $0`
+ cd ${script_dir}
+ script_dir="$(pwd)"
+ data_dir="/var/lib/ProDB"
+ log_dir="~/ProDB/log"
+fi
+
+log_link_dir="/usr/local/ProDB/log"
+
+cfg_install_dir="/etc/ProDB"
+
+if [ "$osType" != "Darwin" ]; then
+ bin_link_dir="/usr/bin"
+ lib_link_dir="/usr/lib"
+ lib64_link_dir="/usr/lib64"
+ inc_link_dir="/usr/include"
+else
+ bin_link_dir="/usr/local/bin"
+ lib_link_dir="/usr/local/lib"
+ inc_link_dir="/usr/local/include"
+fi
+
+#install main path
+install_main_dir="/usr/local/ProDB"
+
+# old bin dir
+bin_dir="/usr/local/ProDB/bin"
+
+# Color setting
+RED='\033[0;31m'
+GREEN='\033[1;32m'
+GREEN_DARK='\033[0;32m'
+GREEN_UNDERLINE='\033[4;32m'
+NC='\033[0m'
+
+csudo=""
+if command -v sudo > /dev/null; then
+ csudo="sudo"
+fi
+
+update_flag=0
+
+function kill_client() {
+ pid=$(ps -ef | grep "prodbc" | grep -v "grep" | awk '{print $2}')
+ if [ -n "$pid" ]; then
+ ${csudo} kill -9 $pid || :
+ fi
+}
+
+function install_main_path() {
+ #create install main dir and all sub dir
+ ${csudo} rm -rf ${install_main_dir} || :
+ ${csudo} mkdir -p ${install_main_dir}
+ ${csudo} mkdir -p ${install_main_dir}/cfg
+ ${csudo} mkdir -p ${install_main_dir}/bin
+ ${csudo} mkdir -p ${install_main_dir}/connector
+ ${csudo} mkdir -p ${install_main_dir}/driver
+ ${csudo} mkdir -p ${install_main_dir}/examples
+ ${csudo} mkdir -p ${install_main_dir}/include
+}
+
+function install_bin() {
+ # Remove links
+ ${csudo} rm -f ${bin_link_dir}/prodbc || :
+ if [ "$osType" != "Darwin" ]; then
+ ${csudo} rm -f ${bin_link_dir}/prodemo || :
+ ${csudo} rm -f ${bin_link_dir}/prodump || :
+ fi
+ ${csudo} rm -f ${bin_link_dir}/rmprodb || :
+ ${csudo} rm -f ${bin_link_dir}/set_core || :
+
+ ${csudo} cp -r ${script_dir}/bin/* ${install_main_dir}/bin && ${csudo} chmod 0555 ${install_main_dir}/bin/*
+
+ #Make link
+ [ -x ${install_main_dir}/bin/prodbc ] && ${csudo} ln -s ${install_main_dir}/bin/prodbc ${bin_link_dir}/prodbc || :
+ if [ "$osType" != "Darwin" ]; then
+ [ -x ${install_main_dir}/bin/prodemo ] && ${csudo} ln -s ${install_main_dir}/bin/prodemo ${bin_link_dir}/prodemo || :
+ [ -x ${install_main_dir}/bin/prodump ] && ${csudo} ln -s ${install_main_dir}/bin/prodump ${bin_link_dir}/prodump || :
+ fi
+ [ -x ${install_main_dir}/bin/remove_client_prodb.sh ] && ${csudo} ln -s ${install_main_dir}/bin/remove_client_prodb.sh ${bin_link_dir}/rmprodb || :
+ [ -x ${install_main_dir}/bin/set_core.sh ] && ${csudo} ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
+}
+
+function clean_lib() {
+ sudo rm -f /usr/lib/libtaos.* || :
+ sudo rm -rf ${lib_dir} || :
+}
+
+function install_lib() {
+ # Remove links
+ ${csudo} rm -f ${lib_link_dir}/libtaos.* || :
+ ${csudo} rm -f ${lib64_link_dir}/libtaos.* || :
+ #${csudo} rm -rf ${v15_java_app_dir} || :
+
+ ${csudo} cp -rf ${script_dir}/driver/* ${install_main_dir}/driver && ${csudo} chmod 777 ${install_main_dir}/driver/*
+
+ if [ "$osType" != "Darwin" ]; then
+ ${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.so.1
+ ${csudo} ln -s ${lib_link_dir}/libtaos.so.1 ${lib_link_dir}/libtaos.so
+
+ if [ -d "${lib64_link_dir}" ]; then
+ ${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib64_link_dir}/libtaos.so.1 || :
+ ${csudo} ln -s ${lib64_link_dir}/libtaos.so.1 ${lib64_link_dir}/libtaos.so || :
+ fi
+ else
+ ${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.1.dylib
+ ${csudo} ln -s ${lib_link_dir}/libtaos.1.dylib ${lib_link_dir}/libtaos.dylib
+ fi
+
+ ${csudo} ldconfig
+}
+
+function install_header() {
+ ${csudo} rm -f ${inc_link_dir}/taos.h ${inc_link_dir}/taoserror.h || :
+ ${csudo} cp -f ${script_dir}/inc/* ${install_main_dir}/include && ${csudo} chmod 644 ${install_main_dir}/include/*
+ ${csudo} ln -s ${install_main_dir}/include/taos.h ${inc_link_dir}/taos.h
+ ${csudo} ln -s ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h
+}
+
+function install_config() {
+ #${csudo} rm -f ${install_main_dir}/cfg/taos.cfg || :
+
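+ # Install the default taos.cfg only if none exists yet; an existing /etc/ProDB/taos.cfg is preserved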
+ if [ ! -f ${cfg_install_dir}/taos.cfg ]; then
+ ${csudo} mkdir -p ${cfg_install_dir}
+ [ -f ${script_dir}/cfg/taos.cfg ] && ${csudo} cp ${script_dir}/cfg/taos.cfg ${cfg_install_dir}
+ ${csudo} chmod 644 ${cfg_install_dir}/*
+ fi
+
+ ${csudo} cp -f ${script_dir}/cfg/taos.cfg ${install_main_dir}/cfg/taos.cfg.org
+ ${csudo} ln -s ${cfg_install_dir}/taos.cfg ${install_main_dir}/cfg
+}
+
+
+function install_log() {
+ ${csudo} rm -rf ${log_dir} || :
+
+ if [ "$osType" != "Darwin" ]; then
+ ${csudo} mkdir -p ${log_dir} && ${csudo} chmod 777 ${log_dir}
+ else
+ mkdir -p ${log_dir} && ${csudo} chmod 777 ${log_dir}
+ fi
+ ${csudo} ln -s ${log_dir} ${install_main_dir}/log
+}
+
+function install_connector() {
+ ${csudo} cp -rf ${script_dir}/connector/* ${install_main_dir}/connector
+}
+
+function install_examples() {
+ if [ -d ${script_dir}/examples ]; then
+ ${csudo} cp -rf ${script_dir}/examples/* ${install_main_dir}/examples
+ fi
+}
+
+function update_prodb() {
+ # Start to update
+ if [ ! -e prodb.tar.gz ]; then
+ echo "File prodb.tar.gz does not exist"
+ exit 1
+ fi
+ tar -zxf prodb.tar.gz
+
+ echo -e "${GREEN}Start to update ProDB client...${NC}"
+ # Stop the client shell if running
+ if pidof prodbc &> /dev/null; then
+ kill_client
+ sleep 1
+ fi
+
+ install_main_path
+
+ install_log
+ install_header
+ install_lib
+ if [ "$pagMode" != "lite" ]; then
+ install_connector
+ fi
+ install_examples
+ install_bin
+ install_config
+
+ echo
+ echo -e "\033[44;32;1mProDB client is updated successfully!${NC}"
+
+ rm -rf $(tar -tf prodb.tar.gz)
+}
+
+function install_prodb() {
+ # Start to install
+ if [ ! -e prodb.tar.gz ]; then
+ echo "File prodb.tar.gz does not exist"
+ exit 1
+ fi
+ tar -zxf prodb.tar.gz
+
+ echo -e "${GREEN}Start to install ProDB client...${NC}"
+
+ install_main_path
+ install_log
+ install_header
+ install_lib
+ if [ "$pagMode" != "lite" ]; then
+ install_connector
+ fi
+ install_examples
+ install_bin
+ install_config
+
+ echo
+ echo -e "\033[44;32;1mProDB client is installed successfully!${NC}"
+
+ rm -rf $(tar -tf prodb.tar.gz)
+}
+
+
+## ==============================Main program starts from here============================
+# Install or update the client
+# If the server is already installed, don't install the client
+ if [ -e ${bin_dir}/prodbs ]; then
+ echo -e "\033[44;32;1mThere are already installed ProDB server, so don't need install client!${NC}"
+ exit 0
+ fi
+
+ if [ -x ${bin_dir}/prodbc ]; then
+ update_flag=1
+ update_prodb
+ else
+ install_prodb
+ fi
diff --git a/packaging/tools/install_pro.sh b/packaging/tools/install_pro.sh
new file mode 100755
index 0000000000000000000000000000000000000000..564561441646d4bd27f22c5abd9250a9c3377002
--- /dev/null
+++ b/packaging/tools/install_pro.sh
@@ -0,0 +1,948 @@
+#!/bin/bash
+#
+# This file is used to install database on linux systems. The operating system
+# is required to use systemd to manage services at boot
+
+set -e
+#set -x
+
+verMode=edge
+pagMode=full
+
+iplist=""
+serverFqdn=""
+# -----------------------Variables definition---------------------
+script_dir=$(dirname $(readlink -f "$0"))
+# Dynamic directory
+data_dir="/var/lib/ProDB"
+log_dir="/var/log/ProDB"
+
+data_link_dir="/usr/local/ProDB/data"
+log_link_dir="/usr/local/ProDB/log"
+
+cfg_install_dir="/etc/ProDB"
+
+bin_link_dir="/usr/bin"
+lib_link_dir="/usr/lib"
+lib64_link_dir="/usr/lib64"
+inc_link_dir="/usr/include"
+
+#install main path
+install_main_dir="/usr/local/ProDB"
+
+# old bin dir
+bin_dir="/usr/local/ProDB/bin"
+
+service_config_dir="/etc/systemd/system"
+nginx_port=6060
+nginx_dir="/usr/local/nginxd"
+
+# Color setting
+RED='\033[0;31m'
+GREEN='\033[1;32m'
+GREEN_DARK='\033[0;32m'
+GREEN_UNDERLINE='\033[4;32m'
+NC='\033[0m'
+
+csudo=""
+if command -v sudo > /dev/null; then
+ csudo="sudo"
+fi
+
+update_flag=0
+
+initd_mod=0
+service_mod=2
+if pidof systemd &> /dev/null; then
+ service_mod=0
+elif $(which service &> /dev/null); then
+ service_mod=1
+ service_config_dir="/etc/init.d"
+ if $(which chkconfig &> /dev/null); then
+ initd_mod=1
+ elif $(which insserv &> /dev/null); then
+ initd_mod=2
+ elif $(which update-rc.d &> /dev/null); then
+ initd_mod=3
+ else
+ service_mod=2
+ fi
+else
+ service_mod=2
+fi
+
+
+# get the operating system type for using the corresponding init file
+# ubuntu/debian(deb), centos/fedora(rpm), others: opensuse, redhat, ..., no verification
+#osinfo=$(awk -F= '/^NAME/{print $2}' /etc/os-release)
+if [[ -e /etc/os-release ]]; then
+ osinfo=$(cat /etc/os-release | grep "NAME" | cut -d '"' -f2) ||:
+else
+ osinfo=""
+fi
+#echo "osinfo: ${osinfo}"
+os_type=0
+if echo $osinfo | grep -qwi "ubuntu" ; then
+# echo "This is ubuntu system"
+ os_type=1
+elif echo $osinfo | grep -qwi "debian" ; then
+# echo "This is debian system"
+ os_type=1
+elif echo $osinfo | grep -qwi "Kylin" ; then
+# echo "This is Kylin system"
+ os_type=1
+elif echo $osinfo | grep -qwi "centos" ; then
+# echo "This is centos system"
+ os_type=2
+elif echo $osinfo | grep -qwi "fedora" ; then
+# echo "This is fedora system"
+ os_type=2
+else
+ echo " osinfo: ${osinfo}"
+ echo " This is an officially unverified linux system,"
+ echo " if there are any problems with the installation and operation, "
+ echo " please feel free to contact hanatech.com.cn for support."
+ os_type=1
+fi
+
+
+# ============================= get input parameters =================================================
+
+# install.sh -v [server | client] -e [yes | no] -i [systemd | service | ...]
+
+# set parameters by default value
+interactiveFqdn=yes # [yes | no]
+verType=server # [server | client]
+initType=systemd # [systemd | service | ...]
+
+while getopts "hv:e:i:" arg
+do
+ case $arg in
+ e)
+ #echo "interactiveFqdn=$OPTARG"
+ interactiveFqdn=$( echo $OPTARG )
+ ;;
+ v)
+ #echo "verType=$OPTARG"
+ verType=$(echo $OPTARG)
+ ;;
+ i)
+ #echo "initType=$OPTARG"
+ initType=$(echo $OPTARG)
+ ;;
+ h)
+ echo "Usage: `basename $0` -v [server | client] -e [yes | no]"
+ exit 0
+ ;;
+ ?) #unknown option
+ echo "unknown argument"
+ exit 1
+ ;;
+ esac
+done
+
+function kill_process() {
+ pid=$(ps -ef | grep "$1" | grep -v "grep" | awk '{print $2}')
+ if [ -n "$pid" ]; then
+ ${csudo} kill -9 $pid || :
+ fi
+}
+
+function install_main_path() {
+ #create install main dir and all sub dir
+ ${csudo} rm -rf ${install_main_dir} || :
+ ${csudo} mkdir -p ${install_main_dir}
+ ${csudo} mkdir -p ${install_main_dir}/cfg
+ ${csudo} mkdir -p ${install_main_dir}/bin
+ ${csudo} mkdir -p ${install_main_dir}/connector
+ ${csudo} mkdir -p ${install_main_dir}/driver
+ ${csudo} mkdir -p ${install_main_dir}/examples
+ ${csudo} mkdir -p ${install_main_dir}/include
+ ${csudo} mkdir -p ${install_main_dir}/init.d
+ if [ "$verMode" == "cluster" ]; then
+ ${csudo} mkdir -p ${nginx_dir}
+ fi
+}
+
+function install_bin() {
+ # Remove links
+ ${csudo} rm -f ${bin_link_dir}/prodbc || :
+ ${csudo} rm -f ${bin_link_dir}/prodbs || :
+ ${csudo} rm -f ${bin_link_dir}/prodemo || :
+ ${csudo} rm -f ${bin_link_dir}/rmprodb || :
+ ${csudo} rm -f ${bin_link_dir}/tarbitrator || :
+ ${csudo} rm -f ${bin_link_dir}/set_core || :
+
+ ${csudo} cp -r ${script_dir}/bin/* ${install_main_dir}/bin && ${csudo} chmod 0555 ${install_main_dir}/bin/*
+
+ #Make link
+ [ -x ${install_main_dir}/bin/prodbc ] && ${csudo} ln -s ${install_main_dir}/bin/prodbc ${bin_link_dir}/prodbc || :
+ [ -x ${install_main_dir}/bin/prodbs ] && ${csudo} ln -s ${install_main_dir}/bin/prodbs ${bin_link_dir}/prodbs || :
+ [ -x ${install_main_dir}/bin/prodemo ] && ${csudo} ln -s ${install_main_dir}/bin/prodemo ${bin_link_dir}/prodemo || :
+ [ -x ${install_main_dir}/bin/remove_pro.sh ] && ${csudo} ln -s ${install_main_dir}/bin/remove_pro.sh ${bin_link_dir}/rmprodb || :
+ [ -x ${install_main_dir}/bin/set_core.sh ] && ${csudo} ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
+ [ -x ${install_main_dir}/bin/tarbitrator ] && ${csudo} ln -s ${install_main_dir}/bin/tarbitrator ${bin_link_dir}/tarbitrator || :
+
+ if [ "$verMode" == "cluster" ]; then
+ ${csudo} cp -r ${script_dir}/nginxd/* ${nginx_dir} && ${csudo} chmod 0555 ${nginx_dir}/*
+ ${csudo} mkdir -p ${nginx_dir}/logs
+ ${csudo} chmod 777 ${nginx_dir}/sbin/nginx
+ fi
+}
+
+function install_lib() {
+ # Remove links
+ ${csudo} rm -f ${lib_link_dir}/libtaos.* || :
+ ${csudo} rm -f ${lib64_link_dir}/libtaos.* || :
+ ${csudo} cp -rf ${script_dir}/driver/* ${install_main_dir}/driver && ${csudo} chmod 777 ${install_main_dir}/driver/*
+
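+ # Link the packaged client library as libtaos.so.1 and libtaos.so under /usr/lib (and /usr/lib64 when present)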
+ ${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.so.1
+ ${csudo} ln -s ${lib_link_dir}/libtaos.so.1 ${lib_link_dir}/libtaos.so
+
+ if [[ -d ${lib64_link_dir} && ! -e ${lib64_link_dir}/libtaos.so ]]; then
+ ${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib64_link_dir}/libtaos.so.1 || :
+ ${csudo} ln -s ${lib64_link_dir}/libtaos.so.1 ${lib64_link_dir}/libtaos.so || :
+ fi
+
+ if [ "$osType" != "Darwin" ]; then
+ ${csudo} ldconfig
+ else
+ ${csudo} update_dyld_shared_cache
+ fi
+}
+
+function install_header() {
+ ${csudo} rm -f ${inc_link_dir}/taos.h ${inc_link_dir}/taoserror.h || :
+ ${csudo} cp -f ${script_dir}/inc/* ${install_main_dir}/include && ${csudo} chmod 644 ${install_main_dir}/include/*
+ ${csudo} ln -s ${install_main_dir}/include/taos.h ${inc_link_dir}/taos.h
+ ${csudo} ln -s ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h
+}
+
+function install_jemalloc() {
+ jemalloc_dir=${script_dir}/jemalloc
+
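+ # If a bundled jemalloc ships with the package, install its binaries, headers, libraries and man/doc files under /usr/local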
+ if [ -d ${jemalloc_dir} ]; then
+ ${csudo} /usr/bin/install -c -d /usr/local/bin
+
+ if [ -f ${jemalloc_dir}/bin/jemalloc-config ]; then
+ ${csudo} /usr/bin/install -c -m 755 ${jemalloc_dir}/bin/jemalloc-config /usr/local/bin
+ fi
+ if [ -f ${jemalloc_dir}/bin/jemalloc.sh ]; then
+ ${csudo} /usr/bin/install -c -m 755 ${jemalloc_dir}/bin/jemalloc.sh /usr/local/bin
+ fi
+ if [ -f ${jemalloc_dir}/bin/jeprof ]; then
+ ${csudo} /usr/bin/install -c -m 755 ${jemalloc_dir}/bin/jeprof /usr/local/bin
+ fi
+ if [ -f ${jemalloc_dir}/include/jemalloc/jemalloc.h ]; then
+ ${csudo} /usr/bin/install -c -d /usr/local/include/jemalloc
+ ${csudo} /usr/bin/install -c -m 644 ${jemalloc_dir}/include/jemalloc/jemalloc.h /usr/local/include/jemalloc
+ fi
+ if [ -f ${jemalloc_dir}/lib/libjemalloc.so.2 ]; then
+ ${csudo} /usr/bin/install -c -d /usr/local/lib
+ ${csudo} /usr/bin/install -c -m 755 ${jemalloc_dir}/lib/libjemalloc.so.2 /usr/local/lib
+ ${csudo} ln -sf libjemalloc.so.2 /usr/local/lib/libjemalloc.so
+ ${csudo} /usr/bin/install -c -d /usr/local/lib
+ if [ -f ${jemalloc_dir}/lib/libjemalloc.a ]; then
+ ${csudo} /usr/bin/install -c -m 755 ${jemalloc_dir}/lib/libjemalloc.a /usr/local/lib
+ fi
+ if [ -f ${jemalloc_dir}/lib/libjemalloc_pic.a ]; then
+ ${csudo} /usr/bin/install -c -m 755 ${jemalloc_dir}/lib/libjemalloc_pic.a /usr/local/lib
+ fi
+ if [ -f ${jemalloc_dir}/lib/libjemalloc_pic.a ]; then
+ ${csudo} /usr/bin/install -c -d /usr/local/lib/pkgconfig
+ ${csudo} /usr/bin/install -c -m 644 ${jemalloc_dir}/lib/pkgconfig/jemalloc.pc /usr/local/lib/pkgconfig
+ fi
+ fi
+ if [ -f ${jemalloc_dir}/share/doc/jemalloc/jemalloc.html ]; then
+ ${csudo} /usr/bin/install -c -d /usr/local/share/doc/jemalloc
+ ${csudo} /usr/bin/install -c -m 644 ${jemalloc_dir}/share/doc/jemalloc/jemalloc.html /usr/local/share/doc/jemalloc
+ fi
+ if [ -f ${jemalloc_dir}/share/man/man3/jemalloc.3 ]; then
+ ${csudo} /usr/bin/install -c -d /usr/local/share/man/man3
+ ${csudo} /usr/bin/install -c -m 644 ${jemalloc_dir}/share/man/man3/jemalloc.3 /usr/local/share/man/man3
+ fi
+
+ if [ -d /etc/ld.so.conf.d ]; then
+ ${csudo} echo "/usr/local/lib" > /etc/ld.so.conf.d/jemalloc.conf
+ ${csudo} ldconfig
+ else
+ echo "/etc/ld.so.conf.d not found!"
+ fi
+ fi
+}
+
+function add_newHostname_to_hosts() {
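+ # Append "127.0.0.1 <hostname>" to /etc/hosts unless the hostname is already mapped to 127.0.0.1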
+ localIp="127.0.0.1"
+ OLD_IFS="$IFS"
+ IFS=" "
+ iphost=$(cat /etc/hosts | grep $1 | awk '{print $1}')
+ arr=($iphost)
+ IFS="$OLD_IFS"
+ for s in ${arr[@]}
+ do
+ if [[ "$s" == "$localIp" ]]; then
+ return
+ fi
+ done
+ ${csudo} echo "127.0.0.1 $1" >> /etc/hosts ||:
+}
+
+function set_hostname() {
+ echo -e -n "${GREEN}Please enter one hostname(must not be 'localhost')${NC}:"
+ read newHostname
+ while true; do
+ if [[ ! -z "$newHostname" && "$newHostname" != "localhost" ]]; then
+ break
+ else
+ read -p "Please enter one hostname(must not be 'localhost'):" newHostname
+ fi
+ done
+
+ ${csudo} hostname $newHostname ||:
+ retval=`echo $?`
+ if [[ $retval != 0 ]]; then
+ echo
+ echo "set hostname fail!"
+ return
+ fi
+
+ #ubuntu/centos /etc/hostname
+ if [[ -e /etc/hostname ]]; then
+ ${csudo} echo $newHostname > /etc/hostname ||:
+ fi
+
+ #debian: #HOSTNAME=yourname
+ if [[ -e /etc/sysconfig/network ]]; then
+ ${csudo} sed -i -r "s/#*\s*(HOSTNAME=\s*).*/\1$newHostname/" /etc/sysconfig/network ||:
+ fi
+
+ ${csudo} sed -i -r "s/#*\s*(fqdn\s*).*/\1$newHostname/" ${cfg_install_dir}/taos.cfg
+ serverFqdn=$newHostname
+
+ if [[ -e /etc/hosts ]]; then
+ add_newHostname_to_hosts $newHostname
+ fi
+}
+
+function is_correct_ipaddr() {
+ newIp=$1
+ OLD_IFS="$IFS"
+ IFS=" "
+ arr=($iplist)
+ IFS="$OLD_IFS"
+ for s in ${arr[@]}
+ do
+ if [[ "$s" == "$newIp" ]]; then
+ return 0
+ fi
+ done
+
+ return 1
+}
+
+function set_ipAsFqdn() {
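+ # List local IPv4 addresses and let the user pick one as the FQDN; fall back to 127.0.0.1 when none can be detected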
+ iplist=$(ip address |grep inet |grep -v inet6 |grep -v 127.0.0.1 |awk '{print $2}' |awk -F "/" '{print $1}') ||:
+ if [ -z "$iplist" ]; then
+ iplist=$(ifconfig |grep inet |grep -v inet6 |grep -v 127.0.0.1 |awk '{print $2}' |awk -F ":" '{print $2}') ||:
+ fi
+
+ if [ -z "$iplist" ]; then
+ echo
+ echo -e -n "${GREEN}Unable to get local ip, use 127.0.0.1${NC}"
+ localFqdn="127.0.0.1"
+ # Write the local FQDN to configuration file
+ ${csudo} sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${cfg_install_dir}/taos.cfg
+ serverFqdn=$localFqdn
+ echo
+ return
+ fi
+
+ echo -e -n "${GREEN}Please choose an IP from local IP list${NC}:"
+ echo
+ echo -e -n "${GREEN}$iplist${NC}"
+ echo
+ echo
+ echo -e -n "${GREEN}Notes: if IP is used as the node name, data can NOT be migrated to other machine directly${NC}:"
+ read localFqdn
+ while true; do
+ if [ ! -z "$localFqdn" ]; then
+ # Check if correct ip address
+ is_correct_ipaddr $localFqdn
+ retval=`echo $?`
+ if [[ $retval != 0 ]]; then
+ read -p "Please choose an IP from local IP list:" localFqdn
+ else
+ # Write the local FQDN to configuration file
+ ${csudo} sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${cfg_install_dir}/taos.cfg
+ serverFqdn=$localFqdn
+ break
+ fi
+ else
+ read -p "Please choose an IP from local IP list:" localFqdn
+ fi
+ done
+}
+
+function local_fqdn_check() {
+ #serverFqdn=$(hostname)
+ echo
+ echo -e -n "System hostname is: ${GREEN}$serverFqdn${NC}"
+ echo
+ if [[ "$serverFqdn" == "" ]] || [[ "$serverFqdn" == "localhost" ]]; then
+ echo -e -n "${GREEN}It is strongly recommended to configure a hostname for this machine ${NC}"
+ echo
+
+ while true
+ do
+ read -r -p "Set hostname now? [Y/n] " input
+ if [ ! -n "$input" ]; then
+ set_hostname
+ break
+ else
+ case $input in
+ [yY][eE][sS]|[yY])
+ set_hostname
+ break
+ ;;
+
+ [nN][oO]|[nN])
+ set_ipAsFqdn
+ break
+ ;;
+
+ *)
+ echo "Invalid input..."
+ ;;
+ esac
+ fi
+ done
+ fi
+}
+
+function install_config() {
+ if [ ! -f ${cfg_install_dir}/taos.cfg ]; then
+ ${csudo} mkdir -p ${cfg_install_dir}
+ [ -f ${script_dir}/cfg/taos.cfg ] && ${csudo} cp ${script_dir}/cfg/taos.cfg ${cfg_install_dir}
+ ${csudo} chmod 644 ${cfg_install_dir}/*
+ fi
+
+ ${csudo} cp -f ${script_dir}/cfg/taos.cfg ${install_main_dir}/cfg/taos.cfg.org
+ ${csudo} ln -s ${cfg_install_dir}/taos.cfg ${install_main_dir}/cfg
+
+ [ ! -z $1 ] && return 0 || : # only install client
+
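+ # Skip the interactive FQDN / firstEp prompts on upgrade or when a non-interactive install was requested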
+ if ((${update_flag}==1)); then
+ return 0
+ fi
+
+ if [ "$interactiveFqdn" == "no" ]; then
+ return 0
+ fi
+
+ local_fqdn_check
+
+ #FQDN_FORMAT="(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)"
+ #FQDN_FORMAT="(:[1-6][0-9][0-9][0-9][0-9]$)"
+ #PORT_FORMAT="(/[1-6][0-9][0-9][0-9][0-9]?/)"
+ #FQDN_PATTERN=":[0-9]{1,5}$"
+
+ # first full-qualified domain name (FQDN) for ProDB cluster system
+ echo
+ echo -e -n "${GREEN}Enter FQDN:port (like h1.hanatech.com.cn:6030) of an existing ProDB cluster node to join${NC}"
+ echo
+ echo -e -n "${GREEN}OR leave it blank to build one${NC}:"
+ read firstEp
+ while true; do
+ if [ ! -z "$firstEp" ]; then
+ # check the format of the firstEp
+ #if [[ $firstEp == $FQDN_PATTERN ]]; then
+ # Write the first FQDN to configuration file
+ ${csudo} sed -i -r "s/#*\s*(firstEp\s*).*/\1$firstEp/" ${cfg_install_dir}/taos.cfg
+ break
+ #else
+ # read -p "Please enter the correct FQDN:port: " firstEp
+ #fi
+ else
+ break
+ fi
+ done
+}
+
+
+function install_log() {
+ ${csudo} rm -rf ${log_dir} || :
+ ${csudo} mkdir -p ${log_dir} && ${csudo} chmod 777 ${log_dir}
+
+ ${csudo} ln -s ${log_dir} ${install_main_dir}/log
+}
+
+function install_data() {
+ ${csudo} mkdir -p ${data_dir}
+
+ ${csudo} ln -s ${data_dir} ${install_main_dir}/data
+}
+
+function install_connector() {
+ ${csudo} cp -rf ${script_dir}/connector/* ${install_main_dir}/connector
+}
+
+function install_examples() {
+ if [ -d ${script_dir}/examples ]; then
+ ${csudo} cp -rf ${script_dir}/examples/* ${install_main_dir}/examples
+ fi
+}
+
+function clean_service_on_sysvinit() {
+ if pidof prodbs &> /dev/null; then
+ ${csudo} service prodbs stop || :
+ fi
+
+ if pidof tarbitrator &> /dev/null; then
+ ${csudo} service tarbitratord stop || :
+ fi
+
+ if ((${initd_mod}==1)); then
+ if [ -e ${service_config_dir}/prodbs ]; then
+ ${csudo} chkconfig --del prodbs || :
+ fi
+
+ if [ -e ${service_config_dir}/tarbitratord ]; then
+ ${csudo} chkconfig --del tarbitratord || :
+ fi
+ elif ((${initd_mod}==2)); then
+ if [ -e ${service_config_dir}/prodbs ]; then
+ ${csudo} insserv -r prodbs || :
+ fi
+ if [ -e ${service_config_dir}/tarbitratord ]; then
+ ${csudo} insserv -r tarbitratord || :
+ fi
+ elif ((${initd_mod}==3)); then
+ if [ -e ${service_config_dir}/prodbs ]; then
+ ${csudo} update-rc.d -f prodbs remove || :
+ fi
+ if [ -e ${service_config_dir}/tarbitratord ]; then
+ ${csudo} update-rc.d -f tarbitratord remove || :
+ fi
+ fi
+
+ ${csudo} rm -f ${service_config_dir}/prodbs || :
+ ${csudo} rm -f ${service_config_dir}/tarbitratord || :
+
+ if $(which init &> /dev/null); then
+ ${csudo} init q || :
+ fi
+}
+
+function install_service_on_sysvinit() {
+ clean_service_on_sysvinit
+ sleep 1
+
+ # Install prodbs service
+
+ if ((${os_type}==1)); then
+ ${csudo} cp -f ${script_dir}/init.d/prodbs.deb ${install_main_dir}/init.d/prodbs
+ ${csudo} cp ${script_dir}/init.d/prodbs.deb ${service_config_dir}/prodbs && ${csudo} chmod a+x ${service_config_dir}/prodbs
+ ${csudo} cp -f ${script_dir}/init.d/tarbitratord.deb ${install_main_dir}/init.d/tarbitratord
+ ${csudo} cp ${script_dir}/init.d/tarbitratord.deb ${service_config_dir}/tarbitratord && ${csudo} chmod a+x ${service_config_dir}/tarbitratord
+ elif ((${os_type}==2)); then
+ ${csudo} cp -f ${script_dir}/init.d/prodbs.rpm ${install_main_dir}/init.d/prodbs
+ ${csudo} cp ${script_dir}/init.d/prodbs.rpm ${service_config_dir}/prodbs && ${csudo} chmod a+x ${service_config_dir}/prodbs
+ ${csudo} cp -f ${script_dir}/init.d/tarbitratord.rpm ${install_main_dir}/init.d/tarbitratord
+ ${csudo} cp ${script_dir}/init.d/tarbitratord.rpm ${service_config_dir}/tarbitratord && ${csudo} chmod a+x ${service_config_dir}/tarbitratord
+ fi
+
+ if ((${initd_mod}==1)); then
+ ${csudo} chkconfig --add prodbs || :
+ ${csudo} chkconfig --level 2345 prodbs on || :
+ ${csudo} chkconfig --add tarbitratord || :
+ ${csudo} chkconfig --level 2345 tarbitratord on || :
+ elif ((${initd_mod}==2)); then
+ ${csudo} insserv prodbs || :
+ ${csudo} insserv -d prodbs || :
+ ${csudo} insserv tarbitratord || :
+ ${csudo} insserv -d tarbitratord || :
+ elif ((${initd_mod}==3)); then
+ ${csudo} update-rc.d prodbs defaults || :
+ ${csudo} update-rc.d tarbitratord defaults || :
+ fi
+}
+
+function clean_service_on_systemd() {
+ prodbs_service_config="${service_config_dir}/prodbs.service"
+ if systemctl is-active --quiet prodbs; then
+ echo "ProDB is running, stopping it..."
+ ${csudo} systemctl stop prodbs &> /dev/null || echo &> /dev/null
+ fi
+ ${csudo} systemctl disable prodbs &> /dev/null || echo &> /dev/null
+ ${csudo} rm -f ${prodbs_service_config}
+
+ tarbitratord_service_config="${service_config_dir}/tarbitratord.service"
+ if systemctl is-active --quiet tarbitratord; then
+ echo "tarbitrator is running, stopping it..."
+ ${csudo} systemctl stop tarbitratord &> /dev/null || echo &> /dev/null
+ fi
+ ${csudo} systemctl disable tarbitratord &> /dev/null || echo &> /dev/null
+ ${csudo} rm -f ${tarbitratord_service_config}
+
+ if [ "$verMode" == "cluster" ]; then
+ nginx_service_config="${service_config_dir}/nginxd.service"
+ if systemctl is-active --quiet nginxd; then
+ echo "Nginx for ProDB is running, stopping it..."
+ ${csudo} systemctl stop nginxd &> /dev/null || echo &> /dev/null
+ fi
+ ${csudo} systemctl disable nginxd &> /dev/null || echo &> /dev/null
+ ${csudo} rm -f ${nginx_service_config}
+ fi
+}
+
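+# Write systemd unit files for prodbs and tarbitratord (plus nginxd in cluster mode),
+# then enable the prodbs service so it starts on boot.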
+function install_service_on_systemd() {
+ clean_service_on_systemd
+
+ prodbs_service_config="${service_config_dir}/prodbs.service"
+ ${csudo} bash -c "echo '[Unit]' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'Description=ProDB server service' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'After=network-online.target' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'Wants=network-online.target' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo '[Service]' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'Type=simple' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'ExecStart=/usr/bin/prodbs' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'ExecStartPre=/usr/local/ProDB/bin/startPre.sh' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'TimeoutStopSec=1000000s' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'LimitNOFILE=infinity' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'LimitNPROC=infinity' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'LimitCORE=infinity' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'TimeoutStartSec=0' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'StandardOutput=null' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'Restart=always' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'StartLimitBurst=3' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'StartLimitInterval=60s' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo '[Install]' >> ${prodbs_service_config}"
+ ${csudo} bash -c "echo 'WantedBy=multi-user.target' >> ${prodbs_service_config}"
+ ${csudo} systemctl enable prodbs
+
+ tarbitratord_service_config="${service_config_dir}/tarbitratord.service"
+ ${csudo} bash -c "echo '[Unit]' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'Description=ProDB arbitrator service' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'After=network-online.target' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'Wants=network-online.target' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo '[Service]' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'Type=simple' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'ExecStart=/usr/bin/tarbitrator' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'TimeoutStopSec=1000000s' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'LimitNOFILE=infinity' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'LimitNPROC=infinity' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'LimitCORE=infinity' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'TimeoutStartSec=0' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'StandardOutput=null' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'Restart=always' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'StartLimitBurst=3' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'StartLimitInterval=60s' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo '[Install]' >> ${tarbitratord_service_config}"
+ ${csudo} bash -c "echo 'WantedBy=multi-user.target' >> ${tarbitratord_service_config}"
+ #${csudo} systemctl enable tarbitratord
+
+ if [ "$verMode" == "cluster" ]; then
+ nginx_service_config="${service_config_dir}/nginxd.service"
+ ${csudo} bash -c "echo '[Unit]' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'Description=Nginx For PowrDB Service' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'After=network-online.target' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'Wants=network-online.target' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo >> ${nginx_service_config}"
+ ${csudo} bash -c "echo '[Service]' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'Type=forking' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'PIDFile=/usr/local/nginxd/logs/nginx.pid' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'ExecStart=/usr/local/nginxd/sbin/nginx' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'ExecStop=/usr/local/nginxd/sbin/nginx -s stop' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'TimeoutStopSec=1000000s' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'LimitNOFILE=infinity' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'LimitNPROC=infinity' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'LimitCORE=infinity' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'TimeoutStartSec=0' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'StandardOutput=null' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'Restart=always' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'StartLimitBurst=3' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'StartLimitInterval=60s' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo >> ${nginx_service_config}"
+ ${csudo} bash -c "echo '[Install]' >> ${nginx_service_config}"
+ ${csudo} bash -c "echo 'WantedBy=multi-user.target' >> ${nginx_service_config}"
+ if ! ${csudo} systemctl enable nginxd &> /dev/null; then
+ ${csudo} systemctl daemon-reexec
+ ${csudo} systemctl enable nginxd
+ fi
+ ${csudo} systemctl start nginxd
+ fi
+}
+
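+# Install as a systemd or sysvinit service depending on the detected service manager;
+# without one, just stop any running prodbs so it can be started manually.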
+function install_service() {
+ if ((${service_mod}==0)); then
+ install_service_on_systemd
+ elif ((${service_mod}==1)); then
+ install_service_on_sysvinit
+ else
+    # must manually stop prodbs
+ kill_process prodbs
+ fi
+}
+
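+# Compare two dotted version strings: returns 0 if equal, 1 if $1 > $2, 2 if $1 < $2.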
+vercomp () {
+ if [[ $1 == $2 ]]; then
+ return 0
+ fi
+ local IFS=.
+ local i ver1=($1) ver2=($2)
+ # fill empty fields in ver1 with zeros
+ for ((i=${#ver1[@]}; i<${#ver2[@]}; i++)); do
+ ver1[i]=0
+ done
+
+ for ((i=0; i<${#ver1[@]}; i++)); do
+ if [[ -z ${ver2[i]} ]]
+ then
+ # fill empty fields in ver2 with zeros
+ ver2[i]=0
+ fi
+ if ((10#${ver1[i]} > 10#${ver2[i]}))
+ then
+ return 1
+ fi
+ if ((10#${ver1[i]} < 10#${ver2[i]}))
+ then
+ return 2
+ fi
+ done
+ return 0
+}
+
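+# Allow the update only if the packaged driver version is not lower than the minimum
+# compatible version (read from driver/vercomp.txt, or reported by prodbs -V as a fallback).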
+function is_version_compatible() {
+ curr_version=`ls ${script_dir}/driver/libtaos.so* |cut -d '.' -f 3-6`
+
+ if [ -f ${script_dir}/driver/vercomp.txt ]; then
+ min_compatible_version=`cat ${script_dir}/driver/vercomp.txt`
+ else
+ min_compatible_version=$(${script_dir}/bin/prodbs -V | head -1 | cut -d ' ' -f 5)
+ fi
+
+ vercomp $curr_version $min_compatible_version
+ case $? in
+ 0) return 0;;
+ 1) return 0;;
+ 2) return 1;;
+ esac
+}
+
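+# In-place update: verify version compatibility, stop running services, reinstall
+# binaries, libraries and config links; the existing data directory is left untouched.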
+function update_prodb() {
+ # Start to update
+ if [ ! -e prodb.tar.gz ]; then
+ echo "File prodb.tar.gz does not exist"
+ exit 1
+ fi
+ tar -zxf prodb.tar.gz
+ install_jemalloc
+
+ # Check if version compatible
+ if ! is_version_compatible; then
+ echo -e "${RED}Version incompatible${NC}"
+ return 1
+ fi
+
+ echo -e "${GREEN}Start to update ProDB...${NC}"
+ # Stop the service if running
+ if pidof prodbs &> /dev/null; then
+ if ((${service_mod}==0)); then
+ ${csudo} systemctl stop prodbs || :
+ elif ((${service_mod}==1)); then
+ ${csudo} service prodbs stop || :
+ else
+ kill_process prodbs
+ fi
+ sleep 1
+ fi
+ if [ "$verMode" == "cluster" ]; then
+ if pidof nginx &> /dev/null; then
+ if ((${service_mod}==0)); then
+ ${csudo} systemctl stop nginxd || :
+ elif ((${service_mod}==1)); then
+ ${csudo} service nginxd stop || :
+ else
+ kill_process nginx
+ fi
+ sleep 1
+ fi
+ fi
+
+ install_main_path
+
+ install_log
+ install_header
+ install_lib
+ if [ "$pagMode" != "lite" ]; then
+ install_connector
+ fi
+ install_examples
+ if [ -z $1 ]; then
+ install_bin
+ install_service
+ install_config
+
+ openresty_work=false
+ if [ "$verMode" == "cluster" ]; then
+ # Check if openresty is installed
+ # Check if nginx is installed successfully
+ if type curl &> /dev/null; then
+ if curl -sSf http://127.0.0.1:${nginx_port} &> /dev/null; then
+ echo -e "\033[44;32;1mNginx for ProDB is updated successfully!${NC}"
+ openresty_work=true
+ else
+ echo -e "\033[44;31;5mNginx for ProDB does not work! Please try again!\033[0m"
+ fi
+ fi
+ fi
+
+ #echo
+ #echo -e "\033[44;32;1mProDB is updated successfully!${NC}"
+ echo
+ echo -e "${GREEN_DARK}To configure ProDB ${NC}: edit /etc/ProDB/taos.cfg"
+ if ((${service_mod}==0)); then
+ echo -e "${GREEN_DARK}To start ProDB ${NC}: ${csudo} systemctl start prodbs${NC}"
+ elif ((${service_mod}==1)); then
+ echo -e "${GREEN_DARK}To start ProDB ${NC}: ${csudo} service prodbs start${NC}"
+ else
+ echo -e "${GREEN_DARK}To start ProDB ${NC}: ./prodbs${NC}"
+ fi
+
+ if [ ${openresty_work} = 'true' ]; then
+ echo -e "${GREEN_DARK}To access ProDB ${NC}: use ${GREEN_UNDERLINE}prodbc -h $serverFqdn${NC} in shell OR from ${GREEN_UNDERLINE}http://127.0.0.1:${nginx_port}${NC}"
+ else
+ echo -e "${GREEN_DARK}To access ProDB ${NC}: use ${GREEN_UNDERLINE}prodbc -h $serverFqdn${NC} in shell${NC}"
+ fi
+
+ echo
+ echo -e "\033[44;32;1mProDB is updated successfully!${NC}"
+ else
+ install_bin
+ install_config
+
+ echo
+ echo -e "\033[44;32;1mProDB client is updated successfully!${NC}"
+ fi
+
+ rm -rf $(tar -tf prodb.tar.gz)
+}
+
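+# Fresh install: set up directories, binaries and libraries; for a server install also
+# register the service, write the config and print quick-start hints.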
+function install_prodb() {
+ # Start to install
+ if [ ! -e prodb.tar.gz ]; then
+ echo "File prodb.tar.gz does not exist"
+ exit 1
+ fi
+ tar -zxf prodb.tar.gz
+
+ echo -e "${GREEN}Start to install ProDB...${NC}"
+
+ install_main_path
+
+ if [ -z $1 ]; then
+ install_data
+ fi
+
+ install_log
+ install_header
+ install_lib
+ install_jemalloc
+ if [ "$pagMode" != "lite" ]; then
+ install_connector
+ fi
+ install_examples
+
+ if [ -z $1 ]; then # install service and client
+ # For installing new
+ install_bin
+ install_service
+
+ openresty_work=false
+ if [ "$verMode" == "cluster" ]; then
+ # Check if nginx is installed successfully
+ if type curl &> /dev/null; then
+ if curl -sSf http://127.0.0.1:${nginx_port} &> /dev/null; then
+ echo -e "\033[44;32;1mNginx for ProDB is installed successfully!${NC}"
+ openresty_work=true
+ else
+ echo -e "\033[44;31;5mNginx for ProDB does not work! Please try again!\033[0m"
+ fi
+ fi
+ fi
+
+ install_config
+
+    # Ask whether to start the service
+ #echo
+ #echo -e "\033[44;32;1mProDB is installed successfully!${NC}"
+ echo
+ echo -e "${GREEN_DARK}To configure ProDB ${NC}: edit /etc/ProDB/taos.cfg"
+ if ((${service_mod}==0)); then
+ echo -e "${GREEN_DARK}To start ProDB ${NC}: ${csudo} systemctl start prodbs${NC}"
+ elif ((${service_mod}==1)); then
+ echo -e "${GREEN_DARK}To start ProDB ${NC}: ${csudo} service prodbs start${NC}"
+ else
+ echo -e "${GREEN_DARK}To start ProDB ${NC}: prodbs${NC}"
+ fi
+
+ if [ ! -z "$firstEp" ]; then
+ tmpFqdn=${firstEp%%:*}
+ substr=":"
+ if [[ $firstEp =~ $substr ]];then
+ tmpPort=${firstEp#*:}
+ else
+ tmpPort=""
+ fi
+ if [[ "$tmpPort" != "" ]];then
+ echo -e "${GREEN_DARK}To access ProDB ${NC}: prodbc -h $tmpFqdn -P $tmpPort${GREEN_DARK} to login into cluster, then${NC}"
+ else
+ echo -e "${GREEN_DARK}To access ProDB ${NC}: prodbc -h $tmpFqdn${GREEN_DARK} to login into cluster, then${NC}"
+ fi
+ echo -e "${GREEN_DARK}execute ${NC}: create dnode 'newDnodeFQDN:port'; ${GREEN_DARK}to add this new node${NC}"
+ echo
+ elif [ ! -z "$serverFqdn" ]; then
+ echo -e "${GREEN_DARK}To access ProDB ${NC}: prodbc -h $serverFqdn${GREEN_DARK} to login into ProDB server${NC}"
+ echo
+ fi
+ echo -e "\033[44;32;1mProDB is installed successfully!${NC}"
+ echo
+ else # Only install client
+ install_bin
+ install_config
+
+ echo
+ echo -e "\033[44;32;1mProDB client is installed successfully!${NC}"
+ fi
+
+ rm -rf $(tar -tf prodb.tar.gz)
+}
+
+
+## ==============================Main program starts from here============================
+serverFqdn=$(hostname)
+if [ "$verType" == "server" ]; then
+ # Install server and client
+ if [ -x ${bin_dir}/prodbs ]; then
+ update_flag=1
+ update_prodb
+ else
+ install_prodb
+ fi
+elif [ "$verType" == "client" ]; then
+ interactiveFqdn=no
+ # Only install client
+ if [ -x ${bin_dir}/prodbc ]; then
+ update_flag=1
+ update_prodb client
+ else
+ install_prodb client
+ fi
+else
+ echo "please input correct verType"
+fi
diff --git a/packaging/tools/makearbi_pro.sh b/packaging/tools/makearbi_pro.sh
new file mode 100755
index 0000000000000000000000000000000000000000..6ce3765e44acc408ced9730c54b793338eb37b38
--- /dev/null
+++ b/packaging/tools/makearbi_pro.sh
@@ -0,0 +1,75 @@
+#!/bin/bash
+#
+# Generate the arbitrator's tar.gz setup package for all OS systems
+
+set -e
+#set -x
+
+curr_dir=$(pwd)
+compile_dir=$1
+version=$2
+build_time=$3
+cpuType=$4
+osType=$5
+verMode=$6
+verType=$7
+pagMode=$8
+
+script_dir="$(dirname $(readlink -f $0))"
+top_dir="$(readlink -f ${script_dir}/../..)"
+
+# create compressed install file.
+build_dir="${compile_dir}/build"
+code_dir="${top_dir}/src"
+release_dir="${top_dir}/release"
+
+#package_name='linux'
+if [ "$verMode" == "cluster" ]; then
+ install_dir="${release_dir}/ProDB-enterprise-arbitrator-${version}"
+else
+ install_dir="${release_dir}/ProDB-arbitrator-${version}"
+fi
+
+# Directories and files.
+bin_files="${build_dir}/bin/tarbitrator ${script_dir}/remove_arbi_pro.sh"
+install_files="${script_dir}/install_arbi_pro.sh"
+
+#header_files="${code_dir}/inc/taos.h ${code_dir}/inc/taoserror.h"
+init_file_tarbitrator_deb=${script_dir}/../deb/tarbitratord
+init_file_tarbitrator_rpm=${script_dir}/../rpm/tarbitratord
+
+# make directories.
+mkdir -p ${install_dir} && cp ${install_files} ${install_dir} && chmod a+x ${install_dir}/install_arbi_pro.sh || :
+#mkdir -p ${install_dir}/inc && cp ${header_files} ${install_dir}/inc || :
+mkdir -p ${install_dir}/bin && cp ${bin_files} ${install_dir}/bin && chmod a+x ${install_dir}/bin/* || :
+mkdir -p ${install_dir}/init.d && cp ${init_file_tarbitrator_deb} ${install_dir}/init.d/tarbitratord.deb || :
+mkdir -p ${install_dir}/init.d && cp ${init_file_tarbitrator_rpm} ${install_dir}/init.d/tarbitratord.rpm || :
+
+cd ${release_dir}
+
+if [ "$verMode" == "cluster" ]; then
+ pkg_name=${install_dir}-${osType}-${cpuType}
+elif [ "$verMode" == "edge" ]; then
+ pkg_name=${install_dir}-${osType}-${cpuType}
+else
+ echo "unknow verMode, nor cluster or edge"
+ exit 1
+fi
+
+if [ "$verType" == "beta" ]; then
+ pkg_name=${pkg_name}-${verType}
+elif [ "$verType" == "stable" ]; then
+ pkg_name=${pkg_name}
+else
+ echo "unknow verType, nor stabel or beta"
+ exit 1
+fi
+
+tar -zcv -f "$(basename ${pkg_name}).tar.gz" $(basename ${install_dir}) --remove-files || :
+exitcode=$?
+if [ "$exitcode" != "0" ]; then
+ echo "tar ${pkg_name}.tar.gz error !!!"
+ exit $exitcode
+fi
+
+cd ${curr_dir}
diff --git a/packaging/tools/makeclient_pro.sh b/packaging/tools/makeclient_pro.sh
new file mode 100755
index 0000000000000000000000000000000000000000..599c91fbf082955887c677b750aa12f946c0890b
--- /dev/null
+++ b/packaging/tools/makeclient_pro.sh
@@ -0,0 +1,225 @@
+#!/bin/bash
+#
+# Generate the tar.gz package for the Linux client on all OS systems
+set -e
+#set -x
+
+curr_dir=$(pwd)
+compile_dir=$1
+version=$2
+build_time=$3
+cpuType=$4
+osType=$5
+verMode=$6
+verType=$7
+pagMode=$8
+
+if [ "$osType" != "Darwin" ]; then
+ script_dir="$(dirname $(readlink -f $0))"
+ top_dir="$(readlink -f ${script_dir}/../..)"
+else
+ script_dir=`dirname $0`
+ cd ${script_dir}
+ script_dir="$(pwd)"
+ top_dir=${script_dir}/../..
+fi
+
+# create compressed install file.
+build_dir="${compile_dir}/build"
+code_dir="${top_dir}/src"
+release_dir="${top_dir}/release"
+
+#package_name='linux'
+
+if [ "$verMode" == "cluster" ]; then
+ install_dir="${release_dir}/ProDB-enterprise-client-${version}"
+else
+ install_dir="${release_dir}/ProDB-client-${version}"
+fi
+
+# Directories and files.
+
+if [ "$osType" != "Darwin" ]; then
+ lib_files="${build_dir}/lib/libtaos.so.${version}"
+else
+ bin_files="${build_dir}/bin/taos ${script_dir}/remove_client_pro.sh"
+ lib_files="${build_dir}/lib/libtaos.${version}.dylib"
+fi
+
+header_files="${code_dir}/inc/taos.h ${code_dir}/inc/taoserror.h"
+if [ "$verMode" == "cluster" ]; then
+ cfg_dir="${top_dir}/../enterprise/packaging/cfg"
+else
+ cfg_dir="${top_dir}/packaging/cfg"
+fi
+
+install_files="${script_dir}/install_client_pro.sh"
+
+# make directories.
+mkdir -p ${install_dir}
+mkdir -p ${install_dir}/inc && cp ${header_files} ${install_dir}/inc
+mkdir -p ${install_dir}/cfg && cp ${cfg_dir}/taos.cfg ${install_dir}/cfg/taos.cfg
+
+sed -i '/dataDir/ {s/taos/ProDB/g}' ${install_dir}/cfg/taos.cfg
+sed -i '/logDir/ {s/taos/ProDB/g}' ${install_dir}/cfg/taos.cfg
+sed -i "s/TDengine/ProDB/g" ${install_dir}/cfg/taos.cfg
+
+mkdir -p ${install_dir}/bin
+if [ "$osType" != "Darwin" ]; then
+ if [ "$pagMode" == "lite" ]; then
+ strip ${build_dir}/bin/taos
+ cp ${build_dir}/bin/taos ${install_dir}/bin/prodbc
+ cp ${script_dir}/remove_pro.sh ${install_dir}/bin
+ else
+ cp ${build_dir}/bin/taos ${install_dir}/bin/prodbc
+ cp ${script_dir}/remove_pro.sh ${install_dir}/bin
+ cp ${build_dir}/bin/taosdemo ${install_dir}/bin/prodemo
+ cp ${build_dir}/bin/taosdump ${install_dir}/bin/prodump
+ cp ${script_dir}/set_core.sh ${install_dir}/bin
+ cp ${script_dir}/get_client.sh ${install_dir}/bin
+ cp ${script_dir}/taosd-dump-cfg.gdb ${install_dir}/bin
+ fi
+else
+ cp ${bin_files} ${install_dir}/bin
+fi
+chmod a+x ${install_dir}/bin/* || :
+
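+# Bundle locally built jemalloc binaries, headers, libraries and docs when present.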
+if [ -f ${build_dir}/bin/jemalloc-config ]; then
+ mkdir -p ${install_dir}/jemalloc/{bin,lib,lib/pkgconfig,include/jemalloc,share/doc/jemalloc,share/man/man3}
+ cp ${build_dir}/bin/jemalloc-config ${install_dir}/jemalloc/bin
+ if [ -f ${build_dir}/bin/jemalloc.sh ]; then
+ cp ${build_dir}/bin/jemalloc.sh ${install_dir}/jemalloc/bin
+ fi
+ if [ -f ${build_dir}/bin/jeprof ]; then
+ cp ${build_dir}/bin/jeprof ${install_dir}/jemalloc/bin
+ fi
+ if [ -f ${build_dir}/include/jemalloc/jemalloc.h ]; then
+ cp ${build_dir}/include/jemalloc/jemalloc.h ${install_dir}/jemalloc/include/jemalloc
+ fi
+ if [ -f ${build_dir}/lib/libjemalloc.so.2 ]; then
+ cp ${build_dir}/lib/libjemalloc.so.2 ${install_dir}/jemalloc/lib
+ ln -sf libjemalloc.so.2 ${install_dir}/jemalloc/lib/libjemalloc.so
+ fi
+ if [ -f ${build_dir}/lib/libjemalloc.a ]; then
+ cp ${build_dir}/lib/libjemalloc.a ${install_dir}/jemalloc/lib
+ fi
+ if [ -f ${build_dir}/lib/libjemalloc_pic.a ]; then
+ cp ${build_dir}/lib/libjemalloc_pic.a ${install_dir}/jemalloc/lib
+ fi
+ if [ -f ${build_dir}/lib/pkgconfig/jemalloc.pc ]; then
+ cp ${build_dir}/lib/pkgconfig/jemalloc.pc ${install_dir}/jemalloc/lib/pkgconfig
+ fi
+ if [ -f ${build_dir}/share/doc/jemalloc/jemalloc.html ]; then
+ cp ${build_dir}/share/doc/jemalloc/jemalloc.html ${install_dir}/jemalloc/share/doc/jemalloc
+ fi
+ if [ -f ${build_dir}/share/man/man3/jemalloc.3 ]; then
+ cp ${build_dir}/share/man/man3/jemalloc.3 ${install_dir}/jemalloc/share/man/man3
+ fi
+fi
+
+cd ${install_dir}
+
+if [ "$osType" != "Darwin" ]; then
+ tar -zcv -f prodb.tar.gz * --remove-files || :
+else
+ tar -zcv -f prodb.tar.gz * || :
+ mv prodb.tar.gz ..
+ rm -rf ./*
+ mv ../prodb.tar.gz .
+fi
+
+cd ${curr_dir}
+cp ${install_files} ${install_dir}
+if [ "$osType" == "Darwin" ]; then
+ sed 's/osType=Linux/osType=Darwin/g' ${install_dir}/install_client_pro.sh >> install_client_prodb_temp.sh
+ mv install_client_prodb_temp.sh ${install_dir}/install_client_pro.sh
+fi
+if [ "$pagMode" == "lite" ]; then
+ sed 's/pagMode=full/pagMode=lite/g' ${install_dir}/install_client_pro.sh >> install_client_prodb_temp.sh
+ mv install_client_prodb_temp.sh ${install_dir}/install_client_pro.sh
+fi
+chmod a+x ${install_dir}/install_client_pro.sh
+
+# Copy example code
+mkdir -p ${install_dir}/examples
+examples_dir="${top_dir}/tests/examples"
+cp -r ${examples_dir}/c ${install_dir}/examples
+sed -i '/passwd/ {s/taosdata/prodb/g}' ${install_dir}/examples/c/*.c
+sed -i '/root/ {s/taosdata/prodb/g}' ${install_dir}/examples/c/*.c
+
+if [[ "$pagMode" != "lite" ]] && [[ "$cpuType" != "aarch32" ]]; then
+ cp -r ${examples_dir}/JDBC ${install_dir}/examples
+ cp -r ${examples_dir}/matlab ${install_dir}/examples
+ mv ${install_dir}/examples/matlab/TDengineDemo.m ${install_dir}/examples/matlab/ProDBDemo.m
+ sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/examples/matlab/ProDBDemo.m
+ cp -r ${examples_dir}/python ${install_dir}/examples
+ sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/examples/python/read_example.py
+ cp -r ${examples_dir}/R ${install_dir}/examples
+ sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/examples/R/command.txt
+ cp -r ${examples_dir}/go ${install_dir}/examples
+ mv ${install_dir}/examples/go/taosdemo.go ${install_dir}/examples/go/prodemo.go
+ sed -i '/root/ {s/taosdata/prodb/g}' ${install_dir}/examples/go/prodemo.go
+fi
+# Copy driver
+mkdir -p ${install_dir}/driver
+cp ${lib_files} ${install_dir}/driver
+
+# Copy connector
+connector_dir="${code_dir}/connector"
+mkdir -p ${install_dir}/connector
+
+if [[ "$pagMode" != "lite" ]] && [[ "$cpuType" != "aarch32" ]]; then
+ if [ "$osType" != "Darwin" ]; then
+ cp ${build_dir}/lib/*.jar ${install_dir}/connector ||:
+ fi
+ if [ -d "${connector_dir}/grafanaplugin/dist" ]; then
+ cp -r ${connector_dir}/grafanaplugin/dist ${install_dir}/connector/grafanaplugin
+ else
+ echo "WARNING: grafanaplugin bunlded dir not found, please check if want to use it!"
+ fi
+ if find ${connector_dir}/go -mindepth 1 -maxdepth 1 | read; then
+ cp -r ${connector_dir}/go ${install_dir}/connector
+ else
+ echo "WARNING: go connector not found, please check if want to use it!"
+ fi
+ cp -r ${connector_dir}/python ${install_dir}/connector
+ mv ${install_dir}/connector/python/taos ${install_dir}/connector/python/prodb
+ sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/connector/python/prodb/cinterface.py
+ sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/connector/python/prodb/subscription.py
+ sed -i '/self._password/ {s/taosdata/prodb/g}' ${install_dir}/connector/python/prodb/connection.py
+fi
+
+cd ${release_dir}
+
+if [ "$verMode" == "cluster" ]; then
+ pkg_name=${install_dir}-${osType}-${cpuType}
+elif [ "$verMode" == "edge" ]; then
+ pkg_name=${install_dir}-${osType}-${cpuType}
+else
+ echo "unknow verMode, nor cluster or edge"
+ exit 1
+fi
+
+if [ "$pagMode" == "lite" ]; then
+ pkg_name=${pkg_name}-Lite
+fi
+
+if [ "$verType" == "beta" ]; then
+ pkg_name=${pkg_name}-${verType}
+elif [ "$verType" == "stable" ]; then
+ pkg_name=${pkg_name}
+else
+ echo "unknow verType, nor stable or beta"
+ exit 1
+fi
+
+if [ "$osType" != "Darwin" ]; then
+ tar -zcv -f "$(basename ${pkg_name}).tar.gz" $(basename ${install_dir}) --remove-files || :
+else
+ tar -zcv -f "$(basename ${pkg_name}).tar.gz" $(basename ${install_dir}) || :
+ mv "$(basename ${pkg_name}).tar.gz" ..
+ rm -rf ./*
+ mv ../"$(basename ${pkg_name}).tar.gz" .
+fi
+
+cd ${curr_dir}
diff --git a/packaging/tools/makepkg_pro.sh b/packaging/tools/makepkg_pro.sh
new file mode 100755
index 0000000000000000000000000000000000000000..ffe4566b42017a7bffa6166ae28e18ca29bd03cd
--- /dev/null
+++ b/packaging/tools/makepkg_pro.sh
@@ -0,0 +1,193 @@
+#!/bin/bash
+#
+# Generate the tar.gz package for all OS systems
+
+set -e
+#set -x
+
+curr_dir=$(pwd)
+compile_dir=$1
+version=$2
+build_time=$3
+cpuType=$4
+osType=$5
+verMode=$6
+verType=$7
+pagMode=$8
+versionComp=$9
+
+script_dir="$(dirname $(readlink -f $0))"
+top_dir="$(readlink -f ${script_dir}/../..)"
+
+# create compressed install file.
+build_dir="${compile_dir}/build"
+code_dir="${top_dir}/src"
+release_dir="${top_dir}/release"
+
+#package_name='linux'
+if [ "$verMode" == "cluster" ]; then
+ install_dir="${release_dir}/ProDB-enterprise-server-${version}"
+else
+ install_dir="${release_dir}/ProDB-server-${version}"
+fi
+
+lib_files="${build_dir}/lib/libtaos.so.${version}"
+header_files="${code_dir}/inc/taos.h ${code_dir}/inc/taoserror.h"
+if [ "$verMode" == "cluster" ]; then
+ cfg_dir="${top_dir}/../enterprise/packaging/cfg"
+else
+ cfg_dir="${top_dir}/packaging/cfg"
+fi
+install_files="${script_dir}/install_pro.sh"
+nginx_dir="${code_dir}/../../enterprise/src/plugins/web"
+
+# make directories.
+mkdir -p ${install_dir}
+mkdir -p ${install_dir}/inc && cp ${header_files} ${install_dir}/inc
+mkdir -p ${install_dir}/cfg && cp ${cfg_dir}/taos.cfg ${install_dir}/cfg/taos.cfg
+
+#mkdir -p ${install_dir}/bin && cp ${bin_files} ${install_dir}/bin && chmod a+x ${install_dir}/bin/* || :
+mkdir -p ${install_dir}/bin
+if [ "$pagMode" == "lite" ]; then
+ strip ${build_dir}/bin/taosd
+ strip ${build_dir}/bin/taos
+ cp ${build_dir}/bin/taos ${install_dir}/bin/prodbc
+ cp ${build_dir}/bin/taosd ${install_dir}/bin/prodbs
+ cp ${script_dir}/remove_pro.sh ${install_dir}/bin
+else
+ cp ${build_dir}/bin/taos ${install_dir}/bin/prodbc
+ cp ${build_dir}/bin/taosd ${install_dir}/bin/prodbs
+ cp ${script_dir}/remove_pro.sh ${install_dir}/bin
+ cp ${build_dir}/bin/taosdemo ${install_dir}/bin/prodemo
+ cp ${build_dir}/bin/taosdump ${install_dir}/bin/prodump
+ cp ${build_dir}/bin/tarbitrator ${install_dir}/bin
+ cp ${script_dir}/set_core.sh ${install_dir}/bin
+ cp ${script_dir}/get_client.sh ${install_dir}/bin
+ cp ${script_dir}/startPre.sh ${install_dir}/bin
+ cp ${script_dir}/taosd-dump-cfg.gdb ${install_dir}/bin
+fi
+chmod a+x ${install_dir}/bin/* || :
+
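+# For cluster packages: switch the remove script to cluster mode, bundle the enterprise
+# nginx admin console, and rebrand TDengine as ProDB in the web assets and taos.cfg.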
+if [ "$verMode" == "cluster" ]; then
+ sed 's/verMode=edge/verMode=cluster/g' ${install_dir}/bin/remove_pro.sh >> remove_prodb_temp.sh
+ mv remove_prodb_temp.sh ${install_dir}/bin/remove_pro.sh
+
+ mkdir -p ${install_dir}/nginxd && cp -r ${nginx_dir}/* ${install_dir}/nginxd
+ cp ${nginx_dir}/png/taos.png ${install_dir}/nginxd/admin/images/taos.png
+ rm -rf ${install_dir}/nginxd/png
+
+ sed -i "s/TDengine/ProDB/g" ${install_dir}/nginxd/admin/*.html
+ sed -i "s/TDengine/ProDB/g" ${install_dir}/nginxd/admin/js/*.js
+
+ sed -i '/dataDir/ {s/taos/ProDB/g}' ${install_dir}/cfg/taos.cfg
+ sed -i '/logDir/ {s/taos/ProDB/g}' ${install_dir}/cfg/taos.cfg
+ sed -i "s/TDengine/ProDB/g" ${install_dir}/cfg/taos.cfg
+
+ if [ "$cpuType" == "aarch64" ]; then
+ cp -f ${install_dir}/nginxd/sbin/arm/64bit/nginx ${install_dir}/nginxd/sbin/
+ elif [ "$cpuType" == "aarch32" ]; then
+ cp -f ${install_dir}/nginxd/sbin/arm/32bit/nginx ${install_dir}/nginxd/sbin/
+ fi
+ rm -rf ${install_dir}/nginxd/sbin/arm
+fi
+
+cd ${install_dir}
+tar -zcv -f prodb.tar.gz * --remove-files || :
+exitcode=$?
+if [ "$exitcode" != "0" ]; then
+ echo "tar prodb.tar.gz error !!!"
+ exit $exitcode
+fi
+
+cd ${curr_dir}
+cp ${install_files} ${install_dir}
+if [ "$verMode" == "cluster" ]; then
+ sed 's/verMode=edge/verMode=cluster/g' ${install_dir}/install_pro.sh >> install_prodb_temp.sh
+ mv install_prodb_temp.sh ${install_dir}/install_pro.sh
+fi
+if [ "$pagMode" == "lite" ]; then
+  sed 's/pagMode=full/pagMode=lite/g' ${install_dir}/install_pro.sh >> install_prodb_temp.sh
+ mv install_prodb_temp.sh ${install_dir}/install_pro.sh
+fi
+chmod a+x ${install_dir}/install_pro.sh
+
+# Copy example code
+mkdir -p ${install_dir}/examples
+examples_dir="${top_dir}/tests/examples"
+cp -r ${examples_dir}/c ${install_dir}/examples
+sed -i '/passwd/ {s/taosdata/prodb/g}' ${install_dir}/examples/c/*.c
+sed -i '/root/ {s/taosdata/prodb/g}' ${install_dir}/examples/c/*.c
+
+if [[ "$pagMode" != "lite" ]] && [[ "$cpuType" != "aarch32" ]]; then
+ cp -r ${examples_dir}/JDBC ${install_dir}/examples
+ cp -r ${examples_dir}/matlab ${install_dir}/examples
+ mv ${install_dir}/examples/matlab/TDengineDemo.m ${install_dir}/examples/matlab/ProDBDemo.m
+ sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/examples/matlab/ProDBDemo.m
+ cp -r ${examples_dir}/python ${install_dir}/examples
+ sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/examples/python/read_example.py
+ cp -r ${examples_dir}/R ${install_dir}/examples
+ sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/examples/R/command.txt
+ cp -r ${examples_dir}/go ${install_dir}/examples
+ mv ${install_dir}/examples/go/taosdemo.go ${install_dir}/examples/go/prodemo.go
+ sed -i '/root/ {s/taosdata/prodb/g}' ${install_dir}/examples/go/prodemo.go
+fi
+# Copy driver
+mkdir -p ${install_dir}/driver && cp ${lib_files} ${install_dir}/driver && echo "${versionComp}" > ${install_dir}/driver/vercomp.txt
+
+# Copy connector
+connector_dir="${code_dir}/connector"
+mkdir -p ${install_dir}/connector
+if [[ "$pagMode" != "lite" ]] && [[ "$cpuType" != "aarch32" ]]; then
+ cp ${build_dir}/lib/*.jar ${install_dir}/connector ||:
+
+ if [ -d "${connector_dir}/grafanaplugin/dist" ]; then
+ cp -r ${connector_dir}/grafanaplugin/dist ${install_dir}/connector/grafanaplugin
+ else
+ echo "WARNING: grafanaplugin bundled dir not found, please check if want to use it!"
+ fi
+ if find ${connector_dir}/go -mindepth 1 -maxdepth 1 | read; then
+ cp -r ${connector_dir}/go ${install_dir}/connector
+ else
+ echo "WARNING: go connector not found, please check if want to use it!"
+ fi
+ cp -r ${connector_dir}/python ${install_dir}/connector/
+ mv ${install_dir}/connector/python/taos ${install_dir}/connector/python/prodb
+ sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/connector/python/prodb/cinterface.py
+
+ sed -i '/password/ {s/taosdata/prodb/g}' ${install_dir}/connector/python/prodb/subscription.py
+
+ sed -i '/self._password/ {s/taosdata/prodb/g}' ${install_dir}/connector/python/prodb/connection.py
+fi
+
+cd ${release_dir}
+
+if [ "$verMode" == "cluster" ]; then
+ pkg_name=${install_dir}-${osType}-${cpuType}
+elif [ "$verMode" == "edge" ]; then
+ pkg_name=${install_dir}-${osType}-${cpuType}
+else
+ echo "unknow verMode, nor cluster or edge"
+ exit 1
+fi
+
+if [ "$pagMode" == "lite" ]; then
+ pkg_name=${pkg_name}-Lite
+fi
+
+if [ "$verType" == "beta" ]; then
+ pkg_name=${pkg_name}-${verType}
+elif [ "$verType" == "stable" ]; then
+ pkg_name=${pkg_name}
+else
+ echo "unknow verType, nor stabel or beta"
+ exit 1
+fi
+
+tar -zcv -f "$(basename ${pkg_name}).tar.gz" $(basename ${install_dir}) --remove-files || :
+exitcode=$?
+if [ "$exitcode" != "0" ]; then
+ echo "tar ${pkg_name}.tar.gz error !!!"
+ exit $exitcode
+fi
+
+cd ${curr_dir}
diff --git a/packaging/tools/remove_arbi_pro.sh b/packaging/tools/remove_arbi_pro.sh
new file mode 100755
index 0000000000000000000000000000000000000000..ff10478881628bdaf027c618a1b89f204ebbdb35
--- /dev/null
+++ b/packaging/tools/remove_arbi_pro.sh
@@ -0,0 +1,130 @@
+#!/bin/bash
+#
+# Script to stop the service and uninstall ProDB's arbitrator
+
+set -e
+#set -x
+
+verMode=edge
+
+RED='\033[0;31m'
+GREEN='\033[1;32m'
+NC='\033[0m'
+
+#install main path
+install_main_dir="/usr/local/tarbitrator"
+bin_link_dir="/usr/bin"
+
+service_config_dir="/etc/systemd/system"
+tarbitrator_service_name="tarbitratord"
+csudo=""
+if command -v sudo > /dev/null; then
+ csudo="sudo"
+fi
+
+initd_mod=0
+service_mod=2
+if pidof systemd &> /dev/null; then
+ service_mod=0
+elif $(which service &> /dev/null); then
+ service_mod=1
+ service_config_dir="/etc/init.d"
+ if $(which chkconfig &> /dev/null); then
+ initd_mod=1
+ elif $(which insserv &> /dev/null); then
+ initd_mod=2
+ elif $(which update-rc.d &> /dev/null); then
+ initd_mod=3
+ else
+ service_mod=2
+ fi
+else
+ service_mod=2
+fi
+
+function kill_tarbitrator() {
+ pid=$(ps -ef | grep "tarbitrator" | grep -v "grep" | awk '{print $2}')
+ if [ -n "$pid" ]; then
+ ${csudo} kill -9 $pid || :
+ fi
+}
+
+function clean_bin() {
+ # Remove link
+ ${csudo} rm -f ${bin_link_dir}/tarbitrator || :
+}
+
+function clean_header() {
+ # Remove link
+ ${csudo} rm -f ${inc_link_dir}/taos.h || :
+ ${csudo} rm -f ${inc_link_dir}/taoserror.h || :
+}
+
+function clean_log() {
+ # Remove link
+ ${csudo} rm -rf /arbitrator.log || :
+}
+
+function clean_service_on_systemd() {
+ tarbitratord_service_config="${service_config_dir}/${tarbitrator_service_name}.service"
+
+ if systemctl is-active --quiet ${tarbitrator_service_name}; then
+ echo "ProDB tarbitrator is running, stopping it..."
+ ${csudo} systemctl stop ${tarbitrator_service_name} &> /dev/null || echo &> /dev/null
+ fi
+ ${csudo} systemctl disable ${tarbitrator_service_name} &> /dev/null || echo &> /dev/null
+
+ ${csudo} rm -f ${tarbitratord_service_config}
+}
+
+function clean_service_on_sysvinit() {
+ if pidof tarbitrator &> /dev/null; then
+ echo "ProDB's tarbitrator is running, stopping it..."
+ ${csudo} service tarbitratord stop || :
+ fi
+
+ if ((${initd_mod}==1)); then
+ if [ -e ${service_config_dir}/tarbitratord ]; then
+ ${csudo} chkconfig --del tarbitratord || :
+ fi
+ elif ((${initd_mod}==2)); then
+ if [ -e ${service_config_dir}/tarbitratord ]; then
+ ${csudo} insserv -r tarbitratord || :
+ fi
+ elif ((${initd_mod}==3)); then
+ if [ -e ${service_config_dir}/tarbitratord ]; then
+ ${csudo} update-rc.d -f tarbitratord remove || :
+ fi
+ fi
+
+ ${csudo} rm -f ${service_config_dir}/tarbitratord || :
+
+ if $(which init &> /dev/null); then
+ ${csudo} init q || :
+ fi
+}
+
+function clean_service() {
+ if ((${service_mod}==0)); then
+ clean_service_on_systemd
+ elif ((${service_mod}==1)); then
+ clean_service_on_sysvinit
+ else
+    # must manually stop tarbitrator
+ kill_tarbitrator
+ fi
+}
+
+# Stop service and disable booting start.
+clean_service
+# Remove binary file and links
+clean_bin
+# Remove header file.
+##clean_header
+# Remove log file
+clean_log
+
+${csudo} rm -rf ${install_main_dir}
+
+echo -e "${GREEN}ProDB's arbitrator is removed successfully!${NC}"
+echo
diff --git a/packaging/tools/remove_client_pro.sh b/packaging/tools/remove_client_pro.sh
new file mode 100755
index 0000000000000000000000000000000000000000..59e4e8997620af035821df5a975fe58f1357c9dc
--- /dev/null
+++ b/packaging/tools/remove_client_pro.sh
@@ -0,0 +1,79 @@
+#!/bin/bash
+#
+# Script to stop and uninstall the ProDB client, but retain the config and log files.
+set -e
+# set -x
+
+RED='\033[0;31m'
+GREEN='\033[1;32m'
+NC='\033[0m'
+
+#install main path
+install_main_dir="/usr/local/ProDB"
+
+log_link_dir="/usr/local/ProDB/log"
+cfg_link_dir="/usr/local/ProDB/cfg"
+bin_link_dir="/usr/bin"
+lib_link_dir="/usr/lib"
+lib64_link_dir="/usr/lib64"
+inc_link_dir="/usr/include"
+
+csudo=""
+if command -v sudo > /dev/null; then
+ csudo="sudo"
+fi
+
+function kill_client() {
+ if [ -n "$(pidof prodbc)" ]; then
+ ${csudo} kill -9 $pid || :
+ fi
+}
+
+function clean_bin() {
+ # Remove link
+ ${csudo} rm -f ${bin_link_dir}/prodbc || :
+ ${csudo} rm -f ${bin_link_dir}/prodemo || :
+ ${csudo} rm -f ${bin_link_dir}/prodump || :
+ ${csudo} rm -f ${bin_link_dir}/rmprodb || :
+ ${csudo} rm -f ${bin_link_dir}/set_core || :
+}
+
+function clean_lib() {
+ # Remove link
+ ${csudo} rm -f ${lib_link_dir}/libtaos.* || :
+ ${csudo} rm -f ${lib64_link_dir}/libtaos.* || :
+}
+
+function clean_header() {
+ # Remove link
+ ${csudo} rm -f ${inc_link_dir}/taos.h || :
+ ${csudo} rm -f ${inc_link_dir}/taoserror.h || :
+}
+
+function clean_config() {
+ # Remove link
+ ${csudo} rm -f ${cfg_link_dir}/* || :
+}
+
+function clean_log() {
+ # Remove link
+ ${csudo} rm -rf ${log_link_dir} || :
+}
+
+# Stop client.
+kill_client
+# Remove binary file and links
+clean_bin
+# Remove header file.
+clean_header
+# Remove lib file
+clean_lib
+# Remove link log directory
+clean_log
+# Remove link configuration file
+clean_config
+
+${csudo} rm -rf ${install_main_dir}
+
+echo -e "${GREEN}ProDB client is removed successfully!${NC}"
+echo
diff --git a/packaging/tools/remove_pro.sh b/packaging/tools/remove_pro.sh
new file mode 100755
index 0000000000000000000000000000000000000000..f6dad22bc21b02a9d717d530c50bc19c5a718478
--- /dev/null
+++ b/packaging/tools/remove_pro.sh
@@ -0,0 +1,210 @@
+#!/bin/bash
+#
+# Script to stop the service and uninstall ProDB, but retain the config, data and log files.
+
+set -e
+#set -x
+
+verMode=edge
+
+RED='\033[0;31m'
+GREEN='\033[1;32m'
+NC='\033[0m'
+
+#install main path
+install_main_dir="/usr/local/ProDB"
+data_link_dir="/usr/local/ProDB/data"
+log_link_dir="/usr/local/ProDB/log"
+cfg_link_dir="/usr/local/ProDB/cfg"
+bin_link_dir="/usr/bin"
+lib_link_dir="/usr/lib"
+lib64_link_dir="/usr/lib64"
+inc_link_dir="/usr/include"
+install_nginxd_dir="/usr/local/nginxd"
+
+service_config_dir="/etc/systemd/system"
+prodb_service_name="prodbs"
+tarbitrator_service_name="tarbitratord"
+nginx_service_name="nginxd"
+csudo=""
+if command -v sudo > /dev/null; then
+ csudo="sudo"
+fi
+
+initd_mod=0
+service_mod=2
+if pidof systemd &> /dev/null; then
+ service_mod=0
+elif $(which service &> /dev/null); then
+ service_mod=1
+ service_config_dir="/etc/init.d"
+ if $(which chkconfig &> /dev/null); then
+ initd_mod=1
+ elif $(which insserv &> /dev/null); then
+ initd_mod=2
+ elif $(which update-rc.d &> /dev/null); then
+ initd_mod=3
+ else
+ service_mod=2
+ fi
+else
+ service_mod=2
+fi
+
+function kill_prodbs() {
+ pid=$(ps -ef | grep "prodbs" | grep -v "grep" | awk '{print $2}')
+ if [ -n "$pid" ]; then
+ ${csudo} kill -9 $pid || :
+ fi
+}
+
+function kill_tarbitrator() {
+ pid=$(ps -ef | grep "tarbitrator" | grep -v "grep" | awk '{print $2}')
+ if [ -n "$pid" ]; then
+ ${csudo} kill -9 $pid || :
+ fi
+}
+
+function clean_bin() {
+ # Remove link
+ ${csudo} rm -f ${bin_link_dir}/prodbc || :
+ ${csudo} rm -f ${bin_link_dir}/prodbs || :
+ ${csudo} rm -f ${bin_link_dir}/prodemo || :
+ ${csudo} rm -f ${bin_link_dir}/prodump || :
+ ${csudo} rm -f ${bin_link_dir}/rmprodb || :
+ ${csudo} rm -f ${bin_link_dir}/tarbitrator || :
+ ${csudo} rm -f ${bin_link_dir}/set_core || :
+}
+
+function clean_lib() {
+ # Remove link
+ ${csudo} rm -f ${lib_link_dir}/libtaos.* || :
+ ${csudo} rm -f ${lib64_link_dir}/libtaos.* || :
+}
+
+function clean_header() {
+ # Remove link
+ ${csudo} rm -f ${inc_link_dir}/taos.h || :
+ ${csudo} rm -f ${inc_link_dir}/taoserror.h || :
+}
+
+function clean_config() {
+ # Remove link
+ ${csudo} rm -f ${cfg_link_dir}/* || :
+}
+
+function clean_log() {
+ # Remove link
+ ${csudo} rm -rf ${log_link_dir} || :
+}
+
+function clean_service_on_systemd() {
+ prodb_service_config="${service_config_dir}/${prodb_service_name}.service"
+ if systemctl is-active --quiet ${prodb_service_name}; then
+ echo "ProDB prodbs is running, stopping it..."
+ ${csudo} systemctl stop ${prodb_service_name} &> /dev/null || echo &> /dev/null
+ fi
+ ${csudo} systemctl disable ${prodb_service_name} &> /dev/null || echo &> /dev/null
+ ${csudo} rm -f ${prodb_service_config}
+
+ tarbitratord_service_config="${service_config_dir}/${tarbitrator_service_name}.service"
+ if systemctl is-active --quiet ${tarbitrator_service_name}; then
+ echo "ProDB tarbitrator is running, stopping it..."
+ ${csudo} systemctl stop ${tarbitrator_service_name} &> /dev/null || echo &> /dev/null
+ fi
+ ${csudo} systemctl disable ${tarbitrator_service_name} &> /dev/null || echo &> /dev/null
+ ${csudo} rm -f ${tarbitratord_service_config}
+
+ if [ "$verMode" == "cluster" ]; then
+ nginx_service_config="${service_config_dir}/${nginx_service_name}.service"
+ if [ -d ${bin_dir}/web ]; then
+ if systemctl is-active --quiet ${nginx_service_name}; then
+ echo "Nginx for ProDB is running, stopping it..."
+ ${csudo} systemctl stop ${nginx_service_name} &> /dev/null || echo &> /dev/null
+ fi
+ ${csudo} systemctl disable ${nginx_service_name} &> /dev/null || echo &> /dev/null
+
+ ${csudo} rm -f ${nginx_service_config}
+ fi
+ fi
+}
+
+function clean_service_on_sysvinit() {
+ if pidof prodbs &> /dev/null; then
+ echo "ProDB prodbs is running, stopping it..."
+ ${csudo} service prodbs stop || :
+ fi
+
+ if pidof tarbitrator &> /dev/null; then
+ echo "ProDB tarbitrator is running, stopping it..."
+ ${csudo} service tarbitratord stop || :
+ fi
+
+ if ((${initd_mod}==1)); then
+ if [ -e ${service_config_dir}/prodbs ]; then
+ ${csudo} chkconfig --del prodbs || :
+ fi
+ if [ -e ${service_config_dir}/tarbitratord ]; then
+ ${csudo} chkconfig --del tarbitratord || :
+ fi
+ elif ((${initd_mod}==2)); then
+ if [ -e ${service_config_dir}/prodbs ]; then
+ ${csudo} insserv -r prodbs || :
+ fi
+ if [ -e ${service_config_dir}/tarbitratord ]; then
+ ${csudo} insserv -r tarbitratord || :
+ fi
+ elif ((${initd_mod}==3)); then
+ if [ -e ${service_config_dir}/prodbs ]; then
+ ${csudo} update-rc.d -f prodbs remove || :
+ fi
+ if [ -e ${service_config_dir}/tarbitratord ]; then
+ ${csudo} update-rc.d -f tarbitratord remove || :
+ fi
+ fi
+
+ ${csudo} rm -f ${service_config_dir}/prodbs || :
+ ${csudo} rm -f ${service_config_dir}/tarbitratord || :
+
+ if $(which init &> /dev/null); then
+ ${csudo} init q || :
+ fi
+}
+
+function clean_service() {
+ if ((${service_mod}==0)); then
+ clean_service_on_systemd
+ elif ((${service_mod}==1)); then
+ clean_service_on_sysvinit
+ else
+    # must manually stop prodbs and tarbitrator
+ kill_prodbs
+ kill_tarbitrator
+ fi
+}
+
+# Stop service and disable booting start.
+clean_service
+# Remove binary file and links
+clean_bin
+# Remove header file.
+clean_header
+# Remove lib file
+clean_lib
+# Remove link log directory
+clean_log
+# Remove link configuration file
+clean_config
+# Remove data link directory
+${csudo} rm -rf ${data_link_dir} || :
+
+${csudo} rm -rf ${install_main_dir}
+${csudo} rm -rf ${install_nginxd_dir}
+if [[ -e /etc/os-release ]]; then
+ osinfo=$(awk -F= '/^NAME/{print $2}' /etc/os-release)
+else
+ osinfo=""
+fi
+
+echo -e "${GREEN}ProDB is removed successfully!${NC}"
+echo
diff --git a/src/client/inc/tscParseLine.h b/src/client/inc/tscParseLine.h
index 401dcafdfbefd28e79ebdf30d810e194564a5056..e36c0bbc0b2d9c02e798aed1fe1e68dbe02939f5 100644
--- a/src/client/inc/tscParseLine.h
+++ b/src/client/inc/tscParseLine.h
@@ -54,6 +54,9 @@ typedef struct {
int tscSmlInsert(TAOS* taos, TAOS_SML_DATA_POINT* points, int numPoint, SSmlLinesInfo* info);
bool checkDuplicateKey(char *key, SHashObj *pHash, SSmlLinesInfo* info);
+bool isValidInteger(char *str);
+bool isValidFloat(char *str);
+
int32_t isValidChildTableName(const char *pTbName, int16_t len);
bool convertSmlValueType(TAOS_SML_KV *pVal, char *value,
diff --git a/src/client/src/tscParseLineProtocol.c b/src/client/src/tscParseLineProtocol.c
index e26e439492cec9c83b624c2bbb2bbc3a95de97b0..22392ba306faeed05af5d695ca0090057ac211cf 100644
--- a/src/client/src/tscParseLineProtocol.c
+++ b/src/client/src/tscParseLineProtocol.c
@@ -1137,7 +1137,7 @@ static void escapeSpecialCharacter(uint8_t field, const char **pos) {
*pos = cur;
}
-static bool isValidInteger(char *str) {
+bool isValidInteger(char *str) {
char *c = str;
if (*c != '+' && *c != '-' && !isdigit(*c)) {
return false;
@@ -1152,7 +1152,7 @@ static bool isValidInteger(char *str) {
return true;
}
-static bool isValidFloat(char *str) {
+bool isValidFloat(char *str) {
char *c = str;
uint8_t has_dot, has_exp, has_sign;
has_dot = 0;
@@ -1212,7 +1212,7 @@ static bool isTinyInt(char *pVal, uint16_t len) {
if (len <= 2) {
return false;
}
- if (!strcmp(&pVal[len - 2], "i8")) {
+ if (!strcasecmp(&pVal[len - 2], "i8")) {
//printf("Type is int8(%s)\n", pVal);
return true;
}
@@ -1226,7 +1226,7 @@ static bool isTinyUint(char *pVal, uint16_t len) {
if (pVal[0] == '-') {
return false;
}
- if (!strcmp(&pVal[len - 2], "u8")) {
+ if (!strcasecmp(&pVal[len - 2], "u8")) {
//printf("Type is uint8(%s)\n", pVal);
return true;
}
@@ -1237,7 +1237,7 @@ static bool isSmallInt(char *pVal, uint16_t len) {
if (len <= 3) {
return false;
}
- if (!strcmp(&pVal[len - 3], "i16")) {
+ if (!strcasecmp(&pVal[len - 3], "i16")) {
//printf("Type is int16(%s)\n", pVal);
return true;
}
@@ -1251,7 +1251,7 @@ static bool isSmallUint(char *pVal, uint16_t len) {
if (pVal[0] == '-') {
return false;
}
- if (strcmp(&pVal[len - 3], "u16") == 0) {
+ if (strcasecmp(&pVal[len - 3], "u16") == 0) {
//printf("Type is uint16(%s)\n", pVal);
return true;
}
@@ -1262,7 +1262,7 @@ static bool isInt(char *pVal, uint16_t len) {
if (len <= 3) {
return false;
}
- if (strcmp(&pVal[len - 3], "i32") == 0) {
+ if (strcasecmp(&pVal[len - 3], "i32") == 0) {
//printf("Type is int32(%s)\n", pVal);
return true;
}
@@ -1276,7 +1276,7 @@ static bool isUint(char *pVal, uint16_t len) {
if (pVal[0] == '-') {
return false;
}
- if (strcmp(&pVal[len - 3], "u32") == 0) {
+ if (strcasecmp(&pVal[len - 3], "u32") == 0) {
//printf("Type is uint32(%s)\n", pVal);
return true;
}
@@ -1287,7 +1287,7 @@ static bool isBigInt(char *pVal, uint16_t len) {
if (len <= 3) {
return false;
}
- if (strcmp(&pVal[len - 3], "i64") == 0) {
+ if (strcasecmp(&pVal[len - 3], "i64") == 0) {
//printf("Type is int64(%s)\n", pVal);
return true;
}
@@ -1301,7 +1301,7 @@ static bool isBigUint(char *pVal, uint16_t len) {
if (pVal[0] == '-') {
return false;
}
- if (strcmp(&pVal[len - 3], "u64") == 0) {
+ if (strcasecmp(&pVal[len - 3], "u64") == 0) {
//printf("Type is uint64(%s)\n", pVal);
return true;
}
@@ -1312,7 +1312,7 @@ static bool isFloat(char *pVal, uint16_t len) {
if (len <= 3) {
return false;
}
- if (strcmp(&pVal[len - 3], "f32") == 0) {
+ if (strcasecmp(&pVal[len - 3], "f32") == 0) {
//printf("Type is float(%s)\n", pVal);
return true;
}
@@ -1323,7 +1323,7 @@ static bool isDouble(char *pVal, uint16_t len) {
if (len <= 3) {
return false;
}
- if (strcmp(&pVal[len - 3], "f64") == 0) {
+ if (strcasecmp(&pVal[len - 3], "f64") == 0) {
//printf("Type is double(%s)\n", pVal);
return true;
}
@@ -1331,34 +1331,24 @@ static bool isDouble(char *pVal, uint16_t len) {
}
static bool isBool(char *pVal, uint16_t len, bool *bVal) {
- if ((len == 1) &&
- (pVal[len - 1] == 't' ||
- pVal[len - 1] == 'T')) {
+ if ((len == 1) && !strcasecmp(&pVal[len - 1], "t")) {
//printf("Type is bool(%c)\n", pVal[len - 1]);
*bVal = true;
return true;
}
- if ((len == 1) &&
- (pVal[len - 1] == 'f' ||
- pVal[len - 1] == 'F')) {
+ if ((len == 1) && !strcasecmp(&pVal[len - 1], "f")) {
//printf("Type is bool(%c)\n", pVal[len - 1]);
*bVal = false;
return true;
}
- if((len == 4) &&
- (!strcmp(&pVal[len - 4], "true") ||
- !strcmp(&pVal[len - 4], "True") ||
- !strcmp(&pVal[len - 4], "TRUE"))) {
+ if((len == 4) && !strcasecmp(&pVal[len - 4], "true")) {
//printf("Type is bool(%s)\n", &pVal[len - 4]);
*bVal = true;
return true;
}
- if((len == 5) &&
- (!strcmp(&pVal[len - 5], "false") ||
- !strcmp(&pVal[len - 5], "False") ||
- !strcmp(&pVal[len - 5], "FALSE"))) {
+ if((len == 5) && !strcasecmp(&pVal[len - 5], "false")) {
//printf("Type is bool(%s)\n", &pVal[len - 5]);
*bVal = false;
return true;
@@ -1384,7 +1374,7 @@ static bool isNchar(char *pVal, uint16_t len) {
if (len < 3) {
return false;
}
- if (pVal[0] == 'L' && pVal[1] == '"' && pVal[len - 1] == '"') {
+  if ((pVal[0] == 'l' || pVal[0] == 'L') && pVal[1] == '"' && pVal[len - 1] == '"') {
//printf("Type is nchar(%s)\n", pVal);
return true;
}
@@ -1434,7 +1424,7 @@ static bool isTimeStamp(char *pVal, uint16_t len, SMLTimeStampType *tsType) {
return false;
}
-static bool convertStrToNumber(TAOS_SML_KV *pVal, char*str, SSmlLinesInfo* info) {
+static bool convertStrToNumber(TAOS_SML_KV *pVal, char *str, SSmlLinesInfo* info) {
errno = 0;
uint8_t type = pVal->type;
int16_t length = pVal->length;
@@ -1442,6 +1432,7 @@ static bool convertStrToNumber(TAOS_SML_KV *pVal, char*str, SSmlLinesInfo* info)
uint64_t val_u;
double val_d;
+ strntolower_s(str, str, (int32_t)strlen(str));
if (IS_FLOAT_TYPE(type)) {
val_d = strtod(str, NULL);
} else {
@@ -1659,9 +1650,19 @@ bool convertSmlValueType(TAOS_SML_KV *pVal, char *value,
memcpy(pVal->value, &bVal, pVal->length);
return true;
}
- //Handle default(no appendix) as float
- if (isValidInteger(value) || isValidFloat(value)) {
- pVal->type = TSDB_DATA_TYPE_FLOAT;
+  //Handle default (no appendix) integer type as BIGINT
+ if (isValidInteger(value)) {
+ pVal->type = TSDB_DATA_TYPE_BIGINT;
+ pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
+ if (!convertStrToNumber(pVal, value, info)) {
+ return false;
+ }
+ return true;
+ }
+
+  //Handle default (no appendix) floating point type as DOUBLE
+ if (isValidFloat(value)) {
+ pVal->type = TSDB_DATA_TYPE_DOUBLE;
pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
if (!convertStrToNumber(pVal, value, info)) {
return false;
@@ -1724,6 +1725,7 @@ int32_t convertSmlTimeStamp(TAOS_SML_KV *pVal, char *value,
SMLTimeStampType type;
int64_t tsVal;
+ strntolower_s(value, value, len);
if (!isTimeStamp(value, len, &type)) {
return TSDB_CODE_TSC_INVALID_TIME_STAMP;
}
diff --git a/src/client/src/tscParseOpenTSDB.c b/src/client/src/tscParseOpenTSDB.c
index 8e0322cab07ba462b7320cef02011b27b18785d5..14693ae361b533287e3377e644908c5c8a613c79 100644
--- a/src/client/src/tscParseOpenTSDB.c
+++ b/src/client/src/tscParseOpenTSDB.c
@@ -38,7 +38,7 @@ static int32_t parseTelnetMetric(TAOS_SML_DATA_POINT *pSml, const char **index,
uint16_t len = 0;
pSml->stableName = tcalloc(TSDB_TABLE_NAME_LEN + 1, 1); // +1 to avoid 1772 line over write
- if (pSml->stableName == NULL){
+ if (pSml->stableName == NULL) {
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
if (isdigit(*cur)) {
@@ -58,7 +58,13 @@ static int32_t parseTelnetMetric(TAOS_SML_DATA_POINT *pSml, const char **index,
break;
}
- pSml->stableName[len] = *cur;
+      //convert dot to underscore for now; this will be removed once dot is allowed in tbname.
+ if (*cur == '.') {
+ pSml->stableName[len] = '_';
+ } else {
+ pSml->stableName[len] = *cur;
+ }
+
cur++;
len++;
}
@@ -455,6 +461,13 @@ int32_t parseMetricFromJSON(cJSON *root, TAOS_SML_DATA_POINT* pSml, SSmlLinesInf
return TSDB_CODE_TSC_INVALID_JSON;
}
+  //convert dot to underscore for now; this will be removed once dot is allowed in tbname.
+ for (int i = 0; i < strlen(metric->valuestring); ++i) {
+ if (metric->valuestring[i] == '.') {
+ metric->valuestring[i] = '_';
+ }
+ }
+
tstrncpy(pSml->stableName, metric->valuestring, stableLen + 1);
return TSDB_CODE_SUCCESS;
@@ -485,6 +498,7 @@ int32_t parseTimestampFromJSONObj(cJSON *root, int64_t *tsVal, SSmlLinesInfo* in
}
size_t typeLen = strlen(type->valuestring);
+ strntolower_s(type->valuestring, type->valuestring, (int32_t)typeLen);
if (typeLen == 1 && type->valuestring[0] == 's') {
//seconds
*tsVal = (int64_t)(*tsVal * 1e9);
@@ -505,6 +519,8 @@ int32_t parseTimestampFromJSONObj(cJSON *root, int64_t *tsVal, SSmlLinesInfo* in
default:
return TSDB_CODE_TSC_INVALID_JSON;
}
+ } else {
+ return TSDB_CODE_TSC_INVALID_JSON;
}
return TSDB_CODE_SUCCESS;
@@ -725,16 +741,34 @@ int32_t parseValueFromJSON(cJSON *root, TAOS_SML_KV *pVal, SSmlLinesInfo* info)
break;
}
case cJSON_Number: {
- //convert default JSON Number type to float
- pVal->type = TSDB_DATA_TYPE_FLOAT;
- pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
- pVal->value = tcalloc(pVal->length, 1);
- *(float *)(pVal->value) = (float)(root->valuedouble);
+ //convert default JSON Number type to BIGINT/DOUBLE
+ if (isValidInteger(root->numberstring)) {
+ pVal->type = TSDB_DATA_TYPE_BIGINT;
+ pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
+ pVal->value = tcalloc(pVal->length, 1);
+ *(int64_t *)(pVal->value) = (int64_t)(root->valuedouble);
+ } else if (isValidFloat(root->numberstring)) {
+ pVal->type = TSDB_DATA_TYPE_DOUBLE;
+ pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
+ pVal->value = tcalloc(pVal->length, 1);
+ *(double *)(pVal->value) = (double)(root->valuedouble);
+ } else {
+ return TSDB_CODE_TSC_INVALID_JSON_TYPE;
+ }
break;
}
case cJSON_String: {
- //convert default JSON String type to nchar
- pVal->type = TSDB_DATA_TYPE_NCHAR;
+ /* set default JSON type to binary/nchar according to
+ * user configured parameter tsDefaultJSONStrType
+ */
+ if (strcasecmp(tsDefaultJSONStrType, "binary") == 0) {
+ pVal->type = TSDB_DATA_TYPE_BINARY;
+ } else if (strcasecmp(tsDefaultJSONStrType, "nchar") == 0) {
+ pVal->type = TSDB_DATA_TYPE_NCHAR;
+ } else {
+ tscError("OTD:0x%"PRIx64" Invalid default JSON string type set from config %s", info->id, tsDefaultJSONStrType);
+ return TSDB_CODE_TSC_INVALID_JSON_CONFIG;
+ }
//pVal->length = wcslen((wchar_t *)root->valuestring) * TSDB_NCHAR_SIZE;
pVal->length = (int16_t)strlen(root->valuestring);
pVal->value = tcalloc(pVal->length + 1, 1);
diff --git a/src/client/src/tscSQLParser.c b/src/client/src/tscSQLParser.c
index ad9480422110e94bcd4cf676f128d13fc9fe7041..8b2932b541066dfa8ce93143bb3c563ddd432230 100644
--- a/src/client/src/tscSQLParser.c
+++ b/src/client/src/tscSQLParser.c
@@ -4447,7 +4447,16 @@ static int32_t validateMatchExpr(tSqlExpr* pExpr, STableMeta* pTableMeta, int32_
regex_t regex;
char regErrBuf[256] = {0};
- const char* pattern = pRight->value.pz;
+  //remove the quotes at the beginning and end of the original sql string.
+ uint32_t lenPattern = pRight->exprToken.n - 2;
+ char* pattern = malloc(lenPattern + 1);
+ strncpy(pattern, pRight->exprToken.z+1, lenPattern);
+ pattern[lenPattern] = '\0';
+
+ tfree(pRight->value.pz);
+ pRight->value.pz = pattern;
+ pRight->value.nLen = lenPattern;
+
int cflags = REG_EXTENDED;
if ((errCode = regcomp(®ex, pattern, cflags)) != 0) {
regerror(errCode, ®ex, regErrBuf, sizeof(regErrBuf));
diff --git a/src/common/inc/tglobal.h b/src/common/inc/tglobal.h
index 604ce89432bcf662b319fb2ec11f55026450a2be..beeb4e8b243e21e33f4162b07ec218c40e4029f9 100644
--- a/src/common/inc/tglobal.h
+++ b/src/common/inc/tglobal.h
@@ -216,7 +216,7 @@ extern int32_t cqDebugFlag;
extern int32_t debugFlag;
#ifdef TD_TSZ
-// lossy
+// lossy
extern char lossyColumns[];
extern double fPrecision;
extern double dPrecision;
@@ -224,9 +224,12 @@ extern uint32_t maxRange;
extern uint32_t curRange;
extern char Compressor[];
#endif
-// long query
+// long query
extern int8_t tsDeadLockKillQuery;
+// schemaless
+extern char tsDefaultJSONStrType[];
+
typedef struct {
char dir[TSDB_FILENAME_LEN];
int level;
diff --git a/src/common/src/tglobal.c b/src/common/src/tglobal.c
index 339fa35bb3009db96c9c6e0cabea6b60881f05c5..7d352a9dc1f0a9a97c7c1920f898fb286a01497f 100644
--- a/src/common/src/tglobal.c
+++ b/src/common/src/tglobal.c
@@ -282,6 +282,9 @@ char Compressor[32] = "ZSTD_COMPRESSOR"; // ZSTD_COMPRESSOR or GZIP_COMPRESS
// long query death-lock
int8_t tsDeadLockKillQuery = 0;
+// default JSON string type
+char tsDefaultJSONStrType[7] = "binary";
+
int32_t (*monStartSystemFp)() = NULL;
void (*monStopSystemFp)() = NULL;
void (*monExecuteSQLFp)(char *sql) = NULL;
@@ -1637,6 +1640,17 @@ static void doInitGlobalConfig(void) {
cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg);
+ // default JSON string type option "binary"/"nchar"
+ cfg.option = "defaultJSONStrType";
+ cfg.ptr = tsDefaultJSONStrType;
+ cfg.valType = TAOS_CFG_VTYPE_STRING;
+ cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_SHOW | TSDB_CFG_CTYPE_B_CLIENT;
+ cfg.minValue = 0;
+ cfg.maxValue = 0;
+ cfg.ptrLength = tListLen(tsDefaultJSONStrType);
+ cfg.unitType = TAOS_CFG_UTYPE_NONE;
+ taosInitConfigOption(cfg);
+
#ifdef TD_TSZ
// lossy compress
cfg.option = "lossyColumns";
diff --git a/src/inc/taosdef.h b/src/inc/taosdef.h
index fda6347223e752895872a4073dd2786abcd65e6f..1ea5246ae4533bedd8bec63d5a8b8a618c8334ee 100644
--- a/src/inc/taosdef.h
+++ b/src/inc/taosdef.h
@@ -85,6 +85,8 @@ extern const int32_t TYPE_BYTES[16];
#define TSDB_DEFAULT_PASS "powerdb"
#elif (_TD_TQ_ == true)
#define TSDB_DEFAULT_PASS "tqueue"
+#elif (_TD_PRO_ == true)
+#define TSDB_DEFAULT_PASS "prodb"
#else
#define TSDB_DEFAULT_PASS "taosdata"
#endif
diff --git a/src/inc/taoserror.h b/src/inc/taoserror.h
index d59b88c7e698b3e965b5923efdc760e0289f7250..887c51f10c5a6c10e213bb893725a837a0be77cd 100644
--- a/src/inc/taoserror.h
+++ b/src/inc/taoserror.h
@@ -110,7 +110,8 @@ int32_t* taosGetErrno();
#define TSDB_CODE_TSC_DUP_TAG_NAMES TAOS_DEF_ERROR_CODE(0, 0x0220) //"duplicated tag names")
#define TSDB_CODE_TSC_INVALID_JSON TAOS_DEF_ERROR_CODE(0, 0x0221) //"Invalid JSON format")
#define TSDB_CODE_TSC_INVALID_JSON_TYPE TAOS_DEF_ERROR_CODE(0, 0x0222) //"Invalid JSON data type")
-#define TSDB_CODE_TSC_VALUE_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x0223) //"Value out of range")
+#define TSDB_CODE_TSC_INVALID_JSON_CONFIG TAOS_DEF_ERROR_CODE(0, 0x0223) //"Invalid JSON configuration")
+#define TSDB_CODE_TSC_VALUE_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x0224) //"Value out of range")
// mnode
#define TSDB_CODE_MND_MSG_NOT_PROCESSED TAOS_DEF_ERROR_CODE(0, 0x0300) //"Message not processed")
diff --git a/src/kit/shell/src/shellEngine.c b/src/kit/shell/src/shellEngine.c
index 76419ff565d739597e5e4db057e56db794ca46d9..7f0d1e58fcc1acc9ae5ccf367b23f8182b07ec5b 100644
--- a/src/kit/shell/src/shellEngine.c
+++ b/src/kit/shell/src/shellEngine.c
@@ -44,6 +44,13 @@ char PROMPT_HEADER[] = "tq> ";
char CONTINUE_PROMPT[] = " -> ";
int prompt_size = 4;
+#elif (_TD_PRO_ == true)
+char CLIENT_VERSION[] = "Welcome to the ProDB shell from %s, Client Version:%s\n"
+ "Copyright (c) 2020 by Hanatech, Inc. All rights reserved.\n\n";
+char PROMPT_HEADER[] = "ProDB> ";
+
+char CONTINUE_PROMPT[] = " -> ";
+int prompt_size = 7;
#else
char CLIENT_VERSION[] = "Welcome to the TDengine shell from %s, Client Version:%s\n"
"Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.\n\n";
diff --git a/src/kit/taosdemo/CMakeLists.txt b/src/kit/taosdemo/CMakeLists.txt
index 2034093ad5841c267b722930681127d745d27153..bdfdf74715a07d5fca1a4e73b75842ffa47b6e68 100644
--- a/src/kit/taosdemo/CMakeLists.txt
+++ b/src/kit/taosdemo/CMakeLists.txt
@@ -8,12 +8,14 @@ IF (GIT_FOUND)
MESSAGE("Git found")
EXECUTE_PROCESS(
COMMAND ${GIT_EXECUTABLE} log --pretty=oneline -n 1 ${CMAKE_CURRENT_LIST_DIR}/taosdemo.c
+ WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSDEMO_COMMIT_SHA1)
IF ("${TAOSDEMO_COMMIT_SHA1}" STREQUAL "")
- MESSAGE("taosdemo's latest commit in short is:" ${TAOSDEMO_COMMIT_SHA1})
+ SET(TAOSDEMO_COMMIT_SHA1 "unknown")
ELSE ()
STRING(SUBSTRING "${TAOSDEMO_COMMIT_SHA1}" 0 7 TAOSDEMO_COMMIT_SHA1)
+ STRING(STRIP "${TAOSDEMO_COMMIT_SHA1}" TAOSDEMO_COMMIT_SHA1)
ENDIF ()
EXECUTE_PROCESS(
COMMAND ${GIT_EXECUTABLE} status -z -s ${CMAKE_CURRENT_LIST_DIR}/taosdemo.c
@@ -25,14 +27,13 @@ IF (GIT_FOUND)
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSDEMO_STATUS)
ENDIF (TD_LINUX)
- MESSAGE("taosdemo.c status: " ${TAOSDEMO_STATUS})
ELSE()
MESSAGE("Git not found")
SET(TAOSDEMO_COMMIT_SHA1 "unknown")
SET(TAOSDEMO_STATUS "unknown")
ENDIF (GIT_FOUND)
-STRING(STRIP "${TAOSDEMO_COMMIT_SHA1}" TAOSDEMO_COMMIT_SHA1)
+
MESSAGE("taosdemo's latest commit in short is:" ${TAOSDEMO_COMMIT_SHA1})
STRING(STRIP "${TAOSDEMO_STATUS}" TAOSDEMO_STATUS)
diff --git a/src/kit/taosdemo/taosdemo.c b/src/kit/taosdemo/taosdemo.c
index ec75ff0840e56b4d571a7ccc195db476bf1c8e4f..88ff34991514212854aac3296bfd81a57e480f08 100644
--- a/src/kit/taosdemo/taosdemo.c
+++ b/src/kit/taosdemo/taosdemo.c
@@ -79,10 +79,10 @@ extern char configDir[];
#define DEFAULT_START_TIME 1500000000000
#define MAX_PREPARED_RAND 1000000
-#define INT_BUFF_LEN 11
+#define INT_BUFF_LEN 12
#define BIGINT_BUFF_LEN 21
-#define SMALLINT_BUFF_LEN 6
-#define TINYINT_BUFF_LEN 4
+#define SMALLINT_BUFF_LEN 7
+#define TINYINT_BUFF_LEN 5
#define BOOL_BUFF_LEN 6
#define FLOAT_BUFF_LEN 22
#define DOUBLE_BUFF_LEN 42
@@ -590,16 +590,22 @@ static void init_rand_data();
/* ************ Global variables ************ */
int32_t g_randint[MAX_PREPARED_RAND];
+uint32_t g_randuint[MAX_PREPARED_RAND];
int64_t g_randbigint[MAX_PREPARED_RAND];
+uint64_t g_randubigint[MAX_PREPARED_RAND];
float g_randfloat[MAX_PREPARED_RAND];
double g_randdouble[MAX_PREPARED_RAND];
char *g_randbool_buff = NULL;
char *g_randint_buff = NULL;
+char *g_randuint_buff = NULL;
char *g_rand_voltage_buff = NULL;
char *g_randbigint_buff = NULL;
+char *g_randubigint_buff = NULL;
char *g_randsmallint_buff = NULL;
+char *g_randusmallint_buff = NULL;
char *g_randtinyint_buff = NULL;
+char *g_randutinyint_buff = NULL;
char *g_randfloat_buff = NULL;
char *g_rand_current_buff = NULL;
char *g_rand_phase_buff = NULL;
@@ -622,6 +628,8 @@ SArguments g_args = {
"powerdb", // password
#elif (_TD_TQ_ == true)
"tqueue", // password
+#elif (_TD_PRO_ == true)
+ "prodb", // password
#else
"taosdata", // password
#endif
@@ -764,6 +772,11 @@ static void printHelp() {
"The password to use when connecting to the server. By default is 'tqueue'");
printf("%s%s%s%s\n", indent, "-c, --config-dir=CONFIG_DIR", "\t",
"Configuration directory. By default is '/etc/tq/'.");
+#elif (_TD_PRO_ == true)
+ printf("%s%s%s%s\n", indent, "-p, --password", "\t\t",
+ "The password to use when connecting to the server. By default is 'prodb'");
+ printf("%s%s%s%s\n", indent, "-c, --config-dir=CONFIG_DIR", "\t",
+ "Configuration directory. By default is '/etc/ProDB/'.");
#else
printf("%s%s%s%s\n", indent, "-p, --password", "\t\t",
"The password to use when connecting to the server.");
@@ -1568,7 +1581,11 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
&& strcasecmp(dataType, "DOUBLE")
&& strcasecmp(dataType, "BINARY")
&& strcasecmp(dataType, "TIMESTAMP")
- && strcasecmp(dataType, "NCHAR")) {
+ && strcasecmp(dataType, "NCHAR")
+ && strcasecmp(dataType, "UTINYINT")
+ && strcasecmp(dataType, "USMALLINT")
+ && strcasecmp(dataType, "UINT")
+ && strcasecmp(dataType, "UBIGINT")) {
printHelp();
errorPrint("%s", "-b: Invalid data_type!\n");
exit(EXIT_FAILURE);
@@ -1594,6 +1611,14 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
arguments->data_type[0] = TSDB_DATA_TYPE_BOOL;
} else if (0 == strcasecmp(dataType, "TIMESTAMP")) {
arguments->data_type[0] = TSDB_DATA_TYPE_TIMESTAMP;
+ } else if (0 == strcasecmp(dataType, "UTINYINT")) {
+ arguments->data_type[0] = TSDB_DATA_TYPE_UTINYINT;
+ } else if (0 == strcasecmp(dataType, "USMALLINT")) {
+ arguments->data_type[0] = TSDB_DATA_TYPE_USMALLINT;
+ } else if (0 == strcasecmp(dataType, "UINT")) {
+ arguments->data_type[0] = TSDB_DATA_TYPE_UINT;
+ } else if (0 == strcasecmp(dataType, "UBIGINT")) {
+ arguments->data_type[0] = TSDB_DATA_TYPE_UBIGINT;
} else {
arguments->data_type[0] = TSDB_DATA_TYPE_NULL;
}
@@ -1615,7 +1640,11 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
&& strcasecmp(token, "DOUBLE")
&& strcasecmp(token, "BINARY")
&& strcasecmp(token, "TIMESTAMP")
- && strcasecmp(token, "NCHAR")) {
+ && strcasecmp(token, "NCHAR")
+ && strcasecmp(token, "UTINYINT")
+ && strcasecmp(token, "USMALLINT")
+ && strcasecmp(token, "UINT")
+ && strcasecmp(token, "UBIGINT")) {
printHelp();
free(g_dupstr);
errorPrint("%s", "-b: Invalid data_type!\n");
@@ -1631,7 +1660,7 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
} else if (0 == strcasecmp(token, "BIGINT")) {
arguments->data_type[index] = TSDB_DATA_TYPE_BIGINT;
} else if (0 == strcasecmp(token, "DOUBLE")) {
- arguments->data_type[index] = TSDB_DATA_TYPE_FLOAT;
+ arguments->data_type[index] = TSDB_DATA_TYPE_DOUBLE;
} else if (0 == strcasecmp(token, "TINYINT")) {
arguments->data_type[index] = TSDB_DATA_TYPE_TINYINT;
} else if (0 == strcasecmp(token, "BINARY")) {
@@ -1642,6 +1671,14 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
arguments->data_type[index] = TSDB_DATA_TYPE_BOOL;
} else if (0 == strcasecmp(token, "TIMESTAMP")) {
arguments->data_type[index] = TSDB_DATA_TYPE_TIMESTAMP;
+ } else if (0 == strcasecmp(token, "UTINYINT")) {
+ arguments->data_type[index] = TSDB_DATA_TYPE_UTINYINT;
+ } else if (0 == strcasecmp(token, "USMALLINT")) {
+ arguments->data_type[index] = TSDB_DATA_TYPE_USMALLINT;
+ } else if (0 == strcasecmp(token, "UINT")) {
+ arguments->data_type[index] = TSDB_DATA_TYPE_UINT;
+ } else if (0 == strcasecmp(token, "UBIGINT")) {
+ arguments->data_type[index] = TSDB_DATA_TYPE_UBIGINT;
} else {
arguments->data_type[index] = TSDB_DATA_TYPE_NULL;
}
@@ -1945,18 +1982,22 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
break;
case TSDB_DATA_TYPE_INT:
+ case TSDB_DATA_TYPE_UINT:
g_args.lenOfOneRow += INT_BUFF_LEN;
break;
case TSDB_DATA_TYPE_BIGINT:
+ case TSDB_DATA_TYPE_UBIGINT:
g_args.lenOfOneRow += BIGINT_BUFF_LEN;
break;
case TSDB_DATA_TYPE_SMALLINT:
+ case TSDB_DATA_TYPE_USMALLINT:
g_args.lenOfOneRow += SMALLINT_BUFF_LEN;
break;
case TSDB_DATA_TYPE_TINYINT:
+ case TSDB_DATA_TYPE_UTINYINT:
g_args.lenOfOneRow += TINYINT_BUFF_LEN;
break;
@@ -2188,6 +2229,23 @@ static int32_t rand_tinyint()
return g_randint[cursor % MAX_PREPARED_RAND] % 128;
}
+static char *rand_utinyint_str()
+{
+ static int cursor;
+ cursor++;
+ if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
+ return g_randutinyint_buff +
+ ((cursor % MAX_PREPARED_RAND) * TINYINT_BUFF_LEN);
+}
+
+static int32_t rand_utinyint()
+{
+ static int cursor;
+ cursor++;
+ if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
+ return g_randuint[cursor % MAX_PREPARED_RAND] % 255;
+}
+
static char *rand_smallint_str()
{
static int cursor;
@@ -2202,7 +2260,24 @@ static int32_t rand_smallint()
static int cursor;
cursor++;
if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
- return g_randint[cursor % MAX_PREPARED_RAND] % 32767;
+ return g_randint[cursor % MAX_PREPARED_RAND] % 32768;
+}
+
+static char *rand_usmallint_str()
+{
+ static int cursor;
+ cursor++;
+ if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
+ return g_randusmallint_buff +
+ ((cursor % MAX_PREPARED_RAND) * SMALLINT_BUFF_LEN);
+}
+
+static int32_t rand_usmallint()
+{
+ static int cursor;
+ cursor++;
+ if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
+ return g_randuint[cursor % MAX_PREPARED_RAND] % 65535;
}
static char *rand_int_str()
@@ -2221,6 +2296,22 @@ static int32_t rand_int()
return g_randint[cursor % MAX_PREPARED_RAND];
}
+static char *rand_uint_str()
+{
+ static int cursor;
+ cursor++;
+ if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
+ return g_randuint_buff + ((cursor % MAX_PREPARED_RAND) * INT_BUFF_LEN);
+}
+
+static int32_t rand_uint()
+{
+ static int cursor;
+ cursor++;
+ if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
+ return g_randuint[cursor % MAX_PREPARED_RAND];
+}
+
static char *rand_bigint_str()
{
static int cursor;
@@ -2238,6 +2329,23 @@ static int64_t rand_bigint()
return g_randbigint[cursor % MAX_PREPARED_RAND];
}
+static char *rand_ubigint_str()
+{
+ static int cursor;
+ cursor++;
+ if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
+ return g_randubigint_buff +
+ ((cursor % MAX_PREPARED_RAND) * BIGINT_BUFF_LEN);
+}
+
+static int64_t rand_ubigint()
+{
+ static int cursor;
+ cursor++;
+ if (cursor > (MAX_PREPARED_RAND - 1)) cursor = 0;
+ return g_randubigint[cursor % MAX_PREPARED_RAND];
+}
+
static char *rand_float_str()
{
static int cursor;
@@ -2375,9 +2483,18 @@ static void init_rand_data() {
assert(g_rand_phase_buff);
g_randdouble_buff = calloc(1, DOUBLE_BUFF_LEN * MAX_PREPARED_RAND);
assert(g_randdouble_buff);
+ g_randuint_buff = calloc(1, INT_BUFF_LEN * MAX_PREPARED_RAND);
+ assert(g_randuint_buff);
+ g_randutinyint_buff = calloc(1, TINYINT_BUFF_LEN * MAX_PREPARED_RAND);
+ assert(g_randutinyint_buff);
+ g_randusmallint_buff = calloc(1, SMALLINT_BUFF_LEN * MAX_PREPARED_RAND);
+ assert(g_randusmallint_buff);
+ g_randubigint_buff = calloc(1, BIGINT_BUFF_LEN * MAX_PREPARED_RAND);
+ assert(g_randubigint_buff);
for (int i = 0; i < MAX_PREPARED_RAND; i++) {
- g_randint[i] = (int)(taosRandom() % 65535);
+ g_randint[i] = (int)(taosRandom() % RAND_MAX - (RAND_MAX >> 1));
+ g_randuint[i] = (int)(taosRandom());
sprintf(g_randint_buff + i * INT_BUFF_LEN, "%d",
g_randint[i]);
sprintf(g_rand_voltage_buff + i * INT_BUFF_LEN, "%d",
@@ -2386,15 +2503,24 @@ static void init_rand_data() {
sprintf(g_randbool_buff + i * BOOL_BUFF_LEN, "%s",
((g_randint[i] % 2) & 1)?"true":"false");
sprintf(g_randsmallint_buff + i * SMALLINT_BUFF_LEN, "%d",
- g_randint[i] % 32767);
+ g_randint[i] % 32768);
sprintf(g_randtinyint_buff + i * TINYINT_BUFF_LEN, "%d",
g_randint[i] % 128);
-
- g_randbigint[i] = (int64_t)(taosRandom() % 2147483648);
+ sprintf(g_randuint_buff + i * INT_BUFF_LEN, "%u",
+ g_randuint[i]);
+ sprintf(g_randusmallint_buff + i * SMALLINT_BUFF_LEN, "%d",
+ g_randuint[i] % 65535);
+ sprintf(g_randutinyint_buff + i * TINYINT_BUFF_LEN, "%d",
+ g_randuint[i] % 255);
+
+ g_randbigint[i] = (int64_t)(taosRandom() % RAND_MAX - (RAND_MAX >> 1));
+ g_randubigint[i] = (uint64_t)(taosRandom());
sprintf(g_randbigint_buff + i * BIGINT_BUFF_LEN, "%"PRId64"",
g_randbigint[i]);
+ sprintf(g_randubigint_buff + i * BIGINT_BUFF_LEN, "%"PRIu64"",
+ g_randubigint[i]);
- g_randfloat[i] = (float)(taosRandom() / 1000.0);
+ g_randfloat[i] = (float)(taosRandom() / 1000.0) * (taosRandom() % 2 > 0.5 ? 1 : -1);
sprintf(g_randfloat_buff + i * FLOAT_BUFF_LEN, "%f",
g_randfloat[i]);
sprintf(g_rand_current_buff + i * FLOAT_BUFF_LEN, "%f",
@@ -2404,7 +2530,7 @@ static void init_rand_data() {
(float)((115 + g_randint[i] % 10
+ g_randfloat[i]/1000000000)/360));
- g_randdouble[i] = (double)(taosRandom() / 1000000.0);
+ g_randdouble[i] = (double)(taosRandom() / 1000000.0) * (taosRandom() % 2 > 0.5 ? 1 : -1);
sprintf(g_randdouble_buff + i * DOUBLE_BUFF_LEN, "%f",
g_randdouble[i]);
}
@@ -2967,18 +3093,34 @@ static void xDumpFieldToFile(FILE* fp, const char* val,
fprintf(fp, "%d", *((int8_t *)val));
break;
+ case TSDB_DATA_TYPE_UTINYINT:
+ fprintf(fp, "%d", *((uint8_t *)val));
+ break;
+
case TSDB_DATA_TYPE_SMALLINT:
fprintf(fp, "%d", *((int16_t *)val));
break;
+ case TSDB_DATA_TYPE_USMALLINT:
+ fprintf(fp, "%d", *((uint16_t *)val));
+ break;
+
case TSDB_DATA_TYPE_INT:
fprintf(fp, "%d", *((int32_t *)val));
break;
+ case TSDB_DATA_TYPE_UINT:
+ fprintf(fp, "%d", *((uint32_t *)val));
+ break;
+
case TSDB_DATA_TYPE_BIGINT:
fprintf(fp, "%"PRId64"", *((int64_t *)val));
break;
+ case TSDB_DATA_TYPE_UBIGINT:
+ fprintf(fp, "%"PRId64"", *((uint64_t *)val));
+ break;
+
case TSDB_DATA_TYPE_FLOAT:
fprintf(fp, "%.5f", GET_FLOAT_VAL(val));
break;
@@ -3465,7 +3607,23 @@ static char* generateTagValuesForStb(SSuperTable* stbInfo, int64_t tableSeq) {
} else if (0 == strncasecmp(stbInfo->tags[i].dataType,
"timestamp", strlen("timestamp"))) {
dataLen += snprintf(dataBuf + dataLen, TSDB_MAX_SQL_LEN - dataLen,
- "%"PRId64",", rand_bigint());
+ "%"PRId64",", rand_ubigint());
+ } else if (0 == strncasecmp(stbInfo->tags[i].dataType,
+ "utinyint", strlen("utinyint"))) {
+ dataLen += snprintf(dataBuf + dataLen, TSDB_MAX_SQL_LEN - dataLen,
+ "%d,", rand_utinyint());
+ } else if (0 == strncasecmp(stbInfo->tags[i].dataType,
+ "usmallint", strlen("usmallint"))) {
+ dataLen += snprintf(dataBuf + dataLen, TSDB_MAX_SQL_LEN - dataLen,
+ "%d,", rand_usmallint());
+ } else if (0 == strncasecmp(stbInfo->tags[i].dataType,
+ "uint", strlen("uint"))) {
+ dataLen += snprintf(dataBuf + dataLen, TSDB_MAX_SQL_LEN - dataLen,
+ "%d,", rand_uint());
+ } else if (0 == strncasecmp(stbInfo->tags[i].dataType,
+ "ubigint", strlen("ubigint"))) {
+ dataLen += snprintf(dataBuf + dataLen, TSDB_MAX_SQL_LEN - dataLen,
+ "%"PRId64",", rand_ubigint());
} else {
errorPrint2("No support data type: %s\n", stbInfo->tags[i].dataType);
tmfree(dataBuf);
@@ -3495,18 +3653,22 @@ static int calcRowLen(SSuperTable* superTbls) {
break;
case TSDB_DATA_TYPE_INT:
+ case TSDB_DATA_TYPE_UINT:
lenOfOneRow += INT_BUFF_LEN;
break;
case TSDB_DATA_TYPE_BIGINT:
+ case TSDB_DATA_TYPE_UBIGINT:
lenOfOneRow += BIGINT_BUFF_LEN;
break;
case TSDB_DATA_TYPE_SMALLINT:
+ case TSDB_DATA_TYPE_USMALLINT:
lenOfOneRow += SMALLINT_BUFF_LEN;
break;
case TSDB_DATA_TYPE_TINYINT:
+ case TSDB_DATA_TYPE_UTINYINT:
lenOfOneRow += TINYINT_BUFF_LEN;
break;
@@ -3537,27 +3699,41 @@ static int calcRowLen(SSuperTable* superTbls) {
int tagIndex;
int lenOfTagOfOneRow = 0;
for (tagIndex = 0; tagIndex < superTbls->tagCount; tagIndex++) {
- char* dataType = superTbls->tags[tagIndex].dataType;
-
- if (strcasecmp(dataType, "BINARY") == 0) {
+ char * dataType = superTbls->tags[tagIndex].dataType;
+ switch (superTbls->tags[tagIndex].data_type)
+ {
+ case TSDB_DATA_TYPE_BINARY:
lenOfTagOfOneRow += superTbls->tags[tagIndex].dataLen + 3;
- } else if (strcasecmp(dataType, "NCHAR") == 0) {
+ break;
+ case TSDB_DATA_TYPE_NCHAR:
lenOfTagOfOneRow += superTbls->tags[tagIndex].dataLen + 3;
- } else if (strcasecmp(dataType, "INT") == 0) {
+ break;
+ case TSDB_DATA_TYPE_INT:
+ case TSDB_DATA_TYPE_UINT:
lenOfTagOfOneRow += superTbls->tags[tagIndex].dataLen + INT_BUFF_LEN;
- } else if (strcasecmp(dataType, "BIGINT") == 0) {
+ break;
+ case TSDB_DATA_TYPE_BIGINT:
+ case TSDB_DATA_TYPE_UBIGINT:
lenOfTagOfOneRow += superTbls->tags[tagIndex].dataLen + BIGINT_BUFF_LEN;
- } else if (strcasecmp(dataType, "SMALLINT") == 0) {
+ break;
+ case TSDB_DATA_TYPE_SMALLINT:
+ case TSDB_DATA_TYPE_USMALLINT:
lenOfTagOfOneRow += superTbls->tags[tagIndex].dataLen + SMALLINT_BUFF_LEN;
- } else if (strcasecmp(dataType, "TINYINT") == 0) {
+ break;
+ case TSDB_DATA_TYPE_TINYINT:
+ case TSDB_DATA_TYPE_UTINYINT:
lenOfTagOfOneRow += superTbls->tags[tagIndex].dataLen + TINYINT_BUFF_LEN;
- } else if (strcasecmp(dataType, "BOOL") == 0) {
+ break;
+ case TSDB_DATA_TYPE_BOOL:
lenOfTagOfOneRow += superTbls->tags[tagIndex].dataLen + BOOL_BUFF_LEN;
- } else if (strcasecmp(dataType, "FLOAT") == 0) {
+ break;
+ case TSDB_DATA_TYPE_FLOAT:
lenOfTagOfOneRow += superTbls->tags[tagIndex].dataLen + FLOAT_BUFF_LEN;
- } else if (strcasecmp(dataType, "DOUBLE") == 0) {
+ break;
+ case TSDB_DATA_TYPE_DOUBLE:
lenOfTagOfOneRow += superTbls->tags[tagIndex].dataLen + DOUBLE_BUFF_LEN;
- } else {
+ break;
+ default:
errorPrint2("get error tag type : %s\n", dataType);
exit(EXIT_FAILURE);
}
@@ -3690,40 +3866,60 @@ static int getSuperTableFromServer(TAOS * taos, char* dbName,
tstrncpy(superTbls->tags[tagIndex].field,
(char *)row[TSDB_DESCRIBE_METRIC_FIELD_INDEX],
fields[TSDB_DESCRIBE_METRIC_FIELD_INDEX].bytes);
- tstrncpy(superTbls->tags[tagIndex].dataType,
- (char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
- min(DATATYPE_BUFF_LEN,
- fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes) + 1);
- if (0 == strncasecmp(superTbls->tags[tagIndex].dataType,
+ if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"INT", strlen("INT"))) {
superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_INT;
- } else if (0 == strncasecmp(superTbls->tags[tagIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"TINYINT", strlen("TINYINT"))) {
superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_TINYINT;
- } else if (0 == strncasecmp(superTbls->tags[tagIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"SMALLINT", strlen("SMALLINT"))) {
superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_SMALLINT;
- } else if (0 == strncasecmp(superTbls->tags[tagIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"BIGINT", strlen("BIGINT"))) {
superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_BIGINT;
- } else if (0 == strncasecmp(superTbls->tags[tagIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"FLOAT", strlen("FLOAT"))) {
superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_FLOAT;
- } else if (0 == strncasecmp(superTbls->tags[tagIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"DOUBLE", strlen("DOUBLE"))) {
superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_DOUBLE;
- } else if (0 == strncasecmp(superTbls->tags[tagIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"BINARY", strlen("BINARY"))) {
superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_BINARY;
- } else if (0 == strncasecmp(superTbls->tags[tagIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"NCHAR", strlen("NCHAR"))) {
superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_NCHAR;
- } else if (0 == strncasecmp(superTbls->tags[tagIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"BOOL", strlen("BOOL"))) {
superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_BOOL;
- } else if (0 == strncasecmp(superTbls->tags[tagIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"TIMESTAMP", strlen("TIMESTAMP"))) {
superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_TIMESTAMP;
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ "TINYINT UNSIGNED", strlen("TINYINT UNSIGNED"))) {
+ superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_UTINYINT;
+ tstrncpy(superTbls->tags[tagIndex].dataType,"UTINYINT",
+ min(DATATYPE_BUFF_LEN,
+ fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes) + 1);
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ "SMALLINT UNSIGNED", strlen("SMALLINT UNSIGNED"))) {
+ superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_USMALLINT;
+ tstrncpy(superTbls->tags[tagIndex].dataType,"USMALLINT",
+ min(DATATYPE_BUFF_LEN,
+ fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes) + 1);
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ "INT UNSIGNED", strlen("INT UNSIGNED"))) {
+ superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_UINT;
+ tstrncpy(superTbls->tags[tagIndex].dataType,"UINT",
+ min(DATATYPE_BUFF_LEN,
+ fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes) + 1);
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ "BIGINT UNSIGNED", strlen("BIGINT UNSIGNED"))) {
+ superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_UBIGINT;
+ tstrncpy(superTbls->tags[tagIndex].dataType,"UBIGINT",
+ min(DATATYPE_BUFF_LEN,
+ fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes) + 1);
} else {
superTbls->tags[tagIndex].data_type = TSDB_DATA_TYPE_NULL;
}
@@ -3733,46 +3929,78 @@ static int getSuperTableFromServer(TAOS * taos, char* dbName,
(char *)row[TSDB_DESCRIBE_METRIC_NOTE_INDEX],
min(NOTE_BUFF_LEN,
fields[TSDB_DESCRIBE_METRIC_NOTE_INDEX].bytes) + 1);
+ if (strstr((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX], "UNSIGNED") == NULL)
+ {
+ tstrncpy(superTbls->tags[tagIndex].dataType,
+ (char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ min(DATATYPE_BUFF_LEN,
+ fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes) + 1);
+ }
tagIndex++;
} else {
tstrncpy(superTbls->columns[columnIndex].field,
(char *)row[TSDB_DESCRIBE_METRIC_FIELD_INDEX],
fields[TSDB_DESCRIBE_METRIC_FIELD_INDEX].bytes);
- tstrncpy(superTbls->columns[columnIndex].dataType,
- (char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
- min(DATATYPE_BUFF_LEN,
- fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes) + 1);
- if (0 == strncasecmp(superTbls->columns[columnIndex].dataType,
- "INT", strlen("INT"))) {
+
+ if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ "INT", strlen("INT")) &&
+ strstr((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX], "UNSIGNED") == NULL) {
superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_INT;
- } else if (0 == strncasecmp(superTbls->columns[columnIndex].dataType,
- "TINYINT", strlen("TINYINT"))) {
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ "TINYINT", strlen("TINYINT")) &&
+ strstr((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX], "UNSIGNED") == NULL) {
superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_TINYINT;
- } else if (0 == strncasecmp(superTbls->columns[columnIndex].dataType,
- "SMALLINT", strlen("SMALLINT"))) {
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ "SMALLINT", strlen("SMALLINT")) &&
+ strstr((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX], "UNSIGNED") == NULL) {
superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_SMALLINT;
- } else if (0 == strncasecmp(superTbls->columns[columnIndex].dataType,
- "BIGINT", strlen("BIGINT"))) {
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ "BIGINT", strlen("BIGINT")) &&
+ strstr((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX], "UNSIGNED") == NULL) {
superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_BIGINT;
- } else if (0 == strncasecmp(superTbls->columns[columnIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"FLOAT", strlen("FLOAT"))) {
superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_FLOAT;
- } else if (0 == strncasecmp(superTbls->columns[columnIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"DOUBLE", strlen("DOUBLE"))) {
superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_DOUBLE;
- } else if (0 == strncasecmp(superTbls->columns[columnIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"BINARY", strlen("BINARY"))) {
superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_BINARY;
- } else if (0 == strncasecmp(superTbls->columns[columnIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"NCHAR", strlen("NCHAR"))) {
superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_NCHAR;
- } else if (0 == strncasecmp(superTbls->columns[columnIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"BOOL", strlen("BOOL"))) {
superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_BOOL;
- } else if (0 == strncasecmp(superTbls->columns[columnIndex].dataType,
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
"TIMESTAMP", strlen("TIMESTAMP"))) {
superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_TIMESTAMP;
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ "TINYINT UNSIGNED", strlen("TINYINT UNSIGNED"))) {
+ superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_UTINYINT;
+ tstrncpy(superTbls->columns[columnIndex].dataType,"UTINYINT",
+ min(DATATYPE_BUFF_LEN,
+ fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes) + 1);
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ "SMALLINT UNSIGNED", strlen("SMALLINT UNSIGNED"))) {
+ superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_USMALLINT;
+ tstrncpy(superTbls->columns[columnIndex].dataType,"USMALLINT",
+ min(DATATYPE_BUFF_LEN,
+ fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes) + 1);
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ "INT UNSIGNED", strlen("INT UNSIGNED"))) {
+ superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_UINT;
+ tstrncpy(superTbls->columns[columnIndex].dataType,"UINT",
+ min(DATATYPE_BUFF_LEN,
+ fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes) + 1);
+ } else if (0 == strncasecmp((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ "BIGINT UNSIGNED", strlen("BIGINT UNSIGNED"))) {
+ superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_UBIGINT;
+ tstrncpy(superTbls->columns[columnIndex].dataType,"UBIGINT",
+ min(DATATYPE_BUFF_LEN,
+ fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes) + 1);
} else {
superTbls->columns[columnIndex].data_type = TSDB_DATA_TYPE_NULL;
}
@@ -3782,6 +4010,13 @@ static int getSuperTableFromServer(TAOS * taos, char* dbName,
(char *)row[TSDB_DESCRIBE_METRIC_NOTE_INDEX],
min(NOTE_BUFF_LEN,
fields[TSDB_DESCRIBE_METRIC_NOTE_INDEX].bytes) + 1);
+
+ if (strstr((char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX], "UNSIGNED") == NULL) {
+ tstrncpy(superTbls->columns[columnIndex].dataType,
+ (char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
+ min(DATATYPE_BUFF_LEN,
+ fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes) + 1);
+ }
columnIndex++;
}
@@ -3906,6 +4141,30 @@ static int createSuperTable(
lenOfOneRow += TIMESTAMP_BUFF_LEN;
break;
+ case TSDB_DATA_TYPE_UTINYINT:
+ len += snprintf(cols + len, COL_BUFFER_LEN - len, ",C%d %s",
+ colIndex, "TINYINT UNSIGNED");
+ lenOfOneRow += TINYINT_BUFF_LEN;
+ break;
+
+ case TSDB_DATA_TYPE_USMALLINT:
+ len += snprintf(cols + len, COL_BUFFER_LEN - len, ",C%d %s",
+ colIndex, "SMALLINT UNSIGNED");
+ lenOfOneRow += SMALLINT_BUFF_LEN;
+ break;
+
+ case TSDB_DATA_TYPE_UINT:
+ len += snprintf(cols + len, COL_BUFFER_LEN - len, ",C%d %s",
+ colIndex, "INT UNSIGNED");
+ lenOfOneRow += INT_BUFF_LEN;
+ break;
+
+ case TSDB_DATA_TYPE_UBIGINT:
+ len += snprintf(cols + len, COL_BUFFER_LEN - len, ",C%d %s",
+ colIndex, "BIGINT UNSIGNED");
+ lenOfOneRow += BIGINT_BUFF_LEN;
+ break;
+
default:
taos_close(taos);
free(command);
@@ -3996,6 +4255,22 @@ static int createSuperTable(
len += snprintf(tags + len, TSDB_MAX_TAGS_LEN - len,
"T%d %s,", tagIndex, "DOUBLE");
lenOfTagOfOneRow += superTbl->tags[tagIndex].dataLen + DOUBLE_BUFF_LEN;
+ } else if (strcasecmp(dataType, "UTINYINT") == 0) {
+ len += snprintf(tags + len, TSDB_MAX_TAGS_LEN - len,
+ "T%d %s,", tagIndex, "TINYINT UNSIGNED");
+ lenOfTagOfOneRow += superTbl->tags[tagIndex].dataLen + TINYINT_BUFF_LEN;
+ } else if (strcasecmp(dataType, "USMALLINT") == 0) {
+ len += snprintf(tags + len, TSDB_MAX_TAGS_LEN - len,
+ "T%d %s,", tagIndex, "SMALLINT UNSIGNED");
+ lenOfTagOfOneRow += superTbl->tags[tagIndex].dataLen + SMALLINT_BUFF_LEN;
+ } else if (strcasecmp(dataType, "UINT") == 0) {
+ len += snprintf(tags + len, TSDB_MAX_TAGS_LEN - len,
+ "T%d %s,", tagIndex, "INT UNSIGNED");
+ lenOfTagOfOneRow += superTbl->tags[tagIndex].dataLen + INT_BUFF_LEN;
+ } else if (strcasecmp(dataType, "UBIGINT") == 0) {
+ len += snprintf(tags + len, TSDB_MAX_TAGS_LEN - len,
+ "T%d %s,", tagIndex, "BIGINT UNSIGNED");
+ lenOfTagOfOneRow += superTbl->tags[tagIndex].dataLen + BIGINT_BUFF_LEN;
} else {
taos_close(taos);
free(command);
@@ -4151,19 +4426,17 @@ int createDatabasesAndStables(char *command) {
errorPrint("create super table %"PRIu64" failed!\n\n", j);
continue;
}
- }
-
- ret = getSuperTableFromServer(taos, g_Dbs.db[i].dbName,
+ } else {
+ ret = getSuperTableFromServer(taos, g_Dbs.db[i].dbName,
&g_Dbs.db[i].superTbls[j]);
- if (0 != ret) {
- errorPrint2("\nget super table %s.%s info failed!\n\n",
- g_Dbs.db[i].dbName, g_Dbs.db[i].superTbls[j].stbName);
- continue;
+ if (0 != ret) {
+ errorPrint2("\nget super table %s.%s info failed!\n\n",
+ g_Dbs.db[i].dbName, g_Dbs.db[i].superTbls[j].stbName);
+ continue;
+ }
}
-
validStbCount ++;
}
-
g_Dbs.db[i].superTblCount = validStbCount;
}
@@ -4656,6 +4929,18 @@ static bool getColumnAndTagTypeFromInsertJsonFile(
} else if (0 == strncasecmp(superTbls->columns[c].dataType,
"TIMESTAMP", strlen("TIMESTAMP"))) {
superTbls->columns[c].data_type = TSDB_DATA_TYPE_TIMESTAMP;
+ } else if (0 == strncasecmp(superTbls->columns[c].dataType,
+ "UTINYINT", strlen("UTINYINT"))) {
+ superTbls->columns[c].data_type = TSDB_DATA_TYPE_UTINYINT;
+ } else if (0 == strncasecmp(superTbls->columns[c].dataType,
+ "USMALLINT", strlen("USMALLINT"))) {
+ superTbls->columns[c].data_type = TSDB_DATA_TYPE_USMALLINT;
+ } else if (0 == strncasecmp(superTbls->columns[c].dataType,
+ "UINT", strlen("UINT"))) {
+ superTbls->columns[c].data_type = TSDB_DATA_TYPE_UINT;
+ } else if (0 == strncasecmp(superTbls->columns[c].dataType,
+ "UBIGINT", strlen("UBIGINT"))) {
+ superTbls->columns[c].data_type = TSDB_DATA_TYPE_UBIGINT;
} else {
superTbls->columns[c].data_type = TSDB_DATA_TYPE_NULL;
}
@@ -4761,6 +5046,18 @@ static bool getColumnAndTagTypeFromInsertJsonFile(
} else if (0 == strncasecmp(superTbls->tags[t].dataType,
"TIMESTAMP", strlen("TIMESTAMP"))) {
superTbls->tags[t].data_type = TSDB_DATA_TYPE_TIMESTAMP;
+ } else if (0 == strncasecmp(superTbls->tags[t].dataType,
+ "UTINYINT", strlen("UTINYINT"))) {
+ superTbls->tags[t].data_type = TSDB_DATA_TYPE_UTINYINT;
+ } else if (0 == strncasecmp(superTbls->tags[t].dataType,
+ "USMALLINT", strlen("USMALLINT"))) {
+ superTbls->tags[t].data_type = TSDB_DATA_TYPE_USMALLINT;
+ } else if (0 == strncasecmp(superTbls->tags[t].dataType,
+ "UINT", strlen("UINT"))) {
+ superTbls->tags[t].data_type = TSDB_DATA_TYPE_UINT;
+ } else if (0 == strncasecmp(superTbls->tags[t].dataType,
+ "UBIGINT", strlen("UBIGINT"))) {
+ superTbls->tags[t].data_type = TSDB_DATA_TYPE_UBIGINT;
} else {
superTbls->tags[t].data_type = TSDB_DATA_TYPE_NULL;
}
@@ -6178,9 +6475,22 @@ static int64_t generateStbRowData(
tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, INT_BUFF_LEN));
break;
+ case TSDB_DATA_TYPE_UINT:
+ tmp = rand_uint_str();
+ tmpLen = strlen(tmp);
+ tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, INT_BUFF_LEN));
+ break;
+
case TSDB_DATA_TYPE_BIGINT:
tmp = rand_bigint_str();
- tstrncpy(pstr + dataLen, tmp, BIGINT_BUFF_LEN);
+ tmpLen = strlen(tmp);
+ tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, BIGINT_BUFF_LEN));
+ break;
+
+ case TSDB_DATA_TYPE_UBIGINT:
+ tmp = rand_ubigint_str();
+ tmpLen = strlen(tmp);
+ tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, BIGINT_BUFF_LEN));
break;
case TSDB_DATA_TYPE_FLOAT:
@@ -6194,38 +6504,49 @@ static int64_t generateStbRowData(
tmp = rand_float_str();
}
tmpLen = strlen(tmp);
- tstrncpy(pstr + dataLen, tmp, min(tmpLen +1, FLOAT_BUFF_LEN));
+ tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, FLOAT_BUFF_LEN));
break;
case TSDB_DATA_TYPE_DOUBLE:
tmp = rand_double_str();
tmpLen = strlen(tmp);
- tstrncpy(pstr + dataLen, tmp, min(tmpLen +1, DOUBLE_BUFF_LEN));
+ tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, DOUBLE_BUFF_LEN));
break;
case TSDB_DATA_TYPE_SMALLINT:
tmp = rand_smallint_str();
tmpLen = strlen(tmp);
- tstrncpy(pstr + dataLen, tmp,
- min(tmpLen + 1, SMALLINT_BUFF_LEN));
+ tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, SMALLINT_BUFF_LEN));
+ break;
+
+ case TSDB_DATA_TYPE_USMALLINT:
+ tmp = rand_usmallint_str();
+ tmpLen = strlen(tmp);
+ tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, SMALLINT_BUFF_LEN));
break;
case TSDB_DATA_TYPE_TINYINT:
tmp = rand_tinyint_str();
tmpLen = strlen(tmp);
- tstrncpy(pstr + dataLen, tmp, min(tmpLen +1, TINYINT_BUFF_LEN));
+ tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, TINYINT_BUFF_LEN));
+ break;
+
+ case TSDB_DATA_TYPE_UTINYINT:
+ tmp = rand_utinyint_str();
+ tmpLen = strlen(tmp);
+ tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, TINYINT_BUFF_LEN));
break;
case TSDB_DATA_TYPE_BOOL:
tmp = rand_bool_str();
tmpLen = strlen(tmp);
- tstrncpy(pstr + dataLen, tmp, min(tmpLen +1, BOOL_BUFF_LEN));
+ tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, BOOL_BUFF_LEN));
break;
case TSDB_DATA_TYPE_TIMESTAMP:
tmp = rand_bigint_str();
tmpLen = strlen(tmp);
- tstrncpy(pstr + dataLen, tmp, min(tmpLen +1, BIGINT_BUFF_LEN));
+ tstrncpy(pstr + dataLen, tmp, min(tmpLen + 1, BIGINT_BUFF_LEN));
break;
case TSDB_DATA_TYPE_NULL:
@@ -6236,9 +6557,8 @@ static int64_t generateStbRowData(
stbInfo->columns[i].dataType);
exit(EXIT_FAILURE);
}
-
if (tmp) {
- dataLen += strlen(tmp);
+ dataLen += tmpLen;
}
}
@@ -6246,7 +6566,7 @@ static int64_t generateStbRowData(
return 0;
}
- tstrncpy(pstr + dataLen, ")", 2);
+ dataLen += snprintf(pstr + dataLen, 2, ")");
verbosePrint("%s() LN%d, dataLen:%"PRId64"\n", __func__, __LINE__, dataLen);
verbosePrint("%s() LN%d, recBuf:\n\t%s\n", __func__, __LINE__, recBuf);
@@ -6323,6 +6643,22 @@ static int64_t generateData(char *recBuf, char *data_type,
free(s);
break;
+ case TSDB_DATA_TYPE_UTINYINT:
+ pstr += sprintf(pstr, ",%d", rand_utinyint() );
+ break;
+
+ case TSDB_DATA_TYPE_USMALLINT:
+ pstr += sprintf(pstr, ",%d", rand_usmallint());
+ break;
+
+ case TSDB_DATA_TYPE_UINT:
+ pstr += sprintf(pstr, ",%d", rand_uint());
+ break;
+
+ case TSDB_DATA_TYPE_UBIGINT:
+ pstr += sprintf(pstr, ",%"PRId64"", rand_ubigint());
+ break;
+
case TSDB_DATA_TYPE_NULL:
break;
@@ -6381,7 +6717,7 @@ static int generateSampleFromRand(
case TSDB_DATA_TYPE_NCHAR:
dataLen = (columns)?columns[c].dataLen:g_args.binwidth;
- rand_string(data, dataLen);
+ rand_string(data, dataLen - 1);
pos += sprintf(buff + pos, "%s,", data);
break;
@@ -6394,10 +6730,18 @@ static int generateSampleFromRand(
pos += sprintf(buff + pos, "%s,", tmp);
break;
+ case TSDB_DATA_TYPE_UINT:
+ pos += sprintf(buff + pos, "%s,", rand_uint_str());
+ break;
+
case TSDB_DATA_TYPE_BIGINT:
pos += sprintf(buff + pos, "%s,", rand_bigint_str());
break;
+ case TSDB_DATA_TYPE_UBIGINT:
+ pos += sprintf(buff + pos, "%s,", rand_ubigint_str());
+ break;
+
case TSDB_DATA_TYPE_FLOAT:
if (g_args.demo_mode) {
if (c == 0) {
@@ -6419,10 +6763,18 @@ static int generateSampleFromRand(
pos += sprintf(buff + pos, "%s,", rand_smallint_str());
break;
+ case TSDB_DATA_TYPE_USMALLINT:
+ pos += sprintf(buff + pos, "%s,", rand_usmallint_str());
+ break;
+
case TSDB_DATA_TYPE_TINYINT:
pos += sprintf(buff + pos, "%s,", rand_tinyint_str());
break;
+ case TSDB_DATA_TYPE_UTINYINT:
+ pos += sprintf(buff + pos, "%s,", rand_utinyint_str());
+ break;
+
case TSDB_DATA_TYPE_BOOL:
pos += sprintf(buff + pos, "%s,", rand_bool_str());
break;
@@ -6946,13 +7298,17 @@ static int32_t prepareStmtBindArrayByType(
char *value)
{
int32_t *bind_int;
+ uint32_t *bind_uint;
int64_t *bind_bigint;
+ uint64_t *bind_ubigint;
float *bind_float;
double *bind_double;
int8_t *bind_bool;
int64_t *bind_ts2;
int16_t *bind_smallint;
+ uint16_t *bind_usmallint;
int8_t *bind_tinyint;
+ uint8_t *bind_utinyint;
switch(data_type) {
case TSDB_DATA_TYPE_BINARY:
@@ -7017,6 +7373,22 @@ static int32_t prepareStmtBindArrayByType(
bind->length = &bind->buffer_length;
bind->is_null = NULL;
break;
+
+ case TSDB_DATA_TYPE_UINT:
+ bind_uint = malloc(sizeof(uint32_t));
+ assert(bind_uint);
+
+ if (value) {
+ *bind_uint = atoi(value);
+ } else {
+ *bind_uint = rand_int();
+ }
+ bind->buffer_type = TSDB_DATA_TYPE_UINT;
+ bind->buffer_length = sizeof(uint32_t);
+ bind->buffer = bind_uint;
+ bind->length = &bind->buffer_length;
+ bind->is_null = NULL;
+ break;
case TSDB_DATA_TYPE_BIGINT:
bind_bigint = malloc(sizeof(int64_t));
@@ -7034,6 +7406,22 @@ static int32_t prepareStmtBindArrayByType(
bind->is_null = NULL;
break;
+ case TSDB_DATA_TYPE_UBIGINT:
+ bind_ubigint = malloc(sizeof(uint64_t));
+ assert(bind_ubigint);
+
+ if (value) {
+ *bind_ubigint = atoll(value);
+ } else {
+ *bind_ubigint = rand_bigint();
+ }
+ bind->buffer_type = TSDB_DATA_TYPE_UBIGINT;
+ bind->buffer_length = sizeof(uint64_t);
+ bind->buffer = bind_ubigint;
+ bind->length = &bind->buffer_length;
+ bind->is_null = NULL;
+ break;
+
case TSDB_DATA_TYPE_FLOAT:
bind_float = malloc(sizeof(float));
assert(bind_float);
@@ -7082,6 +7470,22 @@ static int32_t prepareStmtBindArrayByType(
bind->is_null = NULL;
break;
+ case TSDB_DATA_TYPE_USMALLINT:
+ bind_usmallint = malloc(sizeof(uint16_t));
+ assert(bind_usmallint);
+
+ if (value) {
+ *bind_usmallint = (uint16_t)atoi(value);
+ } else {
+ *bind_usmallint = rand_smallint();
+ }
+ bind->buffer_type = TSDB_DATA_TYPE_USMALLINT;
+ bind->buffer_length = sizeof(uint16_t);
+ bind->buffer = bind_usmallint;
+ bind->length = &bind->buffer_length;
+ bind->is_null = NULL;
+ break;
+
case TSDB_DATA_TYPE_TINYINT:
bind_tinyint = malloc(sizeof(int8_t));
assert(bind_tinyint);
@@ -7098,6 +7502,22 @@ static int32_t prepareStmtBindArrayByType(
bind->is_null = NULL;
break;
+ case TSDB_DATA_TYPE_UTINYINT:
+ bind_utinyint = malloc(sizeof(uint8_t));
+ assert(bind_utinyint);
+
+ if (value) {
+ *bind_utinyint = (uint8_t)atoi(value);
+ } else {
+ *bind_utinyint = rand_tinyint();
+ }
+ bind->buffer_type = TSDB_DATA_TYPE_UTINYINT;
+ bind->buffer_length = sizeof(uint8_t);
+ bind->buffer = bind_utinyint;
+ bind->length = &bind->buffer_length;
+ bind->is_null = NULL;
+ break;
+
case TSDB_DATA_TYPE_BOOL:
bind_bool = malloc(sizeof(int8_t));
assert(bind_bool);
@@ -7172,11 +7592,15 @@ static int32_t prepareStmtBindArrayByTypeForRand(
char *value)
{
int32_t *bind_int;
+ uint32_t *bind_uint;
int64_t *bind_bigint;
+ uint64_t *bind_ubigint;
float *bind_float;
double *bind_double;
int16_t *bind_smallint;
+ uint16_t *bind_usmallint;
int8_t *bind_tinyint;
+ uint8_t *bind_utinyint;
int8_t *bind_bool;
int64_t *bind_ts2;
@@ -7246,6 +7670,23 @@ static int32_t prepareStmtBindArrayByTypeForRand(
*ptr += bind->buffer_length;
break;
+ case TSDB_DATA_TYPE_UINT:
+ bind_uint = (uint32_t *)*ptr;
+
+ if (value) {
+ *bind_uint = atoi(value);
+ } else {
+ *bind_uint = rand_int();
+ }
+ bind->buffer_type = TSDB_DATA_TYPE_UINT;
+ bind->buffer_length = sizeof(uint32_t);
+ bind->buffer = bind_uint;
+ bind->length = &bind->buffer_length;
+ bind->is_null = NULL;
+
+ *ptr += bind->buffer_length;
+ break;
+
case TSDB_DATA_TYPE_BIGINT:
bind_bigint = (int64_t *)*ptr;
@@ -7263,6 +7704,23 @@ static int32_t prepareStmtBindArrayByTypeForRand(
*ptr += bind->buffer_length;
break;
+ case TSDB_DATA_TYPE_UBIGINT:
+ bind_ubigint = (uint64_t *)*ptr;
+
+ if (value) {
+ *bind_ubigint = atoll(value);
+ } else {
+ *bind_ubigint = rand_bigint();
+ }
+ bind->buffer_type = TSDB_DATA_TYPE_UBIGINT;
+ bind->buffer_length = sizeof(uint64_t);
+ bind->buffer = bind_ubigint;
+ bind->length = &bind->buffer_length;
+ bind->is_null = NULL;
+
+ *ptr += bind->buffer_length;
+ break;
+
case TSDB_DATA_TYPE_FLOAT:
bind_float = (float *)*ptr;
@@ -7314,6 +7772,23 @@ static int32_t prepareStmtBindArrayByTypeForRand(
*ptr += bind->buffer_length;
break;
+ case TSDB_DATA_TYPE_USMALLINT:
+ bind_usmallint = (uint16_t *)*ptr;
+
+ if (value) {
+ *bind_usmallint = (uint16_t)atoi(value);
+ } else {
+ *bind_usmallint = rand_smallint();
+ }
+ bind->buffer_type = TSDB_DATA_TYPE_USMALLINT;
+ bind->buffer_length = sizeof(uint16_t);
+ bind->buffer = bind_usmallint;
+ bind->length = &bind->buffer_length;
+ bind->is_null = NULL;
+
+ *ptr += bind->buffer_length;
+ break;
+
case TSDB_DATA_TYPE_TINYINT:
bind_tinyint = (int8_t *)*ptr;
@@ -7331,6 +7806,23 @@ static int32_t prepareStmtBindArrayByTypeForRand(
*ptr += bind->buffer_length;
break;
+ case TSDB_DATA_TYPE_UTINYINT:
+ bind_utinyint = (uint8_t *)*ptr;
+
+ if (value) {
+ *bind_utinyint = (uint8_t)atoi(value);
+ } else {
+ *bind_utinyint = rand_tinyint();
+ }
+ bind->buffer_type = TSDB_DATA_TYPE_UTINYINT;
+ bind->buffer_length = sizeof(uint8_t);
+ bind->buffer = bind_utinyint;
+ bind->length = &bind->buffer_length;
+ bind->is_null = NULL;
+
+ *ptr += bind->buffer_length;
+ break;
+
case TSDB_DATA_TYPE_BOOL:
bind_bool = (int8_t *)*ptr;
@@ -7661,7 +8153,7 @@ UNUSED_FUNC static int32_t prepareStbStmtRand(
}
#if STMT_BIND_PARAM_BATCH == 1
-static int execBindParamBatch(
+static int execStbBindParamBatch(
threadInfo *pThreadInfo,
char *tableName,
int64_t tableSeq,
@@ -7675,7 +8167,9 @@ static int execBindParamBatch(
TAOS_STMT *stmt = pThreadInfo->stmt;
SSuperTable *stbInfo = pThreadInfo->stbInfo;
- uint32_t columnCount = (stbInfo)?pThreadInfo->stbInfo->columnCount:g_args.columnCount;
+ assert(stbInfo);
+
+ uint32_t columnCount = pThreadInfo->stbInfo->columnCount;
uint32_t thisBatch = MAX_SAMPLES - (*pSamplePos);
@@ -7700,104 +8194,101 @@ static int execBindParamBatch(
param->buffer = pThreadInfo->bind_ts_array;
} else {
- data_type = (stbInfo)?stbInfo->columns[c-1].data_type:g_args.data_type[c-1];
+ data_type = stbInfo->columns[c-1].data_type;
char *tmpP;
switch(data_type) {
case TSDB_DATA_TYPE_BINARY:
+ param->buffer_length =
+ stbInfo->columns[c-1].dataLen;
+
+ tmpP =
+ (char *)((uintptr_t)*(uintptr_t*)(stbInfo->sampleBindBatchArray
+ +sizeof(char*)*(c-1)));
+
+ verbosePrint("%s() LN%d, tmpP=%p pos=%"PRId64" width=%"PRIxPTR" position=%"PRId64"\n",
+ __func__, __LINE__, tmpP, *pSamplePos, param->buffer_length,
+ (*pSamplePos) * param->buffer_length);
+
+ param->buffer = (void *)(tmpP + *pSamplePos * param->buffer_length);
+ break;
+
case TSDB_DATA_TYPE_NCHAR:
param->buffer_length =
- ((stbInfo)?stbInfo->columns[c-1].dataLen:g_args.binwidth);
+ stbInfo->columns[c-1].dataLen;
tmpP =
(char *)((uintptr_t)*(uintptr_t*)(stbInfo->sampleBindBatchArray
+sizeof(char*)*(c-1)));
- verbosePrint("%s() LN%d, tmpP=%p pos=%"PRId64" width=%d position=%"PRId64"\n",
- __func__, __LINE__, tmpP, *pSamplePos,
- (((stbInfo)?stbInfo->columns[c-1].dataLen:g_args.binwidth)),
- (*pSamplePos) *
- (((stbInfo)?stbInfo->columns[c-1].dataLen:g_args.binwidth)));
+ verbosePrint("%s() LN%d, tmpP=%p pos=%"PRId64" width=%"PRIxPTR" position=%"PRId64"\n",
+ __func__, __LINE__, tmpP, *pSamplePos, param->buffer_length,
+ (*pSamplePos) * param->buffer_length);
- param->buffer = (void *)(tmpP + *pSamplePos *
- (((stbInfo)?stbInfo->columns[c-1].dataLen:g_args.binwidth))
- );
+ param->buffer = (void *)(tmpP + *pSamplePos * param->buffer_length);
break;
case TSDB_DATA_TYPE_INT:
+ case TSDB_DATA_TYPE_UINT:
param->buffer_length = sizeof(int32_t);
- param->buffer = (stbInfo)?
+ param->buffer =
(void *)((uintptr_t)*(uintptr_t*)(stbInfo->sampleBindBatchArray+sizeof(char*)*(c-1))
- + stbInfo->columns[c-1].dataLen * (*pSamplePos)):
- (void *)((uintptr_t)*(uintptr_t*)(g_sampleBindBatchArray+sizeof(char*)*(c-1))
- + sizeof(int32_t)*(*pSamplePos));
+ + stbInfo->columns[c-1].dataLen * (*pSamplePos));
break;
case TSDB_DATA_TYPE_TINYINT:
+ case TSDB_DATA_TYPE_UTINYINT:
param->buffer_length = sizeof(int8_t);
- param->buffer = (stbInfo)?
+ param->buffer =
(void *)((uintptr_t)*(uintptr_t*)(
stbInfo->sampleBindBatchArray
+sizeof(char*)*(c-1))
- + stbInfo->columns[c-1].dataLen*(*pSamplePos)):
- (void *)((uintptr_t)*(uintptr_t*)(
- g_sampleBindBatchArray+sizeof(char*)*(c-1))
- + sizeof(int8_t)*(*pSamplePos));
+ + stbInfo->columns[c-1].dataLen*(*pSamplePos));
break;
case TSDB_DATA_TYPE_SMALLINT:
+ case TSDB_DATA_TYPE_USMALLINT:
param->buffer_length = sizeof(int16_t);
- param->buffer = (stbInfo)?
+ param->buffer =
(void *)((uintptr_t)*(uintptr_t*)(stbInfo->sampleBindBatchArray+sizeof(char*)*(c-1))
- + stbInfo->columns[c-1].dataLen * (*pSamplePos)):
- (void *)((uintptr_t)*(uintptr_t*)(g_sampleBindBatchArray+sizeof(char*)*(c-1))
- + sizeof(int16_t)*(*pSamplePos));
+ + stbInfo->columns[c-1].dataLen * (*pSamplePos));
break;
case TSDB_DATA_TYPE_BIGINT:
+ case TSDB_DATA_TYPE_UBIGINT:
param->buffer_length = sizeof(int64_t);
- param->buffer = (stbInfo)?
+ param->buffer =
(void *)((uintptr_t)*(uintptr_t*)(stbInfo->sampleBindBatchArray+sizeof(char*)*(c-1))
- + stbInfo->columns[c-1].dataLen * (*pSamplePos)):
- (void *)((uintptr_t)*(uintptr_t*)(g_sampleBindBatchArray+sizeof(char*)*(c-1))
- + sizeof(int64_t)*(*pSamplePos));
+ + stbInfo->columns[c-1].dataLen * (*pSamplePos));
break;
case TSDB_DATA_TYPE_BOOL:
param->buffer_length = sizeof(int8_t);
- param->buffer = (stbInfo)?
+ param->buffer =
(void *)((uintptr_t)*(uintptr_t*)(stbInfo->sampleBindBatchArray+sizeof(char*)*(c-1))
- + stbInfo->columns[c-1].dataLen * (*pSamplePos)):
- (void *)((uintptr_t)*(uintptr_t*)(g_sampleBindBatchArray+sizeof(char*)*(c-1))
- + sizeof(int8_t)*(*pSamplePos));
+ + stbInfo->columns[c-1].dataLen * (*pSamplePos));
break;
case TSDB_DATA_TYPE_FLOAT:
param->buffer_length = sizeof(float);
- param->buffer = (stbInfo)?
+ param->buffer =
(void *)((uintptr_t)*(uintptr_t*)(stbInfo->sampleBindBatchArray+sizeof(char*)*(c-1))
- + stbInfo->columns[c-1].dataLen * (*pSamplePos)):
- (void *)((uintptr_t)*(uintptr_t*)(g_sampleBindBatchArray+sizeof(char*)*(c-1))
- + sizeof(float)*(*pSamplePos));
+ + stbInfo->columns[c-1].dataLen * (*pSamplePos));
break;
case TSDB_DATA_TYPE_DOUBLE:
param->buffer_length = sizeof(double);
- param->buffer = (stbInfo)?
+ param->buffer =
(void *)((uintptr_t)*(uintptr_t*)(stbInfo->sampleBindBatchArray+sizeof(char*)*(c-1))
- + stbInfo->columns[c-1].dataLen * (*pSamplePos)):
- (void *)((uintptr_t)*(uintptr_t*)(g_sampleBindBatchArray+sizeof(char*)*(c-1))
- + sizeof(double)*(*pSamplePos));
+ + stbInfo->columns[c-1].dataLen * (*pSamplePos));
break;
case TSDB_DATA_TYPE_TIMESTAMP:
param->buffer_length = sizeof(int64_t);
- param->buffer = (stbInfo)?
+ param->buffer =
(void *)((uintptr_t)*(uintptr_t*)(stbInfo->sampleBindBatchArray+sizeof(char*)*(c-1))
- + stbInfo->columns[c-1].dataLen * (*pSamplePos)):
- (void *)((uintptr_t)*(uintptr_t*)(g_sampleBindBatchArray+sizeof(char*)*(c-1))
- + sizeof(int64_t)*(*pSamplePos));
+ + stbInfo->columns[c-1].dataLen * (*pSamplePos));
break;
default:
@@ -7818,7 +8309,7 @@ static int execBindParamBatch(
if (param->buffer_type == TSDB_DATA_TYPE_NCHAR) {
param->length[b] = strlen(
(char *)param->buffer + b *
- ((stbInfo)?stbInfo->columns[c].dataLen:g_args.binwidth)
+ stbInfo->columns[c].dataLen
);
} else {
param->length[b] = param->buffer_length;
@@ -7902,24 +8393,28 @@ static int parseSamplefileToStmtBatch(
switch(data_type) {
case TSDB_DATA_TYPE_INT:
+ case TSDB_DATA_TYPE_UINT:
tmpP = calloc(1, sizeof(int) * MAX_SAMPLES);
assert(tmpP);
*(uintptr_t*)(sampleBindBatchArray+ sizeof(uintptr_t*)*c) = (uintptr_t)tmpP;
break;
case TSDB_DATA_TYPE_TINYINT:
+ case TSDB_DATA_TYPE_UTINYINT:
tmpP = calloc(1, sizeof(int8_t) * MAX_SAMPLES);
assert(tmpP);
*(uintptr_t*)(sampleBindBatchArray+ sizeof(uintptr_t*)*c) = (uintptr_t)tmpP;
break;
case TSDB_DATA_TYPE_SMALLINT:
+ case TSDB_DATA_TYPE_USMALLINT:
tmpP = calloc(1, sizeof(int16_t) * MAX_SAMPLES);
assert(tmpP);
*(uintptr_t*)(sampleBindBatchArray+ sizeof(uintptr_t*)*c) = (uintptr_t)tmpP;
break;
case TSDB_DATA_TYPE_BIGINT:
+ case TSDB_DATA_TYPE_UBIGINT:
tmpP = calloc(1, sizeof(int64_t) * MAX_SAMPLES);
assert(tmpP);
*(uintptr_t*)(sampleBindBatchArray+ sizeof(uintptr_t*)*c) = (uintptr_t)tmpP;
@@ -7998,6 +8493,7 @@ static int parseSamplefileToStmtBatch(
switch(data_type) {
case TSDB_DATA_TYPE_INT:
+ case TSDB_DATA_TYPE_UINT:
*((int32_t*)((uintptr_t)*(uintptr_t*)(sampleBindBatchArray
+sizeof(char*)*c)+sizeof(int32_t)*i)) =
atoi(tmpStr);
@@ -8016,18 +8512,21 @@ static int parseSamplefileToStmtBatch(
break;
case TSDB_DATA_TYPE_TINYINT:
+ case TSDB_DATA_TYPE_UTINYINT:
*((int8_t*)((uintptr_t)*(uintptr_t*)(sampleBindBatchArray
+sizeof(char*)*c)+sizeof(int8_t)*i)) =
(int8_t)atoi(tmpStr);
break;
case TSDB_DATA_TYPE_SMALLINT:
+ case TSDB_DATA_TYPE_USMALLINT:
*((int16_t*)((uintptr_t)*(uintptr_t*)(sampleBindBatchArray
+sizeof(char*)*c)+sizeof(int16_t)*i)) =
(int16_t)atoi(tmpStr);
break;
case TSDB_DATA_TYPE_BIGINT:
+ case TSDB_DATA_TYPE_UBIGINT:
*((int64_t*)((uintptr_t)*(uintptr_t*)(sampleBindBatchArray
+sizeof(char*)*c)+sizeof(int64_t)*i)) =
(int64_t)atol(tmpStr);
@@ -8365,7 +8864,7 @@ static int32_t prepareStbStmt(
}
#if STMT_BIND_PARAM_BATCH == 1
- return execBindParamBatch(
+ return execStbBindParamBatch(
pThreadInfo,
tableName,
tableSeq,
@@ -10053,15 +10552,18 @@ static void startMultiThreadInsertData(int threads, char* db_name,
}
}
- fprintf(stderr, "insert delay, avg: %10.2fms, max: %10.2fms, min: %10.2fms\n\n",
- (double)avgDelay/1000.0,
- (double)maxDelay/1000.0,
- (double)minDelay/1000.0);
- if (g_fpOfInsertResult) {
- fprintf(g_fpOfInsertResult, "insert delay, avg:%10.2fms, max: %10.2fms, min: %10.2fms\n\n",
- (double)avgDelay/1000.0,
- (double)maxDelay/1000.0,
- (double)minDelay/1000.0);
+ if (minDelay != UINT64_MAX) {
+ fprintf(stderr, "insert delay, avg: %10.2fms, max: %10.2fms, min: %10.2fms\n\n",
+ (double)avgDelay/1000.0,
+ (double)maxDelay/1000.0,
+ (double)minDelay/1000.0);
+
+ if (g_fpOfInsertResult) {
+ fprintf(g_fpOfInsertResult, "insert delay, avg:%10.2fms, max: %10.2fms, min: %10.2fms\n\n",
+ (double)avgDelay/1000.0,
+ (double)maxDelay/1000.0,
+ (double)minDelay/1000.0);
+ }
}
//taos_close(taos);
diff --git a/src/kit/taosdump/CMakeLists.txt b/src/kit/taosdump/CMakeLists.txt
index c3c914e96fc096f59aa701d3496455c754356aa8..75ce520c2e6bd4fd24e25b6297c7f99cc7fdfe75 100644
--- a/src/kit/taosdump/CMakeLists.txt
+++ b/src/kit/taosdump/CMakeLists.txt
@@ -6,6 +6,61 @@ INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/src/query/inc)
INCLUDE_DIRECTORIES(inc)
AUX_SOURCE_DIRECTORY(. SRC)
+FIND_PACKAGE(Git)
+IF(GIT_FOUND)
+ EXECUTE_PROCESS(
+ COMMAND ${GIT_EXECUTABLE} log --pretty=oneline -n 1 ${CMAKE_CURRENT_LIST_DIR}/taosdump.c
+ WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
+ RESULT_VARIABLE RESULT
+ OUTPUT_VARIABLE TAOSDUMP_COMMIT_SHA1
+ )
+ IF ("${TAOSDUMP_COMMIT_SHA1}" STREQUAL "")
+ SET(TAOSDUMP_COMMIT_SHA1 "unknown")
+ ELSE ()
+ STRING(SUBSTRING "${TAOSDUMP_COMMIT_SHA1}" 0 7 TAOSDUMP_COMMIT_SHA1)
+ STRING(STRIP "${TAOSDUMP_COMMIT_SHA1}" TAOSDUMP_COMMIT_SHA1)
+ ENDIF ()
+ EXECUTE_PROCESS(
+ COMMAND ${GIT_EXECUTABLE} status -z -s ${CMAKE_CURRENT_LIST_DIR}/taosdump.c
+ RESULT_VARIABLE RESULT
+ OUTPUT_VARIABLE TAOSDUMP_STATUS
+ )
+ IF (TD_LINUX)
+ EXECUTE_PROCESS(
+ COMMAND bash "-c" "echo '${TAOSDUMP_STATUS}' | awk '{print $1}'"
+ RESULT_VARIABLE RESULT
+ OUTPUT_VARIABLE TAOSDUMP_STATUS
+ )
+ ENDIF (TD_LINUX)
+ELSE()
+ MESSAGE("Git not found")
+ SET(TAOSDUMP_COMMIT_SHA1 "unknown")
+ SET(TAOSDUMP_STATUS "unknown")
+ENDIF (GIT_FOUND)
+
+MESSAGE("taosdump's latest commit in short is:" ${TAOSDUMP_COMMIT_SHA1})
+STRING(STRIP "${TAOSDUMP_STATUS}" TAOSDUMP_STATUS)
+
+IF (TAOSDUMP_STATUS MATCHES "M")
+ SET(TAOSDUMP_STATUS "modified")
+ELSE()
+ SET(TAOSDUMP_STATUS "")
+ENDIF ()
+
+MESSAGE("taosdump's status is:" ${TAOSDUMP_STATUS})
+
+ADD_DEFINITIONS(-DTAOSDUMP_COMMIT_SHA1="${TAOSDUMP_COMMIT_SHA1}")
+ADD_DEFINITIONS(-DTAOSDUMP_STATUS="${TAOSDUMP_STATUS}")
+
+MESSAGE("VERNUMBER is:" ${VERNUMBER})
+IF ("${VERNUMBER}" STREQUAL "")
+ SET(TD_VERSION_NUMBER "TDengine-version-unknown")
+ELSE()
+ SET(TD_VERSION_NUMBER ${VERNUMBER})
+ENDIF ()
+MESSAGE("TD_VERSION_NUMBER is:" ${TD_VERSION_NUMBER})
+ADD_DEFINITIONS(-DTD_VERNUMBER="${TD_VERSION_NUMBER}")
+
IF (TD_LINUX)
ADD_EXECUTABLE(taosdump ${SRC})
IF (TD_SOMODE_STATIC)
diff --git a/src/kit/taosdump/taosdump.c b/src/kit/taosdump/taosdump.c
index ef9e584978f12e636bccca28689062388ffd595c..b0235a7dcaaecb8e9f9a2ea651358380f26cd42b 100644
--- a/src/kit/taosdump/taosdump.c
+++ b/src/kit/taosdump/taosdump.c
@@ -25,7 +25,6 @@
#include "tsclient.h"
#include "tsdb.h"
#include "tutil.h"
-#include
#define TSDB_SUPPORT_NANOSECOND 1
@@ -60,7 +59,14 @@ typedef struct {
fprintf(stderr, "VERB: "fmt, __VA_ARGS__); } while(0)
#define errorPrint(fmt, ...) \
- do { fprintf(stderr, "\033[31m"); fprintf(stderr, "ERROR: "fmt, __VA_ARGS__); fprintf(stderr, "\033[0m"); } while(0)
+ do { fprintf(stderr, "\033[31m"); \
+ fprintf(stderr, "ERROR: "fmt, __VA_ARGS__); \
+ fprintf(stderr, "\033[0m"); } while(0)
+
+#define okPrint(fmt, ...) \
+ do { fprintf(stderr, "\033[32m"); \
+ fprintf(stderr, "OK: "fmt, __VA_ARGS__); \
+ fprintf(stderr, "\033[0m"); } while(0)
static bool isStringNumber(char *input)
{
@@ -113,7 +119,7 @@ enum _show_tables_index {
TSDB_MAX_SHOW_TABLES
};
-// ---------------------------------- DESCRIBE METRIC CONFIGURE ------------------------------
+// ---------------------------------- DESCRIBE STABLE CONFIGURE ------------------------------
enum _describe_table_index {
TSDB_DESCRIBE_METRIC_FIELD_INDEX,
TSDB_DESCRIBE_METRIC_TYPE_INDEX,
@@ -141,10 +147,28 @@ extern char version[];
#define DB_PRECISION_LEN 8
#define DB_STATUS_LEN 16
+typedef struct {
+ char name[TSDB_TABLE_NAME_LEN];
+ bool belongStb;
+ char stable[TSDB_TABLE_NAME_LEN];
+} TableInfo;
+
+typedef struct {
+ char name[TSDB_TABLE_NAME_LEN];
+ char stable[TSDB_TABLE_NAME_LEN];
+} TableRecord;
+
+typedef struct {
+ bool isStable;
+ int64_t dumpNtbCount;
+ TableRecord **dumpNtbInfos;
+ TableRecord tableRecord;
+} TableRecordInfo;
+
typedef struct {
char name[TSDB_DB_NAME_LEN];
char create_time[32];
- int32_t ntables;
+ int64_t ntables;
int32_t vgroups;
int16_t replica;
int16_t quorum;
@@ -164,28 +188,22 @@ typedef struct {
char precision[DB_PRECISION_LEN]; // time resolution
int8_t update;
char status[DB_STATUS_LEN];
+ int64_t dumpTbCount;
+ TableRecordInfo **dumpTbInfos;
} SDbInfo;
-typedef struct {
- char name[TSDB_TABLE_NAME_LEN];
- char metric[TSDB_TABLE_NAME_LEN];
-} STableRecord;
-
-typedef struct {
- bool isMetric;
- STableRecord tableRecord;
-} STableRecordInfo;
-
typedef struct {
pthread_t threadID;
int32_t threadIndex;
int32_t totalThreads;
char dbName[TSDB_DB_NAME_LEN];
- int precision;
- void *taosCon;
+ char stbName[TSDB_TABLE_NAME_LEN];
+ int precision;
+ TAOS *taos;
int64_t rowsOfDumpOut;
int64_t tablesOfDumpOut;
-} SThreadParaObj;
+ int64_t tableFrom;
+} threadInfo;
typedef struct {
int64_t totalRowsOfDumpOut;
@@ -197,6 +215,7 @@ typedef struct {
static int64_t g_totalDumpOutRows = 0;
SDbInfo **g_dbInfos = NULL;
+TableInfo *g_tablesList = NULL;
const char *argp_program_version = version;
const char *argp_program_bug_address = "";
@@ -210,7 +229,7 @@ static char doc[] = "";
/* to force a line-break, e.g.\n<-- here."; */
/* A description of the arguments we accept. */
-static char args_doc[] = "dbname [tbname ...]\n--databases dbname ...\n--all-databases\n-i inpath\n-o outpath";
+static char args_doc[] = "dbname [tbname ...]\n--databases db1,db2,... \n--all-databases\n-i inpath\n-o outpath";
/* Keys for options without short-options. */
#define OPT_ABORT 1 /* –abort */
@@ -239,7 +258,7 @@ static struct argp_option options[] = {
{"encode", 'e', "ENCODE", 0, "Input file encoding.", 1},
// dump unit options
{"all-databases", 'A', 0, 0, "Dump all databases.", 2},
- {"databases", 'D', 0, 0, "Dump assigned databases", 2},
+ {"databases", 'D', "DATABASES", 0, "Dump inputed databases. Use comma to seprate databases\' name.", 2},
{"allow-sys", 'a', 0, 0, "Allow to dump sys database", 2},
// dump format options
{"schemaonly", 's', 0, 0, "Only dump schema.", 2},
@@ -255,6 +274,8 @@ static struct argp_option options[] = {
{0}
};
+#define HUMAN_TIME_LEN 28
+
/* Used by main to communicate with parse_opt. */
typedef struct arguments {
// connection option
@@ -272,14 +293,15 @@ typedef struct arguments {
// dump unit option
bool all_databases;
bool databases;
+ char *databasesSeq;
// dump format option
bool schemaonly;
bool with_property;
bool avro;
int64_t start_time;
- char humanStartTime[28];
+ char humanStartTime[HUMAN_TIME_LEN];
int64_t end_time;
- char humanEndTime[28];
+ char humanEndTime[HUMAN_TIME_LEN];
char precision[8];
int32_t data_batch;
@@ -290,13 +312,13 @@ typedef struct arguments {
int32_t thread_num;
int abort;
char **arg_list;
- int arg_list_len;
- bool isDumpIn;
- bool debug_print;
- bool verbose_print;
- bool performance_print;
+ int arg_list_len;
+ bool isDumpIn;
+ bool debug_print;
+ bool verbose_print;
+ bool performance_print;
- int dbCount;
+ int dumpDbCount;
} SArguments;
/* Our argp parser. */
@@ -311,25 +333,21 @@ static int taosDumpOut();
static int taosDumpIn();
static void taosDumpCreateDbClause(SDbInfo *dbInfo, bool isDumpProperty,
FILE *fp);
-static int taosDumpDb(SDbInfo *dbInfo, FILE *fp, TAOS *taosCon);
-static int32_t taosDumpStable(char *table, FILE *fp, TAOS* taosCon,
- char* dbName);
-static void taosDumpCreateTableClause(STableDef *tableDes, int numOfCols,
+//static int taosDumpDb(SDbInfo *dbInfo, FILE *fp, TAOS *taos);
+static int dumpStable(char *table, FILE *fp, TAOS* taos,
+ SDbInfo *dbInfo);
+static int taosDumpCreateTableClause(STableDef *tableDes, int numOfCols,
FILE *fp, char* dbName);
-static void taosDumpCreateMTableClause(STableDef *tableDes, char *metric,
+static void taosDumpCreateMTableClause(STableDef *tableDes, char *stable,
int numOfCols, FILE *fp, char* dbName);
-static int32_t taosDumpTable(char *tbName, char *metric,
- FILE *fp, TAOS* taosCon, char* dbName, int precision);
-static int taosDumpTableData(FILE *fp, char *tbName,
- TAOS* taosCon, char* dbName,
+static int64_t taosDumpTable(char *tbName, char *stable,
+ FILE *fp, TAOS* taos, char* dbName, int precision);
+static int64_t taosDumpTableData(FILE *fp, char *tbName,
+ TAOS* taos, char* dbName,
int precision,
char *jsonAvroSchema);
static int taosCheckParam(struct arguments *arguments);
static void taosFreeDbInfos();
-static void taosStartDumpOutWorkThreads(
- int32_t numOfThread,
- char *dbName,
- int precision);
struct arguments g_args = {
// connection option
@@ -348,10 +366,11 @@ struct arguments g_args = {
"./dump_result.txt",
NULL,
// dump unit option
- false,
- false,
+ false, // all_databases
+ false, // databases
+ NULL, // databasesSeq
// dump format option
- false, // schemeonly
+ false, // schemaonly
true, // with_property
false, // avro format
-INT64_MAX + 1, // start_time
@@ -372,16 +391,64 @@ struct arguments g_args = {
false, // debug_print
false, // verbose_print
false, // performance_print
- 0, // dbCount
+ 0, // dumpDbCount
};
+// get taosdump commit number version
+#ifndef TAOSDUMP_COMMIT_SHA1
+#define TAOSDUMP_COMMIT_SHA1 "unknown"
+#endif
+
+#ifndef TD_VERNUMBER
+#define TD_VERNUMBER "unknown"
+#endif
+
+#ifndef TAOSDUMP_STATUS
+#define TAOSDUMP_STATUS "unknown"
+#endif
+
+static void printVersion() {
+ char tdengine_ver[] = TD_VERNUMBER;
+ char taosdump_ver[] = TAOSDUMP_COMMIT_SHA1;
+ char taosdump_status[] = TAOSDUMP_STATUS;
+
+ if (strlen(taosdump_status) == 0) {
+ printf("taosdump version %s-%s\n",
+ tdengine_ver, taosdump_ver);
+ } else {
+ printf("taosdump version %s-%s, status:%s\n",
+ tdengine_ver, taosdump_ver, taosdump_status);
+ }
+}
+
+UNUSED_FUNC void errorWrongValue(char *program, char *wrong_arg, char *wrong_value)
+{
+ fprintf(stderr, "%s %s: %s is an invalid value\n", program, wrong_arg, wrong_value);
+ fprintf(stderr, "Try `taosdemo --help' or `taosdemo --usage' for more information.\n");
+}
+
+static void errorUnrecognized(char *program, char *wrong_arg)
+{
+ fprintf(stderr, "%s: unrecognized options '%s'\n", program, wrong_arg);
+ fprintf(stderr, "Try `taosdemo --help' or `taosdemo --usage' for more information.\n");
+}
+
+static void errorPrintReqArg(char *program, char *wrong_arg)
+{
+ fprintf(stderr,
+ "%s: option requires an argument -- '%s'\n",
+ program, wrong_arg);
+ fprintf(stderr,
+ "Try `taosdemo --help' or `taosdemo --usage' for more information.\n");
+}
+
static void errorPrintReqArg2(char *program, char *wrong_arg)
{
fprintf(stderr,
"%s: option requires a number argument '-%s'\n",
program, wrong_arg);
fprintf(stderr,
- "Try `taosdump --help' or `taosdump --usage' for more information.\n");
+ "Try `taosdemo --help' or `taosdemo --usage' for more information.\n");
}
static void errorPrintReqArg3(char *program, char *wrong_arg)
@@ -390,7 +457,7 @@ static void errorPrintReqArg3(char *program, char *wrong_arg)
"%s: option '%s' requires an argument\n",
program, wrong_arg);
fprintf(stderr,
- "Try `taosdump --help' or `taosdump --usage' for more information.\n");
+ "Try `taosdemo --help' or `taosdemo --usage' for more information.\n");
}
/* Parse a single option. */
@@ -526,66 +593,29 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
}
static int queryDbImpl(TAOS *taos, char *command) {
- int i;
TAOS_RES *res = NULL;
int32_t code = -1;
- for (i = 0; i < 5; i++) {
- if (NULL != res) {
- taos_free_result(res);
- res = NULL;
- }
-
- res = taos_query(taos, command);
- code = taos_errno(res);
- if (0 == code) {
- break;
- }
+ if (NULL != res) {
+ taos_free_result(res);
+ res = NULL;
}
+ res = taos_query(taos, command);
+ code = taos_errno(res);
+
if (code != 0) {
- errorPrint("Failed to run <%s>, reason: %s\n", command, taos_errstr(res));
+ errorPrint("Failed to run <%s>, reason: %s\n",
+ command, taos_errstr(res));
taos_free_result(res);
//taos_close(taos);
- return -1;
+ return code;
}
taos_free_result(res);
return 0;
}
-UNUSED_FUNC static void parse_precision_first(
- int argc, char *argv[], SArguments *arguments) {
- for (int i = 1; i < argc; i++) {
- if (strcmp(argv[i], "-C") == 0) {
- if (NULL == argv[i+1]) {
- errorPrint("%s need a valid value following!\n", argv[i]);
- exit(-1);
- }
- char *tmp = strdup(argv[i+1]);
- if (tmp == NULL) {
- errorPrint("%s() LN%d, strdup() cannot allocate memory\n",
- __func__, __LINE__);
- exit(-1);
- }
- if ((0 != strncasecmp(tmp, "ms", strlen("ms")))
- && (0 != strncasecmp(tmp, "us", strlen("us")))
-#if TSDB_SUPPORT_NANOSECOND == 1
- && (0 != strncasecmp(tmp, "ns", strlen("ns")))
-#endif
- ) {
- //
- errorPrint("input precision: %s is invalid value\n", tmp);
- free(tmp);
- exit(-1);
- }
- tstrncpy(g_args.precision, tmp,
- min(DB_PRECISION_LEN, strlen(tmp) + 1));
- free(tmp);
- }
- }
-}
-
static void parse_args(
int argc, char *argv[], SArguments *arguments) {
@@ -611,8 +641,40 @@ static void parse_args(
} else if (strcmp(argv[i], "-PP") == 0) {
arguments->performance_print = true;
strcpy(argv[i], "");
- } else if (strcmp(argv[i], "-A") == 0) {
+ } else if ((strcmp(argv[i], "-A") == 0)
+ || (0 == strncmp(
+ argv[i], "--all-database",
+ strlen("--all-database")))) {
g_args.all_databases = true;
+ } else if ((strncmp(argv[i], "-D", strlen("-D")) == 0)
+ || (0 == strncmp(
+ argv[i], "--database",
+ strlen("--database")))) {
+ if (2 == strlen(argv[i])) {
+ if (argc == i+1) {
+ errorPrintReqArg(argv[0], "D");
+ exit(EXIT_FAILURE);
+ }
+ arguments->databasesSeq = argv[++i];
+ } else if (0 == strncmp(argv[i], "--databases=", strlen("--databases="))) {
+ arguments->databasesSeq = (char *)(argv[i] + strlen("--databases="));
+ } else if (0 == strncmp(argv[i], "-D", strlen("-D"))) {
+ arguments->databasesSeq = (char *)(argv[i] + strlen("-D"));
+ } else if (strlen("--databases") == strlen(argv[i])) {
+ if (argc == i+1) {
+ errorPrintReqArg3(argv[0], "--databases");
+ exit(EXIT_FAILURE);
+ }
+ arguments->databasesSeq = argv[++i];
+ } else {
+ errorUnrecognized(argv[0], argv[i]);
+ exit(EXIT_FAILURE);
+ }
+ g_args.databases = true;
+ } else if (0 == strncmp(argv[i], "--version", strlen("--version")) ||
+ 0 == strncmp(argv[i], "-V", strlen("-V"))) {
+ printVersion();
+ exit(EXIT_SUCCESS);
} else {
continue;
}
@@ -623,9 +685,9 @@ static void parse_args(
static void copyHumanTimeToArg(char *timeStr, bool isStartTime)
{
if (isStartTime)
- strcpy(g_args.humanStartTime, timeStr);
+ tstrncpy(g_args.humanStartTime, timeStr, HUMAN_TIME_LEN);
else
- strcpy(g_args.humanEndTime, timeStr);
+ tstrncpy(g_args.humanEndTime, timeStr, HUMAN_TIME_LEN);
}
static void copyTimestampToArg(char *timeStr, bool isStartTime)
@@ -661,6 +723,8 @@ static void parse_timestamp(
} else {
copyTimestampToArg(tmp, isStartTime);
}
+
+ free(tmp);
}
}
}
@@ -686,535 +750,627 @@ static int getPrecisionByString(char *precision)
return -1;
}
-/*
-static void parse_timestamp(
- int argc, char *argv[], SArguments *arguments) {
- for (int i = 1; i < argc; i++) {
- if ((strcmp(argv[i], "-S") == 0)
- || (strcmp(argv[i], "-E") == 0)) {
- if (NULL == argv[i+1]) {
- errorPrint("%s need a valid value following!\n", argv[i]);
- exit(-1);
- }
- char *tmp = strdup(argv[i+1]);
- if (NULL == tmp) {
- errorPrint("%s() LN%d, strdup() cannot allocate memory\n",
- __func__, __LINE__);
- exit(-1);
- }
+static void taosFreeDbInfos() {
+ if (g_dbInfos == NULL) return;
+ for (int i = 0; i < g_args.dumpDbCount; i++)
+ tfree(g_dbInfos[i]);
+ tfree(g_dbInfos);
+}
- int64_t tmpEpoch;
- if (strchr(tmp, ':') && strchr(tmp, '-')) {
- strcpy(g_args.humanStartTime, tmp)
- int32_t timePrec;
- if (0 == strncasecmp(arguments->precision,
- "ms", strlen("ms"))) {
- timePrec = TSDB_TIME_PRECISION_MILLI;
- } else if (0 == strncasecmp(arguments->precision,
- "us", strlen("us"))) {
- timePrec = TSDB_TIME_PRECISION_MICRO;
-#if TSDB_SUPPORT_NANOSECOND == 1
- } else if (0 == strncasecmp(arguments->precision,
- "ns", strlen("ns"))) {
- timePrec = TSDB_TIME_PRECISION_NANO;
-#endif
- } else {
- errorPrint("Invalid time precision: %s",
- arguments->precision);
- free(tmp);
- return;
- }
+// check whether a table is a normal table or a super table
+static int taosGetTableRecordInfo(
+ char *dbName,
+ char *table, TableRecordInfo *pTableRecordInfo, TAOS *taos) {
+ TAOS_ROW row = NULL;
+ bool isSet = false;
+ TAOS_RES *result = NULL;
- if (TSDB_CODE_SUCCESS != taosParseTime(
- tmp, &tmpEpoch, strlen(tmp),
- timePrec, 0)) {
- errorPrint("Input %s, end time error!\n", tmp);
- free(tmp);
- return;
- }
- } else {
- tstrncpy(arguments->precision, "n/a", strlen("n/a") + 1);
- tmpEpoch = atoll(tmp);
- }
+ memset(pTableRecordInfo, 0, sizeof(TableRecordInfo));
- sprintf(argv[i+1], "%"PRId64"", tmpEpoch);
- debugPrint("%s() LN%d, tmp is: %s, argv[%d]: %s\n",
- __func__, __LINE__, tmp, i, argv[i]);
- free(tmp);
- }
+ char command[COMMAND_SIZE];
+
+ sprintf(command, "USE %s", dbName);
+ result = taos_query(taos, command);
+ int32_t code = taos_errno(result);
+ if (code != 0) {
+ errorPrint("invalid database %s, reason: %s\n",
+ dbName, taos_errstr(result));
+ return 0;
}
-}
-*/
-int main(int argc, char *argv[]) {
- static char verType[32] = {0};
- sprintf(verType, "version: %s\n", version);
- argp_program_version = verType;
+ sprintf(command, "SHOW TABLES LIKE \'%s\'", table);
- int ret = 0;
- /* Parse our arguments; every option seen by parse_opt will be
- reflected in arguments. */
- if (argc > 1) {
-// parse_precision_first(argc, argv, &g_args);
- parse_timestamp(argc, argv, &g_args);
- parse_args(argc, argv, &g_args);
+ result = taos_query(taos, command);
+ code = taos_errno(result);
+
+ if (code != 0) {
+ errorPrint("%s() LN%d, failed to run command <%s>. reason: %s\n",
+ __func__, __LINE__, command, taos_errstr(result));
+ taos_free_result(result);
+ return -1;
}
- argp_parse(&argp, argc, argv, 0, 0, &g_args);
+ TAOS_FIELD *fields = taos_fetch_fields(result);
- if (g_args.abort) {
-#ifndef _ALPINE
- error(10, 0, "ABORTED");
-#else
- abort();
-#endif
+ while ((row = taos_fetch_row(result)) != NULL) {
+ isSet = true;
+ pTableRecordInfo->isStable = false;
+ tstrncpy(pTableRecordInfo->tableRecord.name,
+ (char *)row[TSDB_SHOW_TABLES_NAME_INDEX],
+ min(TSDB_TABLE_NAME_LEN,
+ fields[TSDB_SHOW_TABLES_NAME_INDEX].bytes + 1));
+ tstrncpy(pTableRecordInfo->tableRecord.stable,
+ (char *)row[TSDB_SHOW_TABLES_METRIC_INDEX],
+ min(TSDB_TABLE_NAME_LEN,
+ fields[TSDB_SHOW_TABLES_METRIC_INDEX].bytes + 1));
+ break;
}
- printf("====== arguments config ======\n");
- {
- printf("host: %s\n", g_args.host);
- printf("user: %s\n", g_args.user);
- printf("password: %s\n", g_args.password);
- printf("port: %u\n", g_args.port);
- printf("mysqlFlag: %d\n", g_args.mysqlFlag);
- printf("outpath: %s\n", g_args.outpath);
- printf("inpath: %s\n", g_args.inpath);
- printf("resultFile: %s\n", g_args.resultFile);
- printf("encode: %s\n", g_args.encode);
- printf("all_databases: %s\n", g_args.all_databases?"true":"false");
- printf("databases: %d\n", g_args.databases);
- printf("schemaonly: %s\n", g_args.schemaonly?"true":"false");
- printf("with_property: %s\n", g_args.with_property?"true":"false");
- printf("avro format: %s\n", g_args.avro?"true":"false");
- printf("start_time: %" PRId64 "\n", g_args.start_time);
- printf("human readable start time: %s \n", g_args.humanStartTime);
- printf("end_time: %" PRId64 "\n", g_args.end_time);
- printf("human readable end time: %s \n", g_args.humanEndTime);
- printf("precision: %s\n", g_args.precision);
- printf("data_batch: %d\n", g_args.data_batch);
- printf("max_sql_len: %d\n", g_args.max_sql_len);
- printf("table_batch: %d\n", g_args.table_batch);
- printf("thread_num: %d\n", g_args.thread_num);
- printf("allow_sys: %d\n", g_args.allow_sys);
- printf("abort: %d\n", g_args.abort);
- printf("isDumpIn: %d\n", g_args.isDumpIn);
- printf("arg_list_len: %d\n", g_args.arg_list_len);
- printf("debug_print: %d\n", g_args.debug_print);
+ taos_free_result(result);
+ result = NULL;
- for (int32_t i = 0; i < g_args.arg_list_len; i++) {
- printf("arg_list[%d]: %s\n", i, g_args.arg_list[i]);
- }
+ if (isSet) {
+ return 0;
}
- printf("==============================\n");
- if (taosCheckParam(&g_args) < 0) {
- exit(EXIT_FAILURE);
+
+ sprintf(command, "SHOW STABLES LIKE \'%s\'", table);
+
+ result = taos_query(taos, command);
+ code = taos_errno(result);
+
+ if (code != 0) {
+ errorPrint("%s() LN%d, failed to run command <%s>. reason: %s\n",
+ __func__, __LINE__, command, taos_errstr(result));
+ taos_free_result(result);
+ return -1;
}
- g_fpOfResult = fopen(g_args.resultFile, "a");
- if (NULL == g_fpOfResult) {
- errorPrint("Failed to open %s for save result\n", g_args.resultFile);
- exit(-1);
- };
+ while ((row = taos_fetch_row(result)) != NULL) {
+ isSet = true;
+ pTableRecordInfo->isStable = true;
+ tstrncpy(pTableRecordInfo->tableRecord.stable, table,
+ TSDB_TABLE_NAME_LEN);
+ break;
+ }
- fprintf(g_fpOfResult, "#############################################################################\n");
- fprintf(g_fpOfResult, "============================== arguments config =============================\n");
- {
- fprintf(g_fpOfResult, "host: %s\n", g_args.host);
- fprintf(g_fpOfResult, "user: %s\n", g_args.user);
- fprintf(g_fpOfResult, "password: %s\n", g_args.password);
- fprintf(g_fpOfResult, "port: %u\n", g_args.port);
- fprintf(g_fpOfResult, "mysqlFlag: %d\n", g_args.mysqlFlag);
- fprintf(g_fpOfResult, "outpath: %s\n", g_args.outpath);
- fprintf(g_fpOfResult, "inpath: %s\n", g_args.inpath);
- fprintf(g_fpOfResult, "resultFile: %s\n", g_args.resultFile);
- fprintf(g_fpOfResult, "encode: %s\n", g_args.encode);
- fprintf(g_fpOfResult, "all_databases: %s\n", g_args.all_databases?"true":"false");
- fprintf(g_fpOfResult, "databases: %d\n", g_args.databases);
- fprintf(g_fpOfResult, "schemaonly: %s\n", g_args.schemaonly?"true":"false");
- fprintf(g_fpOfResult, "with_property: %s\n", g_args.with_property?"true":"false");
- fprintf(g_fpOfResult, "avro format: %s\n", g_args.avro?"true":"false");
- fprintf(g_fpOfResult, "start_time: %" PRId64 "\n", g_args.start_time);
- fprintf(g_fpOfResult, "human readable start time: %s \n", g_args.humanStartTime);
- fprintf(g_fpOfResult, "end_time: %" PRId64 "\n", g_args.end_time);
- fprintf(g_fpOfResult, "human readable end time: %s \n", g_args.humanEndTime);
- fprintf(g_fpOfResult, "precision: %s\n", g_args.precision);
- fprintf(g_fpOfResult, "data_batch: %d\n", g_args.data_batch);
- fprintf(g_fpOfResult, "max_sql_len: %d\n", g_args.max_sql_len);
- fprintf(g_fpOfResult, "table_batch: %d\n", g_args.table_batch);
- fprintf(g_fpOfResult, "thread_num: %d\n", g_args.thread_num);
- fprintf(g_fpOfResult, "allow_sys: %d\n", g_args.allow_sys);
- fprintf(g_fpOfResult, "abort: %d\n", g_args.abort);
- fprintf(g_fpOfResult, "isDumpIn: %d\n", g_args.isDumpIn);
- fprintf(g_fpOfResult, "arg_list_len: %d\n", g_args.arg_list_len);
+ taos_free_result(result);
+ result = NULL;
- for (int32_t i = 0; i < g_args.arg_list_len; i++) {
- fprintf(g_fpOfResult, "arg_list[%d]: %s\n", i, g_args.arg_list[i]);
- }
+ if (isSet) {
+ return 0;
}
+ errorPrint("%s() LN%d, invalid table/stable %s\n",
+ __func__, __LINE__, table);
+ return -1;
+}
- g_numOfCores = (int32_t)sysconf(_SC_NPROCESSORS_ONLN);
+static int inDatabasesSeq(
+ char *name,
+ int len)
+{
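+    // returns 0 when <name> matches one entry of the comma-separated
+    // --databases argument (g_args.databasesSeq), -1 otherwise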
+ if (strstr(g_args.databasesSeq, ",") == NULL) {
+ if (0 == strncmp(g_args.databasesSeq, name, len)) {
+ return 0;
+ }
+ } else {
+ char *dupSeq = strdup(g_args.databasesSeq);
+ char *running = dupSeq;
+ char *dbname = strsep(&running, ",");
+ while (dbname) {
+ if (0 == strncmp(dbname, name, len)) {
+ tfree(dupSeq);
+ return 0;
+ }
- time_t tTime = time(NULL);
- struct tm tm = *localtime(&tTime);
+ dbname = strsep(&running, ",");
+ }
- if (g_args.isDumpIn) {
- fprintf(g_fpOfResult, "============================== DUMP IN ============================== \n");
- fprintf(g_fpOfResult, "# DumpIn start time: %d-%02d-%02d %02d:%02d:%02d\n",
- tm.tm_year + 1900, tm.tm_mon + 1,
- tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec);
- if (taosDumpIn() < 0) {
- ret = -1;
+ }
+
+ return -1;
+}
+
+static int getDumpDbCount()
+{
+ int count = 0;
+
+ TAOS *taos = NULL;
+ TAOS_RES *result = NULL;
+ char *command = "show databases";
+ TAOS_ROW row;
+
+ /* Connect to server */
+ taos = taos_connect(g_args.host, g_args.user, g_args.password,
+ NULL, g_args.port);
+ if (NULL == taos) {
+ errorPrint("Failed to connect to TDengine server %s\n", g_args.host);
+ return 0;
+ }
+
+ result = taos_query(taos, command);
+ int32_t code = taos_errno(result);
+
+ if (0 != code) {
+ errorPrint("%s() LN%d, failed to run command <%s>, reason: %s\n",
+ __func__, __LINE__, command, taos_errstr(result));
+ return 0;
+ }
+
+ TAOS_FIELD *fields = taos_fetch_fields(result);
+
+ while ((row = taos_fetch_row(result)) != NULL) {
+        // skip the system database (named 'log') unless --allow-sys is specified
+ if ((strncasecmp(row[TSDB_SHOW_DB_NAME_INDEX], "log",
+ fields[TSDB_SHOW_DB_NAME_INDEX].bytes) == 0)
+ && (!g_args.allow_sys)) {
+ continue;
}
- } else {
- fprintf(g_fpOfResult, "============================== DUMP OUT ============================== \n");
- fprintf(g_fpOfResult, "# DumpOut start time: %d-%02d-%02d %02d:%02d:%02d\n",
- tm.tm_year + 1900, tm.tm_mon + 1,
- tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec);
- if (taosDumpOut() < 0) {
- ret = -1;
- } else {
- fprintf(g_fpOfResult, "\n============================== TOTAL STATISTICS ============================== \n");
- fprintf(g_fpOfResult, "# total database count: %d\n",
- g_resultStatistics.totalDatabasesOfDumpOut);
- fprintf(g_fpOfResult, "# total super table count: %d\n",
- g_resultStatistics.totalSuperTblsOfDumpOut);
- fprintf(g_fpOfResult, "# total child table count: %"PRId64"\n",
- g_resultStatistics.totalChildTblsOfDumpOut);
- fprintf(g_fpOfResult, "# total row count: %"PRId64"\n",
- g_resultStatistics.totalRowsOfDumpOut);
+
+ if (g_args.databases) { // input multi dbs
+ if (inDatabasesSeq(
+ (char *)row[TSDB_SHOW_DB_NAME_INDEX],
+ fields[TSDB_SHOW_DB_NAME_INDEX].bytes) != 0)
+ continue;
+ } else if (!g_args.all_databases) { // only input one db
+ if (strncasecmp(g_args.arg_list[0],
+ (char *)row[TSDB_SHOW_DB_NAME_INDEX],
+ fields[TSDB_SHOW_DB_NAME_INDEX].bytes) != 0)
+ continue;
}
+
+ count++;
}
- fprintf(g_fpOfResult, "\n");
- fclose(g_fpOfResult);
+ if (count == 0) {
+ errorPrint("%d databases valid to dump\n", count);
+ }
- return ret;
+ return count;
}
-static void taosFreeDbInfos() {
- if (g_dbInfos == NULL) return;
- for (int i = 0; i < g_args.dbCount; i++)
- tfree(g_dbInfos[i]);
- tfree(g_dbInfos);
-}
+static int64_t dumpNormalTableWithoutStb(TAOS *taos, SDbInfo *dbInfo, char *ntbName)
+{
+ int64_t count = 0;
-// check table is normal table or super table
-static int taosGetTableRecordInfo(
- char *table, STableRecordInfo *pTableRecordInfo, TAOS *taosCon) {
- TAOS_ROW row = NULL;
- bool isSet = false;
- TAOS_RES *result = NULL;
+ char tmpBuf[4096] = {0};
+ FILE *fp = NULL;
- memset(pTableRecordInfo, 0, sizeof(STableRecordInfo));
+ if (g_args.outpath[0] != 0) {
+ sprintf(tmpBuf, "%s/%s.%s.sql",
+ g_args.outpath, dbInfo->name, ntbName);
+ } else {
+ sprintf(tmpBuf, "%s.%s.sql",
+ dbInfo->name, ntbName);
+ }
- char* tempCommand = (char *)malloc(COMMAND_SIZE);
- if (tempCommand == NULL) {
- errorPrint("%s() LN%d, failed to allocate memory\n",
- __func__, __LINE__);
+ fp = fopen(tmpBuf, "w");
+ if (fp == NULL) {
+ errorPrint("%s() LN%d, failed to open file %s\n",
+ __func__, __LINE__, tmpBuf);
return -1;
}
- sprintf(tempCommand, "show tables like %s", table);
+ count = taosDumpTable(ntbName, NULL,
+ fp, taos, dbInfo->name, getPrecisionByString(dbInfo->precision));
- result = taos_query(taosCon, tempCommand);
- int32_t code = taos_errno(result);
+ fclose(fp);
+ return count;
+}
- if (code != 0) {
- errorPrint("%s() LN%d, failed to run command %s\n",
- __func__, __LINE__, tempCommand);
- free(tempCommand);
- taos_free_result(result);
- return -1;
+static int64_t dumpNormalTable(FILE *fp, TAOS *taos, char *dbName, char *tbName,
+ char *stbName,
+ int precision)
+{
+ int64_t count = 0;
+ count = taosDumpTable(tbName, stbName,
+ fp, taos, dbName, precision);
+
+ return count;
+}
+
+static void *dumpNtbOfDb(void *arg) {
+ threadInfo *pThreadInfo = (threadInfo *)arg;
+
+ debugPrint("dump table from = \t%"PRId64"\n", pThreadInfo->tableFrom);
+ debugPrint("dump table count = \t%"PRId64"\n",
+ pThreadInfo->tablesOfDumpOut);
+
+ FILE *fp = NULL;
+ char tmpBuf[4096] = {0};
+
+ if (g_args.outpath[0] != 0) {
+ sprintf(tmpBuf, "%s/%s.%d.sql",
+ g_args.outpath, pThreadInfo->dbName, pThreadInfo->threadIndex);
+ } else {
+ sprintf(tmpBuf, "%s.%d.sql",
+ pThreadInfo->dbName, pThreadInfo->threadIndex);
}
- TAOS_FIELD *fields = taos_fetch_fields(result);
+ fp = fopen(tmpBuf, "w");
- while ((row = taos_fetch_row(result)) != NULL) {
- isSet = true;
- pTableRecordInfo->isMetric = false;
- tstrncpy(pTableRecordInfo->tableRecord.name,
- (char *)row[TSDB_SHOW_TABLES_NAME_INDEX],
- min(TSDB_TABLE_NAME_LEN,
- fields[TSDB_SHOW_TABLES_NAME_INDEX].bytes + 1));
- tstrncpy(pTableRecordInfo->tableRecord.metric,
- (char *)row[TSDB_SHOW_TABLES_METRIC_INDEX],
- min(TSDB_TABLE_NAME_LEN,
- fields[TSDB_SHOW_TABLES_METRIC_INDEX].bytes + 1));
- break;
+ if (fp == NULL) {
+ errorPrint("%s() LN%d, failed to open file %s\n",
+ __func__, __LINE__, tmpBuf);
+ return NULL;
}
- taos_free_result(result);
- result = NULL;
+ for (int64_t i = 0; i < pThreadInfo->tablesOfDumpOut; i++) {
+ debugPrint("[%d] No.\t%"PRId64" table name: %s\n",
+ pThreadInfo->threadIndex, i,
+ ((TableInfo *)(g_tablesList + pThreadInfo->tableFrom+i))->name);
+ dumpNormalTable(fp,
+ pThreadInfo->taos,
+ pThreadInfo->dbName,
+ ((TableInfo *)(g_tablesList + pThreadInfo->tableFrom+i))->name,
+ ((TableInfo *)(g_tablesList + pThreadInfo->tableFrom+i))->stable,
+ pThreadInfo->precision);
+ }
- if (isSet) {
- free(tempCommand);
- return 0;
+ fclose(fp);
+
+ return NULL;
+}
+
+static void *dumpNormalTablesOfStb(void *arg) {
+ threadInfo *pThreadInfo = (threadInfo *)arg;
+
+ debugPrint("dump table from = \t%"PRId64"\n", pThreadInfo->tableFrom);
+ debugPrint("dump table count = \t%"PRId64"\n", pThreadInfo->tablesOfDumpOut);
+
+ char command[COMMAND_SIZE];
+
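+    // each worker fetches only its own slice of child-table names:
+    // LIMIT is the number of tables assigned to this thread, OFFSET its starting position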
+ sprintf(command, "SELECT TBNAME FROM %s.%s LIMIT %"PRId64" OFFSET %"PRId64"",
+ pThreadInfo->dbName, pThreadInfo->stbName,
+ pThreadInfo->tablesOfDumpOut, pThreadInfo->tableFrom);
+
+ TAOS_RES *res = taos_query(pThreadInfo->taos, command);
+ int32_t code = taos_errno(res);
+ if (code) {
+ errorPrint("%s() LN%d, failed to run command <%s>. reason: %s\n",
+ __func__, __LINE__, command, taos_errstr(res));
+ taos_free_result(res);
+ return NULL;
}
- sprintf(tempCommand, "show stables like %s", table);
+ FILE *fp = NULL;
+ char tmpBuf[4096] = {0};
- result = taos_query(taosCon, tempCommand);
- code = taos_errno(result);
+ if (g_args.outpath[0] != 0) {
+ sprintf(tmpBuf, "%s/%s.%d.sql",
+ g_args.outpath, pThreadInfo->dbName, pThreadInfo->threadIndex);
+ } else {
+ sprintf(tmpBuf, "%s.%d.sql",
+ pThreadInfo->dbName, pThreadInfo->threadIndex);
+ }
- if (code != 0) {
- errorPrint("%s() LN%d, failed to run command %s\n",
- __func__, __LINE__, tempCommand);
- free(tempCommand);
- taos_free_result(result);
- return -1;
+ fp = fopen(tmpBuf, "w");
+
+ if (fp == NULL) {
+ errorPrint("%s() LN%d, failed to open file %s\n",
+ __func__, __LINE__, tmpBuf);
+ return NULL;
}
- while ((row = taos_fetch_row(result)) != NULL) {
- isSet = true;
- pTableRecordInfo->isMetric = true;
- tstrncpy(pTableRecordInfo->tableRecord.metric, table,
- TSDB_TABLE_NAME_LEN);
- break;
+ TAOS_ROW row = NULL;
+ int64_t i = 0;
+ while((row = taos_fetch_row(res)) != NULL) {
+ debugPrint("[%d] sub table %"PRId64": name: %s\n",
+ pThreadInfo->threadIndex, i++, (char *)row[TSDB_SHOW_TABLES_NAME_INDEX]);
+
+ dumpNormalTable(fp,
+ pThreadInfo->taos,
+ pThreadInfo->dbName,
+ (char *)row[TSDB_SHOW_TABLES_NAME_INDEX],
+ (char *)row[TSDB_SHOW_TABLES_METRIC_INDEX],
+ pThreadInfo->precision);
}
- taos_free_result(result);
- result = NULL;
+ fclose(fp);
+ return NULL;
+}
- if (isSet) {
- free(tempCommand);
+static int64_t dumpNtbOfDbByThreads(
+ SDbInfo *dbInfo,
+ int64_t ntbCount)
+{
+ if (ntbCount <= 0) {
return 0;
}
- errorPrint("%s() LN%d, invalid table/metric %s\n",
- __func__, __LINE__, table);
- free(tempCommand);
- return -1;
-}
+ int threads = g_args.thread_num;
+
+ int64_t a = ntbCount / threads;
+ if (a < 1) {
+ threads = ntbCount;
+ a = 1;
+ }
+
+ assert(threads);
+ int64_t b = ntbCount % threads;
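+    // a: tables assigned to each thread; b: remainder distributed among the first threads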
+
+ threadInfo *infos = calloc(1, threads * sizeof(threadInfo));
+ pthread_t *pids = calloc(1, threads * sizeof(pthread_t));
+ assert(pids);
+ assert(infos);
-static int32_t taosSaveAllNormalTableToTempFile(TAOS *taosCon, char*meter,
- char* metric, int* fd) {
- STableRecord tableRecord;
+ for (int64_t i = 0; i < threads; i++) {
+ threadInfo *pThreadInfo = infos + i;
+ pThreadInfo->taos = taos_connect(
+ g_args.host,
+ g_args.user,
+ g_args.password,
+ dbInfo->name,
+ g_args.port
+ );
+ if (NULL == pThreadInfo->taos) {
+ errorPrint("%s() LN%d, Failed to connect to TDengine, reason: %s\n",
+ __func__,
+ __LINE__,
+ taos_errstr(NULL));
+ free(pids);
+ free(infos);
- if (-1 == *fd) {
- *fd = open(".tables.tmp.0",
- O_RDWR | O_CREAT, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
- if (*fd == -1) {
- errorPrint("%s() LN%d, failed to open temp file: .tables.tmp.0\n",
- __func__, __LINE__);
return -1;
}
+
+ pThreadInfo->threadIndex = i;
+        pThreadInfo->tablesOfDumpOut = (i<b)?a+1:a;
+        pThreadInfo->tableFrom = (i==0)?0:
+            ((threadInfo *)(infos + i - 1))->tableFrom +
+            ((threadInfo *)(infos + i - 1))->tablesOfDumpOut;
+ strcpy(pThreadInfo->dbName, dbInfo->name);
+ pThreadInfo->precision = getPrecisionByString(dbInfo->precision);
+
+ pthread_create(pids + i, NULL, dumpNtbOfDb, pThreadInfo);
+ }
+
+ for (int64_t i = 0; i < threads; i++) {
+ pthread_join(pids[i], NULL);
}
- memset(&tableRecord, 0, sizeof(STableRecord));
- tstrncpy(tableRecord.name, meter, TSDB_TABLE_NAME_LEN);
- tstrncpy(tableRecord.metric, metric, TSDB_TABLE_NAME_LEN);
+ for (int64_t i = 0; i < threads; i++) {
+ threadInfo *pThreadInfo = infos + i;
+ taos_close(pThreadInfo->taos);
+ }
+
+ free(pids);
+ free(infos);
- taosWrite(*fd, &tableRecord, sizeof(STableRecord));
return 0;
}
-static int32_t taosSaveTableOfMetricToTempFile(
- TAOS *taosCon, char* metric,
- int32_t* totalNumOfThread) {
- TAOS_ROW row;
- int fd = -1;
- STableRecord tableRecord;
+static int64_t getNtbCountOfStb(TAOS *taos, char *dbName, char *stbName)
+{
+ int64_t count = 0;
- char* tmpCommand = (char *)malloc(COMMAND_SIZE);
- if (tmpCommand == NULL) {
- errorPrint("%s() LN%d, failed to allocate memory\n", __func__, __LINE__);
- return -1;
- }
+ char command[COMMAND_SIZE];
- sprintf(tmpCommand, "select tbname from %s", metric);
+ sprintf(command, "SELECT COUNT(TBNAME) FROM %s.%s", dbName, stbName);
- TAOS_RES *res = taos_query(taosCon, tmpCommand);
+ TAOS_RES *res = taos_query(taos, command);
int32_t code = taos_errno(res);
if (code != 0) {
- errorPrint("%s() LN%d, failed to run command %s\n",
- __func__, __LINE__, tmpCommand);
- free(tmpCommand);
+ errorPrint("%s() LN%d, failed to run command <%s>. reason: %s\n",
+ __func__, __LINE__, command, taos_errstr(res));
taos_free_result(res);
return -1;
}
- free(tmpCommand);
- char tmpBuf[MAX_FILE_NAME_LEN];
- memset(tmpBuf, 0, MAX_FILE_NAME_LEN);
- sprintf(tmpBuf, ".select-tbname.tmp");
- fd = open(tmpBuf, O_RDWR | O_CREAT | O_TRUNC, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
- if (fd == -1) {
- errorPrint("%s() LN%d, failed to open temp file: %s\n",
- __func__, __LINE__, tmpBuf);
- taos_free_result(res);
- return -1;
- }
+ TAOS_ROW row = NULL;
- TAOS_FIELD *fields = taos_fetch_fields(res);
+ if ((row = taos_fetch_row(res)) != NULL) {
+ count = *(int64_t*)row[TSDB_SHOW_TABLES_NAME_INDEX];
+ }
- int32_t numOfTable = 0;
- while ((row = taos_fetch_row(res)) != NULL) {
+ return count;
+}
- memset(&tableRecord, 0, sizeof(STableRecord));
- tstrncpy(tableRecord.name, (char *)row[0], fields[0].bytes);
- tstrncpy(tableRecord.metric, metric, TSDB_TABLE_NAME_LEN);
+static int64_t dumpNtbOfStbByThreads(
+ TAOS *taos,
+ SDbInfo *dbInfo, char *stbName)
+{
+ int64_t ntbCount = getNtbCountOfStb(taos, dbInfo->name, stbName);
- taosWrite(fd, &tableRecord, sizeof(STableRecord));
- numOfTable++;
+ if (ntbCount <= 0) {
+ return 0;
}
- taos_free_result(res);
- lseek(fd, 0, SEEK_SET);
- int maxThreads = g_args.thread_num;
- int tableOfPerFile ;
- if (numOfTable <= g_args.thread_num) {
- tableOfPerFile = 1;
- maxThreads = numOfTable;
- } else {
- tableOfPerFile = numOfTable / g_args.thread_num;
- if (0 != numOfTable % g_args.thread_num) {
- tableOfPerFile += 1;
- }
- }
+ int threads = g_args.thread_num;
- char* tblBuf = (char*)calloc(1, tableOfPerFile * sizeof(STableRecord));
- if (NULL == tblBuf){
- errorPrint("%s() LN%d, failed to calloc %" PRIzu "\n",
- __func__, __LINE__, tableOfPerFile * sizeof(STableRecord));
- close(fd);
- return -1;
+ int64_t a = ntbCount / threads;
+ if (a < 1) {
+ threads = ntbCount;
+ a = 1;
}
- int32_t numOfThread = *totalNumOfThread;
- int subFd = -1;
- for (; numOfThread <= maxThreads; numOfThread++) {
- memset(tmpBuf, 0, MAX_FILE_NAME_LEN);
- sprintf(tmpBuf, ".tables.tmp.%d", numOfThread);
- subFd = open(tmpBuf, O_RDWR | O_CREAT | O_TRUNC, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
- if (subFd == -1) {
- errorPrint("%s() LN%d, failed to open temp file: %s\n",
- __func__, __LINE__, tmpBuf);
- for (int32_t loopCnt = 0; loopCnt < numOfThread; loopCnt++) {
- sprintf(tmpBuf, ".tables.tmp.%d", loopCnt);
- (void)remove(tmpBuf);
- }
- sprintf(tmpBuf, ".select-tbname.tmp");
- (void)remove(tmpBuf);
- free(tblBuf);
- close(fd);
+ assert(threads);
+ int64_t b = ntbCount % threads;
+
+ pthread_t *pids = calloc(1, threads * sizeof(pthread_t));
+ threadInfo *infos = calloc(1, threads * sizeof(threadInfo));
+ assert(pids);
+ assert(infos);
+
+ for (int64_t i = 0; i < threads; i++) {
+ threadInfo *pThreadInfo = infos + i;
+ pThreadInfo->taos = taos_connect(
+ g_args.host,
+ g_args.user,
+ g_args.password,
+ dbInfo->name,
+ g_args.port
+ );
+ if (NULL == pThreadInfo->taos) {
+ errorPrint("%s() LN%d, Failed to connect to TDengine, reason: %s\n",
+ __func__,
+ __LINE__,
+ taos_errstr(NULL));
+ free(pids);
+ free(infos);
+
return -1;
}
- // read tableOfPerFile for fd, write to subFd
- ssize_t readLen = read(fd, tblBuf, tableOfPerFile * sizeof(STableRecord));
- if (readLen <= 0) {
- close(subFd);
- break;
- }
- taosWrite(subFd, tblBuf, readLen);
- close(subFd);
+ pThreadInfo->threadIndex = i;
+        pThreadInfo->tablesOfDumpOut = (i<b)?a+1:a;
+        pThreadInfo->tableFrom = (i==0)?0:
+            ((threadInfo *)(infos + i - 1))->tableFrom +
+            ((threadInfo *)(infos + i - 1))->tablesOfDumpOut;
+ strcpy(pThreadInfo->dbName, dbInfo->name);
+ pThreadInfo->precision = getPrecisionByString(dbInfo->precision);
+
+ strcpy(pThreadInfo->stbName, stbName);
+ pthread_create(pids + i, NULL, dumpNormalTablesOfStb, pThreadInfo);
+ }
+
+ for (int64_t i = 0; i < threads; i++) {
+ pthread_join(pids[i], NULL);
}
- sprintf(tmpBuf, ".select-tbname.tmp");
- (void)remove(tmpBuf);
- if (fd >= 0) {
- close(fd);
- fd = -1;
+ int64_t records = 0;
+ for (int64_t i = 0; i < threads; i++) {
+ threadInfo *pThreadInfo = infos + i;
+ records += pThreadInfo->rowsOfDumpOut;
+ taos_close(pThreadInfo->taos);
}
- *totalNumOfThread = numOfThread;
+ free(pids);
+ free(infos);
- free(tblBuf);
- return 0;
+ return records;
}
-static int getDbCount()
+static int64_t dumpCreateSTableClauseOfDb(
+ SDbInfo *dbInfo, FILE *fp)
{
- int count;
+ TAOS *taos = taos_connect(g_args.host,
+ g_args.user, g_args.password, dbInfo->name, g_args.port);
+ if (NULL == taos) {
+ errorPrint(
+ "Failed to connect to TDengine server %s by specified database %s\n",
+ g_args.host, dbInfo->name);
+ return 0;
+ }
- TAOS *taos = NULL;
- TAOS_RES *result = NULL;
- char *command = NULL;
TAOS_ROW row;
+ char command[COMMAND_SIZE] = {0};
- command = (char *)malloc(COMMAND_SIZE);
- if (command == NULL) {
- errorPrint("%s() LN%d, failed to allocate command buffer\n", __func__, __LINE__);
- return 0;
+ sprintf(command, "SHOW %s.STABLES", dbInfo->name);
+
+ TAOS_RES* res = taos_query(taos, command);
+ int32_t code = taos_errno(res);
+ if (code != 0) {
+ errorPrint("%s() LN%d, failed to run command <%s>, reason: %s\n",
+ __func__, __LINE__, command, taos_errstr(res));
+ taos_free_result(res);
+ taos_close(taos);
+ exit(-1);
}
- /* Connect to server */
- taos = taos_connect(g_args.host, g_args.user, g_args.password,
- NULL, g_args.port);
+ int64_t superTblCnt = 0;
+ while ((row = taos_fetch_row(res)) != NULL) {
+ if (0 == dumpStable(row[TSDB_SHOW_TABLES_NAME_INDEX], fp, taos, dbInfo)) {
+ superTblCnt ++;
+ }
+ }
+
+ taos_free_result(res);
+
+ fprintf(g_fpOfResult,
+ "# super table counter: %"PRId64"\n",
+ superTblCnt);
+ g_resultStatistics.totalSuperTblsOfDumpOut += superTblCnt;
+
+ taos_close(taos);
+
+ return superTblCnt;
+}
+
+static int64_t dumpNTablesOfDb(SDbInfo *dbInfo)
+{
+ TAOS *taos = taos_connect(g_args.host,
+ g_args.user, g_args.password, dbInfo->name, g_args.port);
if (NULL == taos) {
- errorPrint("Failed to connect to TDengine server %s\n", g_args.host);
- free(command);
+ errorPrint(
+ "Failed to connect to TDengine server %s by specified database %s\n",
+ g_args.host, dbInfo->name);
return 0;
}
- sprintf(command, "show databases");
+ char command[COMMAND_SIZE];
+ TAOS_RES *result;
+ int32_t code;
+
+ sprintf(command, "USE %s", dbInfo->name);
result = taos_query(taos, command);
- int32_t code = taos_errno(result);
+ code = taos_errno(result);
+ if (code != 0) {
+ errorPrint("invalid database %s, reason: %s\n",
+ dbInfo->name, taos_errstr(result));
+ taos_close(taos);
+ return 0;
+ }
- if (0 != code) {
- errorPrint("%s() LN%d, failed to run command: %s, reason: %s\n",
- __func__, __LINE__, command, taos_errstr(result));
- free(command);
+ sprintf(command, "SHOW TABLES");
+ result = taos_query(taos, command);
+ code = taos_errno(result);
+ if (code != 0) {
+ errorPrint("Failed to show %s\'s tables, reason: %s\n",
+ dbInfo->name, taos_errstr(result));
+ taos_close(taos);
return 0;
}
- TAOS_FIELD *fields = taos_fetch_fields(result);
+ g_tablesList = calloc(1, dbInfo->ntables * sizeof(TableInfo));
- while ((row = taos_fetch_row(result)) != NULL) {
- // sys database name : 'log', but subsequent version changed to 'log'
- if ((strncasecmp(row[TSDB_SHOW_DB_NAME_INDEX], "log",
- fields[TSDB_SHOW_DB_NAME_INDEX].bytes) == 0)
- && (!g_args.allow_sys)) {
- continue;
- }
+ TAOS_ROW row;
+ int64_t count = 0;
+ while(NULL != (row = taos_fetch_row(result))) {
+ debugPrint("%s() LN%d, No.\t%"PRId64" table name: %s\n",
+ __func__, __LINE__,
+ count, (char *)row[TSDB_SHOW_TABLES_NAME_INDEX]);
+ tstrncpy(((TableInfo *)(g_tablesList + count))->name,
+ (char *)row[TSDB_SHOW_TABLES_NAME_INDEX], TSDB_TABLE_NAME_LEN);
+ char *stbName = (char *) row[TSDB_SHOW_TABLES_METRIC_INDEX];
+ if (stbName) {
+ tstrncpy(((TableInfo *)(g_tablesList + count))->stable,
+ (char *)row[TSDB_SHOW_TABLES_METRIC_INDEX], TSDB_TABLE_NAME_LEN);
+ ((TableInfo *)(g_tablesList + count))->belongStb = true;
+ }
+ count ++;
+ }
+ taos_close(taos);
- if (g_args.databases) { // input multi dbs
- for (int i = 0; g_args.arg_list[i]; i++) {
- if (strncasecmp(g_args.arg_list[i],
- (char *)row[TSDB_SHOW_DB_NAME_INDEX],
- fields[TSDB_SHOW_DB_NAME_INDEX].bytes) == 0)
- goto _dump_db_point;
- }
- continue;
- } else if (!g_args.all_databases) { // only input one db
- if (strncasecmp(g_args.arg_list[0],
- (char *)row[TSDB_SHOW_DB_NAME_INDEX],
- fields[TSDB_SHOW_DB_NAME_INDEX].bytes) == 0)
- goto _dump_db_point;
- else
- continue;
- }
+ int64_t records = dumpNtbOfDbByThreads(dbInfo, count);
-_dump_db_point:
+ free(g_tablesList);
+ g_tablesList = NULL;
- count++;
+ return records;
+}
- if (g_args.databases) {
- if (count > g_args.arg_list_len) break;
+static int64_t dumpWholeDatabase(SDbInfo *dbInfo, FILE *fp)
+{
+ taosDumpCreateDbClause(dbInfo, g_args.with_property, fp);
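+    // dump order for a whole database: CREATE DATABASE clause first,
+    // then the super-table schemas, then every normal/child table with its data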
- } else if (!g_args.all_databases) {
- if (count >= 1) break;
- }
- }
+ fprintf(g_fpOfResult, "\n#### database: %s\n",
+ dbInfo->name);
+ g_resultStatistics.totalDatabasesOfDumpOut++;
- if (count == 0) {
- errorPrint("%d databases valid to dump\n", count);
- }
+ dumpCreateSTableClauseOfDb(dbInfo, fp);
- free(command);
- return count;
+ return dumpNTablesOfDb(dbInfo);
}
static int taosDumpOut() {
TAOS *taos = NULL;
TAOS_RES *result = NULL;
- char *command = NULL;
TAOS_ROW row;
FILE *fp = NULL;
int32_t count = 0;
- STableRecordInfo tableRecordInfo;
+ TableRecordInfo tableRecordInfo;
char tmpBuf[4096] = {0};
if (g_args.outpath[0] != 0) {
@@ -1230,25 +1386,24 @@ static int taosDumpOut() {
return -1;
}
- g_args.dbCount = getDbCount();
+ g_args.dumpDbCount = getDumpDbCount();
+ debugPrint("%s() LN%d, dump db count: %d\n",
+ __func__, __LINE__, g_args.dumpDbCount);
- if (0 == g_args.dbCount) {
- errorPrint("%d databases valid to dump\n", g_args.dbCount);
+ if (0 == g_args.dumpDbCount) {
+ errorPrint("%d databases valid to dump\n", g_args.dumpDbCount);
+ fclose(fp);
return -1;
}
- g_dbInfos = (SDbInfo **)calloc(g_args.dbCount, sizeof(SDbInfo *));
+ g_dbInfos = (SDbInfo **)calloc(g_args.dumpDbCount, sizeof(SDbInfo *));
if (g_dbInfos == NULL) {
errorPrint("%s() LN%d, failed to allocate memory\n",
__func__, __LINE__);
goto _exit_failure;
}
- command = (char *)malloc(COMMAND_SIZE);
- if (command == NULL) {
- errorPrint("%s() LN%d, failed to allocate memory\n", __func__, __LINE__);
- goto _exit_failure;
- }
+ char command[COMMAND_SIZE];
/* Connect to server */
taos = taos_connect(g_args.host, g_args.user, g_args.password,
@@ -1268,7 +1423,7 @@ static int taosDumpOut() {
int32_t code = taos_errno(result);
if (code != 0) {
- errorPrint("%s() LN%d, failed to run command: %s, reason: %s\n",
+ errorPrint("%s() LN%d, failed to run command <%s>, reason: %s\n",
__func__, __LINE__, command, taos_errstr(result));
goto _exit_failure;
}
@@ -1284,24 +1439,18 @@ static int taosDumpOut() {
}
if (g_args.databases) { // input multi dbs
- for (int i = 0; g_args.arg_list[i]; i++) {
- if (strncasecmp(g_args.arg_list[i],
- (char *)row[TSDB_SHOW_DB_NAME_INDEX],
- fields[TSDB_SHOW_DB_NAME_INDEX].bytes) == 0)
- goto _dump_db_point;
+ if (inDatabasesSeq(
+ (char *)row[TSDB_SHOW_DB_NAME_INDEX],
+ fields[TSDB_SHOW_DB_NAME_INDEX].bytes) != 0) {
+ continue;
}
- continue;
} else if (!g_args.all_databases) { // only input one db
if (strncasecmp(g_args.arg_list[0],
(char *)row[TSDB_SHOW_DB_NAME_INDEX],
- fields[TSDB_SHOW_DB_NAME_INDEX].bytes) == 0)
- goto _dump_db_point;
- else
+ fields[TSDB_SHOW_DB_NAME_INDEX].bytes) != 0)
continue;
}
-_dump_db_point:
-
g_dbInfos[count] = (SDbInfo *)calloc(1, sizeof(SDbInfo));
if (g_dbInfos[count] == NULL) {
errorPrint("%s() LN%d, failed to allocate %"PRIu64" memory\n",
@@ -1309,41 +1458,59 @@ _dump_db_point:
goto _exit_failure;
}
+ okPrint("%s exists\n", (char *)row[TSDB_SHOW_DB_NAME_INDEX]);
tstrncpy(g_dbInfos[count]->name, (char *)row[TSDB_SHOW_DB_NAME_INDEX],
- min(TSDB_DB_NAME_LEN, fields[TSDB_SHOW_DB_NAME_INDEX].bytes + 1));
+ min(TSDB_DB_NAME_LEN,
+ fields[TSDB_SHOW_DB_NAME_INDEX].bytes + 1));
if (g_args.with_property) {
- g_dbInfos[count]->ntables = *((int32_t *)row[TSDB_SHOW_DB_NTABLES_INDEX]);
- g_dbInfos[count]->vgroups = *((int32_t *)row[TSDB_SHOW_DB_VGROUPS_INDEX]);
- g_dbInfos[count]->replica = *((int16_t *)row[TSDB_SHOW_DB_REPLICA_INDEX]);
- g_dbInfos[count]->quorum = *((int16_t *)row[TSDB_SHOW_DB_QUORUM_INDEX]);
- g_dbInfos[count]->days = *((int16_t *)row[TSDB_SHOW_DB_DAYS_INDEX]);
-
- tstrncpy(g_dbInfos[count]->keeplist, (char *)row[TSDB_SHOW_DB_KEEP_INDEX],
+ g_dbInfos[count]->ntables =
+ *((int32_t *)row[TSDB_SHOW_DB_NTABLES_INDEX]);
+ g_dbInfos[count]->vgroups =
+ *((int32_t *)row[TSDB_SHOW_DB_VGROUPS_INDEX]);
+ g_dbInfos[count]->replica =
+ *((int16_t *)row[TSDB_SHOW_DB_REPLICA_INDEX]);
+ g_dbInfos[count]->quorum =
+ *((int16_t *)row[TSDB_SHOW_DB_QUORUM_INDEX]);
+ g_dbInfos[count]->days =
+ *((int16_t *)row[TSDB_SHOW_DB_DAYS_INDEX]);
+
+ tstrncpy(g_dbInfos[count]->keeplist,
+ (char *)row[TSDB_SHOW_DB_KEEP_INDEX],
min(32, fields[TSDB_SHOW_DB_KEEP_INDEX].bytes + 1));
//g_dbInfos[count]->daysToKeep = *((int16_t *)row[TSDB_SHOW_DB_KEEP_INDEX]);
//g_dbInfos[count]->daysToKeep1;
//g_dbInfos[count]->daysToKeep2;
- g_dbInfos[count]->cache = *((int32_t *)row[TSDB_SHOW_DB_CACHE_INDEX]);
- g_dbInfos[count]->blocks = *((int32_t *)row[TSDB_SHOW_DB_BLOCKS_INDEX]);
- g_dbInfos[count]->minrows = *((int32_t *)row[TSDB_SHOW_DB_MINROWS_INDEX]);
- g_dbInfos[count]->maxrows = *((int32_t *)row[TSDB_SHOW_DB_MAXROWS_INDEX]);
- g_dbInfos[count]->wallevel = *((int8_t *)row[TSDB_SHOW_DB_WALLEVEL_INDEX]);
- g_dbInfos[count]->fsync = *((int32_t *)row[TSDB_SHOW_DB_FSYNC_INDEX]);
- g_dbInfos[count]->comp = (int8_t)(*((int8_t *)row[TSDB_SHOW_DB_COMP_INDEX]));
- g_dbInfos[count]->cachelast = (int8_t)(*((int8_t *)row[TSDB_SHOW_DB_CACHELAST_INDEX]));
+ g_dbInfos[count]->cache =
+ *((int32_t *)row[TSDB_SHOW_DB_CACHE_INDEX]);
+ g_dbInfos[count]->blocks =
+ *((int32_t *)row[TSDB_SHOW_DB_BLOCKS_INDEX]);
+ g_dbInfos[count]->minrows =
+ *((int32_t *)row[TSDB_SHOW_DB_MINROWS_INDEX]);
+ g_dbInfos[count]->maxrows =
+ *((int32_t *)row[TSDB_SHOW_DB_MAXROWS_INDEX]);
+ g_dbInfos[count]->wallevel =
+ *((int8_t *)row[TSDB_SHOW_DB_WALLEVEL_INDEX]);
+ g_dbInfos[count]->fsync =
+ *((int32_t *)row[TSDB_SHOW_DB_FSYNC_INDEX]);
+ g_dbInfos[count]->comp =
+ (int8_t)(*((int8_t *)row[TSDB_SHOW_DB_COMP_INDEX]));
+ g_dbInfos[count]->cachelast =
+ (int8_t)(*((int8_t *)row[TSDB_SHOW_DB_CACHELAST_INDEX]));
tstrncpy(g_dbInfos[count]->precision,
(char *)row[TSDB_SHOW_DB_PRECISION_INDEX],
DB_PRECISION_LEN);
- g_dbInfos[count]->update = *((int8_t *)row[TSDB_SHOW_DB_UPDATE_INDEX]);
+ g_dbInfos[count]->update =
+ *((int8_t *)row[TSDB_SHOW_DB_UPDATE_INDEX]);
}
count++;
if (g_args.databases) {
- if (count > g_args.arg_list_len) break;
-
+ if (count > g_args.dumpDbCount)
+ break;
} else if (!g_args.all_databases) {
- if (count >= 1) break;
+ if (count >= 1)
+ break;
}
}
@@ -1352,94 +1519,52 @@ _dump_db_point:
goto _exit_failure;
}
- if (g_args.databases || g_args.all_databases) { // case: taosdump --databases dbx dby ... OR taosdump --all-databases
+ if (g_args.databases || g_args.all_databases) { // case: taosdump --databases dbx,dby ... OR taosdump --all-databases
for (int i = 0; i < count; i++) {
- taosDumpDb(g_dbInfos[i], fp, taos);
+ int64_t records = 0;
+ records = dumpWholeDatabase(g_dbInfos[i], fp);
+ if (records >= 0) {
+ okPrint("Database %s dumped\n", g_dbInfos[i]->name);
+ g_totalDumpOutRows += records;
+ }
}
} else {
- if (g_args.arg_list_len == 1) { // case: taosdump
- taosDumpDb(g_dbInfos[0], fp, taos);
- } else { // case: taosdump tablex tabley ...
+ if (1 == g_args.arg_list_len) {
+ int64_t records = dumpWholeDatabase(g_dbInfos[0], fp);
+ if (records >= 0) {
+ okPrint("Database %s dumped\n", g_dbInfos[0]->name);
+ g_totalDumpOutRows += records;
+ }
+ } else {
taosDumpCreateDbClause(g_dbInfos[0], g_args.with_property, fp);
- fprintf(g_fpOfResult, "\n#### database: %s\n",
- g_dbInfos[0]->name);
- g_resultStatistics.totalDatabasesOfDumpOut++;
-
- sprintf(command, "use %s", g_dbInfos[0]->name);
+ }
- result = taos_query(taos, command);
- code = taos_errno(result);
- if (code != 0) {
- errorPrint("invalid database %s\n", g_dbInfos[0]->name);
- goto _exit_failure;
+ int superTblCnt = 0 ;
+ for (int i = 1; g_args.arg_list[i]; i++) {
+ if (taosGetTableRecordInfo(g_dbInfos[0]->name,
+ g_args.arg_list[i],
+ &tableRecordInfo, taos) < 0) {
+ errorPrint("input the invalid table %s\n",
+ g_args.arg_list[i]);
+ continue;
}
- fprintf(fp, "USE %s;\n\n", g_dbInfos[0]->name);
-
- int32_t totalNumOfThread = 1; // 0: all normal table into .tables.tmp.0
- int normalTblFd = -1;
- int32_t retCode;
- int superTblCnt = 0 ;
- for (int i = 1; g_args.arg_list[i]; i++) {
- if (taosGetTableRecordInfo(g_args.arg_list[i],
- &tableRecordInfo, taos) < 0) {
- errorPrint("input the invalid table %s\n",
- g_args.arg_list[i]);
- continue;
- }
-
- if (tableRecordInfo.isMetric) { // dump all table of this metric
- int ret = taosDumpStable(
- tableRecordInfo.tableRecord.metric,
- fp, taos, g_dbInfos[0]->name);
- if (0 == ret) {
- superTblCnt++;
- }
- retCode = taosSaveTableOfMetricToTempFile(
- taos, tableRecordInfo.tableRecord.metric,
- &totalNumOfThread);
- } else {
- if (tableRecordInfo.tableRecord.metric[0] != '\0') { // dump this sub table and it's metric
- int ret = taosDumpStable(
- tableRecordInfo.tableRecord.metric,
- fp, taos, g_dbInfos[0]->name);
- if (0 == ret) {
- superTblCnt++;
- }
- }
- retCode = taosSaveAllNormalTableToTempFile(
- taos, tableRecordInfo.tableRecord.name,
- tableRecordInfo.tableRecord.metric, &normalTblFd);
- }
-
- if (retCode < 0) {
- if (-1 != normalTblFd){
- taosClose(normalTblFd);
- }
- goto _clean_tmp_file;
+ int64_t records = 0;
+ if (tableRecordInfo.isStable) { // dump all table of this stable
+ int ret = dumpStable(
+ tableRecordInfo.tableRecord.stable,
+ fp, taos, g_dbInfos[0]);
+ if (ret >= 0) {
+ superTblCnt++;
+ records = dumpNtbOfStbByThreads(taos, g_dbInfos[0], g_args.arg_list[i]);
}
+ } else {
+ records = dumpNormalTableWithoutStb(taos, g_dbInfos[0], g_args.arg_list[i]);
}
- // TODO: save dump super table into result_output.txt
- fprintf(g_fpOfResult, "# super table counter: %d\n",
- superTblCnt);
- g_resultStatistics.totalSuperTblsOfDumpOut += superTblCnt;
-
- if (-1 != normalTblFd){
- taosClose(normalTblFd);
- }
-
- // start multi threads to dumpout
-
- taosStartDumpOutWorkThreads(totalNumOfThread,
- g_dbInfos[0]->name,
- getPrecisionByString(g_dbInfos[0]->precision));
-
- char tmpFileName[MAX_FILE_NAME_LEN];
-_clean_tmp_file:
- for (int loopCnt = 0; loopCnt < totalNumOfThread; loopCnt++) {
- sprintf(tmpFileName, ".tables.tmp.%d", loopCnt);
- remove(tmpFileName);
+ if (records >= 0) {
+ okPrint("table: %s dumped\n", g_args.arg_list[i]);
+ g_totalDumpOutRows += records;
}
}
}
@@ -1448,7 +1573,6 @@ _clean_tmp_file:
fclose(fp);
taos_close(taos);
taos_free_result(result);
- tfree(command);
taosFreeDbInfos();
fprintf(stderr, "dump out rows: %" PRId64 "\n", g_totalDumpOutRows);
return 0;
@@ -1457,7 +1581,6 @@ _exit_failure:
fclose(fp);
taos_close(taos);
taos_free_result(result);
- tfree(command);
taosFreeDbInfos();
errorPrint("dump out rows: %" PRId64 "\n", g_totalDumpOutRows);
return -1;
@@ -1465,18 +1588,18 @@ _exit_failure:
static int taosGetTableDes(
char* dbName, char *table,
- STableDef *stableDes, TAOS* taosCon, bool isSuperTable) {
+ STableDef *stableDes, TAOS* taos, bool isSuperTable) {
TAOS_ROW row = NULL;
TAOS_RES* res = NULL;
- int count = 0;
+ int colCount = 0;
char sqlstr[COMMAND_SIZE];
sprintf(sqlstr, "describe %s.%s;", dbName, table);
- res = taos_query(taosCon, sqlstr);
+ res = taos_query(taos, sqlstr);
int32_t code = taos_errno(res);
if (code != 0) {
- errorPrint("%s() LN%d, failed to run command <%s>, reason:%s\n",
+ errorPrint("%s() LN%d, failed to run command <%s>, reason: %s\n",
__func__, __LINE__, sqlstr, taos_errstr(res));
taos_free_result(res);
return -1;
@@ -1486,41 +1609,41 @@ static int taosGetTableDes(
tstrncpy(stableDes->name, table, TSDB_TABLE_NAME_LEN);
while ((row = taos_fetch_row(res)) != NULL) {
- tstrncpy(stableDes->cols[count].field,
+ tstrncpy(stableDes->cols[colCount].field,
(char *)row[TSDB_DESCRIBE_METRIC_FIELD_INDEX],
min(TSDB_COL_NAME_LEN + 1,
fields[TSDB_DESCRIBE_METRIC_FIELD_INDEX].bytes + 1));
- tstrncpy(stableDes->cols[count].type,
+ tstrncpy(stableDes->cols[colCount].type,
(char *)row[TSDB_DESCRIBE_METRIC_TYPE_INDEX],
min(16, fields[TSDB_DESCRIBE_METRIC_TYPE_INDEX].bytes + 1));
- stableDes->cols[count].length =
+ stableDes->cols[colCount].length =
*((int *)row[TSDB_DESCRIBE_METRIC_LENGTH_INDEX]);
- tstrncpy(stableDes->cols[count].note,
+ tstrncpy(stableDes->cols[colCount].note,
(char *)row[TSDB_DESCRIBE_METRIC_NOTE_INDEX],
min(COL_NOTE_LEN,
fields[TSDB_DESCRIBE_METRIC_NOTE_INDEX].bytes + 1));
- count++;
+ colCount++;
}
taos_free_result(res);
res = NULL;
if (isSuperTable) {
- return count;
+ return colCount;
}
// if child-table have tag, using select tagName from table to get tagValue
- for (int i = 0 ; i < count; i++) {
+ for (int i = 0 ; i < colCount; i++) {
if (strcmp(stableDes->cols[i].note, "TAG") != 0) continue;
sprintf(sqlstr, "select %s from %s.%s",
stableDes->cols[i].field, dbName, table);
- res = taos_query(taosCon, sqlstr);
+ res = taos_query(taos, sqlstr);
code = taos_errno(res);
if (code != 0) {
- errorPrint("%s() LN%d, failed to run command <%s>, reason:%s\n",
+ errorPrint("%s() LN%d, failed to run command <%s>, reason: %s\n",
__func__, __LINE__, sqlstr, taos_errstr(res));
taos_free_result(res);
return -1;
@@ -1536,7 +1659,7 @@ static int taosGetTableDes(
return -1;
}
- if (row[0] == NULL) {
+ if (row[TSDB_SHOW_TABLES_NAME_INDEX] == NULL) {
sprintf(stableDes->cols[i].note, "%s", "NULL");
taos_free_result(res);
res = NULL;
@@ -1549,32 +1672,33 @@ static int taosGetTableDes(
switch (fields[0].type) {
case TSDB_DATA_TYPE_BOOL:
sprintf(stableDes->cols[i].note, "%d",
- ((((int32_t)(*((char *)row[0]))) == 1) ? 1 : 0));
+ ((((int32_t)(*((char *)row[TSDB_SHOW_TABLES_NAME_INDEX]))) == 1) ? 1 : 0));
break;
case TSDB_DATA_TYPE_TINYINT:
- sprintf(stableDes->cols[i].note, "%d", *((int8_t *)row[0]));
+ sprintf(stableDes->cols[i].note, "%d",
+ *((int8_t *)row[TSDB_SHOW_TABLES_NAME_INDEX]));
break;
case TSDB_DATA_TYPE_SMALLINT:
- sprintf(stableDes->cols[i].note, "%d", *((int16_t *)row[0]));
+ sprintf(stableDes->cols[i].note, "%d", *((int16_t *)row[TSDB_SHOW_TABLES_NAME_INDEX]));
break;
case TSDB_DATA_TYPE_INT:
- sprintf(stableDes->cols[i].note, "%d", *((int32_t *)row[0]));
+ sprintf(stableDes->cols[i].note, "%d", *((int32_t *)row[TSDB_SHOW_TABLES_NAME_INDEX]));
break;
case TSDB_DATA_TYPE_BIGINT:
- sprintf(stableDes->cols[i].note, "%" PRId64 "", *((int64_t *)row[0]));
+ sprintf(stableDes->cols[i].note, "%" PRId64 "", *((int64_t *)row[TSDB_SHOW_TABLES_NAME_INDEX]));
break;
case TSDB_DATA_TYPE_FLOAT:
- sprintf(stableDes->cols[i].note, "%f", GET_FLOAT_VAL(row[0]));
+ sprintf(stableDes->cols[i].note, "%f", GET_FLOAT_VAL(row[TSDB_SHOW_TABLES_NAME_INDEX]));
break;
case TSDB_DATA_TYPE_DOUBLE:
- sprintf(stableDes->cols[i].note, "%f", GET_DOUBLE_VAL(row[0]));
+ sprintf(stableDes->cols[i].note, "%f", GET_DOUBLE_VAL(row[TSDB_SHOW_TABLES_NAME_INDEX]));
break;
case TSDB_DATA_TYPE_BINARY:
{
memset(stableDes->cols[i].note, 0, sizeof(stableDes->cols[i].note));
stableDes->cols[i].note[0] = '\'';
char tbuf[COL_NOTE_LEN];
- converStringToReadable((char *)row[0], length[0], tbuf, COL_NOTE_LEN);
+ converStringToReadable((char *)row[TSDB_SHOW_TABLES_NAME_INDEX], length[0], tbuf, COL_NOTE_LEN);
char* pstr = stpcpy(&(stableDes->cols[i].note[1]), tbuf);
*(pstr++) = '\'';
break;
@@ -1583,18 +1707,18 @@ static int taosGetTableDes(
{
memset(stableDes->cols[i].note, 0, sizeof(stableDes->cols[i].note));
char tbuf[COL_NOTE_LEN-2]; // need reserve 2 bytes for ' '
- convertNCharToReadable((char *)row[0], length[0], tbuf, COL_NOTE_LEN);
+ convertNCharToReadable((char *)row[TSDB_SHOW_TABLES_NAME_INDEX], length[0], tbuf, COL_NOTE_LEN);
sprintf(stableDes->cols[i].note, "\'%s\'", tbuf);
break;
}
case TSDB_DATA_TYPE_TIMESTAMP:
- sprintf(stableDes->cols[i].note, "%" PRId64 "", *(int64_t *)row[0]);
+ sprintf(stableDes->cols[i].note, "%" PRId64 "", *(int64_t *)row[TSDB_SHOW_TABLES_NAME_INDEX]);
#if 0
if (!g_args.mysqlFlag) {
- sprintf(tableDes->cols[i].note, "%" PRId64 "", *(int64_t *)row[0]);
+ sprintf(tableDes->cols[i].note, "%" PRId64 "", *(int64_t *)row[TSDB_SHOW_TABLES_NAME_INDEX]);
} else {
char buf[64] = "\0";
- int64_t ts = *((int64_t *)row[0]);
+ int64_t ts = *((int64_t *)row[TSDB_SHOW_TABLES_NAME_INDEX]);
time_t tt = (time_t)(ts / 1000);
struct tm *ptm = localtime(&tt);
strftime(buf, 64, "%y-%m-%d %H:%M:%S", ptm);
@@ -1602,499 +1726,137 @@ static int taosGetTableDes(
}
#endif
break;
- default:
- break;
- }
-
- taos_free_result(res);
- res = NULL;
- }
-
- return count;
-}
-
-static int convertSchemaToAvroSchema(STableDef *stableDes, char **avroSchema)
-{
- errorPrint("%s() LN%d TODO: covert table schema to avro schema\n",
- __func__, __LINE__);
- return 0;
-}
-
-static int32_t taosDumpTable(
- char *tbName, char *metric,
- FILE *fp, TAOS* taosCon, char* dbName, int precision) {
- int count = 0;
-
- STableDef *tableDes = (STableDef *)calloc(1, sizeof(STableDef)
- + sizeof(SColDes) * TSDB_MAX_COLUMNS);
-
- if (metric != NULL && metric[0] != '\0') { // dump table schema which is created by using super table
- /*
- count = taosGetTableDes(metric, tableDes, taosCon);
-
- if (count < 0) {
- free(tableDes);
- return -1;
- }
-
- taosDumpCreateTableClause(tableDes, count, fp);
-
- memset(tableDes, 0, sizeof(STableDef) + sizeof(SColDes) * TSDB_MAX_COLUMNS);
- */
-
- count = taosGetTableDes(dbName, tbName, tableDes, taosCon, false);
-
- if (count < 0) {
- free(tableDes);
- return -1;
- }
-
- // create child-table using super-table
- taosDumpCreateMTableClause(tableDes, metric, count, fp, dbName);
-
- } else { // dump table definition
- count = taosGetTableDes(dbName, tbName, tableDes, taosCon, false);
-
- if (count < 0) {
- free(tableDes);
- return -1;
- }
-
- // create normal-table or super-table
- taosDumpCreateTableClause(tableDes, count, fp, dbName);
- }
-
- char *jsonAvroSchema = NULL;
- if (g_args.avro) {
- convertSchemaToAvroSchema(tableDes, &jsonAvroSchema);
- }
-
- free(tableDes);
-
- int32_t ret = 0;
- if (!g_args.schemaonly) {
- ret = taosDumpTableData(fp, tbName, taosCon, dbName, precision,
- jsonAvroSchema);
- }
-
- return ret;
-}
-
-static void taosDumpCreateDbClause(
- SDbInfo *dbInfo, bool isDumpProperty, FILE *fp) {
- char sqlstr[TSDB_MAX_SQL_LEN] = {0};
-
- char *pstr = sqlstr;
- pstr += sprintf(pstr, "CREATE DATABASE IF NOT EXISTS %s ", dbInfo->name);
- if (isDumpProperty) {
- pstr += sprintf(pstr,
- "REPLICA %d QUORUM %d DAYS %d KEEP %s CACHE %d BLOCKS %d MINROWS %d MAXROWS %d FSYNC %d CACHELAST %d COMP %d PRECISION '%s' UPDATE %d",
- dbInfo->replica, dbInfo->quorum, dbInfo->days,
- dbInfo->keeplist,
- dbInfo->cache,
- dbInfo->blocks, dbInfo->minrows, dbInfo->maxrows,
- dbInfo->fsync,
- dbInfo->cachelast,
- dbInfo->comp, dbInfo->precision, dbInfo->update);
- }
-
- pstr += sprintf(pstr, ";");
- fprintf(fp, "%s\n\n", sqlstr);
-}
-
-static void* taosDumpOutWorkThreadFp(void *arg)
-{
- SThreadParaObj *pThread = (SThreadParaObj*)arg;
- STableRecord tableRecord;
- int fd;
-
- setThreadName("dumpOutWorkThrd");
-
- char tmpBuf[4096] = {0};
- sprintf(tmpBuf, ".tables.tmp.%d", pThread->threadIndex);
- fd = open(tmpBuf, O_RDWR | O_CREAT, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
- if (fd == -1) {
- errorPrint("%s() LN%d, failed to open temp file: %s\n",
- __func__, __LINE__, tmpBuf);
- return NULL;
- }
-
- FILE *fp = NULL;
- memset(tmpBuf, 0, 4096);
-
- if (g_args.outpath[0] != 0) {
- sprintf(tmpBuf, "%s/%s.tables.%d.sql",
- g_args.outpath, pThread->dbName, pThread->threadIndex);
- } else {
- sprintf(tmpBuf, "%s.tables.%d.sql",
- pThread->dbName, pThread->threadIndex);
- }
-
- fp = fopen(tmpBuf, "w");
- if (fp == NULL) {
- errorPrint("%s() LN%d, failed to open file %s\n",
- __func__, __LINE__, tmpBuf);
- close(fd);
- return NULL;
- }
-
- memset(tmpBuf, 0, 4096);
- sprintf(tmpBuf, "use %s", pThread->dbName);
-
- TAOS_RES* tmpResult = taos_query(pThread->taosCon, tmpBuf);
- int32_t code = taos_errno(tmpResult);
- if (code != 0) {
- errorPrint("%s() LN%d, invalid database %s. reason: %s\n",
- __func__, __LINE__, pThread->dbName, taos_errstr(tmpResult));
- taos_free_result(tmpResult);
- fclose(fp);
- close(fd);
- return NULL;
- }
-
-#if 0
- int fileNameIndex = 1;
- int tablesInOneFile = 0;
-#endif
- int64_t lastRowsPrint = 5000000;
- fprintf(fp, "USE %s;\n\n", pThread->dbName);
- while (1) {
- ssize_t readLen = read(fd, &tableRecord, sizeof(STableRecord));
- if (readLen <= 0) break;
-
- int ret = taosDumpTable(
- tableRecord.name, tableRecord.metric,
- fp, pThread->taosCon, pThread->dbName,
- pThread->precision);
- if (ret >= 0) {
- // TODO: sum table count and table rows by self
- pThread->tablesOfDumpOut++;
- pThread->rowsOfDumpOut += ret;
-
- if (pThread->rowsOfDumpOut >= lastRowsPrint) {
- printf(" %"PRId64 " rows already be dumpout from database %s\n",
- pThread->rowsOfDumpOut, pThread->dbName);
- lastRowsPrint += 5000000;
- }
-
-#if 0
- tablesInOneFile++;
- if (tablesInOneFile >= g_args.table_batch) {
- fclose(fp);
- tablesInOneFile = 0;
-
- memset(tmpBuf, 0, 4096);
- if (g_args.outpath[0] != 0) {
- sprintf(tmpBuf, "%s/%s.tables.%d-%d.sql",
- g_args.outpath, pThread->dbName,
- pThread->threadIndex, fileNameIndex);
- } else {
- sprintf(tmpBuf, "%s.tables.%d-%d.sql",
- pThread->dbName, pThread->threadIndex, fileNameIndex);
- }
- fileNameIndex++;
-
- fp = fopen(tmpBuf, "w");
- if (fp == NULL) {
- errorPrint("%s() LN%d, failed to open file %s\n",
- __func__, __LINE__, tmpBuf);
- close(fd);
- taos_free_result(tmpResult);
- return NULL;
- }
- }
-#endif
- }
- }
-
- taos_free_result(tmpResult);
- close(fd);
- fclose(fp);
-
- return NULL;
-}
-
-static void taosStartDumpOutWorkThreads(int32_t numOfThread, char *dbName, int precision)
-{
- pthread_attr_t thattr;
- SThreadParaObj *threadObj =
- (SThreadParaObj *)calloc(numOfThread, sizeof(SThreadParaObj));
-
- if (threadObj == NULL) {
- errorPrint("%s() LN%d, memory allocation failed!\n",
- __func__, __LINE__);
- return;
- }
-
- for (int t = 0; t < numOfThread; ++t) {
- SThreadParaObj *pThread = threadObj + t;
- pThread->rowsOfDumpOut = 0;
- pThread->tablesOfDumpOut = 0;
- pThread->threadIndex = t;
- pThread->totalThreads = numOfThread;
- tstrncpy(pThread->dbName, dbName, TSDB_DB_NAME_LEN);
- pThread->precision = precision;
- pThread->taosCon = taos_connect(g_args.host, g_args.user, g_args.password,
- NULL, g_args.port);
- if (pThread->taosCon == NULL) {
- errorPrint("Failed to connect to TDengine server %s\n", g_args.host);
- free(threadObj);
- return;
- }
- pthread_attr_init(&thattr);
- pthread_attr_setdetachstate(&thattr, PTHREAD_CREATE_JOINABLE);
-
- if (pthread_create(&(pThread->threadID), &thattr,
- taosDumpOutWorkThreadFp,
- (void*)pThread) != 0) {
- errorPrint("%s() LN%d, thread:%d failed to start\n",
- __func__, __LINE__, pThread->threadIndex);
- exit(-1);
- }
- }
-
- for (int32_t t = 0; t < numOfThread; ++t) {
- pthread_join(threadObj[t].threadID, NULL);
- }
-
- // TODO: sum all thread dump table count and rows of per table, then save into result_output.txt
- int64_t totalRowsOfDumpOut = 0;
- int64_t totalChildTblsOfDumpOut = 0;
- for (int32_t t = 0; t < numOfThread; ++t) {
- totalChildTblsOfDumpOut += threadObj[t].tablesOfDumpOut;
- totalRowsOfDumpOut += threadObj[t].rowsOfDumpOut;
- }
-
- fprintf(g_fpOfResult, "# child table counter: %"PRId64"\n",
- totalChildTblsOfDumpOut);
- fprintf(g_fpOfResult, "# row counter: %"PRId64"\n",
- totalRowsOfDumpOut);
- g_resultStatistics.totalChildTblsOfDumpOut += totalChildTblsOfDumpOut;
- g_resultStatistics.totalRowsOfDumpOut += totalRowsOfDumpOut;
- free(threadObj);
-}
-
-static int32_t taosDumpStable(char *table, FILE *fp,
- TAOS* taosCon, char* dbName) {
-
- uint64_t sizeOfTableDes =
- (uint64_t)(sizeof(STableDef) + sizeof(SColDes) * TSDB_MAX_COLUMNS);
- STableDef *stableDes = (STableDef *)calloc(1, sizeOfTableDes);
- if (NULL == stableDes) {
- errorPrint("%s() LN%d, failed to allocate %"PRIu64" memory\n",
- __func__, __LINE__, sizeOfTableDes);
- exit(-1);
- }
-
- int count = taosGetTableDes(dbName, table, stableDes, taosCon, true);
-
- if (count < 0) {
- free(stableDes);
- errorPrint("%s() LN%d, failed to get stable[%s] schema\n",
- __func__, __LINE__, table);
- exit(-1);
- }
-
- taosDumpCreateTableClause(stableDes, count, fp, dbName);
-
- free(stableDes);
- return 0;
-}
-
-static int32_t taosDumpCreateSuperTableClause(TAOS* taosCon, char* dbName, FILE *fp)
-{
- TAOS_ROW row;
- int fd = -1;
- STableRecord tableRecord;
- char sqlstr[TSDB_MAX_SQL_LEN] = {0};
-
- sprintf(sqlstr, "show %s.stables", dbName);
-
- TAOS_RES* res = taos_query(taosCon, sqlstr);
- int32_t code = taos_errno(res);
- if (code != 0) {
- errorPrint("%s() LN%d, failed to run command <%s>, reason: %s\n",
- __func__, __LINE__, sqlstr, taos_errstr(res));
- taos_free_result(res);
- exit(-1);
- }
-
- TAOS_FIELD *fields = taos_fetch_fields(res);
-
- char tmpFileName[MAX_FILE_NAME_LEN];
- memset(tmpFileName, 0, MAX_FILE_NAME_LEN);
- sprintf(tmpFileName, ".stables.tmp");
- fd = open(tmpFileName, O_RDWR | O_CREAT, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
- if (fd == -1) {
- errorPrint("%s() LN%d, failed to open temp file: %s\n",
- __func__, __LINE__, tmpFileName);
- taos_free_result(res);
- (void)remove(".stables.tmp");
- exit(-1);
- }
-
- while ((row = taos_fetch_row(res)) != NULL) {
- memset(&tableRecord, 0, sizeof(STableRecord));
- tstrncpy(tableRecord.name, (char *)row[TSDB_SHOW_TABLES_NAME_INDEX],
- min(TSDB_TABLE_NAME_LEN,
- fields[TSDB_SHOW_TABLES_NAME_INDEX].bytes + 1));
- taosWrite(fd, &tableRecord, sizeof(STableRecord));
- }
-
- taos_free_result(res);
- (void)lseek(fd, 0, SEEK_SET);
-
- int superTblCnt = 0;
- while (1) {
- ssize_t readLen = read(fd, &tableRecord, sizeof(STableRecord));
- if (readLen <= 0) break;
-
- int ret = taosDumpStable(tableRecord.name, fp, taosCon, dbName);
- if (0 == ret) {
- superTblCnt++;
+ default:
+ break;
}
- }
- // TODO: save dump super table into result_output.txt
- fprintf(g_fpOfResult, "# super table counter: %d\n", superTblCnt);
- g_resultStatistics.totalSuperTblsOfDumpOut += superTblCnt;
+ taos_free_result(res);
+ res = NULL;
+ }
- close(fd);
- (void)remove(".stables.tmp");
+ return colCount;
+}
+static int convertSchemaToAvroSchema(STableDef *stableDes, char **avroSchema)
+{
+ errorPrint("%s() LN%d TODO: covert table schema to avro schema\n",
+ __func__, __LINE__);
return 0;
}
+static int64_t taosDumpTable(
+ char *tbName, char *stable,
+ FILE *fp, TAOS* taos, char* dbName, int precision) {
+ int colCount = 0;
-static int taosDumpDb(SDbInfo *dbInfo, FILE *fp, TAOS *taosCon) {
- TAOS_ROW row;
- int fd = -1;
- STableRecord tableRecord;
+ STableDef *tableDes = (STableDef *)calloc(1, sizeof(STableDef)
+ + sizeof(SColDes) * TSDB_MAX_COLUMNS);
- taosDumpCreateDbClause(dbInfo, g_args.with_property, fp);
+ if (stable != NULL && stable[0] != '\0') { // dump table schema which is created by using super table
+ /*
+ colCount = taosGetTableDes(stable, tableDes, taos);
- fprintf(g_fpOfResult, "\n#### database: %s\n",
- dbInfo->name);
- g_resultStatistics.totalDatabasesOfDumpOut++;
+ if (count < 0) {
+ free(tableDes);
+ return -1;
+ }
- char sqlstr[TSDB_MAX_SQL_LEN] = {0};
+ taosDumpCreateTableClause(tableDes, count, fp);
- fprintf(fp, "USE %s;\n\n", dbInfo->name);
+ memset(tableDes, 0, sizeof(STableDef) + sizeof(SColDes) * TSDB_MAX_COLUMNS);
+ */
- (void)taosDumpCreateSuperTableClause(taosCon, dbInfo->name, fp);
+ colCount = taosGetTableDes(dbName, tbName, tableDes, taos, false);
- sprintf(sqlstr, "show %s.tables", dbInfo->name);
+ if (colCount < 0) {
+ free(tableDes);
+ return -1;
+ }
- TAOS_RES* res = taos_query(taosCon, sqlstr);
- int code = taos_errno(res);
- if (code != 0) {
- errorPrint("%s() LN%d, failed to run command <%s>, reason:%s\n",
- __func__, __LINE__, sqlstr, taos_errstr(res));
- taos_free_result(res);
- return -1;
+ // create child-table using super-table
+ taosDumpCreateMTableClause(tableDes, stable, colCount, fp, dbName);
+
+ } else { // dump table definition
+ colCount = taosGetTableDes(dbName, tbName, tableDes, taos, false);
+
+ if (colCount < 0) {
+ free(tableDes);
+ return -1;
+ }
+
+ // create normal-table or super-table
+ taosDumpCreateTableClause(tableDes, colCount, fp, dbName);
}
- char tmpBuf[MAX_FILE_NAME_LEN];
- memset(tmpBuf, 0, MAX_FILE_NAME_LEN);
- sprintf(tmpBuf, ".show-tables.tmp");
- fd = open(tmpBuf, O_RDWR | O_CREAT | O_TRUNC, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
- if (fd == -1) {
- errorPrint("%s() LN%d, failed to open temp file: %s\n",
- __func__, __LINE__, tmpBuf);
- taos_free_result(res);
- return -1;
+ char *jsonAvroSchema = NULL;
+ if (g_args.avro) {
+ convertSchemaToAvroSchema(tableDes, &jsonAvroSchema);
}
- TAOS_FIELD *fields = taos_fetch_fields(res);
+ free(tableDes);
- int32_t numOfTable = 0;
- while ((row = taos_fetch_row(res)) != NULL) {
- memset(&tableRecord, 0, sizeof(STableRecord));
- tstrncpy(tableRecord.name, (char *)row[TSDB_SHOW_TABLES_NAME_INDEX],
- min(TSDB_TABLE_NAME_LEN,
- fields[TSDB_SHOW_TABLES_NAME_INDEX].bytes + 1));
- tstrncpy(tableRecord.metric, (char *)row[TSDB_SHOW_TABLES_METRIC_INDEX],
- min(TSDB_TABLE_NAME_LEN,
- fields[TSDB_SHOW_TABLES_METRIC_INDEX].bytes + 1));
+ int64_t ret = 0;
+ if (!g_args.schemaonly) {
+ ret = taosDumpTableData(fp, tbName, taos, dbName, precision,
+ jsonAvroSchema);
+ }
- taosWrite(fd, &tableRecord, sizeof(STableRecord));
+ return ret;
+}
- numOfTable++;
- }
- taos_free_result(res);
- lseek(fd, 0, SEEK_SET);
+static void taosDumpCreateDbClause(
+ SDbInfo *dbInfo, bool isDumpProperty, FILE *fp) {
+ char sqlstr[TSDB_MAX_SQL_LEN] = {0};
- int maxThreads = g_args.thread_num;
- int tableOfPerFile ;
- if (numOfTable <= g_args.thread_num) {
- tableOfPerFile = 1;
- maxThreads = numOfTable;
- } else {
- tableOfPerFile = numOfTable / g_args.thread_num;
- if (0 != numOfTable % g_args.thread_num) {
- tableOfPerFile += 1;
- }
+ char *pstr = sqlstr;
+ pstr += sprintf(pstr, "CREATE DATABASE IF NOT EXISTS %s ", dbInfo->name);
+ if (isDumpProperty) {
+ pstr += sprintf(pstr,
+ "REPLICA %d QUORUM %d DAYS %d KEEP %s CACHE %d BLOCKS %d MINROWS %d MAXROWS %d FSYNC %d CACHELAST %d COMP %d PRECISION '%s' UPDATE %d",
+ dbInfo->replica, dbInfo->quorum, dbInfo->days,
+ dbInfo->keeplist,
+ dbInfo->cache,
+ dbInfo->blocks, dbInfo->minrows, dbInfo->maxrows,
+ dbInfo->fsync,
+ dbInfo->cachelast,
+ dbInfo->comp, dbInfo->precision, dbInfo->update);
}
- char* tblBuf = (char*)calloc(1, tableOfPerFile * sizeof(STableRecord));
- if (NULL == tblBuf){
- errorPrint("failed to calloc %" PRIzu "\n",
- tableOfPerFile * sizeof(STableRecord));
- close(fd);
- return -1;
- }
+ pstr += sprintf(pstr, ";");
+ fprintf(fp, "%s\n\n", sqlstr);
+}
- int32_t numOfThread = 0;
- int subFd = -1;
- for (numOfThread = 0; numOfThread < maxThreads; numOfThread++) {
- memset(tmpBuf, 0, MAX_FILE_NAME_LEN);
- sprintf(tmpBuf, ".tables.tmp.%d", numOfThread);
- subFd = open(tmpBuf, O_RDWR | O_CREAT | O_TRUNC, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
- if (subFd == -1) {
- errorPrint("%s() LN%d, failed to open temp file: %s\n",
- __func__, __LINE__, tmpBuf);
- for (int32_t loopCnt = 0; loopCnt < numOfThread; loopCnt++) {
- sprintf(tmpBuf, ".tables.tmp.%d", loopCnt);
- (void)remove(tmpBuf);
- }
- sprintf(tmpBuf, ".show-tables.tmp");
- (void)remove(tmpBuf);
- free(tblBuf);
- close(fd);
- return -1;
- }
+static int dumpStable(char *stbName, FILE *fp,
+ TAOS* taos, SDbInfo *dbInfo)
+{
- // read tableOfPerFile for fd, write to subFd
- ssize_t readLen = read(fd, tblBuf, tableOfPerFile * sizeof(STableRecord));
- if (readLen <= 0) {
- close(subFd);
- break;
- }
- taosWrite(subFd, tblBuf, readLen);
- close(subFd);
+ uint64_t sizeOfTableDes =
+ (uint64_t)(sizeof(STableDef) + sizeof(SColDes) * TSDB_MAX_COLUMNS);
+
+ STableDef *stableDes = (STableDef *)calloc(1, sizeOfTableDes);
+ if (NULL == stableDes) {
+ errorPrint("%s() LN%d, failed to allocate %"PRIu64" memory\n",
+ __func__, __LINE__, sizeOfTableDes);
+ exit(-1);
}
- sprintf(tmpBuf, ".show-tables.tmp");
- (void)remove(tmpBuf);
+ int colCount = taosGetTableDes(dbInfo->name,
+ stbName, stableDes, taos, true);
- if (fd >= 0) {
- close(fd);
- fd = -1;
+ if (colCount < 0) {
+ free(stableDes);
+ errorPrint("%s() LN%d, failed to get stable[%s] schema\n",
+ __func__, __LINE__, stbName);
+ exit(-1);
}
- // start multi threads to dumpout
- taosStartDumpOutWorkThreads(numOfThread, dbInfo->name,
- getPrecisionByString(dbInfo->precision));
- for (int loopCnt = 0; loopCnt < numOfThread; loopCnt++) {
- sprintf(tmpBuf, ".tables.tmp.%d", loopCnt);
- (void)remove(tmpBuf);
- }
+ taosDumpCreateTableClause(stableDes, colCount, fp, dbInfo->name);
+ free(stableDes);
- free(tblBuf);
return 0;
}
-static void taosDumpCreateTableClause(STableDef *tableDes, int numOfCols,
+static int taosDumpCreateTableClause(STableDef *tableDes, int numOfCols,
FILE *fp, char* dbName) {
int counter = 0;
int count_temp = 0;
@@ -2141,10 +1903,11 @@ static void taosDumpCreateTableClause(STableDef *tableDes, int numOfCols,
pstr += sprintf(pstr, ");");
- fprintf(fp, "%s\n\n", sqlstr);
+ debugPrint("%s() LN%d, write string: %s\n", __func__, __LINE__, sqlstr);
+ return fprintf(fp, "%s\n\n", sqlstr);
}
-static void taosDumpCreateMTableClause(STableDef *tableDes, char *metric,
+static void taosDumpCreateMTableClause(STableDef *tableDes, char *stable,
int numOfCols, FILE *fp, char* dbName) {
int counter = 0;
int count_temp = 0;
@@ -2161,7 +1924,7 @@ static void taosDumpCreateMTableClause(STableDef *tableDes, char *metric,
pstr += sprintf(tmpBuf,
"CREATE TABLE IF NOT EXISTS %s.%s USING %s.%s TAGS (",
- dbName, tableDes->name, dbName, metric);
+ dbName, tableDes->name, dbName, stable);
for (; counter < numOfCols; counter++) {
if (tableDes->cols[counter].note[0] != '\0') break;
@@ -2361,8 +2124,8 @@ static int64_t writeResultToSql(TAOS_RES *res, FILE *fp, char *dbName, char *tbN
return 0;
}
-static int taosDumpTableData(FILE *fp, char *tbName,
- TAOS* taosCon, char* dbName, int precision,
+static int64_t taosDumpTableData(FILE *fp, char *tbName,
+ TAOS* taos, char* dbName, int precision,
char *jsonAvroSchema) {
int64_t totalRows = 0;
@@ -2395,7 +2158,7 @@ static int taosDumpTableData(FILE *fp, char *tbName,
"select * from %s.%s where _c0 >= %" PRId64 " and _c0 <= %" PRId64 " order by _c0 asc;",
dbName, tbName, start_time, end_time);
- TAOS_RES* res = taos_query(taosCon, sqlstr);
+ TAOS_RES* res = taos_query(taos, sqlstr);
int32_t code = taos_errno(res);
if (code != 0) {
errorPrint("failed to run command %s, reason: %s\n",
@@ -2426,12 +2189,6 @@ static int taosCheckParam(struct arguments *arguments) {
return -1;
}
- if (g_args.arg_list_len == 0) {
- if ((!g_args.all_databases) && (!g_args.isDumpIn)) {
- errorPrint("%s", "taosdump requires parameters for database and operation\n");
- return -1;
- }
- }
/*
if (g_args.isDumpIn && (strcmp(g_args.outpath, DEFAULT_DUMP_FILE) != 0)) {
fprintf(stderr, "duplicate parameter input and output file path\n");
@@ -2837,7 +2594,7 @@ static int taosDumpInOneFile(TAOS* taos, FILE* fp, char* fcharset,
static void* taosDumpInWorkThreadFp(void *arg)
{
- SThreadParaObj *pThread = (SThreadParaObj*)arg;
+ threadInfo *pThread = (threadInfo*)arg;
setThreadName("dumpInWorkThrd");
for (int32_t f = 0; f < g_tsSqlFileNum; ++f) {
@@ -2849,7 +2606,7 @@ static void* taosDumpInWorkThreadFp(void *arg)
}
fprintf(stderr, ", Success Open input file: %s\n",
SQLFileName);
- taosDumpInOneFile(pThread->taosCon, fp, g_tsCharset, g_args.encode, SQLFileName);
+ taosDumpInOneFile(pThread->taos, fp, g_tsCharset, g_args.encode, SQLFileName);
}
}
@@ -2859,15 +2616,15 @@ static void* taosDumpInWorkThreadFp(void *arg)
static void taosStartDumpInWorkThreads()
{
pthread_attr_t thattr;
- SThreadParaObj *pThread;
+ threadInfo *pThread;
int32_t totalThreads = g_args.thread_num;
if (totalThreads > g_tsSqlFileNum) {
totalThreads = g_tsSqlFileNum;
}
- SThreadParaObj *threadObj = (SThreadParaObj *)calloc(
- totalThreads, sizeof(SThreadParaObj));
+ threadInfo *threadObj = (threadInfo *)calloc(
+ totalThreads, sizeof(threadInfo));
if (NULL == threadObj) {
errorPrint("%s() LN%d, memory allocation failed\n", __func__, __LINE__);
@@ -2877,9 +2634,9 @@ static void taosStartDumpInWorkThreads()
pThread = threadObj + t;
pThread->threadIndex = t;
pThread->totalThreads = totalThreads;
- pThread->taosCon = taos_connect(g_args.host, g_args.user, g_args.password,
+ pThread->taos = taos_connect(g_args.host, g_args.user, g_args.password,
NULL, g_args.port);
- if (pThread->taosCon == NULL) {
+ if (pThread->taos == NULL) {
errorPrint("Failed to connect to TDengine server %s\n", g_args.host);
free(threadObj);
return;
@@ -2900,7 +2657,7 @@ static void taosStartDumpInWorkThreads()
}
for (int t = 0; t < totalThreads; ++t) {
- taos_close(threadObj[t].taosCon);
+ taos_close(threadObj[t].taos);
}
free(threadObj);
}
@@ -2950,3 +2707,154 @@ static int taosDumpIn() {
return 0;
}
+int main(int argc, char *argv[]) {
+ static char verType[32] = {0};
+ sprintf(verType, "version: %s\n", version);
+ argp_program_version = verType;
+
+ int ret = 0;
+ /* Parse our arguments; every option seen by parse_opt will be
+ reflected in arguments. */
+ if (argc > 1) {
+// parse_precision_first(argc, argv, &g_args);
+ parse_timestamp(argc, argv, &g_args);
+ parse_args(argc, argv, &g_args);
+ }
+
+ argp_parse(&argp, argc, argv, 0, 0, &g_args);
+
+ if (g_args.abort) {
+#ifndef _ALPINE
+ error(10, 0, "ABORTED");
+#else
+ abort();
+#endif
+ }
+
+ printf("====== arguments config ======\n");
+ {
+ printf("host: %s\n", g_args.host);
+ printf("user: %s\n", g_args.user);
+ printf("password: %s\n", g_args.password);
+ printf("port: %u\n", g_args.port);
+ printf("mysqlFlag: %d\n", g_args.mysqlFlag);
+ printf("outpath: %s\n", g_args.outpath);
+ printf("inpath: %s\n", g_args.inpath);
+ printf("resultFile: %s\n", g_args.resultFile);
+ printf("encode: %s\n", g_args.encode);
+ printf("all_databases: %s\n", g_args.all_databases?"true":"false");
+ printf("databases: %d\n", g_args.databases);
+ printf("databasesSeq: %s\n", g_args.databasesSeq);
+ printf("schemaonly: %s\n", g_args.schemaonly?"true":"false");
+ printf("with_property: %s\n", g_args.with_property?"true":"false");
+ printf("avro format: %s\n", g_args.avro?"true":"false");
+ printf("start_time: %" PRId64 "\n", g_args.start_time);
+ printf("human readable start time: %s \n", g_args.humanStartTime);
+ printf("end_time: %" PRId64 "\n", g_args.end_time);
+ printf("human readable end time: %s \n", g_args.humanEndTime);
+ printf("precision: %s\n", g_args.precision);
+ printf("data_batch: %d\n", g_args.data_batch);
+ printf("max_sql_len: %d\n", g_args.max_sql_len);
+ printf("table_batch: %d\n", g_args.table_batch);
+ printf("thread_num: %d\n", g_args.thread_num);
+ printf("allow_sys: %d\n", g_args.allow_sys);
+ printf("abort: %d\n", g_args.abort);
+ printf("isDumpIn: %d\n", g_args.isDumpIn);
+ printf("arg_list_len: %d\n", g_args.arg_list_len);
+ printf("debug_print: %d\n", g_args.debug_print);
+
+ for (int32_t i = 0; i < g_args.arg_list_len; i++) {
+ printf("arg_list[%d]: %s\n", i, g_args.arg_list[i]);
+ }
+ }
+ printf("==============================\n");
+ if (taosCheckParam(&g_args) < 0) {
+ exit(EXIT_FAILURE);
+ }
+
+ g_fpOfResult = fopen(g_args.resultFile, "a");
+ if (NULL == g_fpOfResult) {
+ errorPrint("Failed to open %s for save result\n", g_args.resultFile);
+ exit(-1);
+ }
+
+ fprintf(g_fpOfResult, "#############################################################################\n");
+ fprintf(g_fpOfResult, "============================== arguments config =============================\n");
+ {
+ fprintf(g_fpOfResult, "host: %s\n", g_args.host);
+ fprintf(g_fpOfResult, "user: %s\n", g_args.user);
+ fprintf(g_fpOfResult, "password: %s\n", g_args.password);
+ fprintf(g_fpOfResult, "port: %u\n", g_args.port);
+ fprintf(g_fpOfResult, "mysqlFlag: %d\n", g_args.mysqlFlag);
+ fprintf(g_fpOfResult, "outpath: %s\n", g_args.outpath);
+ fprintf(g_fpOfResult, "inpath: %s\n", g_args.inpath);
+ fprintf(g_fpOfResult, "resultFile: %s\n", g_args.resultFile);
+ fprintf(g_fpOfResult, "encode: %s\n", g_args.encode);
+ fprintf(g_fpOfResult, "all_databases: %s\n", g_args.all_databases?"true":"false");
+ fprintf(g_fpOfResult, "databases: %d\n", g_args.databases);
+ fprintf(g_fpOfResult, "databasesSeq: %s\n", g_args.databasesSeq);
+ fprintf(g_fpOfResult, "schemaonly: %s\n", g_args.schemaonly?"true":"false");
+ fprintf(g_fpOfResult, "with_property: %s\n", g_args.with_property?"true":"false");
+ fprintf(g_fpOfResult, "avro format: %s\n", g_args.avro?"true":"false");
+ fprintf(g_fpOfResult, "start_time: %" PRId64 "\n", g_args.start_time);
+ fprintf(g_fpOfResult, "human readable start time: %s \n", g_args.humanStartTime);
+ fprintf(g_fpOfResult, "end_time: %" PRId64 "\n", g_args.end_time);
+ fprintf(g_fpOfResult, "human readable end time: %s \n", g_args.humanEndTime);
+ fprintf(g_fpOfResult, "precision: %s\n", g_args.precision);
+ fprintf(g_fpOfResult, "data_batch: %d\n", g_args.data_batch);
+ fprintf(g_fpOfResult, "max_sql_len: %d\n", g_args.max_sql_len);
+ fprintf(g_fpOfResult, "table_batch: %d\n", g_args.table_batch);
+ fprintf(g_fpOfResult, "thread_num: %d\n", g_args.thread_num);
+ fprintf(g_fpOfResult, "allow_sys: %d\n", g_args.allow_sys);
+ fprintf(g_fpOfResult, "abort: %d\n", g_args.abort);
+ fprintf(g_fpOfResult, "isDumpIn: %d\n", g_args.isDumpIn);
+ fprintf(g_fpOfResult, "arg_list_len: %d\n", g_args.arg_list_len);
+
+ for (int32_t i = 0; i < g_args.arg_list_len; i++) {
+ fprintf(g_fpOfResult, "arg_list[%d]: %s\n", i, g_args.arg_list[i]);
+ }
+ }
+
+ g_numOfCores = (int32_t)sysconf(_SC_NPROCESSORS_ONLN);
+
+ time_t tTime = time(NULL);
+ struct tm tm = *localtime(&tTime);
+
+ if (g_args.isDumpIn) {
+ fprintf(g_fpOfResult, "============================== DUMP IN ============================== \n");
+ fprintf(g_fpOfResult, "# DumpIn start time: %d-%02d-%02d %02d:%02d:%02d\n",
+ tm.tm_year + 1900, tm.tm_mon + 1,
+ tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec);
+ if (taosDumpIn() < 0) {
+ ret = -1;
+ }
+ } else {
+ fprintf(g_fpOfResult, "============================== DUMP OUT ============================== \n");
+ fprintf(g_fpOfResult, "# DumpOut start time: %d-%02d-%02d %02d:%02d:%02d\n",
+ tm.tm_year + 1900, tm.tm_mon + 1,
+ tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec);
+ if (taosDumpOut() < 0) {
+ ret = -1;
+ } else {
+ fprintf(g_fpOfResult, "\n============================== TOTAL STATISTICS ============================== \n");
+ fprintf(g_fpOfResult, "# total database count: %d\n",
+ g_resultStatistics.totalDatabasesOfDumpOut);
+ fprintf(g_fpOfResult, "# total super table count: %d\n",
+ g_resultStatistics.totalSuperTblsOfDumpOut);
+ fprintf(g_fpOfResult, "# total child table count: %"PRId64"\n",
+ g_resultStatistics.totalChildTblsOfDumpOut);
+ fprintf(g_fpOfResult, "# total row count: %"PRId64"\n",
+ g_resultStatistics.totalRowsOfDumpOut);
+ }
+ }
+
+ fprintf(g_fpOfResult, "\n");
+ fclose(g_fpOfResult);
+
+ if (g_tablesList) {
+ free(g_tablesList);
+ }
+
+ return ret;
+}
+
diff --git a/src/os/src/linux/linuxEnv.c b/src/os/src/linux/linuxEnv.c
index 650a45aae42c8d2dfba63d8f4e7e6ec35b385ae8..35ca64d79f8b7a883014fd6ca980300ede22d6e2 100644
--- a/src/os/src/linux/linuxEnv.c
+++ b/src/os/src/linux/linuxEnv.c
@@ -32,6 +32,13 @@ void osInit() {
strcpy(tsDataDir, "/var/lib/tq");
strcpy(tsLogDir, "/var/log/tq");
strcpy(tsScriptDir, "/etc/tq");
+#elif (_TD_PRO_ == true)
+ if (configDir[0] == 0) {
+ strcpy(configDir, "/etc/ProDB");
+ }
+ strcpy(tsDataDir, "/var/lib/ProDB");
+ strcpy(tsLogDir, "/var/log/ProDB");
+ strcpy(tsScriptDir, "/etc/ProDB");
#else
if (configDir[0] == 0) {
strcpy(configDir, "/etc/taos");
diff --git a/src/os/src/windows/wEnv.c b/src/os/src/windows/wEnv.c
index b35cb8f040aec5ff4b4fb12665d0842e72958ba1..6f46bb43c75ff2c9735fc53a11bce585c1c213f6 100644
--- a/src/os/src/windows/wEnv.c
+++ b/src/os/src/windows/wEnv.c
@@ -39,6 +39,14 @@ void osInit() {
strcpy(tsDataDir, "C:/TQ/data");
strcpy(tsLogDir, "C:/TQ/log");
strcpy(tsScriptDir, "C:/TQ/script");
+#elif (_TD_PRO_ == true)
+ if (configDir[0] == 0) {
+ strcpy(configDir, "C:/ProDB/cfg");
+ }
+ strcpy(tsVnodeDir, "C:/ProDB/data");
+ strcpy(tsDataDir, "C:/ProDB/data");
+ strcpy(tsLogDir, "C:/ProDB/log");
+ strcpy(tsScriptDir, "C:/ProDB/script");
#else
if (configDir[0] == 0) {
strcpy(configDir, "C:/TDengine/cfg");
diff --git a/src/util/inc/tconfig.h b/src/util/inc/tconfig.h
index 2c632d4a17f5394dc28df72414948855b89bc001..2ba4b964c04b0a1ca9f883cd619aae2b7fcbe1d7 100644
--- a/src/util/inc/tconfig.h
+++ b/src/util/inc/tconfig.h
@@ -20,7 +20,7 @@
extern "C" {
#endif
-#define TSDB_CFG_MAX_NUM 123
+#define TSDB_CFG_MAX_NUM 124
#define TSDB_CFG_PRINT_LEN 23
#define TSDB_CFG_OPTION_LEN 24
#define TSDB_CFG_VALUE_LEN 41
diff --git a/src/util/src/tconfig.c b/src/util/src/tconfig.c
index 6ed9cff9fbabad06d00cb883933fefae443a1f5f..9ce6876fd6d2c555acf5450a9128f787ccd300c8 100644
--- a/src/util/src/tconfig.c
+++ b/src/util/src/tconfig.c
@@ -379,6 +379,9 @@ void taosReadGlobalLogCfg() {
#elif (_TD_TQ_ == true)
printf("configDir:%s not there, use default value: /etc/tq", configDir);
strcpy(configDir, "/etc/tq");
+ #elif (_TD_PRO_ == true)
+ printf("configDir:%s not there, use default value: /etc/ProDB", configDir);
+ strcpy(configDir, "/etc/ProDB");
#else
printf("configDir:%s not there, use default value: /etc/taos", configDir);
strcpy(configDir, "/etc/taos");
diff --git a/src/util/src/terror.c b/src/util/src/terror.c
index e3d022a6b0a4a929b6c06b2c305fb71b6980a865..404d4ad0c18944826abebf6d0e73c573dbe54756 100644
--- a/src/util/src/terror.c
+++ b/src/util/src/terror.c
@@ -118,6 +118,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_COLUMN_LENGTH, "Invalid column length
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_DUP_TAG_NAMES, "duplicated tag names")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_JSON, "Invalid JSON format")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_JSON_TYPE, "Invalid JSON data type")
+TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_JSON_CONFIG, "Invalid JSON configuration")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_VALUE_OUT_OF_RANGE, "Value out of range")
// mnode
diff --git a/src/util/src/tlog.c b/src/util/src/tlog.c
index 1ce3eadf58432337511d0d600848ad334b96fc91..0d335ca2664ffee75a79144b97181a5b625df66d 100644
--- a/src/util/src/tlog.c
+++ b/src/util/src/tlog.c
@@ -85,6 +85,8 @@ int64_t dbgWSize = 0;
char tsLogDir[TSDB_FILENAME_LEN] = "/var/log/power";
#elif (_TD_TQ_ == true)
char tsLogDir[TSDB_FILENAME_LEN] = "/var/log/tq";
+#elif (_TD_PRO_ == true)
+char tsLogDir[TSDB_FILENAME_LEN] = "/var/log/ProDB";
#else
char tsLogDir[PATH_MAX] = "/var/log/taos";
#endif
diff --git a/tests/examples/c/-g b/tests/examples/c/-g
deleted file mode 100755
index 3909909e8fe531a7b6d35ca315b8277e7270bb02..0000000000000000000000000000000000000000
Binary files a/tests/examples/c/-g and /dev/null differ
diff --git a/tests/examples/c/apitest.c b/tests/examples/c/apitest.c
index 03123afb3584ea94417c88e55edd9f8e232b0fe9..c886c6d2fe332380e9f519bfc1133d3d5b4106fa 100644
--- a/tests/examples/c/apitest.c
+++ b/tests/examples/c/apitest.c
@@ -1090,9 +1090,10 @@ void verify_telnet_insert(TAOS* taos) {
//bigint
char* lines2_3[] = {
"stb2_3 1626006833651ms -9223372036854775807i64 host=\"host0\"",
- "stb2_3 1626006833652ms 9223372036854775807i64 host=\"host0\""
+ "stb2_3 1626006833652ms 9223372036854775807i64 host=\"host0\"",
+ "stb2_3 1626006833662ms 9223372036854775807 host=\"host0\""
};
- code = taos_insert_telnet_lines(taos, lines2_3, 2);
+ code = taos_insert_telnet_lines(taos, lines2_3, 3);
if (code) {
printf("lines2_3 code: %d, %s.\n", code, tstrerror(code));
}
@@ -1107,11 +1108,10 @@ void verify_telnet_insert(TAOS* taos) {
"stb2_4 1626006833660ms -3.4e10f32 host=\"host0\"",
"stb2_4 1626006833670ms 3.4E+2f32 host=\"host0\"",
"stb2_4 1626006833680ms -3.4e-2f32 host=\"host0\"",
- "stb2_4 1626006833690ms 3.15 host=\"host0\"",
"stb2_4 1626006833700ms 3.4E38f32 host=\"host0\"",
"stb2_4 1626006833710ms -3.4E38f32 host=\"host0\""
};
- code = taos_insert_telnet_lines(taos, lines2_4, 11);
+ code = taos_insert_telnet_lines(taos, lines2_4, 10);
if (code) {
printf("lines2_4 code: %d, %s.\n", code, tstrerror(code));
}
@@ -1127,9 +1127,10 @@ void verify_telnet_insert(TAOS* taos) {
"stb2_5 1626006833670ms 3.4E+2f64 host=\"host0\"",
"stb2_5 1626006833680ms -3.4e-2f64 host=\"host0\"",
"stb2_5 1626006833690ms 1.7E308f64 host=\"host0\"",
- "stb2_5 1626006833700ms -1.7E308f64 host=\"host0\""
+ "stb2_5 1626006833700ms -1.7E308f64 host=\"host0\"",
+ "stb2_5 1626006833710ms 3.15 host=\"host0\""
};
- code = taos_insert_telnet_lines(taos, lines2_5, 10);
+ code = taos_insert_telnet_lines(taos, lines2_5, 11);
if (code) {
printf("lines2_5 code: %d, %s.\n", code, tstrerror(code));
}
@@ -1166,7 +1167,7 @@ void verify_telnet_insert(TAOS* taos) {
//nchar
char* lines2_8[] = {
"stb2_8 1626006833610ms L\"nchar_val数值一\" host=\"host0\"",
- "stb2_8 1626006833620ms L\"nchar_val数值二\" host=\"host0\"",
+ "stb2_8 1626006833620ms L\"nchar_val数值二\" host=\"host0\""
};
code = taos_insert_telnet_lines(taos, lines2_8, 2);
if (code) {
diff --git a/tests/examples/c/makefile b/tests/examples/c/makefile
index f364eb76fc34ab0975c00dcae2b8348e58b38517..768cceaec7cab39606647b702f87c7a2f36aa473 100644
--- a/tests/examples/c/makefile
+++ b/tests/examples/c/makefile
@@ -7,7 +7,8 @@ LFLAGS = '-Wl,-rpath,/usr/local/taos/driver/' -ltaos -lpthread -lm -lrt
CFLAGS = -O3 -g -Wall -Wno-deprecated -fPIC -Wno-unused-result -Wconversion \
-Wno-char-subscripts -D_REENTRANT -Wno-format -D_REENTRANT -DLINUX \
-Wno-unused-function -D_M_X64 -I/usr/local/taos/include -std=gnu99 \
- -I../../../deps/cJson/inc
+ -I../../../deps/cJson/inc -I../../../src/os/inc -I../../../src/inc \
+ -I../../../src/util/inc -I../../../src/common/inc
all: $(TARGET)
exe:
diff --git a/tests/pytest/crash_gen/valgrind_taos.supp b/tests/pytest/crash_gen/valgrind_taos.supp
index 344ad5dde5f9fc58b760691b94f112e9b458f1d7..8c35778018b9c34789f862f6a728e487694357f4 100644
--- a/tests/pytest/crash_gen/valgrind_taos.supp
+++ b/tests/pytest/crash_gen/valgrind_taos.supp
@@ -18177,4 +18177,40 @@
fun:_PyEval_EvalFrameDefault
obj:/usr/bin/python3.8
fun:_PyEval_EvalFrameDefault
+}
+{
+
+ Memcheck:Leak
+ match-leak-kinds: definite
+ fun:malloc
+ fun:_my_Py_InitModule
+ fun:b_init_cffi_1_0_external_module
+ obj:/usr/bin/python3.8
+ obj:/usr/bin/python3.8
+ fun:PyObject_CallMethod
+ fun:PyInit__openssl
+ fun:_PyImport_LoadDynamicModuleWithSpec
+ obj:/usr/bin/python3.8
+ obj:/usr/bin/python3.8
+ fun:PyVectorcall_Call
+ fun:_PyEval_EvalFrameDefault
+ fun:_PyEval_EvalCodeWithName
+}
+{
+
+ Memcheck:Leak
+ match-leak-kinds: definite
+ fun:malloc
+ fun:_PyObject_GC_New
+ fun:ffi_internal_new
+ fun:b_init_cffi_1_0_external_module
+ obj:/usr/bin/python3.8
+ obj:/usr/bin/python3.8
+ fun:PyObject_CallMethod
+ fun:PyInit__constant_time
+ fun:_PyImport_LoadDynamicModuleWithSpec
+ obj:/usr/bin/python3.8
+ obj:/usr/bin/python3.8
+ fun:PyVectorcall_Call
+ fun:_PyEval_EvalFrameDefault
}
\ No newline at end of file
diff --git a/tests/pytest/fulltest.sh b/tests/pytest/fulltest.sh
index 050f1fd060e5ef455881769f39a60e6f59169a53..b8bb94b26d5117cb9e99468dfa94155ca70c4193 100755
--- a/tests/pytest/fulltest.sh
+++ b/tests/pytest/fulltest.sh
@@ -273,6 +273,7 @@ python3 ./test.py -f query/queryCnameDisplay.py
# python3 ./test.py -f query/operator_cost.py
# python3 ./test.py -f query/long_where_query.py
python3 test.py -f query/nestedQuery/queryWithSpread.py
+python3 ./test.py -f query/bug6586.py
#stream
python3 ./test.py -f stream/metric_1.py
@@ -391,7 +392,7 @@ python3 test.py -f alter/alter_cacheLastRow.py
python3 ./test.py -f query/querySession.py
python3 test.py -f alter/alter_create_exception.py
python3 ./test.py -f insert/flushwhiledrop.py
-python3 ./test.py -f insert/schemalessInsert.py
+#python3 ./test.py -f insert/schemalessInsert.py
python3 ./test.py -f alter/alterColMultiTimes.py
python3 ./test.py -f query/queryWildcardLength.py
python3 ./test.py -f query/queryTbnameUpperLower.py
diff --git a/tests/pytest/insert/insertJSONPayload.py b/tests/pytest/insert/insertJSONPayload.py
index 30f34446a93237f9b7b610efc9b1b5507ba09f4a..88b03cf3f526a380a97acf2a45b86a2c64b66069 100644
--- a/tests/pytest/insert/insertJSONPayload.py
+++ b/tests/pytest/insert/insertJSONPayload.py
@@ -31,6 +31,27 @@ class TDTestCase:
### Default format ###
+ ### metric ###
+ print("============= step0 : test metric ================")
+ payload = '''
+ {
+ "metric": ".stb.0.",
+ "timestamp": 1626006833610123,
+ "value": 10,
+ "tags": {
+ "t1": true,
+ "t2": false,
+ "t3": 10,
+ "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
+ }
+ }
+ '''
+ code = self._conn.insert_json_payload(payload)
+ print("insert_json_payload result {}".format(code))
+
+ tdSql.query("describe _stb_0_")
+ tdSql.checkRows(6)
+
### metric value ###
print("============= step1 : test metric value types ================")
payload = '''
@@ -50,7 +71,7 @@ class TDTestCase:
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_0")
- tdSql.checkData(1, 1, "FLOAT")
+ tdSql.checkData(1, 1, "BIGINT")
payload = '''
{
@@ -107,12 +128,52 @@ class TDTestCase:
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_3")
- tdSql.checkData(1, 1, "NCHAR")
+ tdSql.checkData(1, 1, "BINARY")
- ### timestamp 0 ###
payload = '''
{
"metric": "stb0_4",
+ "timestamp": 1626006833610123,
+ "value": 3.14,
+ "tags": {
+ "t1": true,
+ "t2": false,
+ "t3": 10,
+ "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
+ }
+ }
+ '''
+ code = self._conn.insert_json_payload(payload)
+ print("insert_json_payload result {}".format(code))
+
+ tdSql.query("describe stb0_4")
+ tdSql.checkData(1, 1, "DOUBLE")
+
+ payload = '''
+ {
+ "metric": "stb0_5",
+ "timestamp": 1626006833610123,
+ "value": 3.14E-2,
+ "tags": {
+ "t1": true,
+ "t2": false,
+ "t3": 10,
+ "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
+ }
+ }
+ '''
+ code = self._conn.insert_json_payload(payload)
+ print("insert_json_payload result {}".format(code))
+
+ tdSql.query("describe stb0_5")
+ tdSql.checkData(1, 1, "DOUBLE")
+
+
+ print("============= step2 : test timestamp ================")
+ ### timestamp 0 ###
+ payload = '''
+ {
+ "metric": "stb0_6",
"timestamp": 0,
"value": 123,
"tags": {
@@ -127,14 +188,15 @@ class TDTestCase:
print("insert_json_payload result {}".format(code))
+ print("============= step3 : test tags ================")
### ID ###
payload = '''
{
- "metric": "stb0_5",
+ "metric": "stb0_7",
"timestamp": 0,
"value": 123,
"tags": {
- "ID": "tb0_5",
+ "ID": "tb0_7",
"t1": true,
"iD": "tb000",
"t2": false,
@@ -147,10 +209,60 @@ class TDTestCase:
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
- tdSql.query("select tbname from stb0_5")
- tdSql.checkData(0, 0, "tb0_5")
+ tdSql.query("select tbname from stb0_7")
+ tdSql.checkData(0, 0, "tb0_7")
+
+ ### Default tag numeric types ###
+ payload = '''
+ {
+ "metric": "stb0_8",
+ "timestamp": 0,
+ "value": 123,
+ "tags": {
+ "t1": 123
+ }
+ }
+ '''
+ code = self._conn.insert_json_payload(payload)
+ print("insert_json_payload result {}".format(code))
+
+ tdSql.query("describe stb0_8")
+ tdSql.checkData(2, 1, "BIGINT")
+
+ payload = '''
+ {
+ "metric": "stb0_9",
+ "timestamp": 0,
+ "value": 123,
+ "tags": {
+ "t1": 123.00
+ }
+ }
+ '''
+ code = self._conn.insert_json_payload(payload)
+ print("insert_json_payload result {}".format(code))
+
+ tdSql.query("describe stb0_9")
+ tdSql.checkData(2, 1, "DOUBLE")
+
+ payload = '''
+ {
+ "metric": "stb0_10",
+ "timestamp": 0,
+ "value": 123,
+ "tags": {
+ "t1": 123E-1
+ }
+ }
+ '''
+ code = self._conn.insert_json_payload(payload)
+ print("insert_json_payload result {}".format(code))
+
+ tdSql.query("describe stb0_10")
+ tdSql.checkData(2, 1, "DOUBLE")
### Nested format ###
+ print("============= step4 : test nested format ================")
### timestamp ###
#seconds
payload = '''
diff --git a/tests/pytest/insert/insertTelnetLines.py b/tests/pytest/insert/insertTelnetLines.py
index 4041b309a1007c1177f26d28b022f4e314dcf9ba..b47a74249bbad57ef758e886c513a7eea78b7634 100644
--- a/tests/pytest/insert/insertTelnetLines.py
+++ b/tests/pytest/insert/insertTelnetLines.py
@@ -36,13 +36,14 @@ class TDTestCase:
"stb0_0 1626006833639000000ns 4i8 host=\"host0\" interface=\"eth0\"",
"stb0_1 1626006833639000000ns 4i8 host=\"host0\" interface=\"eth0\"",
"stb0_2 1626006833639000000ns 4i8 host=\"host0\" interface=\"eth0\"",
+ ".stb0.3. 1626006833639000000ns 4i8 host=\"host0\" interface=\"eth0\"",
]
code = self._conn.insert_telnet_lines(lines0)
print("insert_telnet_lines result {}".format(code))
tdSql.query("show stables")
- tdSql.checkRows(3)
+ tdSql.checkRows(4)
tdSql.query("describe stb0_0")
tdSql.checkRows(4)
@@ -53,6 +54,9 @@ class TDTestCase:
tdSql.query("describe stb0_2")
tdSql.checkRows(4)
+ tdSql.query("describe _stb0_3_")
+ tdSql.checkRows(4)
+
### timestamp ###
print("============= step2 : test timestamp ================")
lines1 = [
@@ -122,14 +126,15 @@ class TDTestCase:
#bigint
lines2_3 = [
"stb2_3 1626006833651ms -9223372036854775807i64 host=\"host0\"",
- "stb2_3 1626006833652ms 9223372036854775807i64 host=\"host0\""
+ "stb2_3 1626006833652ms 9223372036854775807i64 host=\"host0\"",
+ "stb2_3 1626006833662ms 9223372036854775807 host=\"host0\""
]
code = self._conn.insert_telnet_lines(lines2_3)
print("insert_telnet_lines result {}".format(code))
tdSql.query("select * from stb2_3")
- tdSql.checkRows(2)
+ tdSql.checkRows(3)
tdSql.query("describe stb2_3")
tdSql.checkRows(3)
@@ -145,7 +150,6 @@ class TDTestCase:
"stb2_4 1626006833660ms -3.4e10f32 host=\"host0\"",
"stb2_4 1626006833670ms 3.4E+2f32 host=\"host0\"",
"stb2_4 1626006833680ms -3.4e-2f32 host=\"host0\"",
- "stb2_4 1626006833690ms 3.15 host=\"host0\"",
"stb2_4 1626006833700ms 3.4E38f32 host=\"host0\"",
"stb2_4 1626006833710ms -3.4E38f32 host=\"host0\""
]
@@ -154,7 +158,7 @@ class TDTestCase:
print("insert_telnet_lines result {}".format(code))
tdSql.query("select * from stb2_4")
- tdSql.checkRows(11)
+ tdSql.checkRows(10)
tdSql.query("describe stb2_4")
tdSql.checkRows(3)
@@ -171,14 +175,15 @@ class TDTestCase:
"stb2_5 1626006833670ms 3.4E+2f64 host=\"host0\"",
"stb2_5 1626006833680ms -3.4e-2f64 host=\"host0\"",
"stb2_5 1626006833690ms 1.7E308f64 host=\"host0\"",
- "stb2_5 1626006833700ms -1.7E308f64 host=\"host0\""
+ "stb2_5 1626006833700ms -1.7E308f64 host=\"host0\"",
+ "stb2_5 1626006833710ms 3.15 host=\"host0\""
]
code = self._conn.insert_telnet_lines(lines2_5)
print("insert_telnet_lines result {}".format(code))
tdSql.query("select * from stb2_5")
- tdSql.checkRows(10)
+ tdSql.checkRows(11)
tdSql.query("describe stb2_5")
tdSql.checkRows(3)
diff --git a/tests/pytest/insert/openTsdbTelnetLinesInsert.py b/tests/pytest/insert/openTsdbTelnetLinesInsert.py
index 25518437e102c985b4d84887b1806f9e341c86d6..26d25941950602978334da0234aa40ce3d4a6c3b 100644
--- a/tests/pytest/insert/openTsdbTelnetLinesInsert.py
+++ b/tests/pytest/insert/openTsdbTelnetLinesInsert.py
@@ -13,9 +13,8 @@
import traceback
import random
-from taos.error import LinesError
+from taos.error import TelnetLinesError
import time
-from copy import deepcopy
import numpy as np
from util.log import *
from util.cases import *
@@ -39,13 +38,13 @@ class TDTestCase:
tdSql.execute(f'use {name}')
def timeTrans(self, time_value):
- if time_value.endswith("ns"):
+ if time_value.lower().endswith("ns"):
ts = int(''.join(list(filter(str.isdigit, time_value))))/1000000000
- elif time_value.endswith("us") or time_value.isdigit() and int(time_value) != 0:
+ elif time_value.lower().endswith("us") or time_value.isdigit() and int(time_value) != 0:
ts = int(''.join(list(filter(str.isdigit, time_value))))/1000000
- elif time_value.endswith("ms"):
+ elif time_value.lower().endswith("ms"):
ts = int(''.join(list(filter(str.isdigit, time_value))))/1000
- elif time_value.endswith("s") and list(time_value)[-1] not in "num":
+ elif time_value.lower().endswith("s") and list(time_value)[-1] not in "num":
ts = int(''.join(list(filter(str.isdigit, time_value))))/1
elif int(time_value) == 0:
ts = time.time()
@@ -68,43 +67,49 @@ class TDTestCase:
return int(time.mktime(time.strptime(datetime_input, "%Y-%m-%d %H:%M:%S.%f")))
def getTdTypeValue(self, value):
- if value.endswith("i8"):
+ if value.lower().endswith("i8"):
td_type = "TINYINT"
td_tag_value = ''.join(list(value)[:-2])
- elif value.endswith("i16"):
+ elif value.lower().endswith("i16"):
td_type = "SMALLINT"
td_tag_value = ''.join(list(value)[:-3])
- elif value.endswith("i32"):
+ elif value.lower().endswith("i32"):
td_type = "INT"
td_tag_value = ''.join(list(value)[:-3])
- elif value.endswith("i64"):
+ elif value.lower().endswith("i64"):
td_type = "BIGINT"
td_tag_value = ''.join(list(value)[:-3])
- elif value.endswith("u64"):
+ elif value.lower().endswith("u64"):
td_type = "BIGINT UNSIGNED"
td_tag_value = ''.join(list(value)[:-3])
- elif value.endswith("f32"):
+ elif value.lower().endswith("f32"):
td_type = "FLOAT"
td_tag_value = ''.join(list(value)[:-3])
td_tag_value = '{}'.format(np.float32(td_tag_value))
- elif value.endswith("f64"):
+ elif value.lower().endswith("f64"):
td_type = "DOUBLE"
td_tag_value = ''.join(list(value)[:-3])
- elif value.startswith('L"'):
+ elif value.lower().startswith('l"'):
td_type = "NCHAR"
td_tag_value = ''.join(list(value)[2:-1])
elif value.startswith('"') and value.endswith('"'):
td_type = "BINARY"
td_tag_value = ''.join(list(value)[1:-1])
- elif value.lower() == "t" or value == "true" or value == "True" or value == "TRUE":
+ elif value.lower() == "t" or value.lower() == "true":
td_type = "BOOL"
td_tag_value = "True"
- elif value.lower() == "f" or value == "false" or value == "False" or value == "FALSE":
+ elif value.lower() == "f" or value.lower() == "false":
td_type = "BOOL"
td_tag_value = "False"
- else:
- td_type = "FLOAT"
+ elif value.isdigit():
+ td_type = "BIGINT"
td_tag_value = value
+ else:
+ td_type = "DOUBLE"
+ if "e" in value.lower():
+ td_tag_value = str(float(value))
+ else:
+ td_tag_value = value
return td_type, td_tag_value
def typeTrans(self, type_list):
@@ -137,9 +142,7 @@ class TDTestCase:
def inputHandle(self, input_sql):
input_sql_split_list = input_sql.split(" ")
stb_name = input_sql_split_list[0]
-
- #'stb2_5 1626006833610ms 3f64 host="host0"',
- stb_tag_list = input_sql_split_list[3].split(',')
+ stb_tag_list = input_sql_split_list[3:]
stb_col_value = input_sql_split_list[2]
ts_value = self.timeTrans(input_sql_split_list[1])
@@ -190,7 +193,7 @@ class TDTestCase:
t8="L\"ncharTagValue\"", ts="1626006833639000000ns",
id_noexist_tag=None, id_change_tag=None, id_upper_tag=None, id_double_tag=None,
t_add_tag=None, t_mul_tag=None, t_multi_tag=None, c_blank_tag=None, t_blank_tag=None,
- chinese_tag=None, multi_field_tag=None):
+ chinese_tag=None, multi_field_tag=None, point_trans_tag=None):
if stb_name == "":
stb_name = tdCom.getLongName(len=6, mode="letters")
if tb_name == "":
@@ -203,31 +206,33 @@ class TDTestCase:
id = "ID"
else:
id = "id"
- sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}\",t0={t0},t1={t1},t2={t2},t3={t3},t4={t4},t5={t5},t6={t6},t7={t7},t8={t8}'
+ sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}\" t0={t0} t1={t1} t2={t2} t3={t3} t4={t4} t5={t5} t6={t6} t7={t7} t8={t8}'
if id_noexist_tag is not None:
- sql_seq = f'{stb_name} {ts} {value} t0={t0},t1={t1},t2={t2},t3={t3},t4={t4},t5={t5},t6={t6},t7={t7},t8={t8}'
+ sql_seq = f'{stb_name} {ts} {value} t0={t0} t1={t1} t2={t2} t3={t3} t4={t4} t5={t5} t6={t6} t7={t7} t8={t8}'
if t_add_tag is not None:
- sql_seq = f'{stb_name} {ts} {value} t0={t0},t1={t1},t2={t2},t3={t3},t4={t4},t5={t5},t6={t6},t7={t7},t8={t8},t9={t8}'
+ sql_seq = f'{stb_name} {ts} {value} t0={t0} t1={t1} t2={t2} t3={t3} t4={t4} t5={t5} t6={t6} t7={t7} t8={t8} t9={t8}'
if id_change_tag is not None:
- sql_seq = f'{stb_name} {ts} {value} t0={t0},{id}=\"{tb_name}\",t1={t1},t2={t2},t3={t3},t4={t4},t5={t5},t6={t6},t7={t7},t8={t8}'
+ sql_seq = f'{stb_name} {ts} {value} t0={t0} {id}=\"{tb_name}\" t1={t1} t2={t2} t3={t3} t4={t4} t5={t5} t6={t6} t7={t7} t8={t8}'
if id_double_tag is not None:
- sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}_1\",t0={t0},t1={t1},{id}=\"{tb_name}_2\",t2={t2},t3={t3},t4={t4},t5={t5},t6={t6},t7={t7},t8={t8}'
+ sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}_1\" t0={t0} t1={t1} {id}=\"{tb_name}_2\" t2={t2} t3={t3} t4={t4} t5={t5} t6={t6} t7={t7} t8={t8}'
if t_add_tag is not None:
- sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}\",t0={t0},t1={t1},t2={t2},t3={t3},t4={t4},t5={t5},t6={t6},t7={t7},t8={t8},t11={t1},t10={t8}'
+ sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}\" t0={t0} t1={t1} t2={t2} t3={t3} t4={t4} t5={t5} t6={t6} t7={t7} t8={t8} t11={t1} t10={t8}'
if t_mul_tag is not None:
- sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}\",t0={t0},t1={t1},t2={t2},t3={t3},t4={t4},t5={t5},t6={t6}'
+ sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}\" t0={t0} t1={t1} t2={t2} t3={t3} t4={t4} t5={t5} t6={t6}'
if id_noexist_tag is not None:
- sql_seq = f'{stb_name} {ts} {value} t0={t0},t1={t1},t2={t2},t3={t3},t4={t4},t5={t5},t6={t6}'
+ sql_seq = f'{stb_name} {ts} {value} t0={t0} t1={t1} t2={t2} t3={t3} t4={t4} t5={t5} t6={t6}'
if t_multi_tag is not None:
- sql_seq = f'{stb_name} {ts} {value},{value} {id}=\"{tb_name}\",t0={t0},t1={t1},t2={t2},t3={t3},t4={t4},t5={t5},t6={t6}'
+ sql_seq = f'{stb_name} {ts} {value} {value} {id}=\"{tb_name}\" t0={t0} t1={t1} t2={t2} t3={t3} t4={t4} t5={t5} t6={t6}'
if c_blank_tag is not None:
- sql_seq = f'{stb_name} {ts} {id}=\"{tb_name}\",t0={t0},t1={t1},t2={t2},t3={t3},t4={t4},t5={t5},t6={t6},t7={t7},t8={t8}'
+ sql_seq = f'{stb_name} {ts} {id}=\"{tb_name}\" t0={t0} t1={t1} t2={t2} t3={t3} t4={t4} t5={t5} t6={t6} t7={t7} t8={t8}'
if t_blank_tag is not None:
sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}\"'
if chinese_tag is not None:
- sql_seq = f'{stb_name} {ts} L"涛思数据" t0={t0},t1=L"涛思数据"'
+ sql_seq = f'{stb_name} {ts} L"涛思数据" t0={t0} t1=L"涛思数据"'
if multi_field_tag is not None:
- sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}\",t0={t0} t1={t1}'
+ sql_seq = f'{stb_name} {ts} {value} {id}=\"{tb_name}\" t0={t0} {value}'
+ if point_trans_tag is not None:
+ sql_seq = f'point.trans.test {ts} {value} t0={t0}'
return sql_seq, stb_name
def genMulTagColStr(self, genType, count=1):
@@ -239,7 +244,7 @@ class TDTestCase:
if genType == "tag":
for i in range(0, count):
if i < (count-1):
- tag_str += f't{i}=f,'
+ tag_str += f't{i}=f '
else:
tag_str += f't{i}=f'
return tag_str
@@ -253,7 +258,7 @@ class TDTestCase:
tag_str = self.genMulTagColStr("tag", tag_count)
col_str = self.genMulTagColStr("col")
ts = "1626006833640000000ns"
- long_sql = stb_name + ' ' + ts + ' ' + col_str + ' ' + f'id=\"{tb_name}\"' + ',' + tag_str
+ long_sql = stb_name + ' ' + ts + ' ' + col_str + ' ' + f'id=\"{tb_name}\"' + ' ' + tag_str
return long_sql, stb_name
def getNoIdTbName(self, stb_name):
@@ -299,7 +304,6 @@ class TDTestCase:
tdSql.checkEqual(res_field_list_without_ts, expect_list[1])
for i in range(len(res_type_list)):
tdSql.checkEqual(res_type_list[i], expect_list[2][i])
- # tdSql.checkEqual(res_type_list, expect_list[2])
def initCheckCase(self):
"""
@@ -328,19 +332,16 @@ class TDTestCase:
binary_symbols = '\"abcd`~!@#$%^&*()_-{[}]|:;<.>?lfjal"\'\'"\"'
'''
tdCom.cleanTb()
- binary_symbols = '"aaa"'
- # binary_symbols = '"abcd`~!@#$%^&*()_-{[}]|:;<.>?lfjal"'
+ binary_symbols = '"abcd`~!@#$%^&*()_-{[}]|:;<.>?lfjal"'
nchar_symbols = f'L{binary_symbols}'
input_sql1, stb_name1 = self.genFullTypeSql(value=binary_symbols, t7=binary_symbols, t8=nchar_symbols)
-
- # input_sql2, stb_name2 = self.genFullTypeSql(value=nchar_symbols, t7=binary_symbols, t8=nchar_symbols)
+ input_sql2, stb_name2 = self.genFullTypeSql(value=nchar_symbols, t7=binary_symbols, t8=nchar_symbols)
self.resCmp(input_sql1, stb_name1)
- # self.resCmp(input_sql2, stb_name2)
+ self.resCmp(input_sql2, stb_name2)
def tsCheckCase(self):
"""
test ts list --> ["1626006833639000000ns", "1626006833639019us", "1626006833640ms", "1626006834s", "1626006822639022"]
- # ! us级时间戳都为0时,数据库中查询显示,但python接口拿到的结果不显示 .000000的情况请确认,目前修改时间处理代码可以通过
"""
tdCom.cleanTb()
ts_list = ["1626006833639000000ns", "1626006833639019us", "1626006833640ms", "1626006834s", "1626006822639022", 0]
@@ -393,9 +394,10 @@ class TDTestCase:
tdCom.cleanTb()
try:
self._conn.insert_telnet_lines([input_sql])
- except LinesError:
- pass
-
+ raise Exception("should not reach here")
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
+
def idIllegalNameCheckCase(self):
"""
test illegal id name
@@ -407,8 +409,9 @@ class TDTestCase:
input_sql = self.genFullTypeSql(tb_name=f"\"aaa{i}bbb\"")[0]
try:
self._conn.insert_telnet_lines([input_sql])
- except LinesError:
- pass
+ raise Exception("should not reach here")
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
def idStartWithNumCheckCase(self):
"""
@@ -418,8 +421,9 @@ class TDTestCase:
input_sql = self.genFullTypeSql(tb_name=f"\"1aaabbb\"")[0]
try:
self._conn.insert_telnet_lines([input_sql])
- except LinesError:
- pass
+ raise Exception("should not reach here")
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
def nowTsCheckCase(self):
"""
@@ -429,8 +433,9 @@ class TDTestCase:
input_sql = self.genFullTypeSql(ts="now")[0]
try:
self._conn.insert_telnet_lines([input_sql])
- except LinesError:
- pass
+ raise Exception("should not reach here")
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
def dateFormatTsCheckCase(self):
"""
@@ -440,8 +445,9 @@ class TDTestCase:
input_sql = self.genFullTypeSql(ts="2021-07-21\ 19:01:46.920")[0]
try:
self._conn.insert_telnet_lines([input_sql])
- except LinesError:
- pass
+ raise Exception("should not reach here")
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
def illegalTsCheckCase(self):
"""
@@ -451,8 +457,9 @@ class TDTestCase:
input_sql = self.genFullTypeSql(ts="16260068336390us19")[0]
try:
self._conn.insert_telnet_lines([input_sql])
- except LinesError:
- pass
+ raise Exception("should not reach here")
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
def tagValueLengthCheckCase(self):
"""
@@ -467,8 +474,9 @@ class TDTestCase:
input_sql = self.genFullTypeSql(t1=t1)[0]
try:
self._conn.insert_telnet_lines([input_sql])
- except LinesError:
- pass
+ raise Exception("should not reach here")
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
#i16
for t2 in ["-32767i16", "32767i16"]:
@@ -478,8 +486,9 @@ class TDTestCase:
input_sql = self.genFullTypeSql(t2=t2)[0]
try:
self._conn.insert_telnet_lines([input_sql])
- except LinesError:
- pass
+ raise Exception("should not reach here")
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
#i32
for t3 in ["-2147483647i32", "2147483647i32"]:
@@ -489,8 +498,9 @@ class TDTestCase:
input_sql = self.genFullTypeSql(t3=t3)[0]
try:
self._conn.insert_telnet_lines([input_sql])
- except LinesError:
- pass
+ raise Exception("should not reach here")
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
#i64
for t4 in ["-9223372036854775807i64", "9223372036854775807i64"]:
@@ -500,8 +510,9 @@ class TDTestCase:
input_sql = self.genFullTypeSql(t4=t4)[0]
try:
self._conn.insert_telnet_lines([input_sql])
- except LinesError:
- pass
+ raise Exception("should not reach here")
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
# f32
for t5 in [f"{-3.4028234663852885981170418348451692544*(10**38)}f32", f"{3.4028234663852885981170418348451692544*(10**38)}f32"]:
@@ -513,7 +524,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
@@ -527,32 +538,31 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
# binary
stb_name = tdCom.getLongName(7, "letters")
- input_sql = f'{stb_name} 1626006833639000000ns t t0=t,t1="{tdCom.getLongName(16374, "letters")}"'
+ input_sql = f'{stb_name} 1626006833639000000ns t t0=t t1="{tdCom.getLongName(16374, "letters")}"'
self._conn.insert_telnet_lines([input_sql])
- input_sql = f'{stb_name} 1626006833639000000ns t t0=t,t1="{tdCom.getLongName(16375, "letters")}"'
+ input_sql = f'{stb_name} 1626006833639000000ns t t0=t t1="{tdCom.getLongName(16375, "letters")}"'
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
- pass
-
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
# nchar
# * legal nchar could not be larger than 16374/4
stb_name = tdCom.getLongName(7, "letters")
- input_sql = f'{stb_name} 1626006833639000000ns t t0=t,t1=L"{tdCom.getLongName(4093, "letters")}"'
+ input_sql = f'{stb_name} 1626006833639000000ns t t0=t t1=L"{tdCom.getLongName(4093, "letters")}"'
self._conn.insert_telnet_lines([input_sql])
- input_sql = f'{stb_name} 1626006833639000000ns t t0=t,t1=L"{tdCom.getLongName(4094, "letters")}"'
+ input_sql = f'{stb_name} 1626006833639000000ns t t0=t t1=L"{tdCom.getLongName(4094, "letters")}"'
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def colValueLengthCheckCase(self):
@@ -570,7 +580,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
# i16
tdCom.cleanTb()
@@ -583,7 +593,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
# i32
@@ -597,7 +607,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
# i64
@@ -611,7 +621,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
# f32
@@ -626,7 +636,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
# f64
@@ -641,7 +651,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
# # binary
@@ -655,7 +665,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
# nchar
@@ -670,7 +680,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def tagColIllegalValueCheckCase(self):
@@ -681,18 +691,10 @@ class TDTestCase:
tdCom.cleanTb()
# bool
for i in ["TrUe", "tRue", "trUe", "truE", "FalsE", "fAlse", "faLse", "falSe", "falsE"]:
- input_sql1 = self.genFullTypeSql(t0=i)[0]
- try:
- self._conn.insert_telnet_lines([input_sql1])
- raise Exception("should not reach here")
- except LinesError as err:
- tdSql.checkNotEqual(err.errno, 0)
- input_sql2 = self.genFullTypeSql(value=i)[0]
- try:
- self._conn.insert_telnet_lines([input_sql2])
- raise Exception("should not reach here")
- except LinesError as err:
- tdSql.checkNotEqual(err.errno, 0)
+ input_sql1, stb_name = self.genFullTypeSql(t0=i)
+ self.resCmp(input_sql1, stb_name)
+ input_sql2, stb_name = self.genFullTypeSql(value=i)
+ self.resCmp(input_sql2, stb_name)
# i8 i16 i32 i64 f32 f64
for input_sql in [
@@ -706,7 +708,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
# check binary and nchar blank
@@ -717,17 +719,39 @@ class TDTestCase:
for input_sql in [input_sql1, input_sql2, input_sql3, input_sql4]:
try:
self._conn.insert_telnet_lines([input_sql])
- except LinesError as err:
- pass
+ raise Exception("should not reach here")
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
# check accepted binary and nchar symbols
# # * ~!@#$¥%^&*()-+={}|[]、「」:;
for symbol in list('~!@#$¥%^&*()-+={}|[]、「」:;'):
input_sql1 = f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns "abc{symbol}aaa" t0=t'
- input_sql2 = f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns t t0=t,t1="abc{symbol}aaa"'
+ input_sql2 = f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns t t0=t t1="abc{symbol}aaa"'
self._conn.insert_telnet_lines([input_sql1])
self._conn.insert_telnet_lines([input_sql2])
-
+
+ def blankCheckCase(self):
+ '''
+ check handling of blanks in metric names, timestamps, values and tags
+ '''
+ tdCom.cleanTb()
+ input_sql_list = [f'{tdCom.getLongName(7, "letters")} {tdCom.getLongName(7, "letters")} 1626006833639000000ns "abcaaa" t0=t',
+ f'{tdCom.getLongName(7, "letters")} 16260068336 39000000ns L"bcdaaa" t1=f',
+ f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns t t0="abc aaa"',
+ f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns t t0=L"abc aaa"',
+ f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns "abc aaa" t0=L"abcaaa"',
+ f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns L"abc aaa" t0=L"abcaaa"',
+ f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns L"abaaa" t0=L"abcaaa1"',
+ f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns L"abaaa" t0=L"abcaaa2"',
+ f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns L"abaaa" t0=t t1="abc t2="taa""',
+ f'{tdCom.getLongName(7, "letters")} 1626006833639000000ns L"abaaa" t0=L"abcaaa3"']
+ for input_sql in input_sql_list:
+ try:
+ self._conn.insert_telnet_lines([input_sql])
+ raise Exception("should not reach here")
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
def duplicateIdTagColInsertCheckCase(self):
"""
@@ -738,7 +762,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql_id])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
input_sql = self.genFullTypeSql()[0]
@@ -746,7 +770,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql_tag])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
##### stb exist #####
@@ -762,7 +786,6 @@ class TDTestCase:
self.resCmp(input_sql, stb_name, condition='where tbname like "t_%"')
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(2)
- # TODO cover other case
def duplicateInsertExistCheckCase(self):
"""
@@ -857,21 +880,21 @@ class TDTestCase:
stb_name = tdCom.getLongName(7, "letters")
tb_name = f'{stb_name}_1'
- input_sql = f'{stb_name} 1626006833639000000ns f id="{tb_name}",t0=t'
+ input_sql = f'{stb_name} 1626006833639000000ns f id="{tb_name}" t0=t'
self._conn.insert_telnet_lines([input_sql])
# * every binary and nchar must be length+2, so here is two tag, max length could not larger than 16384-2*2
- input_sql = f'{stb_name} 1626006833639000000ns f t0=t,t1="{tdCom.getLongName(16374, "letters")}",t2="{tdCom.getLongName(5, "letters")}"'
+ input_sql = f'{stb_name} 1626006833639000000ns f t0=t t1="{tdCom.getLongName(16374, "letters")}" t2="{tdCom.getLongName(5, "letters")}"'
self._conn.insert_telnet_lines([input_sql])
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(2)
- input_sql = f'{stb_name} 1626006833639000000ns f t0=t,t1="{tdCom.getLongName(16374, "letters")}",t2="{tdCom.getLongName(6, "letters")}"'
+ input_sql = f'{stb_name} 1626006833639000000ns f t0=t t1="{tdCom.getLongName(16374, "letters")}" t2="{tdCom.getLongName(6, "letters")}"'
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError:
- pass
+ except TelnetLinesError as err:
+ tdSql.checkNotEqual(err.errno, 0)
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(2)
@@ -883,19 +906,19 @@ class TDTestCase:
tdCom.cleanTb()
stb_name = tdCom.getLongName(7, "letters")
tb_name = f'{stb_name}_1'
- input_sql = f'{stb_name} 1626006833639000000ns f id="{tb_name}",t0=t'
+ input_sql = f'{stb_name} 1626006833639000000ns f id="{tb_name}" t0=t'
self._conn.insert_telnet_lines([input_sql])
# * legal nchar could not be larger than 16374/4
- input_sql = f'{stb_name} 1626006833639000000ns f t0=t,t1=L"{tdCom.getLongName(4093, "letters")}",t2=L"{tdCom.getLongName(1, "letters")}"'
+ input_sql = f'{stb_name} 1626006833639000000ns f t0=t t1=L"{tdCom.getLongName(4093, "letters")}" t2=L"{tdCom.getLongName(1, "letters")}"'
self._conn.insert_telnet_lines([input_sql])
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(2)
- input_sql = f'{stb_name} 1626006833639000000ns f t0=t,t1=L"{tdCom.getLongName(4093, "letters")}",t2=L"{tdCom.getLongName(2, "letters")}"'
+ input_sql = f'{stb_name} 1626006833639000000ns f t0=t t1=L"{tdCom.getLongName(4093, "letters")}" t2=L"{tdCom.getLongName(2, "letters")}"'
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
tdSql.query(f"select * from {stb_name}")
tdSql.checkRows(2)
@@ -908,15 +931,15 @@ class TDTestCase:
stb_name = tdCom.getLongName(8, "letters")
tdSql.execute(f'create stable {stb_name}(ts timestamp, f int) tags(t1 bigint)')
- lines = ["st123456 1626006833639000000ns 1i64 t1=3i64,t2=4f64,t3=\"t3\"",
- "st123456 1626006833640000000ns 2i64 t1=4i64,t3=\"t4\",t2=5f64,t4=5f64",
- f'{stb_name} 1626056811823316532ns 3i64 t2=5f64,t3=L\"ste\"',
- "stf567890 1626006933640000000ns 4i64 t1=4i64,t3=\"t4\",t2=5f64,t4=5f64",
- "st123456 1626006833642000000ns 5i64 t1=4i64,t2=5f64,t3=\"t4\"",
- f'{stb_name} 1626056811843316532ns 6i64 t2=5f64,t3=L\"ste2\"',
- f'{stb_name} 1626056812843316532ns 7i64 t2=5f64,t3=L\"ste2\"',
- "st123456 1626006933640000000ns 8i64 t1=4i64,t3=\"t4\",t2=5f64,t4=5f64",
- "st123456 1626006933641000000ns 9i64 t1=4i64,t3=\"t4\",t2=5f64,t4=5f64"
+ lines = ["st123456 1626006833639000000ns 1i64 t1=3i64 t2=4f64 t3=\"t3\"",
+ "st123456 1626006833640000000ns 2i64 t1=4i64 t3=\"t4\" t2=5f64 t4=5f64",
+ f'{stb_name} 1626056811823316532ns 3i64 t2=5f64 t3=L\"ste\"',
+ "stf567890 1626006933640000000ns 4i64 t1=4i64 t3=\"t4\" t2=5f64 t4=5f64",
+ "st123456 1626006833642000000ns 5i64 t1=4i64 t2=5f64 t3=\"t4\"",
+ f'{stb_name} 1626056811843316532ns 6i64 t2=5f64 t3=L\"ste2\"',
+ f'{stb_name} 1626056812843316532ns 7i64 t2=5f64 t3=L\"ste2\"',
+ "st123456 1626006933640000000ns 8i64 t1=4i64 t3=\"t4\" t2=5f64 t4=5f64",
+ "st123456 1626006933641000000ns 9i64 t1=4i64 t3=\"t4\" t2=5f64 t4=5f64"
]
self._conn.insert_telnet_lines(lines)
tdSql.query('show stables')
@@ -939,7 +962,7 @@ class TDTestCase:
sql_list.append(input_sql)
self._conn.insert_telnet_lines(sql_list)
tdSql.query('show tables')
- tdSql.checkRows(1000)
+ tdSql.checkRows(count)
def batchErrorInsertCheckCase(self):
"""
@@ -947,12 +970,12 @@ class TDTestCase:
"""
tdCom.cleanTb()
stb_name = tdCom.getLongName(8, "letters")
- lines = ["st123456 1626006833639000000ns 3i64 t1=3i64,t2=4f64,t3=\"t3\"",
- f"{stb_name} 1626056811823316532ns tRue t2=5f64,t3=L\"ste\""]
+ lines = ["st123456 1626006833639000000ns 3i 64 t1=3i64 t2=4f64 t3=\"t3\"",
+ f"{stb_name} 1626056811823316532ns tRue t2=5f64 t3=L\"ste\""]
try:
self._conn.insert_telnet_lines(lines)
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def multiColsInsertCheckCase(self):
@@ -964,7 +987,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def blankColInsertCheckCase(self):
@@ -976,7 +999,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def blankTagInsertCheckCase(self):
@@ -988,7 +1011,7 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def chineseCheckCase(self):
@@ -1008,24 +1031,45 @@ class TDTestCase:
try:
self._conn.insert_telnet_lines([input_sql])
raise Exception("should not reach here")
- except LinesError as err:
+ except TelnetLinesError as err:
tdSql.checkNotEqual(err.errno, 0)
def errorTypeCheckCase(self):
stb_name = tdCom.getLongName(8, "letters")
- input_sql_list = [f'{stb_name} 0 "hkgjiwdj" t0=f,t1=127I8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"', \
- f'{stb_name} 0 "hkgjiwdj" t0=f,t1=127i8,t2=32767I16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"', \
- f'{stb_name} 0 "hkgjiwdj" t0=f,t1=127i8,t2=32767i16,t3=2147483647I32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"', \
- f'{stb_name} 0 "hkgjiwdj" t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807I64,t5=11.12345f32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"', \
- f'{stb_name} 0 "hkgjiwdj" t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345F32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"', \
- f'{stb_name} 0 "hkgjiwdj" t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789F64,t7="vozamcts",t8=L"ncharTagValue"', \
- f'{stb_name} 1626006833639000000NS "hkgjiwdj" t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"']
+ input_sql_list = [f'{stb_name}_1 1626006833639000000Ns "hkgjiwdj" t0=f t1=127I8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_2 1626006833639000001nS "hkgjiwdj" t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_3 1626006833639000002NS "hkgjiwdj" t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_4 1626006833639019Us "hkgjiwdj" t0=f t1=127i8 t2=32767I16 t3=2147483647I32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_5 1626006833639018uS "hkgjiwdj" t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807I64 t5=11.12345f32 t6=22.123456789f64 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_6 1626006833639017US "hkgjiwdj" t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807I64 t5=11.12345f32 t6=22.123456789f64 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_7 1626006833640Ms "hkgjiwdj" t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789F64 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_8 1626006833641mS "hkgjiwdj" t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_9 1626006833642MS "hkgjiwdj" t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_10 1626006834S "hkgjiwdj" t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="vozamcts" t8=l"ncharTagValue"', \
+ f'{stb_name}_11 1626006834S "hkgjiwdj" t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="vozamcts" t8=L"ncharTagValue"']
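+ # mixed-case type suffixes (I8, I16, F64, ...) and timestamp precision suffixes (Ns, uS, Ms, S, ...) should now be accepted, so each line is inserted and verified with resCmp instead of expecting an error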
for input_sql in input_sql_list:
- try:
- self._conn.insert_telnet_lines([input_sql])
- raise Exception("should not reach here")
- except LinesError as err:
- tdSql.checkNotEqual(err.errno, 0)
+ stb_name = input_sql.split(" ")[0]
+ self.resCmp(input_sql, stb_name)
+
+ def pointTransCheckCase(self):
+ """
+ a "." in the metric name is translated to "_"
+ """
+ tdCom.cleanTb()
+ input_sql = self.genFullTypeSql(point_trans_tag=True)[0]
+ stb_name = input_sql.split(" ")[0].replace(".", "_")
+ self.resCmp(input_sql, stb_name)
+
+ def defaultTypeCheckCase(self):
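+ # some values and tags below omit the type suffix to exercise the parser's default type handling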
+ stb_name = tdCom.getLongName(8, "letters")
+ input_sql_list = [f'{stb_name}_1 1626006833639000000Ns 9223372036854775807 t0=f t1=127 t2=32767i16 t3=2147483647i32 t4=9223372036854775807 t5=11.12345f32 t6=22.123456789f64 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_2 1626006834S 22.123456789 t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_3 1626006834S 10e5 t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=10e5 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_4 1626006834S 10.0e5 t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=10.0e5 t7="vozamcts" t8=L"ncharTagValue"', \
+ f'{stb_name}_5 1626006834S -10.0e5 t0=f t1=127i8 t2=32767I16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=-10.0e5 t7="vozamcts" t8=L"ncharTagValue"']
+ for input_sql in input_sql_list:
+ stb_name = input_sql.split(" ")[0]
+ self.resCmp(input_sql, stb_name)
def genSqlList(self, count=5, stb_name="", tb_name=""):
"""
@@ -1166,11 +1210,11 @@ class TDTestCase:
tdCom.cleanTb()
input_sql, stb_name = self.genFullTypeSql(value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
- s_stb_d_tb_m_tag_list = [(f'{stb_name} 1626006833639000000ns "omfdhyom" t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'yzwswz'), \
- (f'{stb_name} 1626006833639000000ns "vqowydbc" t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'yzwswz'), \
- (f'{stb_name} 1626006833639000000ns "plgkckpv" t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'yzwswz'), \
- (f'{stb_name} 1626006833639000000ns "cujyqvlj" t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'yzwswz'), \
- (f'{stb_name} 1626006833639000000ns "twjxisat" t0=T,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'yzwswz')]
+ s_stb_d_tb_m_tag_list = [(f'{stb_name} 1626006833639000000ns "omfdhyom" t0=F t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64', 'yzwswz'), \
+ (f'{stb_name} 1626006833639000000ns "vqowydbc" t0=F t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64', 'yzwswz'), \
+ (f'{stb_name} 1626006833639000000ns "plgkckpv" t0=F t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64', 'yzwswz'), \
+ (f'{stb_name} 1626006833639000000ns "cujyqvlj" t0=F t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64', 'yzwswz'), \
+ (f'{stb_name} 1626006833639000000ns "twjxisat" t0=T t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64', 'yzwswz')]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_d_tb_m_tag_list))
tdSql.query(f"show tables;")
tdSql.checkRows(3)
@@ -1195,11 +1239,11 @@ class TDTestCase:
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name, value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
- s_stb_s_tb_d_ts_list = [(f'{stb_name} 0 "hkgjiwdj" id="{tb_name}",t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="vozamcts",t8=L"ncharTagValue"', 'dwpthv'), \
- (f'{stb_name} 0 "rljjrrul" id="{tb_name}",t0=False,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="bmcanhbs",t8=L"ncharTagValue"', 'dwpthv'), \
- (f'{stb_name} 0 "basanglx" id="{tb_name}",t0=False,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="enqkyvmb",t8=L"ncharTagValue"', 'dwpthv'), \
- (f'{stb_name} 0 "clsajzpp" id="{tb_name}",t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="eivaegjk",t8=L"ncharTagValue"', 'dwpthv'), \
- (f'{stb_name} 0 "jitwseso" id="{tb_name}",t0=T,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="yhlwkddq",t8=L"ncharTagValue"', 'dwpthv')]
+ s_stb_s_tb_d_ts_list = [(f'{stb_name} 0 "hkgjiwdj" id="{tb_name}" t0=f t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="vozamcts" t8=L"ncharTagValue"', 'dwpthv'), \
+ (f'{stb_name} 0 "rljjrrul" id="{tb_name}" t0=False t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="bmcanhbs" t8=L"ncharTagValue"', 'dwpthv'), \
+ (f'{stb_name} 0 "basanglx" id="{tb_name}" t0=False t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="enqkyvmb" t8=L"ncharTagValue"', 'dwpthv'), \
+ (f'{stb_name} 0 "clsajzpp" id="{tb_name}" t0=F t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="eivaegjk" t8=L"ncharTagValue"', 'dwpthv'), \
+ (f'{stb_name} 0 "jitwseso" id="{tb_name}" t0=T t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="yhlwkddq" t8=L"ncharTagValue"', 'dwpthv')]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_s_tb_d_ts_list))
tdSql.query(f"show tables;")
tdSql.checkRows(1)
@@ -1231,11 +1275,11 @@ class TDTestCase:
tb_name = tdCom.getLongName(7, "letters")
input_sql, stb_name = self.genFullTypeSql(tb_name=tb_name, value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
- s_stb_s_tb_d_ts_a_tag_list = [(f'{stb_name} 0 "clummqfy" id="{tb_name}",t0=False,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="hpxzrdiw",t8=L"ncharTagValue",t11=127i8,t10=L"ncharTagValue"', 'bokaxl'), \
- (f'{stb_name} 0 "yqeztggb" id="{tb_name}",t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="gdtblmrc",t8=L"ncharTagValue",t11=127i8,t10=L"ncharTagValue"', 'bokaxl'), \
- (f'{stb_name} 0 "gbkinqdk" id="{tb_name}",t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="iqniuvco",t8=L"ncharTagValue",t11=127i8,t10=L"ncharTagValue"', 'bokaxl'), \
- (f'{stb_name} 0 "ldxxejbd" id="{tb_name}",t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="vxkipags",t8=L"ncharTagValue",t11=127i8,t10=L"ncharTagValue"', 'bokaxl'), \
- (f'{stb_name} 0 "tlvzwjes" id="{tb_name}",t0=true,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="enwrlrtj",t8=L"ncharTagValue",t11=127i8,t10=L"ncharTagValue"', 'bokaxl')]
+ s_stb_s_tb_d_ts_a_tag_list = [(f'{stb_name} 0 "clummqfy" id="{tb_name}" t0=False t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="hpxzrdiw" t8=L"ncharTagValue" t11=127i8 t10=L"ncharTagValue"', 'bokaxl'), \
+ (f'{stb_name} 0 "yqeztggb" id="{tb_name}" t0=F t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="gdtblmrc" t8=L"ncharTagValue" t11=127i8 t10=L"ncharTagValue"', 'bokaxl'), \
+ (f'{stb_name} 0 "gbkinqdk" id="{tb_name}" t0=f t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="iqniuvco" t8=L"ncharTagValue" t11=127i8 t10=L"ncharTagValue"', 'bokaxl'), \
+ (f'{stb_name} 0 "ldxxejbd" id="{tb_name}" t0=f t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="vxkipags" t8=L"ncharTagValue" t11=127i8 t10=L"ncharTagValue"', 'bokaxl'), \
+ (f'{stb_name} 0 "tlvzwjes" id="{tb_name}" t0=true t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64 t7="enwrlrtj" t8=L"ncharTagValue" t11=127i8 t10=L"ncharTagValue"', 'bokaxl')]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_s_tb_d_ts_a_tag_list))
tdSql.query(f"show tables;")
tdSql.checkRows(1)
@@ -1264,44 +1308,31 @@ class TDTestCase:
tdCom.cleanTb()
input_sql, stb_name = self.genFullTypeSql(value="\"binaryTagValue\"")
self.resCmp(input_sql, stb_name)
- s_stb_d_tb_d_ts_m_tag_list = [(f'{stb_name} 0 "mnpmtzul" t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'pcppkg'), \
- (f'{stb_name} 0 "zbvwckcd" t0=True,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'pcppkg'), \
- (f'{stb_name} 0 "vymcjfwc" t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'pcppkg'), \
- (f'{stb_name} 0 "laumkwfn" t0=False,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'pcppkg'), \
- (f'{stb_name} 0 "nyultzxr" t0=false,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64', 'pcppkg')]
+ s_stb_d_tb_d_ts_m_tag_list = [(f'{stb_name} 0 "mnpmtzul" t0=f t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64', 'pcppkg'), \
+ (f'{stb_name} 0 "zbvwckcd" t0=True t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64', 'pcppkg'), \
+ (f'{stb_name} 0 "vymcjfwc" t0=F t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64', 'pcppkg'), \
+ (f'{stb_name} 0 "laumkwfn" t0=False t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64', 'pcppkg'), \
+ (f'{stb_name} 0 "nyultzxr" t0=false t1=127i8 t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64', 'pcppkg')]
self.multiThreadRun(self.genMultiThreadSeq(s_stb_d_tb_d_ts_m_tag_list))
tdSql.query(f"show tables;")
tdSql.checkRows(3)
def test(self):
- # input_sql1 = "stb2_5 1626006833610ms 3f64 host=\"host0\",host2=L\"host2\""
- # input_sql2 = "rfasta,id=\"rfasta_1\",t0=true,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64 c0=True,c1=127i8,c2=32767i16,c3=2147483647i32,c4=9223372036854775807i64,c5=11.12345f32,c6=22.123456789f64 1626006933640000000ns"
try:
- input_sql = f'test_nchar 0 L"涛思数据" t0=f,t1=L"涛思数据",t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64'
+ input_sql = f'test_nchar 0 L"涛思数据" t0=f t1=L"涛思数据" t2=32767i16 t3=2147483647i32 t4=9223372036854775807i64 t5=11.12345f32 t6=22.123456789f64'
self._conn.insert_telnet_lines([input_sql])
- # input_sql, stb_name = self.genFullTypeSql()
- # self.resCmp(input_sql, stb_name)
- except LinesError as err:
+ except TelnetLinesError as err:
print(err.errno)
- # self._conn.insert_telnet_lines([input_sql2])
- # input_sql3 = f'abcd,id="cc¥Ec",t0=True,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="ndsfdrum",t8=L"ncharTagValue" c0=f,c1=127i8,c2=32767i16,c3=2147483647i32,c4=9223372036854775807i64,c5=11.12345f32,c6=22.123456789f64,c7="igwoehkm",c8=L"ncharColValue",c9=7u64 0'
- # print(input_sql3)
- # input_sql4 = 'hmemeb,id="kilrcrldgf",t0=F,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7="fysodjql",t8=L"ncharTagValue" c0=True,c1=127i8,c2=32767i16,c3=2147483647i32,c4=9223372036854775807i64,c5=11.12345f32,c6=22.123456789f64,c7="waszbfvc",c8=L"ncharColValue",c9=7u64 0'
- # code = self._conn.insert_telnet_lines([input_sql3])
- # print(code)
- # self._conn.insert_telnet_lines([input_sql4])
def runAll(self):
self.initCheckCase()
self.boolTypeCheckCase()
- # ! leave a bug
- #self.symbolsCheckCase()
+ self.symbolsCheckCase()
self.tsCheckCase()
self.idSeqCheckCase()
self.idUpperCheckCase()
self.noIdCheckCase()
self.maxColTagCheckCase()
-
self.idIllegalNameCheckCase()
self.idStartWithNumCheckCase()
self.nowTsCheckCase()
@@ -1310,6 +1341,7 @@ class TDTestCase:
self.tagValueLengthCheckCase()
self.colValueLengthCheckCase()
self.tagColIllegalValueCheckCase()
+ self.blankCheckCase()
self.duplicateIdTagColInsertCheckCase()
self.noIdStbExistCheckCase()
self.duplicateInsertExistCheckCase()
@@ -1319,9 +1351,8 @@ class TDTestCase:
self.tagMd5Check()
self.tagColBinaryMaxLengthCheckCase()
self.tagColNcharMaxLengthCheckCase()
-
self.batchInsertCheckCase()
- self.multiInsertCheckCase(1000)
+ self.multiInsertCheckCase(10)
self.batchErrorInsertCheckCase()
self.multiColsInsertCheckCase()
self.blankColInsertCheckCase()
@@ -1329,32 +1360,31 @@ class TDTestCase:
self.chineseCheckCase()
self.multiFieldCheckCase()
self.errorTypeCheckCase()
- # MultiThreads
- # self.stbInsertMultiThreadCheckCase()
- # self.sStbStbDdataInsertMultiThreadCheckCase()
- # self.sStbStbDdataAtInsertMultiThreadCheckCase()
- # self.sStbStbDdataMtInsertMultiThreadCheckCase()
- # self.sStbDtbDdataInsertMultiThreadCheckCase()
- # self.sStbDtbDdataMtInsertMultiThreadCheckCase()
- # self.sStbDtbDdataAtInsertMultiThreadCheckCase()
- # self.sStbStbDdataDtsInsertMultiThreadCheckCase()
- # self.sStbStbDdataDtsMtInsertMultiThreadCheckCase()
- # self.sStbStbDdataDtsAtInsertMultiThreadCheckCase()
- # self.sStbDtbDdataDtsInsertMultiThreadCheckCase()
- # self.sStbDtbDdataDtsMtInsertMultiThreadCheckCase()
+ self.pointTransCheckCase()
+ self.defaultTypeCheckCase()
+ # MultiThreads
+ self.stbInsertMultiThreadCheckCase()
+ self.sStbStbDdataInsertMultiThreadCheckCase()
+ self.sStbStbDdataAtInsertMultiThreadCheckCase()
+ self.sStbStbDdataMtInsertMultiThreadCheckCase()
+ self.sStbDtbDdataInsertMultiThreadCheckCase()
+ self.sStbDtbDdataMtInsertMultiThreadCheckCase()
+ self.sStbDtbDdataAtInsertMultiThreadCheckCase()
+ self.sStbStbDdataDtsInsertMultiThreadCheckCase()
+ self.sStbStbDdataDtsMtInsertMultiThreadCheckCase()
+ self.sStbStbDdataDtsAtInsertMultiThreadCheckCase()
+ self.sStbDtbDdataDtsInsertMultiThreadCheckCase()
+ self.sStbDtbDdataDtsMtInsertMultiThreadCheckCase()
def run(self):
print("running {}".format(__file__))
self.createDb()
try:
- # self.symbolsCheckCase()
self.runAll()
# self.test()
except Exception as err:
print(''.join(traceback.format_exception(None, err, err.__traceback__)))
raise err
- # self.tagColIllegalValueCheckCase()
- # self.test()
def stop(self):
tdSql.close()
diff --git a/tests/pytest/query/bug6586.py b/tests/pytest/query/bug6586.py
new file mode 100644
index 0000000000000000000000000000000000000000..87d7199dd06a42eed1345311bdfb833ba4cfe93a
--- /dev/null
+++ b/tests/pytest/query/bug6586.py
@@ -0,0 +1,42 @@
+###################################################################
+# Copyright (c) 2016 by TAOS Technologies, Inc.
+# All rights reserved.
+#
+# This file is proprietary and confidential to TAOS Technologies.
+# No part of this file may be reproduced, stored, transmitted,
+# disclosed or used in any form or by any means other than as
+# expressly provided by the written permission from Jianhui Tao
+#
+###################################################################
+
+# -*- coding: utf-8 -*-
+
+from util.log import *
+from util.cases import *
+from util.sql import *
+import datetime
+
+class TDTestCase:
+ def init(self, conn, logSql):
+ tdLog.debug("start to execute %s" % __file__)
+ tdSql.init(conn.cursor(), logSql)
+
+ def run(self):
+ # TD-6586 Binary type value return None with python connector
+ # PR: https://github.com/taosdata/TDengine/pull/7913/files
+
+ tdSql.execute("create database if not exists binary_convertion")
+ tdSql.execute("use binary_convertion")
+ tdSql.execute("create stable stb (ts timestamp,value binary(3)) tags (t0 bool,t1 tinyint,t2 smallint,t3 int,t4 bigint,t5 float,t6 double,t7 binary(3),t8 nchar(3))")
+ tdSql.execute("create table if not exists tb1 using stb(t0,t1,t2,t3,t4,t5,t6,t7,t8) tags (1,127,32767,2147483647,9223372036854775807,11.123450279,22.123456789,'aaa','aaa')")
+ tdSql.execute("insert into tb1 (ts,value) values (1600000000000, \"aaa\")")
+ res = tdSql.query('select * from stb', True)
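+ # 1600000000000 ms is 2020-09-13 20:26:40 assuming the default UTC+8 timezone; the binary column should come back as the string 'aaa' rather than None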
+ expected_res = [(datetime.datetime(2020, 9, 13, 20, 26, 40), 'aaa', True, 127, 32767, 2147483647, 9223372036854775807, 11.12345027923584, 22.123456789, 'aaa', 'aaa')]
+ tdSql.checkEqual(res, expected_res)
+
+ def stop(self):
+ tdSql.close()
+ tdLog.success("%s successfully executed" % __file__)
+
+
+tdCases.addWindows(__file__, TDTestCase())
+tdCases.addLinux(__file__, TDTestCase())
diff --git a/tests/pytest/query/queryCnameDisplay.py b/tests/pytest/query/queryCnameDisplay.py
index 8864c0e37621c72ad39fb4249749244b1fbe8367..66a7f85120fe13293996d1bd3153b6fe9b1d6a72 100644
--- a/tests/pytest/query/queryCnameDisplay.py
+++ b/tests/pytest/query/queryCnameDisplay.py
@@ -49,10 +49,11 @@ class TDTestCase:
# select as cname with cname_list
sql_seq = f'select count(ts) as {cname_list[0]}, sum(pi1) as {cname_list[1]}, avg(pi2) as {cname_list[2]}, count(pf1) as {cname_list[3]}, count(pf2) as {cname_list[4]}, count(ps1) as {cname_list[5]}, min(pi3) as {cname_list[6]}, max(pi4) as {cname_list[7]}, count(pb1) as {cname_list[8]}, count(ps2) as {cname_list[9]} from regular_table_cname_check'
- sql_seq_no_as = sql_seq.replace('as ', '')
+ sql_seq_no_as = sql_seq.replace(' as ', ' ')
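+ # strip ' as ' with surrounding spaces so a column name ending in 'as' is not mangled, and keep a single space between the expression and its name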
+ print(sql_seq)
+ print(sql_seq_no_as)
res = tdSql.getColNameList(sql_seq)
res_no_as = tdSql.getColNameList(sql_seq_no_as)
-
# cname[1] > 64, it is expected to be equal to 64
cname_list_1_expected = cname_list[1][:-1]
cname_list[1] = cname_list_1_expected
@@ -79,7 +80,7 @@ class TDTestCase:
# select as cname with cname_list
sql_seq = f'select count(ts) as {cname_list[0]}, sum(pi1) as {cname_list[1]}, avg(pi2) as {cname_list[2]}, count(pf1) as {cname_list[3]}, count(pf2) as {cname_list[4]}, count(ps1) as {cname_list[5]}, min(pi3) as {cname_list[6]}, max(pi4) as {cname_list[7]}, count(pb1) as {cname_list[8]}, count(ps2) as {cname_list[9]}, count(si1) as {cname_list[10]}, count(si2) as {cname_list[11]}, count(sf1) as {cname_list[12]}, count(sf2) as {cname_list[13]}, count(ss1) as {cname_list[14]}, count(si3) as {cname_list[15]}, count(si4) as {cname_list[16]}, count(sb1) as {cname_list[17]}, count(ss2) as {cname_list[18]} from super_table_cname_check'
- sql_seq_no_as = sql_seq.replace('as ', '')
+ sql_seq_no_as = sql_seq.replace(' as ', ' ')
res = tdSql.getColNameList(sql_seq)
res_no_as = tdSql.getColNameList(sql_seq_no_as)
diff --git a/tests/pytest/tools/taosdumpTest.py b/tests/pytest/tools/taosdumpTest.py
index 0dfc42f331b1a1c59d71268985d6a72d4d652856..628617e27b4af8695b96961441c6b135bdb15416 100644
--- a/tests/pytest/tools/taosdumpTest.py
+++ b/tests/pytest/tools/taosdumpTest.py
@@ -55,7 +55,7 @@ class TDTestCase:
if not os.path.exists("./taosdumptest/tmp1"):
os.makedirs("./taosdumptest/tmp1")
else:
- print("目录存在")
+ print("directory exists")
if not os.path.exists("./taosdumptest/tmp2"):
os.makedirs("./taosdumptest/tmp2")
diff --git a/tests/pytest/util/common.py b/tests/pytest/util/common.py
index 35abc4802f9de2080a6b6a166daf833c9cf04578..adf9026e7808dd1fd6715db26f70db56ce339cd5 100644
--- a/tests/pytest/util/common.py
+++ b/tests/pytest/util/common.py
@@ -14,7 +14,7 @@
import random
import string
from util.sql import tdSql
-
+from util.dnodes import tdDnodes
class TDCom:
def init(self, conn, logSql):
tdSql.init(conn.cursor(), logSql)
@@ -47,6 +47,42 @@ class TDCom:
chars = ''.join(random.choice(string.ascii_letters.lower() + string.digits) for i in range(len))
return chars
+ def restartTaosd(self, index=1, db_name="db"):
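+ # stop the given dnode, restart it without the usual startup wait, then switch back to db_name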
+ tdDnodes.stop(index)
+ tdDnodes.startWithoutSleep(index)
+ tdSql.execute(f"use {db_name}")
+
+ def typeof(self, variate):
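+ # return the type of variate as a short string ("int", "str", "float", ...), or None if it is not one of the handled types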
+ v_type=None
+ if type(variate) is int:
+ v_type = "int"
+ elif type(variate) is str:
+ v_type = "str"
+ elif type(variate) is float:
+ v_type = "float"
+ elif type(variate) is bool:
+ v_type = "bool"
+ elif type(variate) is list:
+ v_type = "list"
+ elif type(variate) is tuple:
+ v_type = "tuple"
+ elif type(variate) is dict:
+ v_type = "dict"
+ elif type(variate) is set:
+ v_type = "set"
+ return v_type
+
+ def splitNumLetter(self, input_mix_str):
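+ # split a mixed string such as "10ms" into its digit part and letter part, dropping any whitespace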
+ nums, letters = "", ""
+ for i in input_mix_str:
+ if i.isdigit():
+ nums += i
+ elif i.isspace():
+ pass
+ else:
+ letters += i
+ return nums, letters
+
def close(self):
self.cursor.close()
diff --git a/tests/script/general/parser/regex.sim b/tests/script/general/parser/regex.sim
index eed36018d4c04ec5752e64105d025347982bfcb0..6d87e1cd7c6c6620eabb44e66195aab3cb177494 100644
--- a/tests/script/general/parser/regex.sim
+++ b/tests/script/general/parser/regex.sim
@@ -79,6 +79,23 @@ if $rows != 1 then
return -1
endi
+sql select c1b from $st_name where c1b match '\\.\\*'
+if $rows != 0 then
+ return -1
+endi
+
+sql select c1b from $st_name where c1b match '\\\\'
+if $rows != 0 then
+ return -1
+endi
+
+sql insert into $ct1_name values(now+3s, '\\this is engine')
+
+sql select c1b from $st_name where c1b match '\\'
+if $rows != 1 then
+ return -1
+endi
+
sql_error select c1b from $st_name where c1b match e;
sql_error select c1b from $st_name where c1b nmatch e;
diff --git a/tests/test-all.sh b/tests/test-all.sh
index eea623b27e482d67e0d3e94a27c7f4376449d556..dfd7f49178ac3d9b4fc5437181a10e9f846aabcf 100755
--- a/tests/test-all.sh
+++ b/tests/test-all.sh
@@ -11,15 +11,15 @@ tests_dir=`pwd`
IN_TDINTERNAL="community"
function stopTaosd {
- echo "Stop taosd"
+ echo "Stop taosd"
sudo systemctl stop taosd || echo 'no sudo or systemctl or stop fail'
PID=`ps -ef|grep -w taosd | grep -v grep | awk '{print $2}'`
- while [ -n "$PID" ]
- do
+ while [ -n "$PID" ]
+ do
pkill -TERM -x taosd
sleep 1
- PID=`ps -ef|grep -w taosd | grep -v grep | awk '{print $2}'`
- done
+ PID=`ps -ef|grep -w taosd | grep -v grep | awk '{print $2}'`
+ done
}
function dohavecore(){