Commit b9fea5f6 authored by S shenglian zhou

Merge branch 'develop' into feature/szhou/csum-sample-mavg

@@ -16,7 +16,7 @@ One of the modules of TDengine is the time-series database. Beyond that, to reduce development...
With TDengine, the total cost of ownership of a typical IoT, Internet of Vehicles, or Industrial Internet big data platform can be greatly reduced. However, it must be pointed out that, because it makes full use of the characteristics of IoT time-series data, TDengine cannot be used to process general-purpose data from web crawlers, microblogs, WeChat, e-commerce, ERP, CRM, and similar sources.
- ![TDengine技术生态图](page://images/eco_system.png)
+ ![TDengine技术生态图](../images/eco_system.png)
<center>Figure 1. TDengine technology ecosystem</center>
## <a class="anchor" id="scenes"></a>Overall Scenarios of TDengine
......
@@ -6,7 +6,7 @@
taosd consists of the rpc, dnode, vnode, tsdb, query, cq, sync, wal, mnode, http, monitor and other modules, as shown in the figure below:
- ![modules.png](page://images/architecture/modules.png)
+ ![modules.png](../../images/architecture/modules.png)
The startup entry of taosd is the dnode module; dnode then starts the other modules, including the optionally configured http and monitor modules. All messages exchanged between taosc and a dnode, or between dnodes, go through the rpc module; based on the type of each received message, the dnode module dispatches it to the vnode or mnode message queue, or consumes it itself. dnode worker threads consume the messages in those queues and hand them to mnode or vnode for processing. The modules are briefly described below.
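To make the dispatch just described easier to picture, the following is a minimal, schematic Python sketch of the same idea: messages routed by type into per-role queues that worker threads consume. It is an illustration only, not TDengine code; the queue names and message types are invented.

```python
import queue
import threading

# Illustrative per-role queues, mirroring the dnode dispatch description above.
queues = {"vnode": queue.Queue(), "mnode": queue.Queue()}

def dispatch(msg):
    """Route a message by type, or let the 'dnode' handle it directly."""
    kind = msg.get("type")
    if kind in ("submit", "query"):              # data-path messages go to a vnode queue
        queues["vnode"].put(msg)
    elif kind in ("create-db", "create-table"):  # metadata messages go to the mnode queue
        queues["mnode"].put(msg)
    else:
        print("handled by dnode itself:", msg)

def worker(name):
    while True:
        msg = queues[name].get()                 # blocking consume, like a dnode worker thread
        print(name, "processing", msg)
        queues[name].task_done()

for name in queues:
    threading.Thread(target=worker, args=(name,), daemon=True).start()

dispatch({"type": "submit", "rows": 1})
dispatch({"type": "create-db", "name": "demo"})
dispatch({"type": "status"})
for q in queues.values():
    q.join()
```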
@@ -41,13 +41,13 @@ The rpc module also provides data compression: if the number of bytes in a packet exceeds the system-configured...
Message consumption in taosd is controlled by dnode through read and write thread pools, which makes it the hub of the system. The structure diagram of this module is as follows:
- ![dnode.png](page://images/architecture/dnode.png)
+ ![dnode.png](../../images/architecture/dnode.png)
## VNODE Module
A vnode is an independent unit of data storage and query logic, but since one vnode can hold only one DB, there are no account, DB, or user concepts inside it. For better modularization, encapsulation, and future extension, it contains many sub-modules: TSDB for storage, Query for queries, sync for data replication, wal for the database log, cq (continuous query) for continuous queries, and event for event-triggered stream computing. These sub-modules interact only with the vnode module and have no call relationships with any other module. The module diagram is as follows:
- ![vnode.png](page://images/architecture/vnode.png)
+ ![vnode.png](../../images/architecture/vnode.png)
Downward, the vnode module interacts with dnodeVRead and dnodeVWrite; upward, it interacts with its sub-modules. Its main functions are:
......
@@ -90,7 +90,7 @@ TDengine uses a Master-Slave model for synchronization; compared with the popular RAFT consensus...
The detailed flow chart is as follows:
- ![replica-master.png](page://images/architecture/replica-master.png)
+ ![replica-master.png](../../images/architecture/replica-master.png)
The specific rules for choosing the Master are as follows:
@@ -105,7 +105,7 @@ TDengine uses a Master-Slave model for synchronization; compared with the popular RAFT consensus...
If vnode A is the master and vnode B is a slave, vnode A can accept write requests from clients while vnode B cannot. When vnode A receives a write request, it follows the procedure below:
- ![replica-forward.png](page://images/architecture/replica-forward.png)
+ ![replica-forward.png](../../images/architecture/replica-forward.png)
1. The application performs a basic validity check on the write request; if it passes, the request packet is tagged with a version number (version, monotonically increasing).
2. The application wraps the versioned write request in a WAL head and writes it into the WAL (Write Ahead Log).
@@ -140,7 +140,7 @@ TDengine uses a Master-Slave model for synchronization; compared with the popular RAFT consensus...
The whole data recovery process consists of two major steps: first restore the archived data (files), then restore the wal. The detailed flow is as follows:
- ![replica-restore.png](page://images/architecture/replica-restore.png)
+ ![replica-restore.png](../../images/architecture/replica-restore.png)
1. Over the already established TCP connection, send a sync req to the master node.
2. After receiving the sync req, the master, acting as a client, actively establishes a new TCP connection to vnode B dedicated to synchronization (syncFd).
......
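As a rough aid to the forwarding flow above, here is a hedged Python sketch that models the master-side steps (validity check, monotonically increasing version, WAL append, forward to slaves, confirm when enough replicas acknowledge). It is a schematic model of the described protocol under assumed quorum semantics, not TDengine code.

```python
class MasterVnode:
    """Schematic model of the master-side write flow described above."""

    def __init__(self, slaves, quorum=1):
        self.version = 0      # monotonically increasing version number
        self.wal = []         # stands in for the Write Ahead Log
        self.slaves = slaves  # objects with an apply(version, req) method (illustrative)
        self.quorum = quorum  # assumed confirmation threshold

    def write(self, req):
        if not req:                                  # 1. basic validity check
            return "invalid"
        self.version += 1                            # 2. tag the request with a version
        self.wal.append((self.version, req))         # 3. wrap in a WAL head and persist
        acks = sum(1 for s in self.slaves            # 4. forward to every slave vnode
                   if s.apply(self.version, req))
        # 5. reply to the client once enough replicas have confirmed
        return "ok" if acks + 1 >= self.quorum else "pending"


class SlaveVnode:
    def __init__(self):
        self.last_version = 0

    def apply(self, version, req):
        # A slave only accepts the forwarded request if versions stay consistent;
        # otherwise it would need to enter the synchronization state.
        if version != self.last_version + 1:
            return False
        self.last_version = version
        return True


master = MasterVnode([SlaveVnode()], quorum=2)
print(master.write({"table": "d1001", "value": 10.3}))   # -> "ok"
```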
@@ -156,7 +156,7 @@ TDengine is designed on the assumption that any single hardware or software system is unreliable and that no single...
The logical structure diagram of the TDengine distributed architecture is as follows:
- ![TDengine架构示意图](page://images/architecture/structure.png)
+ ![TDengine架构示意图](../images/architecture/structure.png)
<center> Figure 1 TDengine architecture diagram </center>
A complete TDengine system runs on one or more physical nodes. Logically, it consists of data nodes (dnode), the TDengine application driver (taosc), and applications (app). A system contains one or more data nodes, which form a cluster. Applications interact with the TDengine cluster through taosc's API. Each logical unit is briefly introduced below.
@@ -207,7 +207,7 @@ The logical structure diagram of the TDengine distributed architecture is as follows:
To explain the relationship between vnode, mnode, taosc, and the application, and the role each plays, the flow of a typical data-writing operation is analyzed below.
- ![TDengine典型的操作流程](page://images/architecture/message.png)
+ ![TDengine典型的操作流程](../images/architecture/message.png)
<center> Figure 2 A typical operation flow of TDengine </center>
1. The application initiates a data insertion request through JDBC, ODBC, or another API.
@@ -278,7 +278,7 @@ In addition to vnode sharding, TDengine also partitions time-series data by time range...
A master vnode follows the write flow below:
- ![TDengine Master写入流程](page://images/architecture/write_master.png)
+ ![TDengine Master写入流程](../images/architecture/write_master.png)
<center> Figure 3 TDengine master write flow </center>
1. The master vnode receives the application's data insertion request, verifies it, and moves to the next step;
@@ -292,7 +292,7 @@ A master vnode follows the write flow below:
For a slave vnode, the write flow is:
- ![TDengine Slave写入流程](page://images/architecture/write_slave.png)
+ ![TDengine Slave写入流程](../images/architecture/write_slave.png)
<center> Figure 4 TDengine slave write flow </center>
1. The slave vnode receives a data insertion request forwarded by the master vnode and checks whether the last version is consistent with the master's; if it is, it proceeds to the next step, otherwise it must enter the synchronization state.
@@ -434,7 +434,7 @@ SELECT COUNT(*) FROM d1001 WHERE ts >= '2017-7-14 00:00:00' AND ts < '2017-7-14...
TDengine creates a separate table for each data collection point, but in practice it is often necessary to aggregate data from different collection points. To perform aggregation efficiently, TDengine introduces the concept of the super table (STable). An STable represents a specific type of data collection point; it is a collection of tables whose schemas are exactly the same, while each table carries its own static tags. There can be multiple tags, and they can be added, deleted, or modified at any time. By specifying tag filter conditions, an application can run aggregations or statistics over all or a subset of the tables under an STable, which greatly simplifies application development. The detailed flow is shown in the figure below:
- ![多表聚合查询原理图](page://images/architecture/multi_tables.png)
+ ![多表聚合查询原理图](../images/architecture/multi_tables.png)
<center> Figure 5 Diagram of multi-table aggregation query </center>
1. The application sends a query condition to the system;
......
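To make the STable idea concrete, here is a hedged sketch using the TDengine Python connector; the database, table, column, and tag names are invented for illustration, and the SQL follows the usual pattern of creating an STable, creating tagged child tables for each data collection point, and aggregating with a tag filter.

```python
import taos  # TDengine Python connector

conn = taos.connect(host="localhost", user="root", password="taosdata")
cur = conn.cursor()
cur.execute("create database if not exists demo")
cur.execute("use demo")
# One STable per type of data collection point; child tables share its schema.
cur.execute("create stable if not exists meters (ts timestamp, current float, voltage int) "
            "tags (location binary(64), group_id int)")
cur.execute("create table if not exists d1001 using meters tags ('Beijing', 2)")
cur.execute("create table if not exists d1002 using meters tags ('Shanghai', 2)")
cur.execute("insert into d1001 values (now, 10.3, 219) d1002 values (now, 11.8, 221)")
# Aggregate over all child tables whose tags match the filter.
cur.execute("select avg(current), max(voltage) from meters where group_id = 2")
print(cur.fetchall())
conn.close()
```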
@@ -4,7 +4,7 @@
`taos-jdbcdriver` is implemented in two forms: JDBC-JNI and JDBC-RESTful (JDBC-RESTful is supported since taos-jdbcdriver-2.0.18). JDBC-JNI works by calling native methods of the client library libtaos.so (or taos.dll), while JDBC-RESTful wraps the RESTful interface internally.
- ![tdengine-connector](page://images/tdengine-jdbc-connector.png)
+ ![tdengine-connector](../../images/tdengine-jdbc-connector.png)
The figure above shows the three ways Java applications can use the connector to access TDengine:
......
@@ -2,7 +2,7 @@
TDengine provides a rich set of application development interfaces, including C/C++, Java, Python, Go, Node.js, C#, and RESTful, so that users can develop applications quickly.
- ![image-connecotr](page://images/connector.png)
+ ![image-connecotr](../images/connector.png)
TDengine connectors currently support a wide range of platforms, including hardware platforms such as X64/X86/ARM64/ARM32/MIPS/Alpha and development environments such as Linux/Win64/Win32. The comparison matrix is as follows:
@@ -64,8 +64,7 @@ TDengine provides a rich set of application development interfaces, including C/C++, Java,...
Edit the taos.cfg file (default path /etc/taos/taos.cfg) and set firstEP to the End Point of your TDengine server, for example: h1.taos.com:6030
- **Tips: **
+ **Tips:**
1. **If no TDengine server is deployed on this machine and only the application driver is installed, only firstEP needs to be configured in taos.cfg; there is no need to configure an FQDN.**
2. **To avoid the "unable to resolve FQDN" error when connecting to the server, make sure the hosts file on the client is configured with the correct FQDN values.**
......
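As a quick illustration of the notes above, here is a hedged sketch of connecting with the Python connector. The endpoint is an example; if host and port are omitted, the native client falls back to the firstEP configured in taos.cfg.

```python
import taos

# Example endpoint only; with no host/port the client library reads firstEP from taos.cfg.
conn = taos.connect(host="h1.taos.com", port=6030, user="root", password="taosdata")
cur = conn.cursor()
cur.execute("show databases")
for row in cur.fetchall():
    print(row)
conn.close()
```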
@@ -32,15 +32,15 @@ allow_loading_unsigned_plugins = taosdata-tdengine-datasource
You can log in to the Grafana server directly at localhost:3000 (username/password: admin/admin) and add a data source through `Configuration -> Data Sources` on the left panel, as shown below:
- ![img](page://images/connections/add_datasource1.jpg)
+ ![img](../images/connections/add_datasource1.jpg)
Click `Add data source` to open the page for adding a data source, type TDengine in the search box and select it to add, as shown below:
- ![img](page://images/connections/add_datasource2.jpg)
+ ![img](../images/connections/add_datasource2.jpg)
On the data source configuration page, adjust the settings following the default prompts:
- ![img](page://images/connections/add_datasource3.jpg)
+ ![img](../images/connections/add_datasource3.jpg)
* Host: the IP address of any server in the TDengine cluster plus the port of the TDengine RESTful interface (6041), default http://localhost:6041.
* User: the TDengine user name.
@@ -48,13 +48,13 @@ allow_loading_unsigned_plugins = taosdata-tdengine-datasource
Click `Save & Test` to run a test; on success the following prompt appears:
- ![img](page://images/connections/add_datasource4.jpg)
+ ![img](../images/connections/add_datasource4.jpg)
#### Create Dashboard
Go back to the main page to create a Dashboard and click Add Query to enter the panel query page:
- ![img](page://images/connections/create_dashboard1.jpg)
+ ![img](../images/connections/create_dashboard1.jpg)
As shown above, select the `TDengine` data source in Query and enter the corresponding SQL in the query box below to run queries. Details:
@@ -65,7 +65,7 @@ allow_loading_unsigned_plugins = taosdata-tdengine-datasource
Following the default prompts, query the average system memory usage over the specified interval on the server where TDengine is currently deployed, as shown below:
- ![img](page://images/connections/create_dashboard2.jpg)
+ ![img](../images/connections/create_dashboard2.jpg)
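The kind of SQL typed into the Grafana query box can also be tried directly from the Python connector. Below is a hedged sketch; the log.dn table and mem_system column are assumed to come from TDengine's own monitoring database, and the fixed time range and interval stand in for Grafana's $from/$to/$interval macros.

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")
cur = conn.cursor()
# Grafana substitutes $from/$to/$interval; here they are hard-coded for the example.
cur.execute("select avg(mem_system) from log.dn "
            "where ts >= now - 1h and ts <= now interval(30s)")
for ts, avg_mem in cur.fetchall():
    print(ts, avg_mem)
conn.close()
```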
> For how to use Grafana to build monitoring views and for more information about using Grafana, please refer to the official Grafana [documentation](https://grafana.com/docs/).
@@ -75,11 +75,11 @@ allow_loading_unsigned_plugins = taosdata-tdengine-datasource
Click the `Import` button on the left panel and upload the `tdengine-grafana.json` file:
- ![img](page://images/connections/import_dashboard1.jpg)
+ ![img](../images/connections/import_dashboard1.jpg)
After the import completes you can see the following:
- ![img](page://images/connections/import_dashboard2.jpg)
+ ![img](../images/connections/import_dashboard2.jpg)
## <a class="anchor" id="matlab"></a>MATLAB
......
@@ -15,7 +15,8 @@ One of the modules of TDengine is the time-series database. However, in addition...
With TDengine, the total cost of ownership of typical IoT, Internet of Vehicles, and Industrial Internet Big Data platforms can be greatly reduced. However, since it makes full use of the characteristics of IoT time-series data, TDengine cannot be used to process general data from web crawlers, microblogs, WeChat, e-commerce, ERP, CRM, and other sources.
- ![TDengine Technology Ecosystem](page://images/eco_system.png)
+ ![TDengine Technology Ecosystem](../images/eco_system.png)
<center>Figure 1. TDengine Technology Ecosystem</center>
## <a class="anchor" id="scenes"></a>Overall Scenarios of TDengine
......
@@ -154,7 +154,7 @@ The design of TDengine is based on the assumption that one single node or software system is unreliable...
The logical structure diagram of the TDengine distributed architecture is as follows:
- ![TDengine architecture diagram](page://images/architecture/structure.png)
+ ![TDengine architecture diagram](../images/architecture/structure.png)
<center> Figure 1: TDengine architecture diagram </center>
A complete TDengine system runs on one or more physical nodes. Logically, it includes data nodes (dnode), the TDengine application driver (TAOSC), and applications (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
@@ -197,7 +197,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it includes...
To explain the relationship between vnode, mnode, TAOSC, and application and their respective roles, the following is an analysis of a typical data writing process.
- ![typical process of TDengine](page://images/architecture/message.png)
+ ![typical process of TDengine](../images/architecture/message.png)
<center> Figure 2: Typical process of TDengine </center>
1. Application initiates a request to insert data through JDBC, ODBC, or other APIs.
@@ -266,7 +266,7 @@ If a database has N replicas, a virtual node group has N virtual nodes, but...
Master Vnode uses the following writing process:
- ![TDengine Master Writing Process](page://images/architecture/write_master.png)
+ ![TDengine Master Writing Process](../images/architecture/write_master.png)
<center> Figure 3: TDengine Master writing process </center>
1. Master vnode receives the application data insertion request, verifies it, and moves to the next step;
@@ -280,7 +280,7 @@ Master Vnode uses the following writing process:
For a slave vnode, the write process is as follows:
- ![TDengine Slave Writing Process](page://images/architecture/write_slave.png)
+ ![TDengine Slave Writing Process](../images/architecture/write_slave.png)
<center> Figure 4: TDengine Slave Writing Process </center>
1. Slave vnode receives a data insertion request forwarded by the Master vnode;
@@ -412,7 +412,7 @@ For the data collected by device D1001, the number of records per hour is counted...
TDengine creates a separate table for each data collection point, but in practical applications it is often necessary to aggregate data from different data collection points. In order to perform aggregation operations efficiently, TDengine introduces the concept of STable. STable is used to represent a specific type of data collection point. It is a table set containing multiple tables. The schema of each table in the set is the same, but each table has its own static tags. The tags can be multiple and can be added, deleted, and modified at any time. Applications can aggregate or statistically operate on all or a subset of tables under an STable by specifying tag filters, thus greatly simplifying the development of applications. The process is shown in the following figure:
- ![Diagram of multi-table aggregation query](page://images/architecture/multi_tables.png)
+ ![Diagram of multi-table aggregation query](../images/architecture/multi_tables.png)
<center> Figure 5: Diagram of multi-table aggregation query </center>
1. Application sends a query condition to the system;
......
@@ -4,7 +4,7 @@
The taos-jdbcdriver is implemented in two forms: JDBC-JNI and JDBC-RESTful (supported from taos-jdbcdriver-2.0.18). JDBC-JNI is implemented by calling the native methods of libtaos.so (or taos.dll) on the client, while JDBC-RESTful encapsulates the RESTful interface implementation internally.
- ![tdengine-connector](page://images/tdengine-jdbc-connector.png)
+ ![tdengine-connector](../../images/tdengine-jdbc-connector.png)
The figure above shows the three ways Java applications can access TDengine:
......
@@ -2,7 +2,7 @@
TDengine provides many connectors for development, including C/C++, Java, Python, RESTful, Go, Node.js, etc.
- ![image-connector](page://images/connector.png)
+ ![image-connector](../images/connector.png)
At present, TDengine connectors support a wide range of platforms, including hardware platforms such as X64/X86/ARM64/ARM32/MIPS/Alpha, and development environments such as Linux/Win64/Win32. The comparison matrix is as follows:
......
@@ -26,15 +26,15 @@ sudo cp -rf /usr/local/taos/connector/grafanaplugin /var/lib/grafana/plugins/tdengine
You can log in to the Grafana server (username/password: admin/admin) through localhost:3000, and add data sources through `Configuration -> Data Sources` on the left panel, as shown in the following figure:
- ![img](page://images/connections/add_datasource1.jpg)
+ ![img](../images/connections/add_datasource1.jpg)
Click `Add data source` to enter the Add Data Source page, and enter TDengine in the query box to select and add it, as shown in the following figure:
- ![img](page://images/connections/add_datasource2.jpg)
+ ![img](../images/connections/add_datasource2.jpg)
Enter the data source configuration page and modify the corresponding configuration according to the default prompts:
- ![img](page://images/connections/add_datasource3.jpg)
+ ![img](../images/connections/add_datasource3.jpg)
- Host: IP address of any server in the TDengine cluster and port number of the TDengine RESTful interface (6041), default [http://localhost:6041](http://localhost:6041/)
- User: TDengine username.
@@ -42,13 +42,13 @@ Enter the data source configuration page and modify the corresponding configuration...
Click `Save & Test` to test. Success will be prompted as follows:
- ![img](page://images/connections/add_datasource4.jpg)
+ ![img](../images/connections/add_datasource4.jpg)
#### Create Dashboard
Go back to the home page to create a Dashboard, and click `Add Query` to enter the panel query page:
- ![img](page://images/connections/create_dashboard1.jpg)
+ ![img](../images/connections/create_dashboard1.jpg)
As shown in the figure above, select the TDengine data source in Query, and enter the corresponding SQL in the query box below to query. Details are as follows:
@@ -58,7 +58,7 @@ As shown in the figure above, select the TDengine data source in Query, and enter...
According to the default prompt, query the average system memory usage at the specified interval on the server where TDengine is currently deployed, as follows:
- ![img](page://images/connections/create_dashboard2.jpg)
+ ![img](../images/connections/create_dashboard2.jpg)
> Please refer to the Grafana [documents](https://grafana.com/docs/) for how to use Grafana to create the corresponding monitoring interface and for more about Grafana usage.
@@ -68,11 +68,11 @@ A `tdengine-grafana.json` importable dashboard is provided under the Grafana plugin...
Click the `Import` button on the left panel and upload the `tdengine-grafana.json` file:
- ![img](page://images/connections/import_dashboard1.jpg)
+ ![img](../images/connections/import_dashboard1.jpg)
You can see the following after the dashboard is imported.
- ![img](page://images/connections/import_dashboard2.jpg)
+ ![img](../images/connections/import_dashboard2.jpg)
## <a class="anchor" id="matlab"></a> MATLAB
......
@@ -54,6 +54,9 @@ typedef struct {
int tscSmlInsert(TAOS* taos, TAOS_SML_DATA_POINT* points, int numPoint, SSmlLinesInfo* info);
bool checkDuplicateKey(char *key, SHashObj *pHash, SSmlLinesInfo* info);
+ bool isValidInteger(char *str);
+ bool isValidFloat(char *str);
int32_t isValidChildTableName(const char *pTbName, int16_t len);
bool convertSmlValueType(TAOS_SML_KV *pVal, char *value,
......
@@ -1137,7 +1137,7 @@ static void escapeSpecialCharacter(uint8_t field, const char **pos) {
*pos = cur;
}
- static bool isValidInteger(char *str) {
+ bool isValidInteger(char *str) {
char *c = str;
if (*c != '+' && *c != '-' && !isdigit(*c)) {
return false;
@@ -1152,7 +1152,7 @@ static bool isValidInteger(char *str) {
return true;
}
- static bool isValidFloat(char *str) {
+ bool isValidFloat(char *str) {
char *c = str;
uint8_t has_dot, has_exp, has_sign;
has_dot = 0;
@@ -1212,7 +1212,7 @@ static bool isTinyInt(char *pVal, uint16_t len) {
if (len <= 2) {
return false;
}
- if (!strcmp(&pVal[len - 2], "i8")) {
+ if (!strcasecmp(&pVal[len - 2], "i8")) {
//printf("Type is int8(%s)\n", pVal);
return true;
}
@@ -1226,7 +1226,7 @@ static bool isTinyUint(char *pVal, uint16_t len) {
if (pVal[0] == '-') {
return false;
}
- if (!strcmp(&pVal[len - 2], "u8")) {
+ if (!strcasecmp(&pVal[len - 2], "u8")) {
//printf("Type is uint8(%s)\n", pVal);
return true;
}
@@ -1237,7 +1237,7 @@ static bool isSmallInt(char *pVal, uint16_t len) {
if (len <= 3) {
return false;
}
- if (!strcmp(&pVal[len - 3], "i16")) {
+ if (!strcasecmp(&pVal[len - 3], "i16")) {
//printf("Type is int16(%s)\n", pVal);
return true;
}
@@ -1251,7 +1251,7 @@ static bool isSmallUint(char *pVal, uint16_t len) {
if (pVal[0] == '-') {
return false;
}
- if (strcmp(&pVal[len - 3], "u16") == 0) {
+ if (strcasecmp(&pVal[len - 3], "u16") == 0) {
//printf("Type is uint16(%s)\n", pVal);
return true;
}
@@ -1262,7 +1262,7 @@ static bool isInt(char *pVal, uint16_t len) {
if (len <= 3) {
return false;
}
- if (strcmp(&pVal[len - 3], "i32") == 0) {
+ if (strcasecmp(&pVal[len - 3], "i32") == 0) {
//printf("Type is int32(%s)\n", pVal);
return true;
}
@@ -1276,7 +1276,7 @@ static bool isUint(char *pVal, uint16_t len) {
if (pVal[0] == '-') {
return false;
}
- if (strcmp(&pVal[len - 3], "u32") == 0) {
+ if (strcasecmp(&pVal[len - 3], "u32") == 0) {
//printf("Type is uint32(%s)\n", pVal);
return true;
}
@@ -1287,7 +1287,7 @@ static bool isBigInt(char *pVal, uint16_t len) {
if (len <= 3) {
return false;
}
- if (strcmp(&pVal[len - 3], "i64") == 0) {
+ if (strcasecmp(&pVal[len - 3], "i64") == 0) {
//printf("Type is int64(%s)\n", pVal);
return true;
}
@@ -1301,7 +1301,7 @@ static bool isBigUint(char *pVal, uint16_t len) {
if (pVal[0] == '-') {
return false;
}
- if (strcmp(&pVal[len - 3], "u64") == 0) {
+ if (strcasecmp(&pVal[len - 3], "u64") == 0) {
//printf("Type is uint64(%s)\n", pVal);
return true;
}
@@ -1312,7 +1312,7 @@ static bool isFloat(char *pVal, uint16_t len) {
if (len <= 3) {
return false;
}
- if (strcmp(&pVal[len - 3], "f32") == 0) {
+ if (strcasecmp(&pVal[len - 3], "f32") == 0) {
//printf("Type is float(%s)\n", pVal);
return true;
}
@@ -1323,7 +1323,7 @@ static bool isDouble(char *pVal, uint16_t len) {
if (len <= 3) {
return false;
}
- if (strcmp(&pVal[len - 3], "f64") == 0) {
+ if (strcasecmp(&pVal[len - 3], "f64") == 0) {
//printf("Type is double(%s)\n", pVal);
return true;
}
@@ -1331,34 +1331,24 @@ static bool isDouble(char *pVal, uint16_t len) {
}
static bool isBool(char *pVal, uint16_t len, bool *bVal) {
- if ((len == 1) &&
-     (pVal[len - 1] == 't' ||
-      pVal[len - 1] == 'T')) {
+ if ((len == 1) && !strcasecmp(&pVal[len - 1], "t")) {
//printf("Type is bool(%c)\n", pVal[len - 1]);
*bVal = true;
return true;
}
- if ((len == 1) &&
-     (pVal[len - 1] == 'f' ||
-      pVal[len - 1] == 'F')) {
+ if ((len == 1) && !strcasecmp(&pVal[len - 1], "f")) {
//printf("Type is bool(%c)\n", pVal[len - 1]);
*bVal = false;
return true;
}
- if((len == 4) &&
-    (!strcmp(&pVal[len - 4], "true") ||
-     !strcmp(&pVal[len - 4], "True") ||
-     !strcmp(&pVal[len - 4], "TRUE"))) {
+ if((len == 4) && !strcasecmp(&pVal[len - 4], "true")) {
//printf("Type is bool(%s)\n", &pVal[len - 4]);
*bVal = true;
return true;
}
- if((len == 5) &&
-    (!strcmp(&pVal[len - 5], "false") ||
-     !strcmp(&pVal[len - 5], "False") ||
-     !strcmp(&pVal[len - 5], "FALSE"))) {
+ if((len == 5) && !strcasecmp(&pVal[len - 5], "false")) {
//printf("Type is bool(%s)\n", &pVal[len - 5]);
*bVal = false;
return true;
@@ -1384,7 +1374,7 @@ static bool isNchar(char *pVal, uint16_t len) {
if (len < 3) {
return false;
}
- if (pVal[0] == 'L' && pVal[1] == '"' && pVal[len - 1] == '"') {
+ if ((pVal[0] == 'l' || pVal[0] == 'L')&& pVal[1] == '"' && pVal[len - 1] == '"') {
//printf("Type is nchar(%s)\n", pVal);
return true;
}
@@ -1434,7 +1424,7 @@ static bool isTimeStamp(char *pVal, uint16_t len, SMLTimeStampType *tsType) {
return false;
}
- static bool convertStrToNumber(TAOS_SML_KV *pVal, char*str, SSmlLinesInfo* info) {
+ static bool convertStrToNumber(TAOS_SML_KV *pVal, char *str, SSmlLinesInfo* info) {
errno = 0;
uint8_t type = pVal->type;
int16_t length = pVal->length;
@@ -1442,6 +1432,7 @@ static bool convertStrToNumber(TAOS_SML_KV *pVal, char *str, SSmlLinesInfo* info)
uint64_t val_u;
double val_d;
+ strntolower_s(str, str, (int32_t)strlen(str));
if (IS_FLOAT_TYPE(type)) {
val_d = strtod(str, NULL);
} else {
@@ -1659,9 +1650,19 @@ bool convertSmlValueType(TAOS_SML_KV *pVal, char *value,
memcpy(pVal->value, &bVal, pVal->length);
return true;
}
- //Handle default(no appendix) as float
- if (isValidInteger(value) || isValidFloat(value)) {
-   pVal->type = TSDB_DATA_TYPE_FLOAT;
+ //Handle default(no appendix) interger type as BIGINT
+ if (isValidInteger(value)) {
+   pVal->type = TSDB_DATA_TYPE_BIGINT;
+   pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
+   if (!convertStrToNumber(pVal, value, info)) {
+     return false;
+   }
+   return true;
+ }
+ //Handle default(no appendix) floating number type as DOUBLE
+ if (isValidFloat(value)) {
+   pVal->type = TSDB_DATA_TYPE_DOUBLE;
pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
if (!convertStrToNumber(pVal, value, info)) {
return false;
@@ -1724,6 +1725,7 @@ int32_t convertSmlTimeStamp(TAOS_SML_KV *pVal, char *value,
SMLTimeStampType type;
int64_t tsVal;
+ strntolower_s(value, value, len);
if (!isTimeStamp(value, len, &type)) {
return TSDB_CODE_TSC_INVALID_TIME_STAMP;
}
......
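The practical effect of the changes above (type suffixes matched case-insensitively, and unsuffixed numeric values defaulting to BIGINT or DOUBLE instead of FLOAT) can be exercised from Python. A hedged sketch, assuming a local server, a pre-created database, and the insert_telnet_lines method that the tests elsewhere in this diff use:

```python
import taos

# Assumes a local server and a pre-created database (names are illustrative).
conn = taos.connect(host="localhost", user="root", password="taosdata", database="sml_demo")
lines = [
    'demo_int 1626006833639ms 42 host="host0"',    # unsuffixed integer -> BIGINT value column
    'demo_dbl 1626006833639ms 3.15 host="host0"',  # unsuffixed float   -> DOUBLE value column
    'demo_i8 1626006833639ms 4I8 host="host0"',    # type suffix accepted regardless of case
]
code = conn.insert_telnet_lines(lines)
print("insert_telnet_lines returned", code)

cur = conn.cursor()
cur.execute("describe demo_dbl")
print(cur.fetchall())   # the value column is expected to be DOUBLE after this change
conn.close()
```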
@@ -38,7 +38,7 @@ static int32_t parseTelnetMetric(TAOS_SML_DATA_POINT *pSml, const char **index,
uint16_t len = 0;
pSml->stableName = tcalloc(TSDB_TABLE_NAME_LEN + 1, 1); // +1 to avoid 1772 line over write
- if (pSml->stableName == NULL){
+ if (pSml->stableName == NULL) {
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
if (isdigit(*cur)) {
@@ -58,7 +58,13 @@ static int32_t parseTelnetMetric(TAOS_SML_DATA_POINT *pSml, const char **index,
break;
}
+ //convert dot to underscore for now, will be removed once dot is allowed in tbname.
+ if (*cur == '.') {
+   pSml->stableName[len] = '_';
+ } else {
pSml->stableName[len] = *cur;
+ }
cur++;
len++;
}
@@ -455,6 +461,13 @@ int32_t parseMetricFromJSON(cJSON *root, TAOS_SML_DATA_POINT* pSml, SSmlLinesInf
return TSDB_CODE_TSC_INVALID_JSON;
}
+ //convert dot to underscore for now, will be removed once dot is allowed in tbname.
+ for (int i = 0; i < strlen(metric->valuestring); ++i) {
+   if (metric->valuestring[i] == '.') {
+     metric->valuestring[i] = '_';
+   }
+ }
tstrncpy(pSml->stableName, metric->valuestring, stableLen + 1);
return TSDB_CODE_SUCCESS;
@@ -485,6 +498,7 @@ int32_t parseTimestampFromJSONObj(cJSON *root, int64_t *tsVal, SSmlLinesInfo* in
}
size_t typeLen = strlen(type->valuestring);
+ strntolower_s(type->valuestring, type->valuestring, (int32_t)typeLen);
if (typeLen == 1 && type->valuestring[0] == 's') {
//seconds
*tsVal = (int64_t)(*tsVal * 1e9);
@@ -505,6 +519,8 @@ int32_t parseTimestampFromJSONObj(cJSON *root, int64_t *tsVal, SSmlLinesInfo* in
default:
return TSDB_CODE_TSC_INVALID_JSON;
}
+ } else {
+   return TSDB_CODE_TSC_INVALID_JSON;
}
return TSDB_CODE_SUCCESS;
@@ -725,16 +741,34 @@ int32_t parseValueFromJSON(cJSON *root, TAOS_SML_KV *pVal, SSmlLinesInfo* info)
break;
}
case cJSON_Number: {
- //convert default JSON Number type to float
- pVal->type = TSDB_DATA_TYPE_FLOAT;
+ //convert default JSON Number type to BIGINT/DOUBLE
+ if (isValidInteger(root->numberstring)) {
+   pVal->type = TSDB_DATA_TYPE_BIGINT;
pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
pVal->value = tcalloc(pVal->length, 1);
- *(float *)(pVal->value) = (float)(root->valuedouble);
+ *(int64_t *)(pVal->value) = (int64_t)(root->valuedouble);
+ } else if (isValidFloat(root->numberstring)) {
+   pVal->type = TSDB_DATA_TYPE_DOUBLE;
+   pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
+   pVal->value = tcalloc(pVal->length, 1);
+   *(double *)(pVal->value) = (double)(root->valuedouble);
+ } else {
+   return TSDB_CODE_TSC_INVALID_JSON_TYPE;
+ }
break;
}
case cJSON_String: {
- //convert default JSON String type to nchar
+ /* set default JSON type to binary/nchar according to
+  * user configured parameter tsDefaultJSONStrType
+  */
+ if (strcasecmp(tsDefaultJSONStrType, "binary") == 0) {
+   pVal->type = TSDB_DATA_TYPE_BINARY;
+ } else if (strcasecmp(tsDefaultJSONStrType, "nchar") == 0) {
pVal->type = TSDB_DATA_TYPE_NCHAR;
+ } else {
+   tscError("OTD:0x%"PRIx64" Invalid default JSON string type set from config %s", info->id, tsDefaultJSONStrType);
+   return TSDB_CODE_TSC_INVALID_JSON_CONFIG;
+ }
//pVal->length = wcslen((wchar_t *)root->valuestring) * TSDB_NCHAR_SIZE;
pVal->length = (int16_t)strlen(root->valuestring);
pVal->value = tcalloc(pVal->length + 1, 1);
......
@@ -227,6 +227,9 @@ extern char Compressor[];
// long query
extern int8_t tsDeadLockKillQuery;
+ // schemaless
+ extern char tsDefaultJSONStrType[];
typedef struct {
char dir[TSDB_FILENAME_LEN];
int level;
......
@@ -282,6 +282,9 @@ char Compressor[32] = "ZSTD_COMPRESSOR"; // ZSTD_COMPRESSOR or GZIP_COMPRESS
// long query death-lock
int8_t tsDeadLockKillQuery = 0;
+ // default JSON string type
+ char tsDefaultJSONStrType[7] = "binary";
int32_t (*monStartSystemFp)() = NULL;
void (*monStopSystemFp)() = NULL;
void (*monExecuteSQLFp)(char *sql) = NULL;
@@ -1637,6 +1640,17 @@ static void doInitGlobalConfig(void) {
cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg);
+ // default JSON string type option "binary"/"nchar"
+ cfg.option = "defaultJSONStrType";
+ cfg.ptr = tsDefaultJSONStrType;
+ cfg.valType = TAOS_CFG_VTYPE_STRING;
+ cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_SHOW | TSDB_CFG_CTYPE_B_CLIENT;
+ cfg.minValue = 0;
+ cfg.maxValue = 0;
+ cfg.ptrLength = tListLen(tsDefaultJSONStrType);
+ cfg.unitType = TAOS_CFG_UTYPE_NONE;
+ taosInitConfigOption(cfg);
#ifdef TD_TSZ
// lossy compress
cfg.option = "lossyColumns";
......
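With the defaultJSONStrType option registered above, a JSON string value maps to BINARY by default and to NCHAR when the client's taos.cfg sets `defaultJSONStrType nchar`. A hedged sketch using the insert_json_payload method seen in this diff's tests (database and metric names are illustrative):

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata", database="sml_demo")
payload = '''
{
    "metric": "json_str_demo",
    "timestamp": 1626006833610123,
    "value": "hello",
    "tags": {"host": "host0"}
}
'''
code = conn.insert_json_payload(payload)
print("insert_json_payload returned", code)

cur = conn.cursor()
cur.execute("describe json_str_demo")
# value column: BINARY with the default, NCHAR if taos.cfg contains "defaultJSONStrType nchar"
print(cur.fetchall())
conn.close()
```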
@@ -110,7 +110,8 @@ int32_t* taosGetErrno();
#define TSDB_CODE_TSC_DUP_TAG_NAMES TAOS_DEF_ERROR_CODE(0, 0x0220) //"duplicated tag names")
#define TSDB_CODE_TSC_INVALID_JSON TAOS_DEF_ERROR_CODE(0, 0x0221) //"Invalid JSON format")
#define TSDB_CODE_TSC_INVALID_JSON_TYPE TAOS_DEF_ERROR_CODE(0, 0x0222) //"Invalid JSON data type")
- #define TSDB_CODE_TSC_VALUE_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x0223) //"Value out of range")
+ #define TSDB_CODE_TSC_INVALID_JSON_CONFIG TAOS_DEF_ERROR_CODE(0, 0x0223) //"Invalid JSON configuration")
+ #define TSDB_CODE_TSC_VALUE_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x0224) //"Value out of range")
// mnode
#define TSDB_CODE_MND_MSG_NOT_PROCESSED TAOS_DEF_ERROR_CODE(0, 0x0300) //"Message not processed")
......
@@ -8,12 +8,14 @@ IF (GIT_FOUND)
MESSAGE("Git found")
EXECUTE_PROCESS(
COMMAND ${GIT_EXECUTABLE} log --pretty=oneline -n 1 ${CMAKE_CURRENT_LIST_DIR}/taosdemo.c
WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSDEMO_COMMIT_SHA1)
IF ("${TAOSDEMO_COMMIT_SHA1}" STREQUAL "")
- MESSAGE("taosdemo's latest commit in short is:" ${TAOSDEMO_COMMIT_SHA1})
+ SET(TAOSDEMO_COMMIT_SHA1 "unknown")
ELSE ()
STRING(SUBSTRING "${TAOSDEMO_COMMIT_SHA1}" 0 7 TAOSDEMO_COMMIT_SHA1)
STRING(STRIP "${TAOSDEMO_COMMIT_SHA1}" TAOSDEMO_COMMIT_SHA1)
ENDIF ()
EXECUTE_PROCESS(
COMMAND ${GIT_EXECUTABLE} status -z -s ${CMAKE_CURRENT_LIST_DIR}/taosdemo.c
@@ -25,14 +27,13 @@ IF (GIT_FOUND)
RESULT_VARIABLE RESULT
OUTPUT_VARIABLE TAOSDEMO_STATUS)
ENDIF (TD_LINUX)
MESSAGE("taosdemo.c status: " ${TAOSDEMO_STATUS})
ELSE()
MESSAGE("Git not found")
SET(TAOSDEMO_COMMIT_SHA1 "unknown")
SET(TAOSDEMO_STATUS "unknown")
ENDIF (GIT_FOUND)
STRING(STRIP "${TAOSDEMO_COMMIT_SHA1}" TAOSDEMO_COMMIT_SHA1)
MESSAGE("taosdemo's latest commit in short is:" ${TAOSDEMO_COMMIT_SHA1})
STRING(STRIP "${TAOSDEMO_STATUS}" TAOSDEMO_STATUS)
......
@@ -20,7 +20,7 @@
extern "C" {
#endif
- #define TSDB_CFG_MAX_NUM    123
+ #define TSDB_CFG_MAX_NUM    124
#define TSDB_CFG_PRINT_LEN  23
#define TSDB_CFG_OPTION_LEN 24
#define TSDB_CFG_VALUE_LEN  41
......
@@ -118,6 +118,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_COLUMN_LENGTH, "Invalid column length")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_DUP_TAG_NAMES, "duplicated tag names")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_JSON, "Invalid JSON format")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_JSON_TYPE, "Invalid JSON data type")
+ TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_JSON_CONFIG, "Invalid JSON configuration")
TAOS_DEFINE_ERROR(TSDB_CODE_TSC_VALUE_OUT_OF_RANGE, "Value out of range")
// mnode
......
@@ -1090,9 +1090,10 @@ void verify_telnet_insert(TAOS* taos) {
//bigint
char* lines2_3[] = {
"stb2_3 1626006833651ms -9223372036854775807i64 host=\"host0\"",
- "stb2_3 1626006833652ms 9223372036854775807i64 host=\"host0\""
+ "stb2_3 1626006833652ms 9223372036854775807i64 host=\"host0\"",
+ "stb2_3 1626006833662ms 9223372036854775807 host=\"host0\""
};
- code = taos_insert_telnet_lines(taos, lines2_3, 2);
+ code = taos_insert_telnet_lines(taos, lines2_3, 3);
if (code) {
printf("lines2_3 code: %d, %s.\n", code, tstrerror(code));
}
@@ -1107,11 +1108,10 @@ void verify_telnet_insert(TAOS* taos) {
"stb2_4 1626006833660ms -3.4e10f32 host=\"host0\"",
"stb2_4 1626006833670ms 3.4E+2f32 host=\"host0\"",
"stb2_4 1626006833680ms -3.4e-2f32 host=\"host0\"",
- "stb2_4 1626006833690ms 3.15 host=\"host0\"",
"stb2_4 1626006833700ms 3.4E38f32 host=\"host0\"",
"stb2_4 1626006833710ms -3.4E38f32 host=\"host0\""
};
- code = taos_insert_telnet_lines(taos, lines2_4, 11);
+ code = taos_insert_telnet_lines(taos, lines2_4, 10);
if (code) {
printf("lines2_4 code: %d, %s.\n", code, tstrerror(code));
}
@@ -1127,9 +1127,10 @@ void verify_telnet_insert(TAOS* taos) {
"stb2_5 1626006833670ms 3.4E+2f64 host=\"host0\"",
"stb2_5 1626006833680ms -3.4e-2f64 host=\"host0\"",
"stb2_5 1626006833690ms 1.7E308f64 host=\"host0\"",
- "stb2_5 1626006833700ms -1.7E308f64 host=\"host0\""
+ "stb2_5 1626006833700ms -1.7E308f64 host=\"host0\"",
+ "stb2_5 1626006833710ms 3.15 host=\"host0\""
};
- code = taos_insert_telnet_lines(taos, lines2_5, 10);
+ code = taos_insert_telnet_lines(taos, lines2_5, 11);
if (code) {
printf("lines2_5 code: %d, %s.\n", code, tstrerror(code));
}
@@ -1166,7 +1167,7 @@ void verify_telnet_insert(TAOS* taos) {
//nchar
char* lines2_8[] = {
"stb2_8 1626006833610ms L\"nchar_val数值一\" host=\"host0\"",
- "stb2_8 1626006833620ms L\"nchar_val数值二\" host=\"host0\"",
+ "stb2_8 1626006833620ms L\"nchar_val数值二\" host=\"host0\""
};
code = taos_insert_telnet_lines(taos, lines2_8, 2);
if (code) {
......
@@ -273,6 +273,7 @@ python3 ./test.py -f query/queryCnameDisplay.py
# python3 ./test.py -f query/operator_cost.py
# python3 ./test.py -f query/long_where_query.py
python3 test.py -f query/nestedQuery/queryWithSpread.py
+ python3 ./test.py -f query/bug6586.py
#stream
python3 ./test.py -f stream/metric_1.py
@@ -391,7 +392,7 @@ python3 test.py -f alter/alter_cacheLastRow.py
python3 ./test.py -f query/querySession.py
python3 test.py -f alter/alter_create_exception.py
python3 ./test.py -f insert/flushwhiledrop.py
- python3 ./test.py -f insert/schemalessInsert.py
+ #python3 ./test.py -f insert/schemalessInsert.py
python3 ./test.py -f alter/alterColMultiTimes.py
python3 ./test.py -f query/queryWildcardLength.py
python3 ./test.py -f query/queryTbnameUpperLower.py
......
@@ -31,6 +31,27 @@ class TDTestCase:
### Default format ###
### metric ###
print("============= step0 : test metric ================")
payload = '''
{
"metric": ".stb.0.",
"timestamp": 1626006833610123,
"value": 10,
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe _stb_0_")
tdSql.checkRows(6)
### metric value ###
print("============= step1 : test metric value types ================")
payload = '''
@@ -50,7 +71,7 @@ class TDTestCase:
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_0")
- tdSql.checkData(1, 1, "FLOAT")
+ tdSql.checkData(1, 1, "BIGINT")
payload = '''
{
@@ -107,12 +128,52 @@ class TDTestCase:
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_3")
- tdSql.checkData(1, 1, "NCHAR")
+ tdSql.checkData(1, 1, "BINARY")
### timestamp 0 ###
payload = '''
{
"metric": "stb0_4",
"timestamp": 1626006833610123,
"value": 3.14,
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_4")
tdSql.checkData(1, 1, "DOUBLE")
payload = '''
{
"metric": "stb0_5",
"timestamp": 1626006833610123,
"value": 3.14E-2,
"tags": {
"t1": true,
"t2": false,
"t3": 10,
"t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_5")
tdSql.checkData(1, 1, "DOUBLE")
print("============= step2 : test timestamp ================")
### timestamp 0 ###
payload = '''
{
"metric": "stb0_6",
"timestamp": 0,
"value": 123,
"tags": {
@@ -127,14 +188,15 @@ class TDTestCase:
print("insert_json_payload result {}".format(code))
print("============= step3 : test tags ================")
### ID ###
payload = '''
{
- "metric": "stb0_5",
+ "metric": "stb0_7",
"timestamp": 0,
"value": 123,
"tags": {
- "ID": "tb0_5",
+ "ID": "tb0_7",
"t1": true,
"iD": "tb000",
"t2": false,
@@ -147,10 +209,60 @@ class TDTestCase:
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
- tdSql.query("select tbname from stb0_5")
+ tdSql.query("select tbname from stb0_7")
- tdSql.checkData(0, 0, "tb0_5")
+ tdSql.checkData(0, 0, "tb0_7")
### Default tag numeric types ###
payload = '''
{
"metric": "stb0_8",
"timestamp": 0,
"value": 123,
"tags": {
"t1": 123
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_8")
tdSql.checkData(2, 1, "BIGINT")
payload = '''
{
"metric": "stb0_9",
"timestamp": 0,
"value": 123,
"tags": {
"t1": 123.00
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_9")
tdSql.checkData(2, 1, "DOUBLE")
payload = '''
{
"metric": "stb0_10",
"timestamp": 0,
"value": 123,
"tags": {
"t1": 123E-1
}
}
'''
code = self._conn.insert_json_payload(payload)
print("insert_json_payload result {}".format(code))
tdSql.query("describe stb0_10")
tdSql.checkData(2, 1, "DOUBLE")
### Nested format ###
print("============= step4 : test nested format ================")
### timestamp ###
#seconds
payload = '''
......
@@ -36,13 +36,14 @@ class TDTestCase:
"stb0_0 1626006833639000000ns 4i8 host=\"host0\" interface=\"eth0\"",
"stb0_1 1626006833639000000ns 4i8 host=\"host0\" interface=\"eth0\"",
"stb0_2 1626006833639000000ns 4i8 host=\"host0\" interface=\"eth0\"",
+ ".stb0.3. 1626006833639000000ns 4i8 host=\"host0\" interface=\"eth0\"",
]
code = self._conn.insert_telnet_lines(lines0)
print("insert_telnet_lines result {}".format(code))
tdSql.query("show stables")
- tdSql.checkRows(3)
+ tdSql.checkRows(4)
tdSql.query("describe stb0_0")
tdSql.checkRows(4)
@@ -53,6 +54,9 @@ class TDTestCase:
tdSql.query("describe stb0_2")
tdSql.checkRows(4)
+ tdSql.query("describe _stb0_3_")
+ tdSql.checkRows(4)
### timestamp ###
print("============= step2 : test timestamp ================")
lines1 = [
@@ -122,14 +126,15 @@ class TDTestCase:
#bigint
lines2_3 = [
"stb2_3 1626006833651ms -9223372036854775807i64 host=\"host0\"",
- "stb2_3 1626006833652ms 9223372036854775807i64 host=\"host0\""
+ "stb2_3 1626006833652ms 9223372036854775807i64 host=\"host0\"",
+ "stb2_3 1626006833662ms 9223372036854775807 host=\"host0\""
]
code = self._conn.insert_telnet_lines(lines2_3)
print("insert_telnet_lines result {}".format(code))
tdSql.query("select * from stb2_3")
- tdSql.checkRows(2)
+ tdSql.checkRows(3)
tdSql.query("describe stb2_3")
tdSql.checkRows(3)
@@ -145,7 +150,6 @@ class TDTestCase:
"stb2_4 1626006833660ms -3.4e10f32 host=\"host0\"",
"stb2_4 1626006833670ms 3.4E+2f32 host=\"host0\"",
"stb2_4 1626006833680ms -3.4e-2f32 host=\"host0\"",
- "stb2_4 1626006833690ms 3.15 host=\"host0\"",
"stb2_4 1626006833700ms 3.4E38f32 host=\"host0\"",
"stb2_4 1626006833710ms -3.4E38f32 host=\"host0\""
]
@@ -154,7 +158,7 @@ class TDTestCase:
print("insert_telnet_lines result {}".format(code))
tdSql.query("select * from stb2_4")
- tdSql.checkRows(11)
+ tdSql.checkRows(10)
tdSql.query("describe stb2_4")
tdSql.checkRows(3)
@@ -171,14 +175,15 @@ class TDTestCase:
"stb2_5 1626006833670ms 3.4E+2f64 host=\"host0\"",
"stb2_5 1626006833680ms -3.4e-2f64 host=\"host0\"",
"stb2_5 1626006833690ms 1.7E308f64 host=\"host0\"",
- "stb2_5 1626006833700ms -1.7E308f64 host=\"host0\""
+ "stb2_5 1626006833700ms -1.7E308f64 host=\"host0\"",
+ "stb2_5 1626006833710ms 3.15 host=\"host0\""
]
code = self._conn.insert_telnet_lines(lines2_5)
print("insert_telnet_lines result {}".format(code))
tdSql.query("select * from stb2_5")
- tdSql.checkRows(10)
+ tdSql.checkRows(11)
tdSql.query("describe stb2_5")
tdSql.checkRows(3)
......
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
# TD-6586 Binary type value return None with python connector
# PR: https://github.com/taosdata/TDengine/pull/7913/files
tdSql.execute("create database if not exists binary_convertion")
tdSql.execute("use binary_convertion")
tdSql.execute("create stable stb (ts timestamp,value binary(3)) tags (t0 bool,t1 tinyint,t2 smallint,t3 int,t4 bigint,t5 float,t6 double,t7 binary(3),t8 nchar(3))")
tdSql.execute("create table if not exists tb1 using stb(t0,t1,t2,t3,t4,t5,t6,t7,t8) tags (1,127,32767,2147483647,9223372036854775807,11.123450279,22.123456789,'aaa','aaa')")
tdSql.execute("insert into tb1 (ts,value) values (1600000000000, \"aaa\")")
res = tdSql.query('select * from stb', True)
expected_res = [(datetime.datetime(2020, 9, 13, 20, 26, 40), 'aaa', True, 127, 32767, 2147483647, 9223372036854775807, 11.12345027923584, 22.123456789, 'aaa', 'aaa')]
tdSql.checkEqual(res, expected_res)
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -14,7 +14,7 @@
import random
import string
from util.sql import tdSql
+ from util.dnodes import tdDnodes
class TDCom:
def init(self, conn, logSql):
tdSql.init(conn.cursor(), logSql)
@@ -47,6 +47,42 @@ class TDCom:
chars = ''.join(random.choice(string.ascii_letters.lower() + string.digits) for i in range(len))
return chars
def restartTaosd(self, index=1, db_name="db"):
tdDnodes.stop(index)
tdDnodes.startWithoutSleep(index)
tdSql.execute(f"use {db_name}")
def typeof(self, variate):
v_type=None
if type(variate) is int:
v_type = "int"
elif type(variate) is str:
v_type = "str"
elif type(variate) is float:
v_type = "float"
elif type(variate) is bool:
v_type = "bool"
elif type(variate) is list:
v_type = "list"
elif type(variate) is tuple:
v_type = "tuple"
elif type(variate) is dict:
v_type = "dict"
elif type(variate) is set:
v_type = "set"
return v_type
def splitNumLetter(self, input_mix_str):
nums, letters = "", ""
for i in input_mix_str:
if i.isdigit():
nums += i
elif i.isspace():
pass
else:
letters += i
return nums, letters
def close(self):
self.cursor.close()
......