Unverified commit 0f2161f1 authored by cpwu, committed by GitHub

Merge branch '3.0' into cpwu/3.0

......@@ -62,7 +62,7 @@ TDengine的主要功能如下:
<figure>
![TDengine技术生态图](eco_system.webp)
![TDengine Database 技术生态图](eco_system.webp)
</figure>
<center>图 1. TDengine技术生态图</center>
......
......@@ -22,7 +22,7 @@ title: 集群部署
### 第二步
建议关闭所有物理节点的防火墙,至少保证端口:6030 - 6042 的 TCP 和 UDP 端口都是开放的。强烈建议先关闭防火墙,集群搭建完毕之后,再来配置端口;
确保集群中所有主机在端口 6030-6042 上的 TCP/UDP 协议能够互通。
### 第三步
......
......@@ -11,7 +11,7 @@ TDengine 支持按时间段窗口切分方式进行聚合结果查询,比如
INTERVAL 子句用于产生相等时间周期的窗口,SLIDING 用以指定窗口向前滑动的时间。每次执行的查询是一个时间窗口,时间窗口随着时间流动向前滑动。在定义连续查询的时候需要指定时间窗口(time window)大小和每次前向增量时间(forward sliding times)。如图,[t0s, t0e],[t1s, t1e],[t2s, t2e] 分别是执行三次连续查询的时间窗口范围,窗口向前滑动的时间范围由 sliding time 标识。查询过滤、聚合等操作按照每个时间窗口为独立的单位执行。当 SLIDING 与 INTERVAL 相等的时候,滑动窗口即为翻转窗口。
![时间窗口示意图](./timewindow-1.webp)
![TDengine Database 时间窗口示意图](./timewindow-1.webp)
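下面给出一个按时间窗口聚合的示意 SQL(其中表 meters 与列 current 沿用本文档电表示例,仅作示意):

```sql
-- 以 1 分钟为时间窗口、每 30 秒向前滑动一次,计算平均电流
SELECT AVG(current) FROM meters INTERVAL(1m) SLIDING(30s);
```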
INTERVAL 和 SLIDING 子句需要配合聚合和选择函数来使用。以下 SQL 语句非法:
......@@ -33,7 +33,7 @@ _ 从 2.1.5.0 版本开始,INTERVAL 语句允许的最短时间间隔调整为
使用整数(布尔值)或字符串来标识产生记录时候设备的状态量。产生的记录如果具有相同的状态量数值则归属于同一个状态窗口,数值改变后该窗口关闭。如下图所示,根据状态量确定的状态窗口分别是[2019-04-28 14:22:07,2019-04-28 14:22:10]和[2019-04-28 14:22:11,2019-04-28 14:22:12]两个。(状态窗口暂不支持对超级表使用)
![时间窗口示意图](./timewindow-3.webp)
![TDengine Database 时间窗口示意图](./timewindow-3.webp)
使用 STATE_WINDOW 来确定状态窗口划分的列。例如:
......@@ -45,7 +45,7 @@ SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
会话窗口根据记录的时间戳主键的值来确定是否属于同一个会话。如下图所示,如果设置时间戳的连续的间隔小于等于 12 秒,则以下 6 条记录构成 2 个会话窗口,分别是:[2019-04-28 14:22:10,2019-04-28 14:22:30]和[2019-04-28 14:23:10,2019-04-28 14:23:30]。因为 2019-04-28 14:22:30 与 2019-04-28 14:23:10 之间的时间间隔是 40 秒,超过了连续时间间隔(12 秒)。
![时间窗口示意图](./timewindow-2.webp)
![TDengine Database 时间窗口示意图](./timewindow-2.webp)
在 tol_value 时间间隔范围内的结果都认为归属于同一个窗口,如果连续的两条记录的时间超过 tol_val,则自动开启下一个窗口。(会话窗口暂不支持对超级表使用)
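下面给出一个会话窗口查询的示意 SQL(表名 temp_tb_1 沿用上文状态窗口的示例,tol_value 取上文的 12 秒,仅作示意):

```sql
-- 相邻两条记录的时间戳间隔超过 12 秒即切分到下一个会话窗口
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, 12s);
```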
......
......@@ -4,7 +4,7 @@ title: 连接器
TDengine 提供了丰富的应用程序开发接口,为了便于用户快速开发自己的应用,TDengine 支持了多种编程语言的连接器,其中官方连接器包括支持 C/C++、Java、Python、Go、Node.js、C# 和 Rust 的连接器。这些连接器支持使用原生接口(taosc)和 REST 接口(部分语言暂不支持)连接 TDengine 集群。社区开发者也贡献了多个非官方连接器,例如 ADO.NET 连接器、Lua 连接器和 PHP 连接器。
![image-connector](./connector.webp)
![TDengine Database connector architecture](./connector.webp)
## 支持的平台
......
......@@ -11,7 +11,7 @@ import TabItem from '@theme/TabItem';
`taos-jdbcdriver` 是 TDengine 的官方 Java 语言连接器,Java 开发人员可以通过它开发存取 TDengine 数据库的应用软件。`taos-jdbcdriver` 实现了 JDBC driver 标准的接口,并提供两种形式的连接器:一种是通过 TDengine 客户端驱动程序(taosc)原生连接 TDengine 实例,支持数据写入、查询、订阅、schemaless 接口和参数绑定接口等功能;另一种是通过 taosAdapter 提供的 REST 接口连接 TDengine 实例(2.4.0.0 及更高版本)。REST 连接实现的功能集合和原生连接有少量不同。
![tdengine-connector](tdengine-jdbc-connector.webp)
![TDengine Database Connector Java](tdengine-jdbc-connector.webp)
上图显示了两种 Java 应用使用连接器访问 TDengine 的两种方式:
......
......@@ -24,7 +24,7 @@ taosAdapter 提供以下功能:
## taosAdapter 架构图
![taosAdapter Architecture](taosAdapter-architecture.webp)
![TDengine Database taosAdapter Architecture](taosAdapter-architecture.webp)
## taosAdapter 部署方法
......
......@@ -233,25 +233,25 @@ sudo systemctl enable grafana-server
指向 **Configurations** -> **Data Sources** 菜单,然后点击 **Add data source** 按钮。
![添加数据源按钮](./assets/howto-add-datasource-button.webp)
![TDengine Database TDinsight 添加数据源按钮](./assets/howto-add-datasource-button.webp)
搜索并选择**TDengine**
![添加数据源](./assets/howto-add-datasource-tdengine.webp)
![TDengine Database TDinsight 添加数据源](./assets/howto-add-datasource-tdengine.webp)
配置 TDengine 数据源。
![数据源配置](./assets/howto-add-datasource.webp)
![TDengine Database TDinsight 数据源配置](./assets/howto-add-datasource.webp)
保存并测试,正常情况下会报告 'TDengine Data source is working'。
![数据源测试](./assets/howto-add-datasource-test.webp)
![TDengine Database TDinsight 数据源测试](./assets/howto-add-datasource-test.webp)
### 导入仪表盘
指向 **+** / **Create** - **import**(或 `/dashboard/import` url)。
![导入仪表盘和配置](./assets/import_dashboard.webp)
![TDengine Database TDinsight 导入仪表盘和配置](./assets/import_dashboard.webp)
在 **Import via grafana.com** 位置键入仪表盘 ID `15167`,点击 **Load**。
......@@ -259,7 +259,7 @@ sudo systemctl enable grafana-server
导入完成后,TDinsight 的完整页面视图如下所示。
![显示](./assets/TDinsight-full.webp)
![TDengine Database TDinsight 显示](./assets/TDinsight-full.webp)
## TDinsight 仪表盘详细信息
......@@ -269,7 +269,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes
### 集群状态
![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.webp)
![TDengine Database TDinsight mnodes overview](./assets/TDinsight-1-cluster-status.webp)
这部分包括集群当前信息和状态,告警信息也在此处(从左到右,从上到下)。
......@@ -289,7 +289,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes
### DNodes 状态
![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.webp)
![TDengine Database TDinsight mnodes overview](./assets/TDinsight-2-dnodes.webp)
- **DNodes Status**:`show dnodes` 的简单表格视图。
- **DNodes Lifetime**:从创建 dnode 开始经过的时间。
......@@ -298,14 +298,14 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes
### MNode 概述
![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.webp)
![TDengine Database TDinsight mnodes overview](./assets/TDinsight-3-mnodes.webp)
1. **MNodes Status**:`show mnodes` 的简单表格视图。
2. **MNodes Number**:类似于`DNodes Number`,MNodes 数量变化。
### 请求
![tdinsight-requests](./assets/TDinsight-4-requests.webp)
![TDengine Database TDinsight requests](./assets/TDinsight-4-requests.webp)
1. **Requests Rate(Inserts per Second)**:平均每秒插入次数。
2. **Requests (Selects)**:查询请求数及变化率(count of second)。
......@@ -313,7 +313,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes
### 数据库
![tdinsight-database](./assets/TDinsight-5-database.webp)
![TDengine Database TDinsight database](./assets/TDinsight-5-database.webp)
数据库使用情况,对变量 `$database` 的每个值即每个数据库进行重复多行展示。
......@@ -325,7 +325,7 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes
### DNode 资源使用情况
![dnode-usage](./assets/TDinsight-6-dnode-usage.webp)
![TDengine Database TDinsight dnode-usage](./assets/TDinsight-6-dnode-usage.webp)
数据节点资源使用情况展示,对变量 `$fqdn` 即每个数据节点进行重复多行展示。包括:
......@@ -346,13 +346,13 @@ TDinsight 仪表盘旨在提供 TDengine 相关资源使用情况[dnodes, mnodes
### 登录历史
![登录历史](./assets/TDinsight-7-login-history.webp)
![TDengine Database TDinsight 登录历史](./assets/TDinsight-7-login-history.webp)
目前只报告每分钟登录次数。
### 监控 taosAdapter
![taosadapter](./assets/TDinsight-8-taosadapter.webp)
![TDengine Database TDinsight monitor taosadapter](./assets/TDinsight-8-taosadapter.webp)
支持监控 taosAdapter 请求统计和状态详情。包括:
......
......@@ -80,7 +80,7 @@ taos --dump-config
| 补充说明 | RESTful 服务在 2.4.0.0 之前(不含)由 taosd 提供,默认端口为 6041;在 2.4.0.0 及后续版本由 taosAdapter 提供,默认端口为 6041 |
:::note
对于端口,TDengine 会使用从 serverPort 起 13 个连续的 TCP 和 UDP 端口号,请务必在防火墙打开。因此如果是缺省配置,需要打开从 6030 到 6042 共 13 个端口,而且必须 TCP 和 UDP 都打开。(详细的端口情况请参见下表)
确保集群中所有主机在端口 6030-6042 上的 TCP/UDP 协议能够互通。(详细的端口情况请参见下表)
:::
| 协议 | 默认端口 | 用途说明 | 修改方法 |
| :--- | :-------- | :---------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------- |
......@@ -590,7 +590,7 @@ charset 的有效值是 UTF-8。
| 适用范围 | 仅服务端适用 |
| 含义 | 每个 DB 中 能够使用的最大 vnode 个数 |
| 取值范围 | 0-8192 |
| 缺省值 | |
| 缺省值 | 0 |
### maxTablesPerVnode
......
......@@ -64,15 +64,15 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource
用户可以直接通过 http://localhost:3000 的网址,登录 Grafana 服务器(用户名/密码:admin/admin),通过左侧 `Configuration -> Data Sources` 可以添加数据源,如下图所示:
![img](./add_datasource1.webp)
![TDengine Database Grafana plugin add data source](./add_datasource1.webp)
点击 `Add data source` 可进入新增数据源页面,在查询框中输入 TDengine 可选择添加,如下图所示:
![img](./add_datasource2.webp)
![TDengine Database Grafana plugin add data source](./add_datasource2.webp)
进入数据源配置页面,按照默认提示修改相应配置即可:
![img](./add_datasource3.webp)
![TDengine Database Grafana plugin add data source](./add_datasource3.webp)
- Host: TDengine 集群中提供 REST 服务 (在 2.4 之前由 taosd 提供, 从 2.4 开始由 taosAdapter 提供)的组件所在服务器的 IP 地址与 TDengine REST 服务的端口号(6041),默认 http://localhost:6041。
- User:TDengine 用户名。
......@@ -80,13 +80,13 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource
点击 `Save & Test` 进行测试,成功会有如下提示:
![img](./add_datasource4.webp)
![TDengine Database Grafana plugin add data source](./add_datasource4.webp)
### 创建 Dashboard
回到主界面创建 Dashboard,点击 Add Query 进入面板查询页面:
![img](./create_dashboard1.webp)
![TDengine Database Grafana plugin create dashboard](./create_dashboard1.webp)
如上图所示,在 Query 中选中 `TDengine` 数据源,在下方查询框可输入相应 SQL 进行查询,具体说明如下:
......@@ -96,7 +96,7 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource
按照默认提示查询当前 TDengine 部署所在服务器指定间隔系统内存平均使用量如下:
![img](./create_dashboard2.webp)
![TDengine Database Grafana plugin create dashboard](./create_dashboard2.webp)
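下面给出一个可在查询框中输入的示意 SQL(其中 log.dn 为 TDengine 监控库中的表,$from、$to、$interval 为 Grafana 内置变量,具体库表名请以实际部署为准,仅作示意):

```sql
select avg(mem_system) from log.dn where ts >= $from and ts < $to interval($interval)
```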
> 关于如何使用 Grafana 创建相应的监测界面以及更多有关使用 Grafana 的信息,请参考 Grafana 官方的[文档](https://grafana.com/docs/)。
......
......@@ -45,25 +45,25 @@ MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/em
使用浏览器打开网址 http://IP:18083 并登录 EMQX Dashboard。初次安装的用户名为 `admin`,密码为 `public`。
![img](./emqx/login-dashboard.webp)
![TDengine Database EMQX login dashboard](./emqx/login-dashboard.webp)
### 创建规则(Rule)
选择左侧“规则引擎(Rule Engine)”中的“规则(Rule)”并点击“创建(Create)”按钮:
![img](./emqx/rule-engine.webp)
![TDengine Database EMQX rule engine](./emqx/rule-engine.webp)
### 编辑 SQL 字段
![img](./emqx/create-rule.webp)
![TDengine Database EMQX create rule](./emqx/create-rule.webp)
### 新增“动作(action handler)”
![img](./emqx/add-action-handler.webp)
![TDengine Database EMQX](./emqx/add-action-handler.webp)
### 新增“资源(Resource)”
![img](./emqx/create-resource.webp)
![TDengine Database EMQX create resource](./emqx/create-resource.webp)
选择“发送数据到 Web 服务”并点击“新建资源”按钮:
......@@ -71,13 +71,13 @@ MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/em
选择“发送数据到 Web 服务”,并将请求 URL 填写为运行 taosAdapter 的服务器地址和端口(默认为 6041)。其他属性请保持默认值。
![img](./emqx/edit-resource.webp)
![TDengine Database EMQX edit resource](./emqx/edit-resource.webp)
### 编辑“动作(action)”
编辑资源配置,增加 Authorization 认证的键/值配对项,相关文档请参考[ TDengine REST API 文档](https://docs.taosdata.com/reference/rest-api/)。在消息体中输入规则引擎替换模板。
![img](./emqx/edit-action.webp)
![TDengine Database EMQX edit action](./emqx/edit-action.webp)
## 编写模拟测试程序
......@@ -164,7 +164,7 @@ MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/em
注意:代码中的 CLIENT_NUM 在开始测试时可以先设置为一个较小的值,避免硬件性能无法完全处理较大并发客户端数量。
![img](./emqx/client-num.webp)
![TDengine Database EMQX client num](./emqx/client-num.webp)
## 执行测试模拟发送 MQTT 数据
......@@ -173,19 +173,19 @@ npm install mqtt mockjs --save --registry=https://registry.npm.taobao.org
node mock.js
```
![img](./emqx/run-mock.webp)
![TDengine Database EMQX run-mock](./emqx/run-mock.webp)
## 验证 EMQX 接收到数据
在 EMQX Dashboard 规则引擎界面进行刷新,可以看到有多少条记录被正确接收到:
![img](./emqx/check-rule-matched.webp)
![TDengine Database EMQX rule matched](./emqx/check-rule-matched.webp)
## 验证数据写入到 TDengine
使用 TDengine CLI 程序登录并查询相应数据库和表,验证数据是否被正确写入到 TDengine 中:
![img](./emqx/check-result-in-taos.webp)
![TDengine Database EMQX result in taos](./emqx/check-result-in-taos.webp)
TDengine 详细使用方法请参考 [TDengine 官方文档](https://docs.taosdata.com/)
EMQX 详细使用方法请参考 [EMQX 官方文档](https://www.emqx.io/docs/zh/v4.4/rule/rule-engine.html)
......
......@@ -9,11 +9,11 @@ TDengine Kafka Connector 包含两个插件: TDengine Source Connector 和 TDeng
Kafka Connect 是 Apache Kafka 的一个组件,用于使其它系统,比如数据库、云服务、文件系统等能方便地连接到 Kafka。数据既可以通过 Kafka Connect 从其它系统流向 Kafka, 也可以通过 Kafka Connect 从 Kafka 流向其它系统。从其它系统读数据的插件称为 Source Connector, 写数据到其它系统的插件称为 Sink Connector。Source Connector 和 Sink Connector 都不会直接连接 Kafka Broker,Source Connector 把数据转交给 Kafka Connect。Sink Connector 从 Kafka Connect 接收数据。
![](kafka/Kafka_Connect.webp)
![TDengine Database Kafka Connector -- Kafka Connect structure](kafka/Kafka_Connect.webp)
TDengine Source Connector 用于把数据实时地从 TDengine 读出来发送给 Kafka Connect。TDengine Sink Connector 用于 从 Kafka Connect 接收数据并写入 TDengine。
![](kafka/streaming-integration-with-kafka-connect.webp)
![TDengine Database Kafka Connector -- streaming integration with kafka connect](kafka/streaming-integration-with-kafka-connect.webp)
## 什么是 Confluent?
......@@ -26,7 +26,7 @@ Confluent 在 Kafka 的基础上增加很多扩展功能。包括:
5. 管理和监控 Kafka 的 GUI —— Confluent 控制中心
这些扩展功能有的包含在社区版本的 Confluent 中,有的只有企业版能用。
![](kafka/confluentPlatform.webp)
![TDengine Database Kafka Connector -- Confluent introduction](kafka/confluentPlatform.webp)
Confluent 企业版提供了 `confluent` 命令行工具管理各个组件。
......
......@@ -11,7 +11,7 @@ TDengine 的设计是基于单个硬件、软件系统不可靠,基于任何
TDengine 分布式架构的逻辑结构图如下:
![TDengine架构示意图](./structure.webp)
![TDengine Database 架构示意图](./structure.webp)
<center> 图 1 TDengine架构示意图 </center>
......@@ -41,7 +41,7 @@ TDengine 分布式架构的逻辑结构图如下:
- 集群数据节点对外提供 RESTful 服务占用一个 TCP 端口,是 serverPort+11。
- 集群内数据节点与 Arbitrator 节点之间通讯占用一个 TCP 端口,是 serverPort+12。
因此一个数据节点总的端口范围为 serverPort 到 serverPort+12,总共 13 个 TCP/UDP 端口。使用时,需要确保防火墙将这些端口打开。每个数据节点可以配置不同的 serverPort。详细的端口情况请参见 [TDengine 2.0 端口说明](/train-faq/faq#port)
因此一个数据节点总的端口范围为 serverPort 到 serverPort+12,总共 13 个 TCP/UDP 端口。确保集群中所有主机在端口 6030-6042 上的 TCP/UDP 协议能够互通。详细的端口情况请参见 [TDengine 2.0 端口说明](/train-faq/faq#port)
**集群对外连接:**TDengine 集群可以容纳单个、多个甚至几千个数据节点。应用只需要向集群中任何一个数据节点发起连接即可,连接需要提供的网络参数是一数据节点的 End Point(FQDN 加配置的端口号)。通过命令行 CLI 启动应用 taos 时,可以通过选项-h 来指定数据节点的 FQDN,-P 来指定其配置的端口号,如果端口不配置,将采用 TDengine 的系统配置参数 serverPort。
......@@ -63,7 +63,7 @@ TDengine 分布式架构的逻辑结构图如下:
为解释 vnode、mnode、taosc 和应用之间的关系以及各自扮演的角色,下面对写入数据这个典型操作的流程进行剖析。
![TDengine典型的操作流程](./message.webp)
![TDengine Database 典型的操作流程](./message.webp)
<center> 图 2 TDengine 典型的操作流程 </center>
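以一条最常见的写入语句为例(表名与数值沿用本文档电表示例,仅作示意),下文剖析的正是这类请求经由 taosc 到达 vnode 的完整路径:

```sql
INSERT INTO d1001 VALUES (NOW, 10.3, 219, 0.31);
```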
......@@ -135,7 +135,7 @@ TDengine 除 vnode 分片之外,还对时序数据按照时间段进行分区
Master Vnode 遵循下面的写入流程:
![TDengine Master写入流程](./write_master.webp)
![TDengine Database Master写入流程](./write_master.webp)
<center> 图 3 TDengine Master 写入流程 </center>
......@@ -150,7 +150,7 @@ Master Vnode 遵循下面的写入流程:
对于 slave vnode,写入流程是:
![TDengine Slave 写入流程](./write_slave.webp)
![TDengine Database Slave 写入流程](./write_slave.webp)
<center> 图 4 TDengine Slave 写入流程 </center>
......@@ -284,7 +284,7 @@ SELECT COUNT(*) FROM d1001 WHERE ts >= '2017-7-14 00:00:00' AND ts < '2017-7-14
TDengine 对每个数据采集点单独建表,但在实际应用中经常需要对不同的采集点数据进行聚合。为高效的进行聚合操作,TDengine 引入超级表(STable)的概念。超级表用来代表一特定类型的数据采集点,它是包含多张表的表集合,集合里每张表的模式(schema)完全一致,但每张表都带有自己的静态标签,标签可以有多个,可以随时增加、删除和修改。应用可通过指定标签的过滤条件,对一个 STable 下的全部或部分表进行聚合或统计操作,这样大大简化应用的开发。其具体流程如下图所示:
![多表聚合查询原理图](./multi_tables.webp)
![TDengine Database 多表聚合查询原理图](./multi_tables.webp)
<center> 图 5 多表聚合查询原理图 </center>
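例如,下面的示意 SQL 通过标签过滤,对超级表 meters 下位于 California.SanFrancisco 的全部子表做聚合(表名、标签沿用本文档电表示例,仅作示意):

```sql
SELECT COUNT(*), AVG(current) FROM meters WHERE location = "California.SanFrancisco";
```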
......
......@@ -16,7 +16,7 @@ IT 运维监测数据通常都是对时间特性比较敏感的数据,例如
本文介绍不需要写一行代码,通过简单修改几行配置文件,就可以快速搭建一个基于 TDengine + Telegraf + Grafana 的 IT 运维系统。架构如下图:
![IT-DevOps-Solutions-Telegraf.webp](./IT-DevOps-Solutions-Telegraf.webp)
![TDengine Database IT-DevOps-Solutions-Telegraf](./IT-DevOps-Solutions-Telegraf.webp)
## 安装步骤
......@@ -75,7 +75,7 @@ sudo systemctl start telegraf
点击左侧齿轮图标并选择 `Plugins`,应该可以找到 TDengine data source 插件图标。
点击左侧加号图标并选择 `Import`,从 `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json` 下载 dashboard JSON 文件后导入。之后可以看到如下界面的仪表盘:
![IT-DevOps-Solutions-telegraf-dashboard.webp]./IT-DevOps-Solutions-telegraf-dashboard.webp)
![TDengine Database IT-DevOps-Solutions-telegraf-dashboard](./IT-DevOps-Solutions-telegraf-dashboard.webp)
## 总结
......
......@@ -16,7 +16,7 @@ IT 运维监测数据通常都是对时间特性比较敏感的数据,例如
本文介绍不需要写一行代码,通过简单修改几行配置文件,就可以快速搭建一个基于 TDengine + collectd / statsD + Grafana 的 IT 运维系统。架构如下图:
![IT-DevOps-Solutions-Collectd-StatsD.webp](./IT-DevOps-Solutions-Collectd-StatsD.webp)
![TDengine Database IT-DevOps-Solutions-Collectd-StatsD](./IT-DevOps-Solutions-Collectd-StatsD.webp)
## 安装步骤
......@@ -81,12 +81,12 @@ repeater 部分添加 { host:'<TDengine server/cluster host>', port: <port for S
从 https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json 下载 dashboard json 文件,点击左侧加号图标并选择 `Import`,按照界面提示选择 JSON 文件导入。之后可以看到如下界面的仪表盘:
![IT-DevOps-Solutions-collectd-dashboard.webp](./IT-DevOps-Solutions-collectd-dashboard.webp)
![TDengine Database IT-DevOps-Solutions-collectd-dashboard](./IT-DevOps-Solutions-collectd-dashboard.webp)
#### 导入 StatsD 仪表盘
从 `https://github.com/taosdata/grafanaplugin/blob/master/examples/statsd/dashboards/statsd-with-tdengine-v0.1.0.json` 下载 dashboard json 文件,点击左侧加号图标并选择 `Import`,按照界面提示导入 JSON 文件。之后可以看到如下界面的仪表盘:
![IT-DevOps-Solutions-statsd-dashboard.webp](./IT-DevOps-Solutions-statsd-dashboard.webp)
![TDengine Database IT-DevOps-Solutions-statsd-dashboard](./IT-DevOps-Solutions-statsd-dashboard.webp)
## 总结
......
......@@ -27,7 +27,7 @@ title: OpenTSDB 应用迁移到 TDengine 的最佳实践
一个典型的 DevOps 应用场景的系统整体的架构如下图(图 1) 所示。
**图 1. DevOps 场景中典型架构**
![IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp "图1. DevOps 场景中典型架构")
![TDengine Database IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp "图1. DevOps 场景中典型架构")
在该应用场景中,包含了部署在应用环境中负责收集机器度量(Metrics)、网络度量(Metrics)以及应用度量(Metrics)的 Agent 工具、汇聚 Agent 收集信息的数据收集器,数据持久化存储和管理的系统以及监控数据可视化工具(例如:Grafana 等)。
......@@ -70,7 +70,7 @@ LoadPlugin write_tsdb
TDengine 提供了默认的两套 Dashboard 模板,用户只需要将 Grafana 目录下的模板导入到 Grafana 中即可激活使用。
**图 2. 导入 Grafana 模板**
![](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp "图2. 导入 Grafana 模板")
![TDengine Database IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp "图2. 导入 Grafana 模板")
操作完以上步骤后,就完成了将 OpenTSDB 替换成为 TDengine 的迁移工作。可以看到整个流程非常简单,不需要写代码,只需要对某些配置文件进行调整即可完成全部的迁移工作。
......@@ -83,7 +83,7 @@ TDengine 提供了默认的两套 Dashboard 模板,用户只需要将 Grafana
如果你的应用特别复杂,或者应用领域并不是 DevOps 场景,你可以继续阅读后续的章节,更加全面深入地了解将 OpenTSDB 的应用迁移到 TDengine 的高级话题。
**图 3. 迁移完成后的系统架构**
![IT-DevOps-Solutions-Immigrate-TDengine-Arch](./IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp "图 3. 迁移完成后的系统架构")
![TDengine Database IT-DevOps-Solutions-Immigrate-TDengine-Arch](./IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp "图 3. 迁移完成后的系统架构")
## 其他场景的迁移评估与策略
......
......@@ -61,7 +61,7 @@ title: 常见问题及反馈
5. ping 服务器 FQDN,如果没有反应,请检查你的网络,DNS 设置,或客户端所在计算机的系统 hosts 文件。如果部署的是 TDengine 集群,客户端需要能 ping 通所有集群节点的 FQDN。
6. 检查防火墙设置(Ubuntu 使用 ufw status,CentOS 使用 firewall-cmd --list-port),确认 TCP/UDP 端口 6030-6042 是打开的
6. 检查防火墙设置(Ubuntu 使用 ufw status,CentOS 使用 firewall-cmd --list-port),确保集群中所有主机在端口 6030-6042 上的 TCP/UDP 协议能够互通。
7. 对于 Linux 上的 JDBC(ODBC, Python, Go 等接口类似)连接, 确保*libtaos.so*在目录*/usr/local/taos/driver*里, 并且*/usr/local/taos/driver*在系统库函数搜索路径*LD_LIBRARY_PATH*
......
......@@ -54,7 +54,7 @@ With TDengine, the total cost of ownership of your time-series data platform can
## Technical Ecosystem
This is how TDengine would be situated, in a typical time-series data processing platform:
![TDengine Technical Ecosystem ](eco_system.webp)
![TDengine Database Technical Ecosystem ](eco_system.webp)
<center>Figure 1. TDengine Technical Ecosystem</center>
......
......@@ -2,19 +2,26 @@
title: Data Model
---
The data model employed by TDengine is similar to a relational database, you need to create databases and tables. Design the data model based on your own application scenarios and you should design the STable (abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntax details.
The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the STable (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.
## Create Database
The characteristics of data from different data collection points may be different, such as collection frequency, days to keep, number of replicas, data block size, whether it's allowed to update data, etc. For TDengine to operate with the best performance, it's strongly suggested to put the data with different characteristics into different databases because different storage policies can be set for each database. When creating a database, there are a lot of parameters that can be configured, such as the days to keep data, the number of replicas, the number of memory blocks, time precision, the minimum and maximum number of rows in each data block, compress or not, the time range of the data in single data file, etc. Below is an example of the SQL statement for creating a database.
The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may be different. Characteristics include collection frequency, retention policy and others which determine how you create and configure the database. For e.g. days to keep, number of replicas, data block size, whether data updates are allowed and other configurable parameters would be determined by the characteristics of your data and your business requirements. For TDengine to operate with the best performance, we strongly recommend that you create and configure different databases for data with different characteristics. This allows you, for example, to set up different storage and retention policies. When creating a database, there are a lot of parameters that can be configured such as, the days to keep data, the number of replicas, the number of memory blocks, time precision, the minimum and maximum number of rows in each data block, whether compression is enabled, the time range of the data in single data file and so on. Below is an example of the SQL statement to create a database.
```sql
CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
```
In the above SQL statement, a database named "power" will be created, the data in it will be kept for 365 days, which means the data older than 365 days will be deleted automatically, a new data file will be created every 10 days, the number of memory blocks is 6, data is allowed to be updated. For more details please refer to [Database](/taos-sql/database).
In the above SQL statement:
- a database named "power" will be created
- the data in it will be kept for 365 days, which means that data older than 365 days will be deleted automatically
- a new data file will be created every 10 days
- the number of memory blocks is 6
- data is allowed to be updated
After creating a database, the current database in use can be switched using SQL command `USE`, for example below SQL statement switches the current database to `power`. Without the current database specified, table name must be preceded with the corresponding database name.
For more details please refer to [Database](/taos-sql/database).
After creating a database, the current database in use can be switched using the SQL command `USE`. For example, the SQL statement below switches the current database to `power`. If the current database is not specified, table names must be preceded by the corresponding database name.
```sql
USE power;
......@@ -30,7 +37,7 @@ USE power;
## Create STable
In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [table 1](/tdinternal/arch#model_table1), the below SQL statement can be used to create the super table.
In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [table 1](/tdinternal/arch#model_table1), the SQL statement below can be used to create the super table.
```sql
CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
......@@ -41,15 +48,15 @@ If you are using versions prior to 2.0.15, the `STable` keyword needs to be repl
:::
Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must be timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the data collected. The column type can be integer, float, double, string ,etc. Besides, the schema for tags need to be provided, like location and groupId in the example. The tag type can be integer, float, string, etc. The static properties of a data collection point can be defined as tags, like the location, device type, device group ID, manager ID, etc. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must always be a timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the data collected. The remaining columns can [contain data of type](/taos-sql/data-type/) integer, float, double, string etc. In addition, the schema for tags, like location and groupId in the example, must be provided. The tag type can be integer, float, string, etc. Tags are essentially the static properties of a data collection point. For example, properties like the location, device type, device group ID, manager ID are tags. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For electrical power system, we need to create a STable respectively for meters, transformers, busbars, switches. There may be multiple kinds of data collection points on a single device, for example there may be one data collection point for electrical data like current and voltage and another point for environmental data like temperature, humidity and wind direction, multiple STables are required for such kind of device.
For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For electrical power system, we need to create a STable respectively for meters, transformers, busbars, switches. There may be multiple kinds of data collection points on a single device, for example there may be one data collection point for electrical data like current and voltage and another data collection point for environmental data like temperature, humidity and wind direction. Multiple STables are required for these kinds of devices.
At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a STable. If there are more than 4096 metrics to be collected for a data collection point, multiple STables are required. There can be multiple databases in a system, while one or more STables can exist in a database.
## Create Table
A specific table needs to be created for each data collection point. Similar to RDBMS, table name and schema are required to create a table. Beside, one or more tags can be created for each table. To create a table, a STable needs to be used as template and the values need to be specified for the tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using below SQL statement.
A specific table needs to be created for each data collection point. Similar to RDBMS, table name and schema are required to create a table. Additionally, one or more tags can be created for each table. To create a table, a STable needs to be used as template and the values need to be specified for the tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using below SQL statement.
```sql
CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
......@@ -57,17 +64,17 @@ CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "California.SanFrancisco" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
In TDengine system, it's recommended to create a table for a data collection point via STable. A table created via STable is called subtable in some parts of the TDengine documentation. All SQL commands applied on regular tables can be applied on subtables.
In the TDengine system, it's recommended to create a table for a data collection point via STable. A table created via STable is called subtable in some parts of the TDengine documentation. All SQL commands applied on regular tables can be applied on subtables.
:::warning
It's not recommended to create a table in a database while using a STable from another database as template.
:::tip
It's suggested to use the global unique ID of a data collection point as the table name, for example the device serial number. If there isn't such a unique ID, multiple IDs that are not global unique can be combined to form a global unique ID. It's not recommended to use a global unique ID as tag value.
It's suggested to use the globally unique ID of a data collection point as the table name. For example the device serial number could be used as a unique ID. If a unique ID doesn't exist, multiple IDs that are not globally unique can be combined to form a globally unique ID. It's not recommended to use a globally unique ID as tag value.
## Create Table Automatically
In some circumstances, it's unknown whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exist.
In some circumstances, it's unknown whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exists.
```sql
INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) VALUES (now, 10.2, 219, 0.32);
......@@ -79,6 +86,8 @@ For more details please refer to [Create Table Automatically](/taos-sql/insert#a
## Single Column vs Multiple Column
A multiple columns data model is supported in TDengine. As long as multiple metrics are collected by the same data collection point at the same time, i.e. the timestamp are identical, these metrics can be put in a single STable as columns. However, there is another kind of design, i.e. single column data model, a table is created for each metric, which means a STable is required for each kind of metric. For example, 3 STables are required for current, voltage and phase.
A multiple columns data model is supported in TDengine. As long as multiple metrics are collected by the same data collection point at the same time, i.e. the timestamps are identical, these metrics can be put in a single STable as columns.
However, there is another kind of design, i.e. single column data model in which a table is created for each metric. This means that a STable is required for each kind of metric. For example in a single column model, 3 STables would be required for current, voltage and phase.
It's recommended to use a multiple column data model as much as possible because it's better in the performance of inserting or querying rows. In some cases, however, the metrics to be collected vary frequently and correspondingly the STable schema needs to be changed frequently too. In such case, it's more convenient to use single column data model.
It's recommended to use a multiple column data model as much as possible because insert and query performance is higher. In some cases, however, the collected metrics may vary frequently and so the corresponding STable schema needs to be changed frequently too. In such cases, it's more convenient to use single column data model.
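As a rough sketch of the two designs (the meters schema follows the example used in this documentation; the single column STable names such as `current_st` are hypothetical and for illustration only):

```sql
-- Multiple column model: all metrics collected at the same time live in one STable
CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);

-- Single column model: one STable per metric
CREATE STable current_st (ts timestamp, value float) TAGS (location binary(64), groupId int);
CREATE STable voltage_st (ts timestamp, value int) TAGS (location binary(64), groupId int);
CREATE STable phase_st (ts timestamp, value float) TAGS (location binary(64), groupId int);
```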
......@@ -15,13 +15,13 @@ import CLine from "./_c_line.mdx";
## Introduction
A single line of text is used in InfluxDB Line protocol format represents one row of data, each line contains 4 parts as shown below.
In the InfluxDB Line protocol format, a single line of text is used to represent one row of data. Each line contains 4 parts as shown below.
```
measurement,tag_set field_set timestamp
```
- `measurement` will be used as the STable name
- `measurement` will be used as the name of the STable
- `tag_set` will be used as tags, with format like `<tag_key>=<tag_value>,<tag_key>=<tag_value>`
- `field_set` will be used as data columns, with format like `<field_key>=<field_value>,<field_key>=<field_value>`
- `timestamp` is the primary key timestamp corresponding to this row of data
......@@ -34,8 +34,8 @@ meters,location=California.LoSangeles,groupid=2 current=13.4,voltage=223,phase=0
:::note
- All the data in `tag_set` will be converted to ncahr type automatically .
- Each data in `field_set` must be self-description for its data type. For example 1.2f32 means a value 1.2 of float type, it will be treated as double without the "f" type suffix.
- All the data in `tag_set` will be converted to nchar type automatically.
- Each data in `field_set` must be self-descriptive for its data type. For example 1.2f32 means a value 1.2 of float type. Without the "f" type suffix, it will be treated as type double.
- Multiple kinds of precision can be used for the `timestamp` field. Time precision can be from nanosecond (ns) to hour (h).
:::
......
......@@ -15,7 +15,7 @@ import CTelnet from "./_c_opts_telnet.mdx";
## Introduction
A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs single column data model, so one line can only contain a single data column. There can be multiple tags. Each line contains 4 parts as below:
A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs a single column data model, so each line can only contain a single data column. There can be multiple tags. Each line contains 4 parts as below:
```
<metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
......@@ -60,7 +60,7 @@ Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_te
</TabItem>
</Tabs>
2 STables will be crated automatically while each STable has 4 rows of data in the above sample code.
2 STables will be created automatically and each STable has 4 rows of data in the above sample code.
```cmd
taos> use test;
......
......@@ -47,7 +47,7 @@ Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http
:::note
- In JSON protocol, strings will be converted to nchar type and numeric values will be converted to double type.
- Only data in array format is accepted, array must be used even there is only one row.
- Only data in array format is accepted and so an array must be used even if there is only one row.
:::
......
......@@ -20,7 +20,7 @@ import CAsync from "./_c_async.mdx";
## Introduction
SQL is used by TDengine as the query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine CLI `taos` can also be used to execute SQL Ad-Hoc queries. Here is the list of major query functionalities supported by TDengine:
SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine's CLI `taos` can also be used to execute ad hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
- Query on single column or multiple columns
- Filter on tags or data columns: >, <, =, <\>, like
......@@ -31,7 +31,7 @@ SQL is used by TDengine as the query language. Application programs can send SQL
- Join query with timestamp alignment
- Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
For example, the SQL statement below can be executed in TDengine CLI `taos` to select the rows whose voltage column is bigger than 215 and limit the output to only 2 rows.
For example, the SQL statement below can be executed in TDengine CLI `taos` to select records with voltage greater than 215 and limit the output to only 2 rows.
```sql
select * from d1001 where voltage > 215 order by ts desc limit 2;
......@@ -46,46 +46,46 @@ taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
Query OK, 2 row(s) in set (0.001100s)
```
To meet the requirements of many use cases, some special functions have been added in TDengine, for example `twa` (Time Weighted Average), `spared` (The difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
To meet the requirements of varied use cases, some special functions have been added in TDengine. Some examples are `twa` (Time Weighted Average), `spread` (The difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
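As an illustrative sketch (the table and column names below are assumed from the meters example used in this documentation), these functions can be used like this:

```sql
-- time weighted average of current over the past hour
SELECT TWA(current) FROM d1001 WHERE ts >= now - 1h AND ts <= now;
-- difference between the maximum and the minimum voltage
SELECT SPREAD(voltage) FROM d1001;
-- the most recent row of the table
SELECT LAST_ROW(*) FROM d1001;
```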
For detailed query syntax please refer to [Select](/taos-sql/select).
## Aggregation among Tables
In many use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviated for super table), is used in TDengine to represent a kind of data collection point, and a subtable is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same kind of data collection points. Aggregate functions applicable for tables can be used directly on STables, the syntax is exactly the same.
In most use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviation for super table), is used in TDengine to represent one type of data collection point, and a subtable is used to represent a specific data collection point of that type. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same type of data collection points. Aggregate functions applicable for tables can be used directly on STables; the syntax is exactly the same.
In summary, for a STable, its subtables can be aggregated by a simple query on the STable, it's a kind of join operation. But tables belong to different STables can not be aggregated.
In summary, records across subtables can be aggregated by a simple query on their STable. It is like a join operation. However, tables belonging to different STables can not be aggregated.
### Example 1
In TDengine CLI `taos`, use below SQL to get the average voltage of all the meters in California grouped by location.
In TDengine CLI `taos`, use the SQL below to get the average voltage of all the meters in California grouped by location.
```
taos> SELECT AVG(voltage) FROM meters GROUP BY location;
avg(voltage) | location |
=============================================================
222.000000000 | California.LoSangeles |
222.000000000 | California.LosAngeles |
219.200000000 | California.SanFrancisco |
Query OK, 2 row(s) in set (0.002136s)
```
### Example 2
In TDengine CLI `taos`, use below SQL to get the number of rows and the maximum current in the past 24 hours from meters whose groupId is 2.
In TDengine CLI `taos`, use the SQL below to get the number of rows and the maximum current in the past 24 hours from meters whose groupId is 2.
```
taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now - 24h;
cunt(*) | max(current) |
count(*) | max(current) |
==================================
5 | 13.4 |
Query OK, 1 row(s) in set (0.002136s)
```
Join queries are only allowed between the subtables of the same STable. In [Select](/taos-sql/select), all query operations are marked as to whether they supports STables or not.
Join queries are only allowed between subtables of the same STable. In [Select](/taos-sql/select), all query operations are marked as to whether they support STables or not.
## Down Sampling and Interpolation
In IoT use cases, down sampling is widely used to aggregate the data by time range. The `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from meters table d1001.
In IoT use cases, down sampling is widely used to aggregate data by time range. The `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from meters table d1001.
```
taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
......@@ -169,7 +169,7 @@ In the section describing [Insert](/develop/insert-data/sql-writing), a database
### Asynchronous Query
Besides synchronous queries, an asynchronous query API is also provided by TDengine to insert or query data more efficiently. With a similar hardware and software environment, the async API is 2~4 times faster than sync APIs. Async API works in non-blocking mode, which means an operation can be returned without finishing so that the calling thread can switch to other works to improve the performance of the whole application system. Async APIs perform especially better in the case of poor networks.
Besides synchronous queries, an asynchronous query API is also provided by TDengine to insert or query data more efficiently. With a similar hardware and software environment, the async API is 2~4 times faster than sync APIs. Async API works in non-blocking mode, which means an operation can be returned without finishing so that the calling thread can switch to other work to improve the performance of the whole application system. Async APIs perform especially better in the case of poor networks.
Please note that async query can only be used with a native connection.
......
---
sidebar_label: Continuous Query
description: "Continuous query is a query that's executed automatically according to predefined frequency to provide aggregate query capability by time window, it's actually a simplified time driven stream computing."
description: "Continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window. It is essentially simplified, time driven, stream computing."
title: "Continuous Query"
---
Continuous query is a query that's executed automatically according to a predefined frequency to provide aggregate query capability by time window, it's actually a simplified time driven stream computing. Continuous query can be performed on a table or STable in TDengine. The result of continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively.
A continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window. It is essentially simplified, time driven, stream computing. A continuous query can be performed on a table or STable in TDengine. The results of a continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively.
Continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to a time window to achieve down sampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine.
A continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With a continuous query, the result can be generated based on a time window to achieve down sampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine.
There are some differences between continuous query in TDengine and time window computation in stream computing:
......@@ -35,7 +35,7 @@ In this section the use case of meters will be used to introduce how to use cont
```sql
create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
create table D1001 using meters tags ("California.SanFrancisco", 2);
create table D1002 using meters tags ("California.LoSangeles", 2);
create table D1002 using meters tags ("California.LosAngeles", 2);
```
The SQL statement below retrieves the average voltage for a one minute time window, with each time window moving forward by 30 seconds.
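A sketch of what such a statement looks like, writing its results into a table named `avg_vol` to match the query results shown below (treat this as an illustration rather than the exact text):

```sql
create table avg_vol as
    select avg(voltage) from meters interval(1m) sliding(30s);
```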
......@@ -68,7 +68,7 @@ taos> select * from avg_vol;
2020-07-29 13:39:00.000 | 223.0800000 |
```
Please note that the minimum allowed time window is 10 milliseconds, and no upper limit.
Please note that the minimum allowed time window is 10 milliseconds, and there is no upper limit.
It's possible to specify the start and end time of a continuous query. If the start time is not specified, the timestamp of the first row will be considered as the start time; if the end time is not specified, the continuous query will be performed indefinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in the SQL statement below will be started from now and terminated one hour later.
......
---
sidebar_label: Subscription
description: "Lightweight service for data subscription and pushing, the time series data inserted into TDengine continuously can be pushed automatically to the subscribing clients."
description: "Lightweight service for data subscription and publishing. Time series data inserted into TDengine continuously can be pushed automatically to subscribing clients."
title: Data Subscription
---
......@@ -16,9 +16,9 @@ import CDemo from "./_sub_c.mdx";
## Introduction
Due to the nature of time series data, data inserting in TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, so each table in TDengine can essentially be considered as a message queue.
Due to the nature of time series data, data insertion into TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, and so each table in TDengine can essentially be considered as a message queue.
A lightweight service for data subscription and pushing is built in TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side, the client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start for retrieving new data is up to the client side.
A lightweight service for data subscription and publishing is built into TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side. The client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start retrieving new data is up to the client side.
There are 3 major APIs related to subscription provided in the TDengine client driver.
......@@ -32,7 +32,7 @@ For more details about these APIs please refer to [C/C++ Connector](/reference/c
If we want to get a notification and take some actions if the current exceeds a threshold, like 10A, from some meters, there are two ways:
The first way is to query on each sub table and record the last timestamp matching the criteria, then after some time query on the data later than recorded timestamp and repeat this process. The SQL statements for this way are as below.
The first way is to query each sub table and record the last timestamp matching the criteria. Then after some time, query the data later than the recorded timestamp, and repeat this process. The SQL statements for this way are as below.
```sql
select * from D1001 where ts > {last_timestamp1} and current > 10;
......@@ -50,7 +50,7 @@ select * from meters where ts > {last_timestamp} and current > 10;
However, this presents a new problem in how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, and sometimes the difference between them may be very big. Second, the time when the data from different meters arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used as `last_timestamp`, some data from other meters may be missed.
All the problems mentioned above can be resolved thoroughly using subscription provided by TDengine.
All the problems mentioned above can be resolved easily using the subscription functionality provided by TDengine.
The first step is to create subscription using `taos_subscribe`.
......@@ -65,31 +65,33 @@ if (async) {
}
```
The subscription in TDengine can be either synchronous or asynchronous. In the above sample code, the value of variable `async` is determined from the CLI input, then it's used to create either an async or sync subscription. Sync subscription means the client program needs to invoke `taos_consume` to retrieve data, and async subscription means another thread created by `taos_subscribe` internally invokes `taos_consume` to retrieve data and pass the data to `subscribe_callback` for processing, `subscribe_callback` is a call back function provided by the client program and it's suggested not to do time consuming operation in the call back function.
The subscription in TDengine can be either synchronous or asynchronous. In the above sample code, the value of variable `async` is determined from the CLI input, then it's used to create either an async or sync subscription. Sync subscription means the client program needs to invoke `taos_consume` to retrieve data, and async subscription means another thread created by `taos_subscribe` internally invokes `taos_consume` to retrieve data and pass the data to `subscribe_callback` for processing. `subscribe_callback` is a callback function provided by the client program. You should not perform time consuming operations in the callback function.
The parameter `taos` is an established connection. There is nothing special in sync subscription mode. In async subscription, it should be exclusively by current thread, otherwise unpredictable error may occur.
The parameter `taos` is an established connection. Nothing special needs to be done for thread safety for synchronous subscription. For asynchronous subscription, the taos_subscribe function should be called exclusively by the current thread, to avoid unpredictable errors.
The parameter `sql` is a `select` statement in which `where` clause can be used to specify filter conditions. In our example, the data whose current exceeds 10A needs to be subscribed like below SQL statement:
The parameter `sql` is a `select` statement in which the `where` clause can be used to specify filter conditions. In our example, we can subscribe to the records in which the current exceeds 10A, with the following SQL statement:
```sql
select * from meters where current > 10;
```
Please note that, all the data will be processed because no start time is specified. If only the data from one day ago needs to be processed, a time related condition can be added:
Please note that, all the data will be processed because no start time is specified. If we only want to process data for the past day, a time related condition can be added:
```sql
select * from meters where ts > now - 1d and current > 10;
```
The parameter `topic` is the name of the subscription, it needs to be guaranteed unique in the client program, but it's not necessary to be globally unique because subscription is implemented in the APIs on the client side.
The parameter `topic` is the name of the subscription. The client application must guarantee that the name is unique. However, it doesn't have to be globally unique because subscription is implemented in the APIs on the client side.
If the subscription named as `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named as `topic` has been created before by the client program, when the client program is restarted with the subscription named `topic`, parameter `restart` is used to determine whether to retrieve data from the beginning or from the last point where the subscription was broken. If the value of `restart` is **true** (i.e. a non-zero value), the data will be retrieved from beginning, or if it is **false** (i.e. zero), the data already consumed before will not be processed again.
If the subscription named as `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named as `topic` has been created before by the client program, when the client program is restarted with the subscription named `topic`, parameter `restart` is used to determine whether to retrieve data from the beginning or from the last point where the subscription was broken.
The last parameter of `taos_subscribe` is the polling interval in unit of millisecond. In sync mode, if the time difference between two continuous invocations to `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` will be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations to the call back function.
If the value of `restart` is **true** (i.e. a non-zero value), data will be retrieved from the beginning. If it is **false** (i.e. zero), the data already consumed before will not be processed again.
The last parameter of `taos_subscribe` is the polling interval in units of millisecond. In sync mode, if the time difference between two continuous invocations to `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` will be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations to the call back function.
The second to last parameter of `taos_subscribe` is used to pass arguments to the call back function. `taos_subscribe` doesn't process this parameter and simply passes it to the call back function. This parameter is simply ignored in sync mode.
After a subscription is created, its data can be consumed and processed, below is the sample code of how to consume data in sync mode, in the else part if `if (async)`.
After a subscription is created, its data can be consumed and processed. Shown below is the sample code to consume data in sync mode, in the else condition of `if (async)`.
```c
if (async) {
......@@ -106,7 +108,7 @@ if (async) {
}
```
In the above sample code, there is an infinite loop, each time carriage return is entered `taos_consume` is invoked, the return value of `taos_consume` is the selected result set, exactly as the input of `taos_use_result`, in the above sample `print_result` is used instead to simplify the sample. Below is the implementation of `print_result`.
In the above sample code in the else condition, there is an infinite loop. Each time carriage return is entered `taos_consume` is invoked. The return value of `taos_consume` is the selected result set. In the above sample, `print_result` is used to simplify the printing of the result set. It is similar to `taos_use_result`. Below is the implementation of `print_result`.
```c
void print_result(TAOS_RES* res, int blockFetch) {
......@@ -133,9 +135,9 @@ void print_result(TAOS_RES* res, int blockFetch) {
}
```
In the above code `taos_print_row` is used to process the data consumed. All the matching rows will be printed.
In the above code `taos_print_row` is used to process the data consumed. All matching rows are printed.
In async mode, the data consuming is simpler as below.
In async mode, consuming data is simpler as shown below.
```c
void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
......@@ -175,7 +177,7 @@ Then, this row of data will be shown by the example program on the first termina
## Examples
Below example program demonstrates how to subscribe the data rows whose current exceeds 10A using connectors.
The example program below demonstrates how to subscribe, using connectors, to data rows in which current exceeds 10A.
### Prepare Data
......@@ -250,7 +252,7 @@ taos> use power;
taos> insert into d1001 values(now, 12.4, 220, 1);
```
Because the current in inserted row exceeds 10A, it will be consumed by the example program.
Because the current in the inserted row exceeds 10A, it will be consumed by the example program.
```
ts: 1651146662805 current: 12.4 voltage: 220 phase: 1 location: California.SanFrancisco groupid: 2
......
......@@ -4,15 +4,15 @@ title: Cache
description: "The latest row of each table is kept in cache to provide high performance query of latest state."
---
The cache management policy in TDengine is First-In-First-Out (FIFO), which is also known as insert driven cache management policy and different from read driven cache management, i.e. Least-Recent-Used (LRU). It simply stores the latest data in cache and flushes the oldest data in cache to disk when the cache usage reaches a threshold. In IoT use cases, the most cared about data is the latest data, i.e. current state. The cache policy in TDengine is based the nature of IoT data.
The cache management policy in TDengine is First-In-First-Out (FIFO). FIFO is also known as an insert driven cache management policy and it is different from read driven cache management, which is more commonly known as Least-Recently-Used (LRU). FIFO simply stores the latest data in cache and flushes the oldest data in cache to disk when the cache usage reaches a threshold. In IoT use cases, it is the current state, i.e. the latest or most recent data, that is important. The cache policy in TDengine, like much of the design and architecture of TDengine, is based on the nature of IoT data.
Caching the latest data provides the capability of retrieving data in milliseconds. With this capability, TDengine can be configured properly to be used as caching system without deploying another separate caching system to simplify the system architecture and minimize the operation cost. The cache will be emptied after TDengine is restarted, TDengine doesn't reload data from disk into cache like a real key-value caching system.
Caching the latest data provides the capability of retrieving data in milliseconds. With this capability, TDengine can be configured properly to be used as a caching system without deploying another separate caching system. This simplifies the system architecture and minimizes operational costs. The cache is emptied after TDengine is restarted. TDengine does not reload data from disk into cache, like a key-value caching system.
The memory space used by TDengine cache is fixed in size, according to the configuration based on application requirement and system resources. Independent memory pool is allocated for and managed by each vnode (virtual node) in TDengine, there is no sharing of memory pools between vnodes. All the tables belonging to a vnode share all the cache memory of the vnode.
The memory space used by the TDengine cache is fixed in size and configurable. It should be allocated based on application requirements and system resources. An independent memory pool is allocated for and managed by each vnode (virtual node) in TDengine. There is no sharing of memory pools between vnodes. All the tables belonging to a vnode share all the cache memory of the vnode.
Memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache`, the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. A cache block needs to ensure that each table can store at least dozens of records to be efficient.
The memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache` and the number of blocks for each vnode is determined by the parameter `blocks`. For each vnode, the total cache size is `cache * blocks`. A cache block needs to ensure that each table can store at least dozens of records, to be efficient.
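For example, depending on the TDengine version these parameters can also be set per database when it is created; the sketch below is illustrative only (the database name and values are assumptions):

```sql
-- total cache per vnode for this database: 16 MB per block * 6 blocks
CREATE DATABASE power CACHE 16 BLOCKS 6;
```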
`last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on monitoring screen. For example the below SQL statement retrieves the latest voltage of all meters in San Francisco of California.
`last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on monitoring screen. For example the below SQL statement retrieves the latest voltage of all meters in San Francisco, California.
```sql
select last_row(voltage) from meters where location='California.SanFrancisco';
......
......@@ -6,29 +6,35 @@ title: Deployment
### Step 1
The FQDN of all hosts needs to be setup properly, all the FQDNs need to be configured in the /etc/hosts of each host. It must be confirmed that each FQDN can be accessed (by ping, for example) from any other hosts.
The FQDN of all hosts must be setup properly. For example, FQDNs may have to be configured in the /etc/hosts file on each host. You must confirm that each FQDN can be accessed from any other host. For example, you can verify this by using the `ping` command.
On each host the command `hostname -f` can be executed to get the hostname. `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, need to be checked and revised to make any two hosts accessible to each other.
To get the hostname on any host, the command `hostname -f` can be executed. `ping <FQDN>` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, needs to be checked and revised, to make any two hosts accessible to each other.
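For example, a minimal `/etc/hosts` sketch on every host might look like the following; the IP addresses and the second FQDN are assumptions for illustration only:

```
192.168.0.1   h1.taosdata.com
192.168.0.2   h2.taosdata.com
```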
:::note
- The host where the client program runs also needs to be configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster.
- It's suggested to disable the firewall for all hosts in the cluster. At least TCP/UDP for port 6030~6042 need to be open if a firewall is enabled.
- Please ensure that your firewall rules do not block TCP/UDP on ports 6030-6042 on all hosts in the cluster.
:::
### Step 2
If any previous version of TDengine has been installed and configured on any host, the installation needs to be removed and the data needs to be cleaned up. For details about uninstalling please refer to [Install and Uninstall](/operation/pkg-install). To clean up the data, please use `rm -rf /var/lib/taos/\*` assuming the `dataDir` is configured as `/var/lib/taos`.
If any previous version of TDengine has been installed and configured on any host, the installation needs to be removed and the data needs to be cleaned up. For details about uninstalling please refer to [Install and Uninstall](/operation/pkg-install). To clean up the data, please use `rm -rf /var/lib/taos/\*` assuming the `dataDir` is configured as `/var/lib/taos`.
:::note
As a best practice, before cleaning up any data files or directories, please ensure that your data has been backed up correctly, if required by your data integrity, backup, security, or other standard operating protocols (SOP).
:::
### Step 3
Now it's time to install TDengine on all hosts without starting `taosd`, the versions on all hosts should be same. If it's prompted to input the existing TDengine cluster, simply press carriage return to ignore it. `install.sh -e no` can also be used to disable this prompt. For details please refer to [Install and Uninstall](/operation/pkg-install).
Now it's time to install TDengine on all hosts but without starting `taosd`. Note that the versions on all hosts should be same. If you are prompted to input the existing TDengine cluster, simply press carriage return to ignore the prompt. `install.sh -e no` can also be used to disable this prompt. For details please refer to [Install and Uninstall](/operation/pkg-install).
### Step 4
Now each physical node (referred to as `dnode` hereinafter, it's abbreviation for "data node") of TDengine needs to be configured properly. Please note that one dnode doesn't stand for one host, multiple TDengine nodes can be started on single host as long as they are configured properly without conflicting. More specifically each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as following.
Now each physical node (referred to, hereinafter, as `dnode` which is an abbreviation for "data node") of TDengine needs to be configured properly. Please note that one dnode doesn't stand for one host. Multiple TDengine dnodes can be started on a single host as long as they are configured properly without conflicting. More specifically each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as following.
```c
// firstEp is the end point to connect to when any dnode starts
......@@ -67,9 +73,11 @@ Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset`
## Start Cluster
In the following example we assume that the first dnode has FQDN h1.taosdata.com and the second dnode has FQDN h2.taosdata.com.
### Start The First DNODE
The first dnode can be started following the instructions in [Get Started](/get-started/), for example h1.taosdata.com. Then TDengine CLI `taos` can be launched to execute command `show dnodes`, the output is as following for example:
The first dnode can be started following the instructions in [Get Started](/get-started/). Then TDengine CLI `taos` can be launched to execute the command `show dnodes`; for example, the output is as follows:
```
Welcome to the TDengine shell from Linux, Client Version:2.0.0.0
......@@ -80,27 +88,41 @@ Copyright (c) 2017 by TAOS Data, Inc. All rights reserved.
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time |
=====================================================================================
1 | h1.taos.com:6030 | 0 | 2 | ready | any | 2020-07-31 03:49:29.202 |
1 | h1.taosdata.com:6030 | 0 | 2 | ready | any | 2020-07-31 03:49:29.202 |
Query OK, 1 row(s) in set (0.006385s)
taos>
```
From the above output, it is shown that the end point of the started dnode is "h1.taos.com:6030", which is the `firstEp` of the cluster.
From the above output, it is shown that the end point of the started dnode is "h1.taosdata.com:6030", which is the `firstEp` of the cluster.
### Start Other DNODEs
There are a few steps necessary to add other dnodes in the cluster.
First, start `taosd` as instructed in [Get Started](/get-started/), assuming it's for the second dnode. Before starting `taosd`, please making sure the configuration is correct, especially `firstEp`, `FQDN` and `serverPort`, `firstEp` must be same as the dnode shown in the section "Start First DNODE", i.e. "h1.taosdata.com" in this example.
Let's assume we are starting the second dnode with FQDN, h2.taosdata.com. First we make sure the configuration is correct.
```c
// firstEp is the end point to connect to when any dnode starts
firstEp h1.taosdata.com:6030
// must be configured to the FQDN of the host where the dnode is launched
fqdn h2.taosdata.com
// the port used by the dnode, default is 6030
serverPort 6030
```
Second, we can start `taosd` as instructed in [Get Started](/get-started/).
Then, on the first dnode, use TDengine CLI `taos` to execute below command to add the end point of the dnode in the cluster. In the command "fqdn:port" should be quoted using double quotes.
Then, on the first dnode i.e. h1.taosdata.com in our example, use TDengine CLI `taos` to execute the following command to add the end point of the dnode in the cluster. In the command "fqdn:port" should be quoted using double quotes.
```sql
CREATE DNODE "h2.taosdata.com:6030";
```
Then on the first dnode, execute `show dnodes` in `taos` to show whether the second dnode has been added in the cluster successfully or not.
Then on the first dnode h1.taosdata.com, execute `show dnodes` in `taos` to show whether the second dnode has been added in the cluster successfully or not.
```sql
SHOW DNODES;
......
......@@ -10,7 +10,7 @@ Window related clauses are used to divide the data set to be queried into subset
`INTERVAL` clause is used to generate time windows of the same time interval, `SLIDING` is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining continuous query both the size of time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e] ,[t1s , t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time window.
![Time Window](./timewindow-1.webp)
![TDengine Database Time Window](./timewindow-1.webp)
`INTERVAL` and `SLIDING` should be used with aggregate functions and select functions. The SQL statement below is illegal because no aggregate or selection function is used with `INTERVAL`.
......@@ -30,7 +30,7 @@ When the time length specified by `SLIDING` is the same as that specified by `IN
In case of using integer, bool, or string to represent the device status at a moment, the continuous rows with same status belong to same status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12]. Status window is not applicable to STable for now.
![Status Window](./timewindow-3.webp)
![TDengine Database Status Window](./timewindow-3.webp)
`STATE_WINDOW` is used to specify the column based on which to define status window, for example:
......@@ -46,7 +46,7 @@ SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
The primary key, i.e. timestamp, is used to determine which session window the row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different time windows. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitutes 2 time windows, [2019-04-28 14:22:10,2019-04-28 14:22:30] and [2019-04-28 14:23:10,2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
![Session Window](./timewindow-2.webp)
![TDengine Database Session Window](./timewindow-2.webp)
If the time interval between two continuous rows is within the time interval specified by `tol_value`, they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now.
......
......@@ -7,12 +7,15 @@ title: Fault Tolerance & Disaster Recovery
TDengine uses **WAL**, i.e. Write Ahead Log, to achieve fault tolerance and high reliability.
When a data block is received by TDengine, the original data block is firstly written into WAL. The log in WAL will be deleted only after the data has been written into data files in the database. Data can be recovered from WAL in case the server is stopped abnormally due to any reason and then restarted.
When a data block is received by TDengine, the original data block is first written into WAL. The log in WAL will be deleted only after the data has been written into data files in the database. Data can be recovered from WAL in case the server is stopped abnormally due to any reason and then restarted.
There are 2 configuration parameters related to WAL:
- walLevel:0:wal is disabled; 1:wal is enabled without fsync; 2:wal is enabled with fsync.
- fsync:only valid when walLevel is set to 2, it specified the interval of invoking fsync. If set to 0, it means fsync is invoked immediately once WAL is written.
- walLevel:
- 0:wal is disabled;
- 1:wal is enabled without fsync;
- 2:wal is enabled with fsync.
- fsync:only valid when walLevel is set to 2, it specifies the interval of invoking fsync. If set to 0, it means fsync is invoked immediately once WAL is written.
To achieve absolutely no data loss, walLevel needs to be set to 2 and fsync needs to be set to 0. The penalty is that the performance of data ingestion is degraded. However, if the number of concurrent data insertion threads on the client side is big enough, for example 50, the data ingestion performance will still be good enough; our verification shows that the drop is only 30% compared to fsync being set to 3,000 milliseconds.
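For reference, the corresponding entries in `taos.cfg` might look like the sketch below; the values shown are illustrative, not recommendations:

```c
// enable WAL and invoke fsync periodically
walLevel 2
// fsync interval in milliseconds; 0 means fsync immediately after each WAL write
fsync 3000
```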
......@@ -20,10 +23,10 @@ To achieve absolutely no data loss, walLevel needs to be set to 2 and fsync need
TDengine uses replication to provide high availability and disaster recovery capability.
TDengine cluster is managed by mnode. To make sure the high availability of mnode, multiple replicas can be configured by system parameter `numOfMnodes`. The data replication between mnode replicas is in synchronous way to guarantee the metadata consistency.
TDengine cluster is managed by mnode. To make sure the high availability of mnode, multiple replicas can be configured by the system parameter `numOfMnodes`. The data replication between mnode replicas is performed in a synchronous way to guarantee the metadata consistency.
The number of replicas for time series data in TDengine is associated with each database, there can be a lot of databases in a cluster while each database can be configured with a different number of replicas. When creating a database, parameter `replica` is used to configure the number of replications. To achieve high availability, `replica` needs to be higher than 1.
The number of replicas for the time series data in TDengine is associated with each database, there can be a lot of databases in a cluster while each database can be configured with a different number of replicas. When creating a database, parameter `replica` is used to configure the number of replications. To achieve high availability, `replica` needs to be higher than 1.
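For example, a database with three replicas might be created as in the sketch below; the database name is an assumption for illustration:

```sql
CREATE DATABASE power REPLICA 3;
```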
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create table.
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create a table.
As long as the dnodes of a TDengine cluster are deployed on different physical machines and replica number is set to bigger than 1, high availability can be achieved without any other assistance. If dnodes of TDengine cluster are deployed in geographically different data centers, disaster recovery can be achieved too.
As long as the dnodes of a TDengine cluster are deployed on different physical machines and the replica number is set to bigger than 1, high availability can be achieved without any other assistance. If dnodes of TDengine cluster are deployed in geographically different data centers, disaster recovery can be achieved too.
......@@ -2,7 +2,7 @@
title: User Management
---
System operator can use TDengine CLI `taos` to create or remove user or change password. The SQL command is as low:
A system operator can use TDengine CLI `taos` to create or remove users or change passwords. The SQL commands are documented below:
## Create User
......@@ -10,7 +10,7 @@ System operator can use TDengine CLI `taos` to create or remove user or change p
CREATE USER <user_name> PASS <'password'>;
```
When creating a user and specifying the user name and password, password needs to be quoted using single quotes.
When creating a user and specifying the user name and password, the password needs to be quoted using single quotes.
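For example, a hypothetical user could be created as follows; the user name and password are illustrative only:

```sql
CREATE USER test_user PASS 'test123456';
```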
## Drop User
......@@ -18,7 +18,7 @@ When creating a user and specifying the user name and password, password needs t
DROP USER <user_name>;
```
Drop a user can only be performed by root.
Dropping a user can only be performed by root.
## Change Password
......@@ -26,7 +26,7 @@ Drop a user can only be performed by root.
ALTER USER <user_name> PASS <'password'>;
```
To keep the case of the password when changing password, password needs to be quoted using single quotes.
To keep the case of the password when changing password, the password needs to be quoted using single quotes.
## Change Privilege
......@@ -36,7 +36,7 @@ ALTER USER <user_name> PRIVILEGE <write|read>;
The privilege can be changed to either `read` or `write`, specified without single quotes.
Note:there is another privilege `super`, which not allowed to be authorized to any user.
Note: there is another privilege, `super`, which is not allowed to be granted to any user.
## Show Users
......@@ -45,6 +45,6 @@ SHOW USERS;
```
:::note
In SQL syntax, `< >` means the part that needs to be input by user, excluding the `< >` itself.
In SQL syntax, `< >` means the part that needs to be input by the user, excluding the `< >` itself.
:::
......@@ -2,26 +2,26 @@
title: Data Import
---
There are multiple ways of importing data provided byTDengine: import with script, import from data file, import using `taosdump`.
There are multiple ways of importing data provided by TDengine: import with script, import from data file, import using `taosdump`.
## Import Using Script
TDengine CLI `taos` supports `source <filename>` command for executing the SQL statements in the file in batch. The SQL statements for creating databases, creating tables, and inserting rows can be written in single file with one statement on each line, then the file can be executed using `source` command in TDengine CLI `taos` to execute the SQL statements in order and in batch. In the script file, any line beginning with "#" is treated as comments and ignored silently.
TDengine CLI `taos` supports `source <filename>` command for executing the SQL statements in the file in batch. The SQL statements for creating databases, creating tables, and inserting rows can be written in a single file with one statement on each line, then the file can be executed using the `source` command in TDengine CLI `taos` to execute the SQL statements in order and in batch. In the script file, any line beginning with "#" is treated as comments and ignored silently.
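For example, a hypothetical script file `import.sql` might look like the sketch below; it can then be executed with the `source` command described above:

```sql
# create the database and table, then insert a few rows
create database if not exists power;
use power;
create table if not exists d1001 (ts timestamp, current float, voltage int, phase float);
insert into d1001 values('2018-10-03 14:38:05.000', 10.3, 219, 0.31);
insert into d1001 values('2018-10-03 14:38:15.000', 12.6, 218, 0.33);
```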
## Import from Data File
In TDengine CLI, data can be imported from a CSV file into an existing table. The data in single CSV must belong to same table and must be consistent with the schema of that table. The SQL statement is as below:
In TDengine CLI, data can be imported from a CSV file into an existing table. The data in a single CSV must belong to the same table and must be consistent with the schema of that table. The SQL statement is as below:
```sql
insert into tb1 file 'path/data.csv';
```
:::note
If there is description in the first line of a CSV file, please remove it before importing. If there is no value for a column, please use `NULL` without quotes.
If there is a description in the first line of the CSV file, please remove it before importing. If there is no value for a column, please use `NULL` without quotes.
:::
For example, there is a sub table d1001 whose schema is as below:
For example, there is a subtable d1001 whose schema is as below:
```sql
taos> DESCRIBE d1001
......@@ -49,7 +49,7 @@ The format of the CSV file to be imported, data.csv, is as below:
'2018-10-12 06:38:05.000',18.30000,219,0.31000
```
Then, below SQL statement can be used to import data from file "data.csv", assuming the file is located under the home directory of current Linux user.
Then, the below SQL statement can be used to import data from file "data.csv", assuming the file is located under the home directory of the current Linux user.
```sql
taos> insert into d1001 file '~/data.csv';
......@@ -58,4 +58,4 @@ Query OK, 9 row(s) affected (0.004763s)
## Import using taosdump
A convenient tool for importing and exporting data is provided by TDengine, `taosdump`, which can used to export data from one TDengine cluster and import into another one. For the details of using `taosdump` please refer to [Tool for exporting and importing data: taosdump](/reference/taosdump).
A convenient tool for importing and exporting data is provided by TDengine, `taosdump`, which can be used to export data from one TDengine cluster and import into another one. For the details of using `taosdump` please refer to [Tool for exporting and importing data: taosdump](/reference/taosdump).
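As a hedged sketch (option names may vary between taosdump versions, so please check `taosdump --help`), exporting a database from one cluster and importing it into another might look like:

```bash
# export database "power" into a local directory
taosdump -h source-cluster-fqdn -o /data/dump -D power
# import the dumped files into the target cluster
taosdump -h target-cluster-fqdn -i /data/dump
```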
......@@ -3,7 +3,7 @@ sidebar_label: Connections & Tasks
title: Manage Connections and Query Tasks
---
System operator can use TDengine CLI to show the connections, ongoing queries, stream computing, and can close connection or stop ongoing query task or stream computing.
A system operator can use TDengine CLI to show the connections, ongoing queries, stream computing, and can close connection or stop ongoing query task or stream computing.
## Show Connections
......@@ -51,4 +51,4 @@ The first column of the output is stream ID, which is composed of the connection
KILL STREAM <stream-id>;
```
The the above SQL command, `stream-id` is from the first column of the output of `SHOW STREAMS`.
In the above SQL command, `stream-id` is taken from the first column of the output of `SHOW STREAMS`.
......@@ -2,19 +2,19 @@
title: TDengine Monitoring
---
After TDengine is started, a database named `log` for monitoring is created automatically. The information about CPU, memory, disk, bandwidth, number of requests, disk I/O speed, slow query is written into `log` database on the basis of a predefined interval. Besides, some important system operations, like logon, create user, drop database, and alerts and warnings generated in TDengine are written into `log` database too. System operator can view the data in `log` database from TDengine CLI or from a web console.
After TDengine is started, a database named `log` for monitoring is created automatically. The information about CPU, memory, disk, bandwidth, number of requests, disk I/O speed, slow query is written into `log` database on the basis of a predefined interval. Additionally, some important system operations, like logon, create user, drop database, and alerts and warnings generated in TDengine are written into the `log` database too. A system operator can view the data in `log` database from TDengine CLI or from a web console.
Collection of the monitoring information is enabled by default, but can be disabled by parameter `monitor` in configuration file.
The collection of the monitoring information is enabled by default, but can be disabled by parameter `monitor` in the configuration file.
## TDinsight
TDinsight is a total solution which uses the monitor database `log` mentioned previously and Grafana to monitor a TDengine cluster.
TDinsight is a complete solution which uses the monitor database `log` mentioned previously and Grafana to monitor a TDengine cluster.
From version 2.3.3.0, more monitoring data has been added in the `log` database. Please refer to [TDinsight Grafana Dashboard](https://grafana.com/grafana/dashboards/15167) to learn more details about using TDinsight to monitor TDengine.
A script `TDinsight.sh` is provided to deploy TDinsight in automatic way.
A script `TDinsight.sh` is provided to deploy TDinsight automatically.
Download `TDinsight.sh` with below command:
Download `TDinsight.sh` with the below command:
```bash
wget https://github.com/taosdata/grafanaplugin/raw/master/dashboards/TDinsight.sh
......@@ -38,7 +38,7 @@ There are two ways to setup Grafana alert notification.
sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -E <notifier uid>
```
- The AliCloud SMS alert built in TDengine data source plugin can be enabled with parameter `-s`, the parameters of this way are as follows:
- The AliCloud SMS alert built in TDengine data source plugin can be enabled with parameter `-s`, the parameters of enabling this plugin are listed below:
- `-I`: AliCloud SMS Key ID
- `-K`: AliCloud SMS Key Secret
......@@ -47,7 +47,7 @@ There are two ways to setup Grafana alert notification.
- `-T`: Input parameters in JSON format for the SMS notification template, for example`{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}`
- `-B`: List of mobile numbers to be notified
Below is an example of the full command using this way.
Below is an example of the full command using the AliCloud SMS alert.
```bash
sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -s \
......@@ -55,6 +55,6 @@ There are two ways to setup Grafana alert notification.
-T '{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}'
```
Launch `TDinsight.sh` as above command and restart Grafana, then open Dashboard `http://localhost:3000/d/tdinsight`.
Launch `TDinsight.sh` with the command above and restart Grafana, then open Dashboard `http://localhost:3000/d/tdinsight`.
For more use cases and restrictions please refer to [TDinsight](/reference/tdinsight/).
......@@ -10,13 +10,13 @@ The diagnostic for network connection can be executed between Linux and Linux or
Diagnostic steps:
1. If the port range to be diagnosed are being occupied by a `taosd` server process, please firstly stop `taosd.
2. On the server side, execute command `taos -n server -P <port> -l <pktlen>` to monitor the port range starting from the port specified by `-P` parameter with the role of "server.
3. On the client side, execute command `taos -n client -h <fqdn of server> -P <port> -l <pktlen>` to send testing package to the specified server and port.
1. If the port range to be diagnosed is occupied by a `taosd` server process, please first stop `taosd`.
2. On the server side, execute the command `taos -n server -P <port> -l <pktlen>` to monitor the port range starting from the port specified by the `-P` parameter in the role of "server".
3. On the client side, execute the command `taos -n client -h <fqdn of server> -P <port> -l <pktlen>` to send a testing package to the specified server and port.
-l <pktlen\>: The size of the testing package, in bytes. The value range is [11, 64,000] and default value is 1,000. Please be noted that the package length must be same in the above 2 commands executed on server side and client side respectively.
-l <pktlen\>: The size of the testing package, in bytes. The value range is [11, 64,000] and default value is 1,000. Please note that the package length must be same in the above 2 commands executed on server side and client side respectively.
Output of the server side is as below for example:
Output of the server side for the example is below:
```bash
# taos -n server -P 6000
......@@ -47,7 +47,7 @@ Output of the server side is as below for example:
12/21 14:50:22.721261 0x7f53427ec700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6011
```
Output of the client side is as below for example:
Output of the client side for the example is below:
```bash
# taos -n client -h 172.27.0.7 -P 6000
......@@ -65,13 +65,13 @@ Output of the client side is as below for example:
12/21 14:50:22.721274 0x7fc95d859200 UTL successed to test UDP port:6011
```
The output needs to be checked carefully for the system operator to find out root cause and solve the problem.
The output needs to be checked carefully for the system operator to find out the root cause and solve the problem.
## Startup Status and RPC Diagnostic
`taos -n startup -h <fqdn of server>` can be used to check the startup status of a `taosd` process. This is a common task for a system operator to do to determine whether `taosd` has been started successfully, especially in the case of a cluster.
`taos -n rpc -h <fqdn of server>` can be used to check whether the port of a started `taosd` can be accessed or not. If `taosd` process doesn't respond or work abnormally, this command can be used to initiate a rpc communication with the specified fqdn to determine whether it's network problem or `taosd` is abnormal.
`taos -n rpc -h <fqdn of server>` can be used to check whether the port of a started `taosd` can be accessed or not. If `taosd` process doesn't respond or is working abnormally, this command can be used to initiate a rpc communication with the specified fqdn to determine whether it's a network problem or `taosd` is abnormal.
## Sync and Arbitrator Diagnostic
......@@ -80,7 +80,7 @@ taos -n sync -P 6040 -h <fqdn of server>
taos -n sync -P 6042 -h <fqdn of server>
```
The above commands can be executed on Linux Shell to check whether the port for sync works well and whether the sync module of the server side works well. Besides, `-P 6042` is used to check whether the arbitrator is configured properly and works well.
The above commands can be executed on Linux Shell to check whether the port for sync is working well and whether the sync module on the server side is working well. Additionally, `-P 6042` is used to check whether the arbitrator is configured properly and is working well.
## Network Speed Diagnostic
......@@ -88,12 +88,12 @@ The above commands can be executed on Linux Shell to check whether the port for
From version 2.2.0.0, the above command can be executed on Linux Shell to test the network speed. It sends uncompressed packages to a running `taosd` server process or a simulated server process started by `taos -n server`. The parameters that can be used when testing network speed are listed below; an example invocation follows the list.
-n:When set to "speed", it means testing network speed
-h:The FQDN or IP of the server process to be connected to; if not set, the FQDN configured in `taos.cfg` is used
-P:The port of the server process to connect to, the default value is 6030
-N:The number of packages that will be sent in the test, range is [1,10000], default value is 100
-l:The size of each package in bytes, range is [1024, 1024 \* 1024 \* 1024], default value is 1024
-S:The type of network packages to send, can be either TCP or UDP, default value is
-n:When set to "speed", it means testing network speed.
-h:The FQDN or IP of the server process to be connected to; if not set, the FQDN configured in `taos.cfg` is used.
-P:The port of the server process to connect to, the default value is 6030.
-N:The number of packages that will be sent in the test, range is [1,10000], default value is 100.
-l:The size of each package in bytes, range is [1024, 1024 \* 1024 \* 1024], default value is 1024.
-S:The type of network packages to send, can be either TCP or UDP, default value is TCP.
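A hedged example invocation, assuming the server FQDN used earlier in this document and the default values above, might be:

```bash
taos -n speed -h h1.taosdata.com -P 6030 -N 100 -l 1024 -S TCP
```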
## FQDN Resolution Diagnostic
......@@ -101,22 +101,22 @@ From version 2.2.0.0, the above command can be executed on Linux Shell to test t
From version 2.2.0.0, the above command can be executed on Linux Shell to test the resolution speed of FQDN. It can be used to try to resolve an FQDN to an IP address and record the time spent in this process. The parameters that can be used for this purpose are as below; an example invocation follows the list.
-n:When set to "fqdn", it means testing the speed of resolving FQDN
-h:The FQDN to be resolved. If not set, the `FQDN` parameter in `taos.cfg` is used by default.
-n:When set to "fqdn", it means testing the speed of resolving FQDN.
-h:The FQDN to be resolved. If not set, the `FQDN` parameter in `taos.cfg` is used by default.
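A hedged example, assuming the FQDN used earlier in this document:

```bash
taos -n fqdn -h h1.taosdata.com
```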
## Server Log
The parameter `debugFlag` is used to control the log level of `taosd` server process. The default value is 131, for debug purpose it needs to be escalated to 135 or 143.
The parameter `debugFlag` is used to control the log level of the `taosd` server process. The default value is 131, for debug purpose it needs to be escalated to 135 or 143.
Once this parameter is set to 135 or 143, the log file grows very quickly especially when there is huge volume of data insertion and data query requests. If all the logs are stored together, some important information may be missed very easily, so on server side important information is stored at different place from other logs.
Once this parameter is set to 135 or 143, the log file grows very quickly, especially when there is a huge volume of data insertion and data query requests. If all the logs are stored together, some important information may be missed very easily, so on the server side important information is stored in a different place from other logs.
- Logs at the INFO, WARNING and ERROR levels are stored in `taosinfo` so that important information can be found easily
- Logs at the DEBUG (135) and TRACE (143) levels, and other information not handled by `taosinfo`, are stored in `taosdlog`
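For reference, the related entry in `taos.cfg` might look like the sketch below; the high log level is illustrative only and is not recommended for production:

```c
// log level: 131 (default), 135 (DEBUG) or 143 (TRACE)
debugFlag 135
```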
## Client Log
An independent log file, named as "taoslog+<seq num\>" is generated for each client program, i.e. a client process. The default value of `debugFlag` is also 131 and only log at level of INFO/ERROR/WARNING is recorded, it and needs to be changed to 135 or 143 so that log at DEBUG or TRACE level can be recorded for debugging purpose.
An independent log file, named as "taoslog+<seq num\>" is generated for each client program, i.e. a client process. The default value of `debugFlag` is also 131 and only logs at level of INFO/ERROR/WARNING are recorded, for debugging purposes it needs to be changed to 135 or 143 so that logs at DEBUG or TRACE level can be recorded.
The maximum length of a single log file is controlled by parameter `numOfLogLines` and only 2 log files are kept for each `taosd` server process.
log file is written in async way to minimize the workload on disk, bu the penalty is that a few log lines may be lost in some extreme conditions.
Log files are written in an async way to minimize the workload on disk, but the trade off for performance is that a few log lines may be lost in some extreme conditions.
......@@ -4,7 +4,7 @@ title: Connector
TDengine provides a rich set of APIs (application development interface). To facilitate users to develop their applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both native interfaces (taosc) and REST interfaces (not supported in a few languages yet). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
![image-connector](./connector.webp)
![TDengine Database image-connector](./connector.webp)
## Supported platforms
......
......@@ -11,7 +11,7 @@ import TabItem from '@theme/TabItem';
'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions including data writing, querying, subscription, schemaless writing, and bind interface. And the other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). The feature set implemented by REST connections differs slightly from that implemented by native connections.
![tdengine-connector](tdengine-jdbc-connector.webp)
![TDengine Database tdengine-connector](tdengine-jdbc-connector.webp)
The preceding diagram shows two ways for a Java app to access TDengine via connector:
......
......@@ -24,7 +24,7 @@ taosAdapter provides the following features.
## taosAdapter architecture diagram
![taosAdapter Architecture](taosAdapter-architecture.webp)
![TDengine Database taosAdapter Architecture](taosAdapter-architecture.webp)
## taosAdapter Deployment Method
......
......@@ -233,33 +233,33 @@ The default username/password is `admin`. Grafana will require a password change
Point to the **Configurations** -> **Data Sources** menu, and click the **Add data source** button.
![Add data source button](./assets/howto-add-datasource-button.webp)
![TDengine Database TDinsight Add data source button](./assets/howto-add-datasource-button.webp)
Search for and select **TDengine**.
![Add datasource](./assets/howto-add-datasource-tdengine.webp)
![TDengine Database TDinsight Add datasource](./assets/howto-add-datasource-tdengine.webp)
Configure the TDengine datasource.
![Datasource Configuration](./assets/howto-add-datasource.webp)
![TDengine Database TDinsight Datasource Configuration](./assets/howto-add-datasource.webp)
Save and test. It will report 'TDengine Data source is working' under normal circumstances.
![datasource test](./assets/howto-add-datasource-test.webp)
![TDengine Database TDinsight datasource test](./assets/howto-add-datasource-test.webp)
### Importing dashboards
Point to **+** / **Create** - **import** (or `/dashboard/import` url).
![Import Dashboard and Configuration](./assets/import_dashboard.webp)
![TDengine Database TDinsight Import Dashboard and Configuration](./assets/import_dashboard.webp)
Type the dashboard ID `15167` in the **Import via grafana.com** location and **Load**.
![Import via grafana.com](./assets/import-dashboard-15167.webp)
![TDengine Database TDinsight Import via grafana.com](./assets/import-dashboard-15167.webp)
Once the import is complete, the full page view of TDinsight is shown below.
![show](./assets/TDinsight-full.webp)
![TDengine Database TDinsight show](./assets/TDinsight-full.webp)
## TDinsight dashboard details
......@@ -269,7 +269,7 @@ Details of the metrics are as follows.
### Cluster Status
![tdinsight-mnodes-overview](./assets/TDinsight-1-cluster-status.webp)
![TDengine Database TDinsight mnodes overview](./assets/TDinsight-1-cluster-status.webp)
This section contains the current information and status of the cluster; the alert information is also shown here (from left to right, top to bottom).
......@@ -289,7 +289,7 @@ This section contains the current information and status of the cluster, the ale
### DNodes Status
![tdinsight-mnodes-overview](./assets/TDinsight-2-dnodes.webp)
![TDengine Database TDinsight mnodes overview](./assets/TDinsight-2-dnodes.webp)
- **DNodes Status**: simple table view of `show dnodes`.
- **DNodes Lifetime**: the time elapsed since the dnode was created.
......@@ -298,14 +298,14 @@ This section contains the current information and status of the cluster, the ale
### MNode Overview
![tdinsight-mnodes-overview](./assets/TDinsight-3-mnodes.webp)
![TDengine Database TDinsight mnodes overview](./assets/TDinsight-3-mnodes.webp)
1. **MNodes Status**: a simple table view of `show mnodes`.
2. **MNodes Number**: similar to `DNodes Number`, it shows the change in the number of MNodes.
### Request
![tdinsight-requests](./assets/TDinsight-4-requests.webp)
![TDengine Database TDinsight tdinsight requests](./assets/TDinsight-4-requests.webp)
1. **Requests Rate(Inserts per Second)**: average number of inserts per second.
2. **Requests (Selects)**: number of query requests and change rate (count per second).
......@@ -313,7 +313,7 @@ This section contains the current information and status of the cluster, the ale
### Database
![tdinsight-database](./assets/TDinsight-5-database.webp)
![TDengine Database TDinsight database](./assets/TDinsight-5-database.webp)
Database usage, repeated for each value of the variable `$database` i.e. multiple rows per database.
......@@ -325,7 +325,7 @@ Database usage, repeated for each value of the variable `$database` i.e. multipl
### DNode Resource Usage
![dnode-usage](./assets/TDinsight-6-dnode-usage.webp)
![TDengine Database TDinsight dnode usage](./assets/TDinsight-6-dnode-usage.webp)
Data node resource usage display, with multiple rows repeated for each value of the variable `$fqdn`, i.e. each data node. It includes:
......@@ -346,13 +346,13 @@ Data node resource usage display with repeated multiple rows for the variable `$
### Login History
![Login History](./assets/TDinsight-7-login-history.webp)
![TDengine Database TDinsight Login History](./assets/TDinsight-7-login-history.webp)
Currently, only the number of logins per minute is reported.
### Monitoring taosAdapter
![taosadapter](./assets/TDinsight-8-taosadapter.webp)
![TDengine Database TDinsight monitor taosadapter](./assets/TDinsight-8-taosadapter.webp)
Supports monitoring taosAdapter request statistics and status details. It includes:
......
......@@ -62,15 +62,15 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource
Users can log in to the Grafana server (username/password: admin/admin) directly through the URL `http://localhost:3000` and add a datasource through `Configuration -> Data Sources` on the left side, as shown in the following figure.
![img](./grafana/add_datasource1.webp)
![TDengine Database TDinsight plugin add datasource 1](./grafana/add_datasource1.webp)
Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it, as shown in the following figure.
![img](./grafana/add_datasource2.webp)
![TDengine Database TDinsight plugin add datasource 2](./grafana/add_datasource2.webp)
Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration.
![img](./grafana/add_datasource3.webp)
![TDengine Database TDinsight plugin add database 3](./grafana/add_datasource3.webp)
- Host: IP address of the server where the components of the TDengine cluster provide REST service (offered by taosd before 2.4 and by taosAdapter since 2.4) and the port number of the TDengine REST service (6041), by default use `http://localhost:6041`.
- User: TDengine user name.
......@@ -78,13 +78,13 @@ Enter the datasource configuration page, and follow the default prompts to modif
Click `Save & Test` to test. The following figure shows a successful result.
![img](./grafana/add_datasource4.webp)
![TDengine Database TDinsight plugin add database 4](./grafana/add_datasource4.webp)
### Create Dashboard
Go back to the main interface to create the Dashboard, and click Add Query to enter the panel query page:
![img](./grafana/create_dashboard1.webp)
![TDengine Database TDinsight plugin create dashboard 1](./grafana/create_dashboard1.webp)
As shown above, select the `TDengine` data source in `Query`, and enter the corresponding SQL in the query box below to run the query.
......@@ -94,7 +94,7 @@ As shown above, select the `TDengine` data source in the `Query` and enter the c
Follow the default prompt to query the average system memory usage for the specified interval on the server where the current TDengine deployment is located as follows.
![img](./grafana/create_dashboard2.webp)
![TDengine Database TDinsight plugin create dashboard 2](./grafana/create_dashboard2.webp)
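For reference, the query produced by the default prompt typically has the following form; this is a sketch, and the table and variable names depend on your deployment and the plugin version:

```sql
select avg(mem_system) from log.dn where ts >= $from and ts < $to interval($interval)
```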
> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).
......
......@@ -44,25 +44,25 @@ Since the configuration interface of EMQX differs from version to version, here
Use your browser to open the URL `http://IP:18083` and log in to EMQX Dashboard. The initial installation username is `admin` and the password is `public`.
![img](./emqx/login-dashboard.webp)
![TDengine Database EMQX login dashboard](./emqx/login-dashboard.webp)
### Creating Rule
Select "Rule" in the "Rule Engine" on the left and click the "Create" button: !
![img](./emqx/rule-engine.webp)
![TDengine Database EMQX rule engine](./emqx/rule-engine.webp)
### Edit SQL fields
![img](./emqx/create-rule.webp)
![TDengine Database EMQX create rule](./emqx/create-rule.webp)
### Add "action handler"
![img](./emqx/add-action-handler.webp)
![TDengine Database EMQX add action handler](./emqx/add-action-handler.webp)
### Add "Resource"
![img](./emqx/create-resource.webp)
![TDengine Database EMQX create resource](./emqx/create-resource.webp)
Select "Data to Web Service" and click the "New Resource" button.
......@@ -70,13 +70,13 @@ Select "Data to Web Service" and click the "New Resource" button.
Select "Data to Web Service" and fill in the request URL as the address and port of the server running taosAdapter (default is 6041). Leave the other properties at their default values.
![img](./emqx/edit-resource.webp)
![TDengine Database EMQX edit resource](./emqx/edit-resource.webp)
### Edit "action"
Edit the resource configuration to add the key/value pairing for Authorization. Please refer to the [TDengine REST API documentation](https://docs.taosdata.com/reference/rest-api/) for details on authorization. Enter the rule engine replacement template in the message body.
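The Authorization value is the Base64-encoded `user:password` pair expected by the TDengine REST API. As a hedged sketch using the default credentials (`root:taosdata`) and assuming taosAdapter is listening on port 6041, it can be verified as follows:

```bash
# "cm9vdDp0YW9zZGF0YQ==" is the Base64 encoding of "root:taosdata"
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' \
  -d 'show databases;' http://localhost:6041/rest/sql
```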
![img](./emqx/edit-action.webp)
![TDengine Database EMQX edit action](./emqx/edit-action.webp)
## Compose program to mock data
......@@ -163,7 +163,7 @@ Edit the resource configuration to add the key/value pairing for Authorization.
Note: `CLIENT_NUM` in the code can be set to a smaller value at the beginning of the test, in case the hardware is not capable of handling a larger number of concurrent clients.
![img](./emqx/client-num.webp)
![TDengine Database EMQX client num](./emqx/client-num.webp)
## Execute tests to simulate sending MQTT data
......@@ -172,19 +172,19 @@ npm install mqtt mockjs --save ---registry=https://registry.npm.taobao.org
node mock.js
```
![img](./emqx/run-mock.webp)
![TDengine Database EMQX run mock](./emqx/run-mock.webp)
## Verify that EMQX is receiving data
Refresh the EMQX Dashboard rules engine interface to see how many records were received correctly:
![img](./emqx/check-rule-matched.webp)
![TDengine Database EMQX rule matched](./emqx/check-rule-matched.webp)
## Verify that data is written to TDengine
Use the TDengine CLI program to log in and query the appropriate databases and tables to verify that the data is being written to TDengine correctly:
![img](./emqx/check-result-in-taos.webp)
![TDengine Database EMQX result in taos](./emqx/check-result-in-taos.webp)
Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine.
Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
......@@ -9,11 +9,11 @@ TDengine Kafka Connector contains two plugins: TDengine Source Connector and TDe
Kafka Connect is a component of Apache Kafka that enables other systems, such as databases, cloud services, file systems, etc., to connect to Kafka easily. Data can flow from other software to Kafka via Kafka Connect and from Kafka to other systems via Kafka Connect. Plugins that read data from other software are called Source Connectors, and plugins that write data to other software are called Sink Connectors. Neither the Source Connector nor the Sink Connector connects directly to the Kafka Broker; the Source Connector transfers data to Kafka Connect, and the Sink Connector receives data from Kafka Connect.
![](kafka/Kafka_Connect.webp)
![TDengine Database Kafka Connector -- Kafka Connect](kafka/Kafka_Connect.webp)
TDengine Source Connector is used to read data from TDengine in real-time and send it to Kafka Connect. Users can use the TDengine Sink Connector to receive data from Kafka Connect and write it to TDengine.
![](kafka/streaming-integration-with-kafka-connect.webp)
![TDengine Database Kafka Connector -- streaming integration with kafka connect](kafka/streaming-integration-with-kafka-connect.webp)
## What is Confluent?
......@@ -26,7 +26,7 @@ Confluent adds many extensions to Kafka. include:
5. GUI for managing and monitoring Kafka - Confluent Control Center
Some of these extensions are available in the community version of Confluent. Some are only available in the enterprise version.
![](kafka/confluentPlatform.webp)
![TDengine Database Kafka Connector -- Confluent platform](kafka/confluentPlatform.webp)
Confluent Enterprise Edition provides the `confluent` command-line tool to manage various components.
......
......@@ -11,7 +11,7 @@ The design of TDengine is based on the assumption that any hardware or software
The logical structure diagram of the TDengine distributed architecture is as follows:
![TDengine architecture diagram](structure.webp)
![TDengine Database architecture diagram](structure.webp)
<center> Figure 1: TDengine architecture diagram </center>
A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDengine client driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
......@@ -54,7 +54,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc
To explain the relationship between vnode, mnode, TAOSC and application and their respective roles, the following is an analysis of a typical data writing process.
![typical process of TDengine](message.webp)
![typical process of TDengine Database](message.webp)
<center> Figure 2: Typical process of TDengine </center>
1. Application initiates a request to insert data through JDBC, ODBC, or other APIs.
......@@ -123,7 +123,7 @@ If a database has N replicas, thus a virtual node group has N virtual nodes, but
Master Vnode uses a writing process as follows:
![TDengine Master Writing Process](write_master.webp)
![TDengine Database Master Writing Process](write_master.webp)
<center> Figure 3: TDengine Master writing process </center>
1. Master vnode receives the application data insertion request, verifies, and moves to next step;
......@@ -137,7 +137,7 @@ Master Vnode uses a writing process as follows:
For a slave vnode, the write process is as follows:
![TDengine Slave Writing Process](write_slave.webp)
![TDengine Database Slave Writing Process](write_slave.webp)
<center> Figure 4: TDengine Slave Writing Process </center>
1. Slave vnode receives a data insertion request forwarded by Master vnode;
......@@ -267,7 +267,7 @@ For the data collected by device D1001, the number of records per hour is counte
TDengine creates a separate table for each data collection point, but in practical applications, it is often necessary to aggregate data from different data collection points. In order to perform aggregation operations efficiently, TDengine introduces the concept of STable. STable is used to represent a specific type of data collection point. It is a table set containing multiple tables. The schema of each table in the set is the same, but each table has its own static tag. The tags can be multiple and be added, deleted and modified at any time. Applications can aggregate or statistically operate all or a subset of tables under a STABLE by specifying tag filters, thus greatly simplifying the development of applications. The process is shown in the following figure:
![Diagram of multi-table aggregation query](multi_tables.webp)
![TDengine Database Diagram of multi-table aggregation query](multi_tables.webp)
<center> Figure 5: Diagram of multi-table aggregation query </center>
1. Application sends a query condition to system;
......
......@@ -16,7 +16,7 @@ Current mainstream IT DevOps system usually include a data collection module, a
This article introduces how to quickly build a TDengine + Telegraf + Grafana based IT DevOps visualization system without writing even a single line of code and by simply modifying a few lines of configuration files. The architecture is as follows.
![IT-DevOps-Solutions-Telegraf.webp](./IT-DevOps-Solutions-Telegraf.webp)
![TDengine Database IT-DevOps-Solutions-Telegraf](./IT-DevOps-Solutions-Telegraf.webp)
## Installation steps
......@@ -73,9 +73,9 @@ sudo systemctl start telegraf
Log in to the Grafana interface using a web browser at `IP:3000`, with the system's initial username and password being `admin/admin`.
Click on the gear icon on the left and select `Plugins`; you should find the TDengine data source plugin icon.
Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard- v0.1.0.json`, download the dashboard JSON file and import it. You will then see the dashboard in the following screen.
Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json`, download the dashboard JSON file and import it. You will then see the dashboard in the following screen.
![IT-DevOps-Solutions-telegraf-dashboard.webp](./IT-DevOps-Solutions-telegraf-dashboard.webp)
![TDengine Database IT-DevOps-Solutions-telegraf-dashboard](./IT-DevOps-Solutions-telegraf-dashboard.webp)
## Wrap-up
......
......@@ -17,7 +17,7 @@ The new version of TDengine supports multiple data protocols and can accept data
This article introduces how to quickly build an IT DevOps visualization system based on TDengine + collectd / StatsD + Grafana without writing even a single line of code but by simply modifying a few lines of configuration files. The architecture is shown in the following figure.
![IT-DevOps-Solutions-Collectd-StatsD.webp](./IT-DevOps-Solutions-Collectd-StatsD.webp)
![TDengine Database IT-DevOps-Solutions-Collectd-StatsD](./IT-DevOps-Solutions-Collectd-StatsD.webp)
## Installation Steps
......@@ -83,19 +83,19 @@ Click on the gear icon on the left and select `Plugins`, you should find the TDe
Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left and select `Import`, then follow the instructions to import the JSON file. After that, the dashboard can be seen in the following screen.
![IT-DevOps-Solutions-collectd-dashboard.webp](./IT-DevOps-Solutions-collectd-dashboard.webp)
![TDengine Database IT-DevOps-Solutions-collectd-dashboard](./IT-DevOps-Solutions-collectd-dashboard.webp)
#### Importing the collectd dashboard
Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left side and select `Import`, and follow the interface prompts to select the JSON file to import. After that, you can see the dashboard with the following interface.
![IT-DevOps-Solutions-collectd-dashboard.webp](./IT-DevOps-Solutions-collectd-dashboard.webp)
![TDengine Database IT-DevOps-Solutions-collectd-dashboard](./IT-DevOps-Solutions-collectd-dashboard.webp)
#### Importing the StatsD dashboard
Download the dashboard json from `https://github.com/taosdata/grafanaplugin/blob/master/examples/statsd/dashboards/statsd-with-tdengine-v0.1.0.json`. Click on the plus icon on the left and select `Import`, and follow the interface prompts to import the JSON file. You will then see the dashboard in the following screen.
![IT-DevOps-Solutions-statsd-dashboard.webp](./IT-DevOps-Solutions-statsd-dashboard.webp)
![TDengine Database IT-DevOps-Solutions-statsd-dashboard](./IT-DevOps-Solutions-statsd-dashboard.webp)
## Wrap-up
......
......@@ -32,7 +32,7 @@ We will explain how to migrate OpenTSDB applications to TDengine quickly, secure
The following figure (Figure 1) shows the system's overall architecture for a typical DevOps application scenario.
**Figure 1. Typical architecture in a DevOps scenario**
![IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp "Figure 1. Typical architecture in a DevOps scenario")
![TDengine Database IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Arch.webp "Figure 1. Typical architecture in a DevOps scenario")
In this application scenario, Agent tools are deployed in the application environment to collect machine metrics, network metrics, and application metrics; data collectors aggregate the information collected by the agents; dedicated systems provide persistent data storage and management; and visualization tools (e.g., Grafana) present the monitoring data.
......@@ -75,7 +75,7 @@ After writing the data to TDengine properly, you can adapt Grafana to visualize
TDengine provides two sets of Dashboard templates by default; users only need to import the templates from the Grafana directory into Grafana to activate them.
**Figure 2. Importing Grafana Templates**
![](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp "Figure 2. Importing a Grafana Template")
![TDengine Database IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard](./IT-DevOps-Solutions-Immigrate-OpenTSDB-Dashboard.webp "Figure 2. Importing a Grafana Template")
After the above steps, you have completed the migration from OpenTSDB to TDengine. As you can see, the whole process is straightforward: there is no need to write any code, and only a few configuration files need to be adjusted to complete the migration.
......@@ -88,7 +88,7 @@ In most DevOps scenarios, if you have a small OpenTSDB cluster (3 or fewer nodes
If your application is particularly complex, or the application domain is not a DevOps scenario, you can continue reading the subsequent chapters for a more comprehensive and in-depth look at the advanced topics of migrating an OpenTSDB application to TDengine.
**Figure 3. System architecture after migration**
![IT-DevOps-Solutions-Immigrate-TDengine-Arch](./IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp "Figure 3. System architecture after migration completion")
![TDengine Database IT-DevOps-Solutions-Immigrate-TDengine-Arch](./IT-DevOps-Solutions-Immigrate-TDengine-Arch.webp "Figure 3. System architecture after migration completion")
## Migration evaluation and strategy for other scenarios
......
......@@ -36,10 +36,10 @@ int main() {
executeSQL(taos, "CREATE DATABASE power");
executeSQL(taos, "USE power");
executeSQL(taos, "CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)");
executeSQL(taos, "INSERT INTO d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)"
"d1002 USING meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)"
"d1003 USING meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)"
"d1004 USING meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)");
executeSQL(taos, "INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)"
"d1002 USING meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)"
"d1003 USING meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)"
"d1004 USING meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)");
taos_close(taos);
taos_cleanup();
}
......
......@@ -29,11 +29,11 @@ int main() {
executeSQL(taos, "USE test");
char *line =
"[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": "
"\"Beijing.Chaoyang\", \"groupid\": 2}},{\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, "
"\"value\": 219, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}},{\"metric\": \"meters.current\", "
"\"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": "
"\"California.SanFrancisco\", \"groupid\": 2}},{\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, "
"\"value\": 219, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}},{\"metric\": \"meters.current\", "
"\"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": "
"2}},{\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": "
"\"Beijing.Haidian\", \"groupid\": 1}}]";
"\"California.LosAngeles\", \"groupid\": 1}}]";
char *lines[] = {line};
TAOS_RES *res = taos_schemaless_insert(taos, lines, 1, TSDB_SML_JSON_PROTOCOL, TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
......
......@@ -27,10 +27,10 @@ int main() {
executeSQL(taos, "DROP DATABASE IF EXISTS test");
executeSQL(taos, "CREATE DATABASE test");
executeSQL(taos, "USE test");
char *lines[] = {"meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
"meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
"meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
"meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"};
char *lines[] = {"meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
"meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
"meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
"meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"};
TAOS_RES *res = taos_schemaless_insert(taos, lines, 4, TSDB_SML_LINE_PROTOCOL, TSDB_SML_TIMESTAMP_MILLI_SECONDS);
if (taos_errno(res) != 0) {
printf("failed to insert schema-less data, reason: %s\n", taos_errstr(res));
......
......@@ -52,7 +52,7 @@ void insertData(TAOS *taos) {
checkErrorCode(stmt, code, "failed to execute taos_stmt_prepare");
// bind table name and tags
TAOS_BIND tags[2];
char *location = "Beijing.Chaoyang";
char *location = "California.SanFrancisco";
int groupId = 2;
tags[0].buffer_type = TSDB_DATA_TYPE_BINARY;
tags[0].buffer_length = strlen(location);
......
......@@ -139,5 +139,5 @@ int main() {
// output:
// ts current voltage phase location groupid
// 1648432611249 10.300000 219 0.310000 Beijing.Chaoyang 2
// 1648432611749 12.600000 218 0.330000 Beijing.Chaoyang 2
\ No newline at end of file
// 1648432611249 10.300000 219 0.310000 California.SanFrancisco 2
// 1648432611749 12.600000 218 0.330000 California.SanFrancisco 2
\ No newline at end of file
......@@ -59,7 +59,7 @@ void insertData(TAOS *taos) {
checkErrorCode(stmt, code, "failed to execute taos_stmt_prepare");
// bind table name and tags
TAOS_BIND tags[2];
char* location = "Beijing.Chaoyang";
char* location = "California.SanFrancisco";
int groupId = 2;
tags[0].buffer_type = TSDB_DATA_TYPE_BINARY;
tags[0].buffer_length = strlen(location);
......
......@@ -28,14 +28,14 @@ int main() {
executeSQL(taos, "CREATE DATABASE test");
executeSQL(taos, "USE test");
char *lines[] = {
"meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
"meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
"meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
"meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
"meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
"meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
};
TAOS_RES *res = taos_schemaless_insert(taos, lines, 8, TSDB_SML_TELNET_PROTOCOL, TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
if (taos_errno(res) != 0) {
......
......@@ -9,10 +9,10 @@ namespace TDengineExample
IntPtr conn = GetConnection();
PrepareDatabase(conn);
string[] lines = {
"meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
"meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
"meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
"meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"
"meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
"meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
"meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
"meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"
};
IntPtr res = TDengine.SchemalessInsert(conn, lines, lines.Length, (int)TDengineSchemalessProtocol.TSDB_SML_LINE_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_MILLI_SECONDS);
if (TDengine.ErrorNo(res) != 0)
......
......@@ -8,10 +8,10 @@ namespace TDengineExample
{
IntPtr conn = GetConnection();
PrepareDatabase(conn);
string[] lines = { "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": 2}}," +
" {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}}, " +
"{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": 2}}," +
" {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}}]"
string[] lines = { "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
" {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}, " +
"{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
" {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}]"
};
IntPtr res = TDengine.SchemalessInsert(conn, lines, 1, (int)TDengineSchemalessProtocol.TSDB_SML_JSON_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
......
......@@ -9,14 +9,14 @@ namespace TDengineExample
IntPtr conn = GetConnection();
PrepareDatabase(conn);
string[] lines = {
"meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
"meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
"meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
"meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
"meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
"meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
};
IntPtr res = TDengine.SchemalessInsert(conn, lines, lines.Length, (int)TDengineSchemalessProtocol.TSDB_SML_TELNET_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
if (TDengine.ErrorNo(res) != 0)
......
......@@ -158,5 +158,5 @@ namespace TDengineExample
// Connect to TDengine success
// fieldCount=6
// ts current voltage phase location groupid
// 1648432611249 10.3 219 0.31 Beijing.Chaoyang 2
// 1648432611749 12.6 218 0.33 Beijing.Chaoyang 2
\ No newline at end of file
// 1648432611249 10.3 219 0.31 California.SanFrancisco 2
// 1648432611749 12.6 218 0.33 California.SanFrancisco 2
\ No newline at end of file
......@@ -15,10 +15,10 @@ namespace TDengineExample
CheckRes(conn, res, "failed to change database");
res = TDengine.Query(conn, "CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)");
CheckRes(conn, res, "failed to create stable");
var sql = "INSERT INTO d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000) " +
"d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) " +
"d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) " +
"d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000)('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
var sql = "INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000) " +
"d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) " +
"d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) " +
"d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000)('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
res = TDengine.Query(conn, sql);
CheckRes(conn, res, "failed to insert data");
int affectedRows = TDengine.AffectRows(res);
......
......@@ -21,7 +21,7 @@ namespace TDengineExample
CheckStmtRes(res, "failed to prepare stmt");
// 2. bind table name and tags
TAOS_BIND[] tags = new TAOS_BIND[2] { TaosBind.BindBinary("Beijing.Chaoyang"), TaosBind.BindInt(2) };
TAOS_BIND[] tags = new TAOS_BIND[2] { TaosBind.BindBinary("California.SanFrancisco"), TaosBind.BindInt(2) };
res = TDengine.StmtSetTbnameTags(stmt, "d1001", tags);
CheckStmtRes(res, "failed to bind table name and tags");
......
......@@ -25,10 +25,10 @@ func main() {
defer conn.Close()
prepareDatabase(conn)
payload := `[{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "Beijing.Haidian", "groupid": 1}},
{"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "Beijing.Haidian", "groupid": 1}}]`
payload := `[{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "California.LosAngeles", "groupid": 1}},
{"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]`
err = conn.OpenTSDBInsertJsonPayload(payload)
if err != nil {
......
......@@ -25,10 +25,10 @@ func main() {
defer conn.Close()
prepareDatabase(conn)
var lines = []string{
"meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
"meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
"meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
"meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250",
"meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
"meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
"meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
"meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250",
}
err = conn.InfluxDBInsertLines(lines, "ms")
......
......@@ -19,10 +19,10 @@ func createStable(taos *sql.DB) {
}
func insertData(taos *sql.DB) {
sql := `INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)`
sql := `INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)`
result, err := taos.Exec(sql)
if err != nil {
fmt.Println("failed to insert, err:", err)
......
......@@ -37,7 +37,7 @@ func main() {
checkErr(err, "failed to create prepare statement")
// bind table name and tags
tagParams := param.NewParam(2).AddBinary([]byte("Beijing.Chaoyang")).AddInt(2)
tagParams := param.NewParam(2).AddBinary([]byte("California.SanFrancisco")).AddInt(2)
err = stmt.SetTableNameWithTags("d1001", tagParams)
checkErr(err, "failed to execute SetTableNameWithTags")
......
......@@ -25,14 +25,14 @@ func main() {
defer conn.Close()
prepareDatabase(conn)
var lines = []string{
"meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
"meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
"meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
"meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
"meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
"meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
}
err = conn.OpenTSDBInsertTelnetLines(lines)
......
......@@ -23,10 +23,10 @@ public class JSONProtocolExample {
}
private static String getJSONData() {
return "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": 2}}," +
" {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}}, " +
"{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": 2}}," +
" {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}}]";
return "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
" {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}, " +
"{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
" {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}]";
}
public static void main(String[] args) throws SQLException {
......
......@@ -12,11 +12,11 @@ import java.sql.Statement;
public class LineProtocolExample {
// format: measurement,tag_set field_set timestamp
private static String[] lines = {
"meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000", // micro
"meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000", // micro
// seconds
"meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
"meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
"meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
"meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
"meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
"meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
};
private static Connection getConnection() throws SQLException {
......
......@@ -16,28 +16,28 @@ public class RestInsertExample {
private static List<String> getRawData() {
return Arrays.asList(
"d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,Beijing.Chaoyang,2",
"d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,Beijing.Chaoyang,2",
"d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,Beijing.Chaoyang,2",
"d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,Beijing.Chaoyang,3",
"d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,Beijing.Haidian,2",
"d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,Beijing.Haidian,2",
"d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,Beijing.Haidian,3",
"d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,Beijing.Haidian,3"
"d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
"d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2",
"d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,California.SanFrancisco,2",
"d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,California.SanFrancisco,3",
"d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2",
"d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,California.LosAngeles,2",
"d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,California.LosAngeles,3",
"d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,California.LosAngeles,3"
);
}
/**
* The generated SQL is:
* INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)
* power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:15.000',12.60000,218,0.33000)
* power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:16.800',12.30000,221,0.31000)
* power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES('2018-10-03 14:38:16.650',10.30000,218,0.25000)
* power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 14:38:05.500',11.80000,221,0.28000)
* power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 14:38:16.600',13.40000,223,0.29000)
* power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 14:38:05.000',10.80000,223,0.29000)
* power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 14:38:06.500',11.50000,221,0.35000)
* INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)
* power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:15.000',12.60000,218,0.33000)
* power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:16.800',12.30000,221,0.31000)
* power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 14:38:16.650',10.30000,218,0.25000)
* power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 14:38:05.500',11.80000,221,0.28000)
* power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 14:38:16.600',13.40000,223,0.29000)
* power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 14:38:05.000',10.80000,223,0.29000)
* power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 14:38:06.500',11.50000,221,0.35000)
*/
private static String getSQL() {
StringBuilder sb = new StringBuilder("INSERT INTO ");
......
......@@ -51,5 +51,5 @@ public class RestQueryExample {
// possible output:
// avg(voltage) location
// 222.0 Beijing.Haidian
// 219.0 Beijing.Chaoyang
// 222.0 California.LosAngeles
// 219.0 California.SanFrancisco
......@@ -30,14 +30,14 @@ public class StmtInsertExample {
private static List<String> getRawData() {
return Arrays.asList(
"d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,Beijing.Chaoyang,2",
"d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,Beijing.Chaoyang,2",
"d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,Beijing.Chaoyang,2",
"d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,Beijing.Chaoyang,3",
"d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,Beijing.Haidian,2",
"d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,Beijing.Haidian,2",
"d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,Beijing.Haidian,3",
"d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,Beijing.Haidian,3"
"d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
"d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2",
"d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,California.SanFrancisco,2",
"d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,California.SanFrancisco,3",
"d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2",
"d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,California.LosAngeles,2",
"d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,California.LosAngeles,3",
"d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,California.LosAngeles,3"
);
}
......
......@@ -11,14 +11,14 @@ import java.sql.Statement;
public class TelnetLineProtocolExample {
// format: <metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
private static String[] lines = { "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
"meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
private static String[] lines = { "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
"meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
"meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
"meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
};
private static Connection getConnection() throws SQLException {
......
......@@ -23,16 +23,16 @@ public class TestAll {
String jdbcUrl = "jdbc:TAOS://localhost:6030?user=root&password=taosdata";
try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
try (Statement stmt = conn.createStatement()) {
String sql = "INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)\n" +
" power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 15:38:15.000',12.60000,218,0.33000)\n" +
" power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 15:38:16.800',12.30000,221,0.31000)\n" +
" power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES('2018-10-03 15:38:16.650',10.30000,218,0.25000)\n" +
" power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 15:38:05.500',11.80000,221,0.28000)\n" +
" power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 15:38:16.600',13.40000,223,0.29000)\n" +
" power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 15:38:05.000',10.80000,223,0.29000)\n" +
" power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 15:38:06.000',10.80000,223,0.29000)\n" +
" power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 15:38:07.000',10.80000,223,0.29000)\n" +
" power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 15:38:08.500',11.50000,221,0.35000)";
String sql = "INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)\n" +
" power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 15:38:15.000',12.60000,218,0.33000)\n" +
" power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 15:38:16.800',12.30000,221,0.31000)\n" +
" power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 15:38:16.650',10.30000,218,0.25000)\n" +
" power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 15:38:05.500',11.80000,221,0.28000)\n" +
" power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 15:38:16.600',13.40000,223,0.29000)\n" +
" power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:05.000',10.80000,223,0.29000)\n" +
" power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:06.000',10.80000,223,0.29000)\n" +
" power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:07.000',10.80000,223,0.29000)\n" +
" power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:08.500',11.50000,221,0.35000)";
stmt.execute(sql);
}
......
......@@ -13,10 +13,10 @@ function createDatabase() {
function insertData() {
const lines = [
"meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
"meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
"meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
"meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250",
"meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
"meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
"meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
"meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250",
];
cursor.schemalessInsert(
lines,
......
......@@ -11,10 +11,10 @@ try {
cursor.execute(
"CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)"
);
var sql = `INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)`;
var sql = `INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)`;
cursor.execute(sql);
} finally {
cursor.close();
......
......@@ -25,7 +25,7 @@ function insertData() {
// bind table name and tags
let tagBind = new taos.TaosBind(2);
tagBind.bindBinary("Beijing.Chaoyang");
tagBind.bindBinary("California.SanFrancisco");
tagBind.bindInt(2);
cursor.stmtSetTbnameTags("d1001", tagBind.getBind());
......
......@@ -17,25 +17,25 @@ function insertData() {
metric: "meters.current",
timestamp: 1648432611249,
value: 10.3,
tags: { location: "Beijing.Chaoyang", groupid: 2 },
tags: { location: "California.SanFrancisco", groupid: 2 },
},
{
metric: "meters.voltage",
timestamp: 1648432611249,
value: 219,
tags: { location: "Beijing.Haidian", groupid: 1 },
tags: { location: "California.LosAngeles", groupid: 1 },
},
{
metric: "meters.current",
timestamp: 1648432611250,
value: 12.6,
tags: { location: "Beijing.Chaoyang", groupid: 2 },
tags: { location: "California.SanFrancisco", groupid: 2 },
},
{
metric: "meters.voltage",
timestamp: 1648432611250,
value: 221,
tags: { location: "Beijing.Haidian", groupid: 1 },
tags: { location: "California.LosAngeles", groupid: 1 },
},
];
......
......@@ -13,14 +13,14 @@ function createDatabase() {
function insertData() {
const lines = [
"meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
"meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
"meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
"meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
"meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
"meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
];
cursor.schemalessInsert(
lines,
......
......@@ -24,7 +24,7 @@ function insertData() {
// bind table name and tags
let tagBind = new taos.TaosBind(2);
tagBind.bindBinary("Beijing.Chaoyang");
tagBind.bindBinary("California.SanFrancisco");
tagBind.bindInt(2);
cursor.stmtSetTbnameTags("d1001", tagBind.getBind());
......
......@@ -4,7 +4,7 @@ use TDengine\Connection;
use TDengine\Exception\TDengineException;
try {
// 实例化
// instantiate
$host = 'localhost';
$port = 6030;
$username = 'root';
......@@ -12,9 +12,9 @@ try {
$dbname = null;
$connection = new Connection($host, $port, $username, $password, $dbname);
// 连接
// connect
$connection->connect();
} catch (TDengineException $e) {
// 连接失败捕获异常
// throw exception
throw $e;
}
......@@ -4,7 +4,7 @@ use TDengine\Connection;
use TDengine\Exception\TDengineException;
try {
// 实例化
// instantiate
$host = 'localhost';
$port = 6030;
$username = 'root';
......@@ -12,22 +12,22 @@ try {
$dbname = 'power';
$connection = new Connection($host, $port, $username, $password, $dbname);
// 连接
// connect
$connection->connect();
// 插入
// insert
$connection->query('CREATE DATABASE if not exists power');
$connection->query('CREATE STABLE if not exists meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)');
$resource = $connection->query(<<<'SQL'
INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
SQL);
// 影响行数
// get affected rows
var_dump($resource->affectedRows());
} catch (TDengineException $e) {
// 捕获异常
// throw exception
throw $e;
}
......@@ -4,7 +4,7 @@ use TDengine\Connection;
use TDengine\Exception\TDengineException;
try {
// 实例化
// instantiate
$host = 'localhost';
$port = 6030;
$username = 'root';
......@@ -12,18 +12,18 @@ try {
$dbname = 'power';
$connection = new Connection($host, $port, $username, $password, $dbname);
// 连接
// connect
$connection->connect();
// 插入
// insert
$connection->query('CREATE DATABASE if not exists power');
$connection->query('CREATE STABLE if not exists meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)');
$stmt = $connection->prepare('INSERT INTO ? USING meters TAGS(?, ?) VALUES(?, ?, ?, ?)');
// 设置表名和标签
// set table name and tags
$stmt->setTableNameTags('d1001', [
// the supported formats are the same as for parameter binding
[TDengine\TSDB_DATA_TYPE_BINARY, 'Beijing.Chaoyang'],
[TDengine\TSDB_DATA_TYPE_BINARY, 'California.SanFrancisco'],
[TDengine\TSDB_DATA_TYPE_INT, 2],
]);
......@@ -41,9 +41,9 @@ try {
]);
$resource = $stmt->execute();
// 影响行数
// get affected rows
var_dump($resource->affectedRows());
} catch (TDengineException $e) {
// 捕获异常
// throw exception
throw $e;
}
......@@ -4,7 +4,7 @@ use TDengine\Connection;
use TDengine\Exception\TDengineException;
try {
// 实例化
// instantiate
$host = 'localhost';
$port = 6030;
$username = 'root';
......@@ -12,12 +12,12 @@ try {
$dbname = 'power';
$connection = new Connection($host, $port, $username, $password, $dbname);
// 连接
// connect
$connection->connect();
$resource = $connection->query('SELECT ts, current FROM meters LIMIT 2');
var_dump($resource->fetch());
} catch (TDengineException $e) {
// 捕获异常
// throw exception
throw $e;
}
......@@ -2,14 +2,14 @@ import taos
from datetime import datetime
# note: lines have already been sorted by table name
lines = [('d1001', '2018-10-03 14:38:05.000', 10.30000, 219, 0.31000, 'Beijing.Chaoyang', 2),
('d1001', '2018-10-03 14:38:15.000', 12.60000, 218, 0.33000, 'Beijing.Chaoyang', 2),
('d1001', '2018-10-03 14:38:16.800', 12.30000, 221, 0.31000, 'Beijing.Chaoyang', 2),
('d1002', '2018-10-03 14:38:16.650', 10.30000, 218, 0.25000, 'Beijing.Chaoyang', 3),
('d1003', '2018-10-03 14:38:05.500', 11.80000, 221, 0.28000, 'Beijing.Haidian', 2),
('d1003', '2018-10-03 14:38:16.600', 13.40000, 223, 0.29000, 'Beijing.Haidian', 2),
('d1004', '2018-10-03 14:38:05.000', 10.80000, 223, 0.29000, 'Beijing.Haidian', 3),
('d1004', '2018-10-03 14:38:06.500', 11.50000, 221, 0.35000, 'Beijing.Haidian', 3)]
lines = [('d1001', '2018-10-03 14:38:05.000', 10.30000, 219, 0.31000, 'California.SanFrancisco', 2),
('d1001', '2018-10-03 14:38:15.000', 12.60000, 218, 0.33000, 'California.SanFrancisco', 2),
('d1001', '2018-10-03 14:38:16.800', 12.30000, 221, 0.31000, 'California.SanFrancisco', 2),
('d1002', '2018-10-03 14:38:16.650', 10.30000, 218, 0.25000, 'California.SanFrancisco', 3),
('d1003', '2018-10-03 14:38:05.500', 11.80000, 221, 0.28000, 'California.LosAngeles', 2),
('d1003', '2018-10-03 14:38:16.600', 13.40000, 223, 0.29000, 'California.LosAngeles', 2),
('d1004', '2018-10-03 14:38:05.000', 10.80000, 223, 0.29000, 'California.LosAngeles', 3),
('d1004', '2018-10-03 14:38:06.500', 11.50000, 221, 0.35000, 'California.LosAngeles', 3)]
def get_ts(ts: str):
......
......@@ -16,10 +16,10 @@ cursor.execute("CREATE DATABASE power")
cursor.execute("CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)")
# insert data
cursor.execute("""INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""")
cursor.execute("""INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""")
print("inserted row count:", cursor.rowcount)
# query data
......
......@@ -3,12 +3,12 @@ import json
import taos
from taos import SmlProtocol, SmlPrecision
lines = [{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
lines = [{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219,
"tags": {"location": "Beijing.Haidian", "groupid": 1}},
"tags": {"location": "California.LosAngeles", "groupid": 1}},
{"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6,
"tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "Beijing.Haidian", "groupid": 1}}]
"tags": {"location": "California.SanFrancisco", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]
def get_connection():
......
import taos
from taos import SmlProtocol, SmlPrecision
lines = ["meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000",
"meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
"meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
"meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
lines = ["meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000",
"meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
"meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
"meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
]
......
......@@ -3,10 +3,10 @@ from datetime import datetime
# ANCHOR: bind_batch
table_tags = {
"d1001": ('Beijing.Chaoyang', 2),
"d1002": ('Beijing.Chaoyang', 3),
"d1003": ('Beijing.Haidian', 2),
"d1004": ('Beijing.Haidian', 3)
"d1001": ('California.SanFrancisco', 2),
"d1002": ('California.SanFrancisco', 3),
"d1003": ('California.LosAngeles', 2),
"d1004": ('California.LosAngeles', 3)
}
table_values = {
......
import taos
lines = ["d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,Beijing.Chaoyang,2",
"d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,Beijing.Haidian,3",
"d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,Beijing.Haidian,2",
"d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,Beijing.Haidian,3",
"d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,Beijing.Chaoyang,3",
"d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,Beijing.Chaoyang,2",
"d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,Beijing.Chaoyang,2",
"d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,Beijing.Haidian,2"]
lines = ["d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
"d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,California.LosAngeles,3",
"d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2",
"d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,California.LosAngeles,3",
"d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,California.SanFrancisco,3",
"d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2",
"d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,California.SanFrancisco,2",
"d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,California.LosAngeles,2"]
def get_connection() -> taos.TaosConnection:
......@@ -25,10 +25,10 @@ def create_stable(conn: taos.TaosConnection):
# The generated SQL is:
# INSERT INTO d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
# d1002 USING meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
# d1003 USING meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
# d1004 USING meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
# INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
# d1002 USING meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
# d1003 USING meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
# d1004 USING meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
def get_sql():
global lines
......
......@@ -14,8 +14,8 @@ def query_api_demo(conn: taos.TaosConnection):
# field count: 7
# meta of files[1]: {name: ts, type: 9, bytes: 8}
# ======================Iterate on result=========================
# ('d1001', datetime.datetime(2018, 10, 3, 14, 38, 5), 10.300000190734863, 219, 0.3100000023841858, 'Beijing.Chaoyang', 2)
# ('d1001', datetime.datetime(2018, 10, 3, 14, 38, 15), 12.600000381469727, 218, 0.33000001311302185, 'Beijing.Chaoyang', 2)
# ('d1001', datetime.datetime(2018, 10, 3, 14, 38, 5), 10.300000190734863, 219, 0.3100000023841858, 'California.SanFrancisco', 2)
# ('d1001', datetime.datetime(2018, 10, 3, 14, 38, 15), 12.600000381469727, 218, 0.33000001311302185, 'California.SanFrancisco', 2)
# ANCHOR_END: iter
# ANCHOR: fetch_all
......
......@@ -2,14 +2,14 @@ import taos
from taos import SmlProtocol, SmlPrecision
# format: <metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
lines = ["meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
"meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
lines = ["meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
"meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
"meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
"meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
]
......
......@@ -12,7 +12,7 @@ async fn main() -> Result<(), Error> {
stmt.set_tbname_tags(
"d1001",
[
Field::Binary(BString::from("Beijing.Chaoyang")),
Field::Binary(BString::from("California.SanFrancisco")),
Field::Int(2),
],
)?;
......
......@@ -5,10 +5,10 @@ async fn main() -> Result<(), Error> {
let taos = TaosCfg::default().connect().expect("fail to connect");
taos.create_database("power").await?;
taos.exec("CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)").await?;
let sql = "INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
let sql = "INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
let result = taos.query(sql).await?;
println!("{:?}", result);
Ok(())
......
......@@ -5,10 +5,10 @@ fn main() {
let taos = TaosCfg::default().connect().expect("fail to connect");
taos.raw_query("CREATE DATABASE test").unwrap();
taos.raw_query("USE test").unwrap();
let lines = ["meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
"meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
"meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
"meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"];
let lines = ["meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
"meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
"meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
"meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"];
let affected_rows = taos
.schemaless_insert(
&lines,
......
......@@ -6,10 +6,10 @@ fn main() {
taos.raw_query("CREATE DATABASE test").unwrap();
taos.raw_query("USE test").unwrap();
let lines = [
r#"[{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "Beijing.Haidian", "groupid": 1}},
{"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "Beijing.Haidian", "groupid": 1}}]"#,
r#"[{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "California.LosAngeles", "groupid": 1}},
{"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]"#,
];
let affected_rows = taos
......
......@@ -6,14 +6,14 @@ fn main() {
taos.raw_query("CREATE DATABASE test").unwrap();
taos.raw_query("USE test").unwrap();
let lines = [
"meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
"meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
"meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
"meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
"meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
"meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
"meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
"meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
"meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
"meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
"meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
];
let affected_rows = taos
.schemaless_insert(
......
......@@ -34,7 +34,6 @@ extern "C" {
#define TSDB_INS_TABLE_USER_FUNCTIONS "user_functions"
#define TSDB_INS_TABLE_USER_INDEXES "user_indexes"
#define TSDB_INS_TABLE_USER_STABLES "user_stables"
#define TSDB_INS_TABLE_USER_STREAMS "user_streams"
#define TSDB_INS_TABLE_USER_TABLES "user_tables"
#define TSDB_INS_TABLE_USER_TABLE_DISTRIBUTED "user_table_distributed"
#define TSDB_INS_TABLE_USER_USERS "user_users"
......
......@@ -53,10 +53,9 @@ typedef enum EStreamType {
} EStreamType;
typedef struct {
uint32_t numOfTables;
SArray* pGroupList;
SArray* pTableList;
SHashObj* map; // speedup acquire the tableQueryInfo by table uid
} STableGroupInfo;
} STableListInfo;
typedef struct SColumnDataAgg {
int16_t colId;
......
......@@ -455,7 +455,8 @@ int32_t tDeserializeSMDropStbReq(void* buf, int32_t bufLen, SMDropStbReq* pReq);
typedef struct {
char name[TSDB_TABLE_FNAME_LEN];
int8_t alterType;
int32_t verInBlock;
int32_t tagVer;
int32_t colVer;
int32_t numOfFields;
SArray* pFields;
int32_t ttl;
......@@ -1663,6 +1664,10 @@ typedef struct {
int32_t tSerializeSMDropCgroupReq(void* buf, int32_t bufLen, SMDropCgroupReq* pReq);
int32_t tDeserializeSMDropCgroupReq(void* buf, int32_t bufLen, SMDropCgroupReq* pReq);
typedef struct {
int8_t reserved;
} SMDropCgroupRsp;
typedef struct {
char name[TSDB_TABLE_FNAME_LEN];
int8_t alterType;
......@@ -1724,9 +1729,9 @@ int32_t tDecodeSVDropStbReq(SDecoder* pCoder, SVDropStbReq* pReq);
#define TD_CREATE_IF_NOT_EXISTS 0x1
typedef struct SVCreateTbReq {
int32_t flags;
char* name;
tb_uid_t uid;
int64_t ctime;
char* name;
int32_t ttl;
int8_t type;
union {
......
......@@ -151,6 +151,7 @@ enum {
TD_DEF_MSG_TYPE(TDMT_MND_MQ_CONSUMER_LOST, "mnode-mq-consumer-lost", SMqConsumerLostMsg, NULL)
TD_DEF_MSG_TYPE(TDMT_MND_MQ_CONSUMER_RECOVER, "mnode-mq-consumer-recover", SMqConsumerRecoverMsg, NULL)
TD_DEF_MSG_TYPE(TDMT_MND_MQ_DO_REBALANCE, "mnode-mq-do-rebalance", SMqDoRebalanceMsg, NULL)
TD_DEF_MSG_TYPE(TDMT_MND_MQ_DROP_CGROUP, "mnode-mq-drop-cgroup", SMqDropCGroupReq, SMqDropCGroupRsp)
TD_DEF_MSG_TYPE(TDMT_MND_MQ_COMMIT_OFFSET, "mnode-mq-commit-offset", SMqCMCommitOffsetReq, SMqCMCommitOffsetRsp)
TD_DEF_MSG_TYPE(TDMT_MND_CREATE_STREAM, "mnode-create-stream", SCMCreateStreamReq, SCMCreateStreamRsp)
TD_DEF_MSG_TYPE(TDMT_MND_ALTER_STREAM, "mnode-alter-stream", NULL, NULL)
......@@ -195,11 +196,8 @@ enum {
TD_DEF_MSG_TYPE(TDMT_VND_EXPLAIN, "vnode-explain", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_SUBSCRIBE, "vnode-subscribe", SMVSubscribeReq, SMVSubscribeRsp)
TD_DEF_MSG_TYPE(TDMT_VND_CONSUME, "vnode-consume", SMqCVConsumeReq, SMqCVConsumeRsp)
TD_DEF_MSG_TYPE(TDMT_VND_CONSUME, "vnode-consume", SMqPollReq, SMqDataBlkRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TASK_DEPLOY, "vnode-task-deploy", SStreamTaskDeployReq, SStreamTaskDeployRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TASK_PIPE_EXEC, "vnode-task-pipe-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TASK_MERGE_EXEC, "vnode-task-merge-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TASK_WRITE_EXEC, "vnode-task-write-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
TD_DEF_MSG_TYPE(TDMT_VND_STREAM_TRIGGER, "vnode-stream-trigger", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TASK_RUN, "vnode-stream-task-run", NULL, NULL)
......@@ -236,9 +234,13 @@ enum {
// Requests handled by SNODE
TD_NEW_MSG_SEG(TDMT_SND_MSG)
TD_DEF_MSG_TYPE(TDMT_SND_TASK_DEPLOY, "snode-task-deploy", SStreamTaskDeployReq, SStreamTaskDeployRsp)
TD_DEF_MSG_TYPE(TDMT_SND_TASK_EXEC, "snode-task-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
TD_DEF_MSG_TYPE(TDMT_SND_TASK_PIPE_EXEC, "snode-task-pipe-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
TD_DEF_MSG_TYPE(TDMT_SND_TASK_MERGE_EXEC, "snode-task-merge-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
//TD_DEF_MSG_TYPE(TDMT_SND_TASK_EXEC, "snode-task-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
//TD_DEF_MSG_TYPE(TDMT_SND_TASK_PIPE_EXEC, "snode-task-pipe-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
//TD_DEF_MSG_TYPE(TDMT_SND_TASK_MERGE_EXEC, "snode-task-merge-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
TD_DEF_MSG_TYPE(TDMT_SND_TASK_RUN, "snode-stream-task-run", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_SND_TASK_DISPATCH, "snode-stream-task-dispatch", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_SND_TASK_RECOVER, "snode-stream-task-recover", NULL, NULL)
// Requests handled by SCHEDULER
TD_NEW_MSG_SEG(TDMT_SCH_MSG)
......
......@@ -101,6 +101,7 @@ typedef struct SDbVgVersion {
typedef struct STbSVersion {
char* tbFName;
int32_t sver;
int32_t tver;
} STbSVersion;
typedef struct SUserAuthVersion {
......
......@@ -61,7 +61,7 @@ qTaskInfo_t qCreateStreamExecTaskInfo(void* msg, void* streamReadHandle);
* @param type
* @return
*/
int32_t qSetStreamInput(qTaskInfo_t tinfo, const void* input, int32_t type);
int32_t qSetStreamInput(qTaskInfo_t tinfo, const void* input, int32_t type, bool assignUid);
/**
* Set multiple input data blocks for the stream scan.
......@@ -71,7 +71,7 @@ int32_t qSetStreamInput(qTaskInfo_t tinfo, const void* input, int32_t type);
* @param type
* @return
*/
int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numOfBlocks, int32_t type);
int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numOfBlocks, int32_t type, bool assignUid);
/**
* Update the table id list, add or remove.
......
......@@ -191,7 +191,7 @@ extern int32_t (*queryProcessMsgRsp[TDMT_MAX])(void* output, char* msg, int32_t
#define NEED_CLIENT_RM_TBLMETA_ERROR(_code) \
((_code) == TSDB_CODE_PAR_TABLE_NOT_EXIST || (_code) == TSDB_CODE_VND_TB_NOT_EXIST || \
(_code) == TSDB_CODE_PAR_INVALID_COLUMNS_NUM || (_code) == TSDB_CODE_PAR_INVALID_COLUMN || \
(_code) == TSDB_CODE_PAR_TAGS_NOT_MATCHED)
(_code) == TSDB_CODE_PAR_TAGS_NOT_MATCHED || (_code == TSDB_CODE_PAR_VALUE_TOO_LONG))
#define NEED_CLIENT_REFRESH_VG_ERROR(_code) \
((_code) == TSDB_CODE_VND_HASH_MISMATCH || (_code) == TSDB_CODE_VND_INVALID_VGROUP_ID)
#define NEED_CLIENT_REFRESH_TBLMETA_ERROR(_code) ((_code) == TSDB_CODE_TDB_TABLE_RECREATED)
......
......@@ -23,6 +23,8 @@ extern "C" {
#include "catalog.h"
#include "planner.h"
extern tsem_t schdRspSem;
typedef struct SSchedulerCfg {
uint32_t maxJobNum;
int32_t maxNodeTableNum;
......@@ -62,6 +64,11 @@ typedef struct STaskInfo {
SSubQueryMsg *msg;
} STaskInfo;
typedef struct SSchdFetchParam {
void **pData;
int32_t* code;
} SSchdFetchParam;
typedef void (*schedulerExecCallback)(SQueryResult* pResult, void* param, int32_t code);
typedef void (*schedulerFetchCallback)(void* pResult, void* param, int32_t code);
......@@ -113,23 +120,8 @@ void schedulerFreeJob(int64_t job);
void schedulerDestroy(void);
/**
* convert dag to task list
* @param pDag
* @param pTasks SArray**<STaskInfo>
* @return
*/
int32_t schedulerConvertDagToTaskList(SQueryPlan* pDag, SArray **pTasks);
/**
* make one task info's multiple copies
* @param src
* @param dst SArray**<STaskInfo>
* @return
*/
int32_t schedulerCopyTask(STaskInfo *src, SArray **dst, int32_t copyNum);
void schedulerFreeTaskList(SArray *taskList);
void schdExecCallback(SQueryResult* pResult, void* param, int32_t code);
void schdFetchCallback(void* pResult, void* param, int32_t code);
#ifdef __cplusplus
......