Unverified commit 1634038a, authored by T Tao Liu, committed by GitHub

Merge pull request #1644 from taosdata/develop_old

Develop old
[submodule "src/connector/go"]
path = src/connector/go
url = https://github.com/taosdata/driver-go
[![Build Status](https://travis-ci.org/taosdata/TDengine.svg?branch=develop)](https://travis-ci.org/taosdata/TDengine)
[![Build status](https://ci.appveyor.com/api/projects/status/kf3pwh2or5afsgl9/branch/develop?svg=true)](https://ci.appveyor.com/project/sangshuduo/tdengine-2n8ge/branch/develop)
[![TDengine](TDenginelogo.png)](https://www.taosdata.com)
# What is TDengine?
......
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
vendor/
# Project specific files
cmd/alert/alert
cmd/alert/alert.log
*.db
*.gz
# Alert
The Alert application reads data from [TDengine](https://www.taosdata.com/), computes alerts according to predefined rules, and pushes them to downstream applications such as [AlertManager](https://github.com/prometheus/alertmanager).
## Install
### From Binary
Precompiled binaries are available at the [taosdata website](https://www.taosdata.com/en/getting-started/); please download and unpack the package with the shell command below.
```
$ tar -xzf tdengine-alert-$version-$OS-$ARCH.tar.gz
```
If you have no TDengine server or client installed, please execute the command below to install the required driver library:
```
$ ./install_driver.sh
```
### From Source Code
Two prerequisites are required to install from source.
1. The TDengine server or client must be installed.
2. The latest [Go](https://golang.org) toolchain must be installed.
When these two prerequisites are ready, please follow the steps below to build the application:
```
$ mkdir taosdata
$ cd taosdata
$ git clone https://github.com/taosdata/tdengine.git
$ cd tdengine/alert/cmd/alert
$ go build
```
If `go build` fails because some of the dependency packages cannot be downloaded, please follow the steps at [goproxy.io](https://goproxy.io) to configure `GOPROXY` and try `go build` again.
## Configure
The configuration file of the Alert application is in standard `json` format. Below is its default content; please revise it according to your actual scenario.
```json
{
"port": 8100,
"database": "file:alert.db",
"tdengine": "root:taosdata@/tcp(127.0.0.1:0)/",
"log": {
"level": "production",
"path": "alert.log"
},
"receivers": {
"alertManager": "http://127.0.0.1:9093/api/v1/alerts",
"console": true
}
}
```
The use of each configuration item is:
* **port**: the `http` service port, which enables other applications to manage rules through a `restful API`.
* **database**: rules are stored in a `sqlite` database; this is the path of the database file (if the file does not exist, the alert application creates it automatically).
* **tdengine**: the connection string of the `TDengine` server. Note that in most cases the database information should be put in a rule, so it should NOT be included here.
* **log > level**: log level, could be `production` or `debug`.
* **log > path**: log output file path.
* **receivers > alertManager**: the alert application pushes alerts to `AlertManager` at this URL.
* **receivers > console**: whether to print alerts to the console (stdout).
When the configuration file is ready, the alert application can be started with the command below (`alert.cfg` is the path of the configuration file):
```
$ ./alert -cfg alert.cfg
```
## Prepare an alert rule
Technically, an alert can be defined as follows: query and filter recent data from `TDengine`, calculate a boolean value from the data according to a formula, and trigger an alert if the boolean value stays `true` for a certain duration.
This is a rule example in `json` format:
```json
{
"name": "rule1",
"sql": "select sum(col1) as sumCol1 from test.meters where ts > now - 1h group by areaid",
"expr": "sumCol1 > 10",
"for": "10m",
"period": "1m",
"labels": {
"ruleName": "rule1"
},
"annotations": {
"summary": "sum of rule {{$labels.ruleName}} of area {{$values.areaid}} is {{$values.sumCol1}}"
}
}
```
The fields of the rule are explained below:
* **name**: the name of the rule, must be unique.
* **sql**: the `sql` statement used to query data from `TDengine`. Columns of the query result are used in later processing, so please give the column an alias if an aggregation function is used.
* **expr**: an expression whose result is a boolean value. Arithmetic and logical calculations can be included in the expression, and built-in functions (see below) are also supported. Alerts are only triggered when the expression evaluates to `true`.
* **for**: a duration whose default value is zero seconds. When `expr` evaluates to `true` and stays so for at least this duration, an alert is triggered.
* **period**: the interval at which the alert application checks the rule, 1 minute by default.
* **labels**: a list of labels used to generate alert information. Note that if the `sql` statement includes a `group by` clause, the `group by` columns are inserted into this list automatically.
* **annotations**: the template of alert information which is in [go template](https://golang.org/pkg/text/template) syntax, labels can be referenced by `$labels.<label name>` and columns of the query result can be referenced by `$values.<column name>`.
### Operators
The operators that can be used in the `expr` field of a rule are listed below; `()` can be used to change precedence if the default does not meet your requirements.
<table>
<thead>
<tr> <td>Operator</td><td>Unary/Binary</td><td>Precedence</td><td>Effect</td> </tr>
</thead>
<tbody>
<tr> <td>~</td><td>Unary</td><td>6</td><td>Bitwise Not</td> </tr>
<tr> <td>!</td><td>Unary</td><td>6</td><td>Logical Not</td> </tr>
<tr> <td>+</td><td>Unary</td><td>6</td><td>Positive Sign</td> </tr>
<tr> <td>-</td><td>Unary</td><td>6</td><td>Negative Sign</td> </tr>
<tr> <td>*</td><td>Binary</td><td>5</td><td>Multiplication</td> </tr>
<tr> <td>/</td><td>Binary</td><td>5</td><td>Division</td> </tr>
<tr> <td>%</td><td>Binary</td><td>5</td><td>Modulus</td> </tr>
<tr> <td><<</td><td>Binary</td><td>5</td><td>Bitwise Left Shift</td> </tr>
<tr> <td>>></td><td>Binary</td><td>5</td><td>Bitwise Right Shift</td> </tr>
<tr> <td>&</td><td>Binary</td><td>5</td><td>Bitwise And</td> </tr>
<tr> <td>+</td><td>Binary</td><td>4</td><td>Addition</td> </tr>
<tr> <td>-</td><td>Binary</td><td>4</td><td>Subtraction</td> </tr>
<tr> <td>|</td><td>Binary</td><td>4</td><td>Bitwise Or</td> </tr>
<tr> <td>^</td><td>Binary</td><td>4</td><td>Bitwise Xor</td> </tr>
<tr> <td>==</td><td>Binary</td><td>3</td><td>Equal</td> </tr>
<tr> <td>!=</td><td>Binary</td><td>3</td><td>Not Equal</td> </tr>
<tr> <td><</td><td>Binary</td><td>3</td><td>Less Than</td> </tr>
<tr> <td><=</td><td>Binary</td><td>3</td><td>Less Than or Equal</td> </tr>
<tr> <td>></td><td>Binary</td><td>3</td><td>Greater Than</td> </tr>
<tr> <td>>=</td><td>Binary</td><td>3</td><td>Greater Than or Equal</td> </tr>
<tr> <td>&&</td><td>Binary</td><td>2</td><td>Logical And</td> </tr>
<tr> <td>||</td><td>Binary</td><td>1</td><td>Logical Or</td> </tr>
</tbody>
</table>
### Built-in Functions
Built-in functions can be used in the `expr` field of a rule.
* **min**: returns the minimum of its arguments, for example: `min(1, 2, 3)` returns `1`.
* **max**: returns the maximum of its arguments, for example: `max(1, 2, 3)` returns `3`.
* **sum**: returns the sum of its arguments, for example: `sum(1, 2, 3)` returns `6`.
* **avg**: returns the average of its arguments, for example: `avg(1, 2, 3)` returns `2`.
* **sqrt**: returns the square root of its argument, for example: `sqrt(9)` returns `3`.
* **ceil**: returns the smallest integer greater than or equal to its argument, for example: `ceil(9.1)` returns `10`.
* **floor**: returns the largest integer less than or equal to its argument, for example: `floor(9.9)` returns `9`.
* **round**: rounds its argument to the nearest integer, for example: `round(9.9)` returns `10` and `round(9.1)` returns `9`.
* **log**: returns the natural logarithm of its argument, for example: `log(10)` returns `2.302585`.
* **log10**: returns the base 10 logarithm of its argument, for example: `log10(10)` returns `1`.
* **abs**: returns the absolute value of its argument, for example: `abs(-1)` returns `1`.
* **if**: returns its second argument if the first argument is `true`, and its third argument otherwise, for example: `if(true, 10, 100)` returns `10` and `if(false, 10, 100)` returns `100`.
## Rule Management
* Add / Update
  * API address: http://\<server\>:\<port\>/api/update-rule
  * Method: POST
  * Body: the rule
  * Example: curl -d '@rule.json' http://localhost:8100/api/update-rule
* Delete
  * API address: http://\<server\>:\<port\>/api/delete-rule?name=\<rule name\>
  * Method: DELETE
  * Example: curl -X DELETE http://localhost:8100/api/delete-rule?name=rule1
* Enable / Disable
  * API address: http://\<server\>:\<port\>/api/enable-rule?name=\<rule name\>&enable=[true | false]
  * Method: POST
  * Example: curl -X POST http://localhost:8100/api/enable-rule?name=rule1&enable=true
* Retrieve rule list
  * API address: http://\<server\>:\<port\>/api/list-rule
  * Method: GET
  * Example: curl http://localhost:8100/api/list-rule
# Alert
The Alert application reads data from [TDengine](https://www.taosdata.com/), computes and generates alerts according to predefined rules, and pushes them to [AlertManager](https://github.com/prometheus/alertmanager) or other downstream applications.
## Install
### From Precompiled Binaries
You can download the latest package from the [TAOS Data](https://www.taosdata.com/cn/getting-started/) website. After downloading, unpack it with the command below:
```
$ tar -xzf tdengine-alert-$version-$OS-$ARCH.tar.gz
```
If you have not installed the TDengine server or client before, install the TDengine driver library with the command below:
```
$ ./install_driver.sh
```
### From Source Code
Building from source requires the TDengine server or client to be installed beforehand on the machine used for compilation; if you have not installed it yet, please refer to the TDengine documentation.
The alert application is developed in [Go](https://golang.org); please install the latest Go toolchain.
```
$ mkdir taosdata
$ cd taosdata
$ git clone https://github.com/taosdata/tdengine.git
$ cd tdengine/alert/cmd/alert
$ go build
```
If `go build` fails because some dependency packages cannot be downloaded, configure `GOPROXY` as explained at [goproxy.io](https://goproxy.io) and run `go build` again.
## Configure
The configuration file of the alert application is in standard `json` format. Below is the default content; please revise it according to your actual deployment.
```json
{
"port": 8100,
"database": "file:alert.db",
"tdengine": "root:taosdata@/tcp(127.0.0.1:0)/",
"log": {
"level": "production",
"path": "alert.log"
},
"receivers": {
"alertManager": "http://127.0.0.1:9093/api/v1/alerts",
"console": true
}
}
```
Where:
* **port**: the alert application supports managing rules through a `restful API`; this option configures the listening port of the `http` service.
* **database**: the alert application stores rules in a `sqlite` database; this option specifies the path of the database file (you do not need to create the file beforehand; if it does not exist, the application creates it automatically).
* **tdengine**: the connection string of `TDengine`. In general the database information should be specified in the alert rules, so it should **not** be included here.
* **log > level**: the logging level, either `production` or `debug`.
* **log > path**: the path of the log file.
* **receivers > alertManager**: the alert application pushes alerts to `AlertManager`; this option specifies the receiving address of `AlertManager`.
* **receivers > console**: whether to print alerts to the console (stdout).
When the configuration file is ready, start the alert application with the command below (`alert.cfg` is the path of the configuration file):
```
$ ./alert -cfg alert.cfg
```
## Write Alert Rules
Technically, an alert can be described as follows: query recent data matching certain filter conditions from `TDengine`, compute a result from the data using a predefined method, and trigger an alert when the result meets some condition and lasts for a certain duration.
From the description above, most of the information an alert rule needs is easy to identify. Below is a complete alert rule in standard `json` format:
```json
{
"name": "rule1",
"sql": "select sum(col1) as sumCol1 from test.meters where ts > now - 1h group by areaid",
"expr": "sumCol1 > 10",
"for": "10m",
"period": "1m",
"labels": {
"ruleName": "rule1"
},
"annotations": {
"summary": "sum of rule {{$labels.ruleName}} of area {{$values.areaid}} is {{$values.sumCol1}}"
}
}
```
Where:
* **name**: specifies a unique name for the rule.
* **sql**: the `sql` statement used to query data from `TDengine`. Columns of the query result are used in later computations, so please give the column an alias if an aggregation function is used.
* **expr**: an expression that evaluates to a boolean value. It supports arithmetic and logical operations, provides built-in functions, and can reference columns of the query result. When the expression evaluates to `true`, the rule enters the alerting state.
* **for**: the alert is triggered when the expression has continuously evaluated to `true` for longer than this duration; before that, the alert is in the "pending" state. The default is 0, meaning the alert fires as soon as the expression evaluates to `true`.
* **period**: the check interval of the rule, 1 minute by default.
* **labels**: a user-defined list of labels that can be referenced when generating alert information. If the `sql` statement contains a `group by` clause, all of the grouping columns are automatically added to this label list.
* **annotations**: defines the alert information using [go template](https://golang.org/pkg/text/template) syntax, in which labels can be referenced by `$labels.<label name>` and columns of the query result by `$values.<column name>`.
### Operators
The operators supported in the `expr` field are listed below; you can use `()` to change the precedence of operations.
<table>
<thead>
<tr> <td>Operator</td><td>Unary/Binary</td><td>Precedence</td><td>Effect</td> </tr>
</thead>
<tbody>
<tr> <td>~</td><td>Unary</td><td>6</td><td>Bitwise Not</td> </tr>
<tr> <td>!</td><td>Unary</td><td>6</td><td>Logical Not</td> </tr>
<tr> <td>+</td><td>Unary</td><td>6</td><td>Positive Sign</td> </tr>
<tr> <td>-</td><td>Unary</td><td>6</td><td>Negative Sign</td> </tr>
<tr> <td>*</td><td>Binary</td><td>5</td><td>Multiplication</td> </tr>
<tr> <td>/</td><td>Binary</td><td>5</td><td>Division</td> </tr>
<tr> <td>%</td><td>Binary</td><td>5</td><td>Modulus</td> </tr>
<tr> <td><<</td><td>Binary</td><td>5</td><td>Bitwise Left Shift</td> </tr>
<tr> <td>>></td><td>Binary</td><td>5</td><td>Bitwise Right Shift</td> </tr>
<tr> <td>&</td><td>Binary</td><td>5</td><td>Bitwise And</td> </tr>
<tr> <td>+</td><td>Binary</td><td>4</td><td>Addition</td> </tr>
<tr> <td>-</td><td>Binary</td><td>4</td><td>Subtraction</td> </tr>
<tr> <td>|</td><td>Binary</td><td>4</td><td>Bitwise Or</td> </tr>
<tr> <td>^</td><td>Binary</td><td>4</td><td>Bitwise Xor</td> </tr>
<tr> <td>==</td><td>Binary</td><td>3</td><td>Equal</td> </tr>
<tr> <td>!=</td><td>Binary</td><td>3</td><td>Not Equal</td> </tr>
<tr> <td><</td><td>Binary</td><td>3</td><td>Less Than</td> </tr>
<tr> <td><=</td><td>Binary</td><td>3</td><td>Less Than or Equal</td> </tr>
<tr> <td>></td><td>Binary</td><td>3</td><td>Greater Than</td> </tr>
<tr> <td>>=</td><td>Binary</td><td>3</td><td>Greater Than or Equal</td> </tr>
<tr> <td>&&</td><td>Binary</td><td>2</td><td>Logical And</td> </tr>
<tr> <td>||</td><td>Binary</td><td>1</td><td>Logical Or</td> </tr>
</tbody>
</table>
### Built-in Functions
The following built-in functions are supported and can be used in the `expr` field of a rule:
* **min**: returns the minimum of its arguments, for example `min(1, 2, 3)` returns `1`.
* **max**: returns the maximum of its arguments, for example `max(1, 2, 3)` returns `3`.
* **sum**: returns the sum of its arguments, for example `sum(1, 2, 3)` returns `6`.
* **avg**: returns the arithmetic mean of its arguments, for example `avg(1, 2, 3)` returns `2`.
* **sqrt**: returns the square root of its argument, for example `sqrt(9)` returns `3`.
* **ceil**: rounds its argument up to an integer, for example `ceil(9.1)` returns `10`.
* **floor**: rounds its argument down to an integer, for example `floor(9.9)` returns `9`.
* **round**: rounds its argument to the nearest integer, for example `round(9.9)` returns `10` and `round(9.1)` returns `9`.
* **log**: returns the natural logarithm of its argument, for example `log(10)` returns `2.302585`.
* **log10**: returns the base 10 logarithm of its argument, for example `log10(10)` returns `1`.
* **abs**: returns the absolute value of its argument, for example `abs(-1)` returns `1`.
* **if**: returns its second argument if the first argument is `true`, and its third argument otherwise, for example `if(true, 10, 100)` returns `10` and `if(false, 10, 100)` returns `100`.
## Rule Management
* Add / Update
  * API address: http://\<server\>:\<port\>/api/update-rule
  * Method: POST
  * Body: the rule definition
  * Example: curl -d '@rule.json' http://localhost:8100/api/update-rule
* Delete
  * API address: http://\<server\>:\<port\>/api/delete-rule?name=\<rule name\>
  * Method: DELETE
  * Example: curl -X DELETE http://localhost:8100/api/delete-rule?name=rule1
* Disable / Enable
  * API address: http://\<server\>:\<port\>/api/enable-rule?name=\<rule name\>&enable=[true | false]
  * Method: POST
  * Example: curl -X POST http://localhost:8100/api/enable-rule?name=rule1&enable=true
* Retrieve rule list
  * API address: http://\<server\>:\<port\>/api/list-rule
  * Method: GET
  * Example: curl http://localhost:8100/api/list-rule
package app
import (
"encoding/json"
"io/ioutil"
"net/http"
"strings"
"time"
"github.com/taosdata/alert/models"
"github.com/taosdata/alert/utils"
"github.com/taosdata/alert/utils/log"
)
func Init() error {
if e := initRule(); e != nil {
return e
}
http.HandleFunc("/api/list-rule", onListRule)
http.HandleFunc("/api/list-alert", onListAlert)
http.HandleFunc("/api/update-rule", onUpdateRule)
http.HandleFunc("/api/enable-rule", onEnableRule)
http.HandleFunc("/api/delete-rule", onDeleteRule)
return nil
}
func Uninit() error {
uninitRule()
return nil
}
func onListRule(w http.ResponseWriter, r *http.Request) {
var res []*Rule
rules.Range(func(k, v interface{}) bool {
res = append(res, v.(*Rule))
return true
})
w.Header().Add("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(res)
}
func onListAlert(w http.ResponseWriter, r *http.Request) {
var alerts []*Alert
rn := r.URL.Query().Get("rule")
rules.Range(func(k, v interface{}) bool {
if len(rn) > 0 && rn != k.(string) {
return true
}
rule := v.(*Rule)
rule.Alerts.Range(func(k, v interface{}) bool {
alert := v.(*Alert)
// TODO: not goroutine-safe
if alert.State != AlertStateWaiting {
alerts = append(alerts, v.(*Alert))
}
return true
})
return true
})
w.Header().Add("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(alerts)
}
func onUpdateRule(w http.ResponseWriter, r *http.Request) {
data, e := ioutil.ReadAll(r.Body)
if e != nil {
log.Error("failed to read request body: ", e.Error())
w.WriteHeader(http.StatusBadRequest)
return
}
rule, e := newRule(string(data))
if e != nil {
log.Error("failed to parse rule: ", e.Error())
w.WriteHeader(http.StatusBadRequest)
return
}
if e = doUpdateRule(rule, string(data)); e != nil {
w.WriteHeader(http.StatusInternalServerError)
}
}
func doUpdateRule(rule *Rule, ruleStr string) error {
if _, ok := rules.Load(rule.Name); ok {
if len(utils.Cfg.Database) > 0 {
e := models.UpdateRule(rule.Name, ruleStr)
if e != nil {
log.Errorf("[%s]: update failed: %s", rule.Name, e.Error())
return e
}
}
log.Infof("[%s]: update succeeded.", rule.Name)
} else {
if len(utils.Cfg.Database) > 0 {
e := models.AddRule(&models.Rule{
Name: rule.Name,
Content: ruleStr,
})
if e != nil {
log.Errorf("[%s]: add failed: %s", rule.Name, e.Error())
return e
}
}
log.Infof("[%s]: add succeeded.", rule.Name)
}
rules.Store(rule.Name, rule)
return nil
}
func onEnableRule(w http.ResponseWriter, r *http.Request) {
var rule *Rule
name := r.URL.Query().Get("name")
enable := strings.ToLower(r.URL.Query().Get("enable")) == "true"
if x, ok := rules.Load(name); ok {
rule = x.(*Rule)
} else {
w.WriteHeader(http.StatusNotFound)
return
}
if rule.isEnabled() == enable {
return
}
if len(utils.Cfg.Database) > 0 {
if e := models.EnableRule(name, enable); e != nil {
if enable {
log.Errorf("[%s]: enable failed: %s", name, e.Error())
} else {
log.Errorf("[%s]: disable failed: %s", name, e.Error())
}
w.WriteHeader(http.StatusInternalServerError)
return
}
}
if enable {
rule = rule.clone()
rule.setNextRunTime(time.Now())
rules.Store(rule.Name, rule)
log.Infof("[%s]: enable succeeded.", name)
} else {
rule.setState(RuleStateDisabled)
log.Infof("[%s]: disable succeeded.", name)
}
}
func onDeleteRule(w http.ResponseWriter, r *http.Request) {
name := r.URL.Query().Get("name")
if len(name) == 0 {
return
}
if e := doDeleteRule(name); e != nil {
w.WriteHeader(http.StatusInternalServerError)
}
}
func doDeleteRule(name string) error {
if len(utils.Cfg.Database) > 0 {
if e := models.DeleteRule(name); e != nil {
log.Errorf("[%s]: delete failed: %s", name, e.Error())
return e
}
}
rules.Delete(name)
log.Infof("[%s]: delete succeeded.", name)
return nil
}
package expr
import (
"errors"
"io"
"math"
"strconv"
"strings"
"text/scanner"
)
var (
// compile errors
ErrorExpressionSyntax = errors.New("expression syntax error")
ErrorUnrecognizedFunction = errors.New("unrecognized function")
ErrorArgumentCount = errors.New("too many/few arguments")
ErrorInvalidFloat = errors.New("invalid float")
ErrorInvalidInteger = errors.New("invalid integer")
// eval errors
ErrorUnsupportedDataType = errors.New("unsupported data type")
ErrorInvalidOperationFloat = errors.New("invalid operation for float")
ErrorInvalidOperationInteger = errors.New("invalid operation for integer")
ErrorInvalidOperationBoolean = errors.New("invalid operation for boolean")
ErrorOnlyIntegerAllowed = errors.New("only integers are allowed")
ErrorDataTypeMismatch = errors.New("data type mismatch")
)
// binary operator precedence
// 5 * / % << >> &
// 4 + - | ^
// 3 == != < <= > >=
// 2 &&
// 1 ||
const (
opOr = -(iota + 1000) // ||
opAnd // &&
opEqual // ==
opNotEqual // !=
opGTE // >=
opLTE // <=
opLeftShift // <<
opRightShift // >>
)
type lexer struct {
scan scanner.Scanner
tok rune
}
func (l *lexer) init(src io.Reader) {
l.scan.Error = func(s *scanner.Scanner, msg string) {
panic(errors.New(msg))
}
l.scan.Mode = scanner.ScanIdents | scanner.ScanInts | scanner.ScanFloats | scanner.ScanStrings
l.scan.Init(src)
l.tok = l.next()
}
func (l *lexer) next() rune {
l.tok = l.scan.Scan()
switch l.tok {
case '|':
if l.scan.Peek() == '|' {
l.tok = opOr
l.scan.Scan()
}
case '&':
if l.scan.Peek() == '&' {
l.tok = opAnd
l.scan.Scan()
}
case '=':
if l.scan.Peek() == '=' {
l.tok = opEqual
l.scan.Scan()
} else {
// TODO: error
}
case '!':
if l.scan.Peek() == '=' {
l.tok = opNotEqual
l.scan.Scan()
} else {
// TODO: error
}
case '<':
if tok := l.scan.Peek(); tok == '<' {
l.tok = opLeftShift
l.scan.Scan()
} else if tok == '=' {
l.tok = opLTE
l.scan.Scan()
}
case '>':
if tok := l.scan.Peek(); tok == '>' {
l.tok = opRightShift
l.scan.Scan()
} else if tok == '=' {
l.tok = opGTE
l.scan.Scan()
}
}
return l.tok
}
func (l *lexer) token() rune {
return l.tok
}
func (l *lexer) text() string {
switch l.tok {
case opOr:
return "||"
case opAnd:
return "&&"
case opEqual:
return "=="
case opNotEqual:
return "!="
case opLeftShift:
return "<<"
case opLTE:
return "<="
case opRightShift:
return ">>"
case opGTE:
return ">="
default:
return l.scan.TokenText()
}
}
type Expr interface {
Eval(env func(string) interface{}) interface{}
}
type unaryExpr struct {
op rune
subExpr Expr
}
func (ue *unaryExpr) Eval(env func(string) interface{}) interface{} {
val := ue.subExpr.Eval(env)
switch v := val.(type) {
case float64:
if ue.op != '-' {
panic(ErrorInvalidOperationFloat)
}
return -v
case int64:
switch ue.op {
case '-':
return -v
case '~':
return ^v
default:
panic(ErrorInvalidOperationInteger)
}
case bool:
if ue.op != '!' {
panic(ErrorInvalidOperationBoolean)
}
return !v
default:
panic(ErrorUnsupportedDataType)
}
}
type binaryExpr struct {
op rune
lhs Expr
rhs Expr
}
func (be *binaryExpr) Eval(env func(string) interface{}) interface{} {
lval := be.lhs.Eval(env)
rval := be.rhs.Eval(env)
switch be.op {
case '*':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv * rv
case int64:
return lv * float64(rv)
case bool:
panic(ErrorInvalidOperationBoolean)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) * rv
case int64:
return lv * rv
case bool:
panic(ErrorInvalidOperationBoolean)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case '/':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
if rv == 0 {
return math.Inf(int(lv))
} else {
return lv / rv
}
case int64:
if rv == 0 {
return math.Inf(int(lv))
} else {
return lv / float64(rv)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case int64:
switch rv := rval.(type) {
case float64:
if rv == 0 {
return math.Inf(int(lv))
} else {
return float64(lv) / rv
}
case int64:
if rv == 0 {
return math.Inf(int(lv))
} else {
return lv / rv
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case '%':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return math.Mod(lv, rv)
case int64:
return math.Mod(lv, float64(rv))
case bool:
panic(ErrorInvalidOperationBoolean)
}
case int64:
switch rv := rval.(type) {
case float64:
return math.Mod(float64(lv), rv)
case int64:
if rv == 0 {
return math.Inf(int(lv))
} else {
return lv % rv
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case opLeftShift:
switch lv := lval.(type) {
case int64:
switch rv := rval.(type) {
case int64:
return lv << rv
default:
panic(ErrorOnlyIntegerAllowed)
}
default:
panic(ErrorOnlyIntegerAllowed)
}
case opRightShift:
switch lv := lval.(type) {
case int64:
switch rv := rval.(type) {
case int64:
return lv >> rv
default:
panic(ErrorOnlyIntegerAllowed)
}
default:
panic(ErrorOnlyIntegerAllowed)
}
case '&':
switch lv := lval.(type) {
case int64:
switch rv := rval.(type) {
case int64:
return lv & rv
default:
panic(ErrorOnlyIntegerAllowed)
}
default:
panic(ErrorOnlyIntegerAllowed)
}
case '+':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv + rv
case int64:
return lv + float64(rv)
case bool:
panic(ErrorInvalidOperationBoolean)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) + rv
case int64:
return lv + rv
case bool:
panic(ErrorInvalidOperationBoolean)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case '-':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv - rv
case int64:
return lv - float64(rv)
case bool:
panic(ErrorInvalidOperationBoolean)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) - rv
case int64:
return lv - rv
case bool:
panic(ErrorInvalidOperationBoolean)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case '|':
switch lv := lval.(type) {
case int64:
switch rv := rval.(type) {
case int64:
return lv | rv
default:
panic(ErrorOnlyIntegerAllowed)
}
default:
panic(ErrorOnlyIntegerAllowed)
}
case '^':
switch lv := lval.(type) {
case int64:
switch rv := rval.(type) {
case int64:
return lv ^ rv
default:
panic(ErrorOnlyIntegerAllowed)
}
default:
panic(ErrorOnlyIntegerAllowed)
}
case opEqual:
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv == rv
case int64:
return lv == float64(rv)
case bool:
panic(ErrorDataTypeMismatch)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) == rv
case int64:
return lv == rv
case bool:
panic(ErrorDataTypeMismatch)
}
case bool:
switch rv := rval.(type) {
case bool:
return lv == rv
default:
panic(ErrorDataTypeMismatch)
}
}
case opNotEqual:
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv != rv
case int64:
return lv != float64(rv)
case bool:
panic(ErrorDataTypeMismatch)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) != rv
case int64:
return lv != rv
case bool:
panic(ErrorDataTypeMismatch)
}
case bool:
switch rv := rval.(type) {
case bool:
return lv != rv
default:
panic(ErrorDataTypeMismatch)
}
}
case '<':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv < rv
case int64:
return lv < float64(rv)
case bool:
panic(ErrorDataTypeMismatch)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) < rv
case int64:
return lv < rv
case bool:
panic(ErrorDataTypeMismatch)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case opLTE:
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv <= rv
case int64:
return lv <= float64(rv)
case bool:
panic(ErrorDataTypeMismatch)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) <= rv
case int64:
return lv <= rv
case bool:
panic(ErrorDataTypeMismatch)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case '>':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv > rv
case int64:
return lv > float64(rv)
case bool:
panic(ErrorDataTypeMismatch)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) > rv
case int64:
return lv > rv
case bool:
panic(ErrorDataTypeMismatch)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case opGTE:
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv >= rv
case int64:
return lv >= float64(rv)
case bool:
panic(ErrorDataTypeMismatch)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) >= rv
case int64:
return lv >= rv
case bool:
panic(ErrorDataTypeMismatch)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case opAnd:
switch lv := lval.(type) {
case bool:
switch rv := rval.(type) {
case bool:
return lv && rv
default:
panic(ErrorInvalidOperationBoolean)
}
default:
panic(ErrorInvalidOperationBoolean)
}
case opOr:
switch lv := lval.(type) {
case bool:
switch rv := rval.(type) {
case bool:
return lv || rv
default:
panic(ErrorInvalidOperationBoolean)
}
default:
panic(ErrorInvalidOperationBoolean)
}
}
return nil
}
type funcExpr struct {
name string
args []Expr
}
func (fe *funcExpr) Eval(env func(string) interface{}) interface{} {
argv := make([]interface{}, 0, len(fe.args))
for _, arg := range fe.args {
argv = append(argv, arg.Eval(env))
}
return funcs[fe.name].call(argv)
}
type floatExpr struct {
val float64
}
func (fe *floatExpr) Eval(env func(string) interface{}) interface{} {
return fe.val
}
type intExpr struct {
val int64
}
func (ie *intExpr) Eval(env func(string) interface{}) interface{} {
return ie.val
}
type boolExpr struct {
val bool
}
func (be *boolExpr) Eval(env func(string) interface{}) interface{} {
return be.val
}
type stringExpr struct {
val string
}
func (se *stringExpr) Eval(env func(string) interface{}) interface{} {
return se.val
}
type varExpr struct {
name string
}
func (ve *varExpr) Eval(env func(string) interface{}) interface{} {
return env(ve.name)
}
func Compile(src string) (expr Expr, err error) {
defer func() {
switch x := recover().(type) {
case nil:
case error:
err = x
default:
}
}()
lexer := lexer{}
lexer.init(strings.NewReader(src))
expr = parseBinary(&lexer, 0)
if lexer.token() != scanner.EOF {
panic(ErrorExpressionSyntax)
}
return expr, nil
}
func precedence(op rune) int {
switch op {
case opOr:
return 1
case opAnd:
return 2
case opEqual, opNotEqual, '<', '>', opGTE, opLTE:
return 3
case '+', '-', '|', '^':
return 4
case '*', '/', '%', opLeftShift, opRightShift, '&':
return 5
}
return 0
}
// binary = unary ('+' binary)*
func parseBinary(lexer *lexer, lastPrec int) Expr {
lhs := parseUnary(lexer)
for {
op := lexer.token()
prec := precedence(op)
if prec <= lastPrec {
break
}
lexer.next() // consume operator
rhs := parseBinary(lexer, prec)
lhs = &binaryExpr{op: op, lhs: lhs, rhs: rhs}
}
return lhs
}
// unary = '+|-' expr | primary
func parseUnary(lexer *lexer) Expr {
flag := false
for tok := lexer.token(); ; tok = lexer.next() {
if tok == '-' {
flag = !flag
} else if tok != '+' {
break
}
}
if flag {
return &unaryExpr{op: '-', subExpr: parsePrimary(lexer)}
}
flag = false
for tok := lexer.token(); tok == '!'; tok = lexer.next() {
flag = !flag
}
if flag {
return &unaryExpr{op: '!', subExpr: parsePrimary(lexer)}
}
flag = false
for tok := lexer.token(); tok == '~'; tok = lexer.next() {
flag = !flag
}
if flag {
return &unaryExpr{op: '~', subExpr: parsePrimary(lexer)}
}
return parsePrimary(lexer)
}
// primary = id
// | id '(' expr ',' ... ',' expr ')'
// | num
// | '(' expr ')'
func parsePrimary(lexer *lexer) Expr {
switch lexer.token() {
case '+', '-', '!', '~':
return parseUnary(lexer)
case '(':
lexer.next() // consume '('
node := parseBinary(lexer, 0)
if lexer.token() != ')' {
panic(ErrorExpressionSyntax)
}
lexer.next() // consume ')'
return node
case scanner.Ident:
id := strings.ToLower(lexer.text())
if lexer.next() != '(' {
if id == "true" {
return &boolExpr{val: true}
} else if id == "false" {
return &boolExpr{val: false}
} else {
return &varExpr{name: id}
}
}
node := funcExpr{name: id}
for lexer.next() != ')' {
arg := parseBinary(lexer, 0)
node.args = append(node.args, arg)
if lexer.token() != ',' {
break
}
}
if lexer.token() != ')' {
panic(ErrorExpressionSyntax)
}
if fn, ok := funcs[id]; !ok {
panic(ErrorUnrecognizedFunction)
} else if fn.minArgs >= 0 && len(node.args) < fn.minArgs {
panic(ErrorArgumentCount)
} else if fn.maxArgs >= 0 && len(node.args) > fn.maxArgs {
panic(ErrorArgumentCount)
}
lexer.next() // consume it
return &node
case scanner.Int:
val, e := strconv.ParseInt(lexer.text(), 0, 64)
if e != nil {
panic(ErrorInvalidInteger)
}
}
lexer.next()
return &intExpr{val: val}
case scanner.Float:
val, e := strconv.ParseFloat(lexer.text(), 64)
if e != nil {
panic(ErrorInvalidFloat)
}
}
lexer.next()
return &floatExpr{val: val}
case scanner.String:
panic(errors.New("strings are not allowed in expression at present"))
val := lexer.text()
lexer.next()
return &stringExpr{val: val}
default:
panic(ErrorExpressionSyntax)
}
}
package expr
import "testing"
func TestIntArithmetic(t *testing.T) {
cases := []struct {
expr string
expected int64
}{
{"+10", 10},
{"-10", -10},
{"3 + 4 + 5 + 6 * 7 + 8", 62},
{"3 + 4 + (5 + 6) * 7 + 8", 92},
{"3 + 4 + (5 + 6) * 7 / 11 + 8", 22},
{"3 + 4 + -5 * 6 / 7 % 8", 3},
{"10 - 5", 5},
}
for _, c := range cases {
expr, e := Compile(c.expr)
if e != nil {
t.Errorf("failed to compile expression '%s': %s", c.expr, e.Error())
}
if res := expr.Eval(nil); res.(int64) != c.expected {
t.Errorf("result for expression '%s' is %v, but expected is %v", c.expr, res, c.expected)
}
}
}
func TestFloatArithmetic(t *testing.T) {
cases := []struct {
expr string
expected float64
}{
{"+10.5", 10.5},
{"-10.5", -10.5},
{"3.1 + 4.2 + 5 + 6 * 7 + 8", 62.3},
{"3.1 + 4.2 + (5 + 6) * 7 + 8.3", 92.6},
{"3.1 + 4.2 + (5.1 + 5.9) * 7 / 11 + 8", 22.3},
{"3.3 + 4.2 - 4.0 * 7.5 / 3", -2.5},
{"3.3 + 4.2 - 4 * 7.0 / 2", -6.5},
{"3.5/2.0", 1.75},
{"3.5/2", 1.75},
{"7 / 3.5", 2},
{"3.5 % 2.0", 1.5},
{"3.5 % 2", 1.5},
{"7 % 2.5", 2},
{"7.3 - 2", 5.3},
{"7 - 2.3", 4.7},
{"1 + 1.5", 2.5},
}
for _, c := range cases {
expr, e := Compile(c.expr)
if e != nil {
t.Errorf("failed to compile expression '%s': %s", c.expr, e.Error())
}
if res := expr.Eval(nil); res.(float64) != c.expected {
t.Errorf("result for expression '%s' is %v, but expected is %v", c.expr, res, c.expected)
}
}
}
func TestVariable(t *testing.T) {
variables := map[string]interface{}{
"a": int64(6),
"b": int64(7),
}
env := func(key string) interface{} {
return variables[key]
}
cases := []struct {
expr string
expected int64
}{
{"3 + 4 + (+5) + a * b + 8", 62},
{"3 + 4 + (5 + a) * b + 8", 92},
{"3 + 4 + (5 + a) * b / 11 + 8", 22},
}
for _, c := range cases {
expr, e := Compile(c.expr)
if e != nil {
t.Errorf("failed to compile expression '%s': %s", c.expr, e.Error())
}
if res := expr.Eval(env); res.(int64) != c.expected {
t.Errorf("result for expression '%s' is %v, but expected is %v", c.expr, res, c.expected)
}
}
}
func TestFunction(t *testing.T) {
variables := map[string]interface{}{
"a": int64(6),
"b": 7.0,
}
env := func(key string) interface{} {
return variables[key]
}
cases := []struct {
expr string
expected float64
}{
{"sum(3, 4, 5, a * b, 8)", 62},
{"sum(3, 4, (5 + a) * b, 8)", 92},
{"sum(3, 4, (5 + a) * b / 11, 8)", 22},
}
for _, c := range cases {
expr, e := Compile(c.expr)
if e != nil {
t.Errorf("failed to compile expression '%s': %s", c.expr, e.Error())
}
if res := expr.Eval(env); res.(float64) != c.expected {
t.Errorf("result for expression '%s' is %v, but expected is %v", c.expr, res, c.expected)
}
}
}
func TestLogical(t *testing.T) {
cases := []struct {
expr string
expected bool
}{
{"true", true},
{"false", false},
{"true == true", true},
{"true == false", false},
{"true != true", false},
{"true != false", true},
{"5 > 3", true},
{"5 < 3", false},
{"5.2 > 3", true},
{"5.2 < 3", false},
{"5 > 3.1", true},
{"5 < 3.1", false},
{"5.1 > 3.3", true},
{"5.1 < 3.3", false},
{"5 >= 3", true},
{"5 <= 3", false},
{"5.2 >= 3", true},
{"5.2 <= 3", false},
{"5 >= 3.1", true},
{"5 <= 3.1", false},
{"5.1 >= 3.3", true},
{"5.1 <= 3.3", false},
{"5 != 3", true},
{"5.2 != 3.2", true},
{"5.2 != 3", true},
{"5 != 3.2", true},
{"5 == 3", false},
{"5.2 == 3.2", false},
{"5.2 == 3", false},
{"5 == 3.2", false},
{"!(5 > 3)", false},
{"5>3 && 3>1", true},
{"5<3 || 3<1", false},
{"4<=4 || 3<1", true},
{"4<4 || 3>=1", true},
}
for _, c := range cases {
expr, e := Compile(c.expr)
if e != nil {
t.Errorf("failed to compile expression '%s': %s", c.expr, e.Error())
}
if res := expr.Eval(nil); res.(bool) != c.expected {
t.Errorf("result for expression '%s' is %v, but expected is %v", c.expr, res, c.expected)
}
}
}
func TestBitwise(t *testing.T) {
cases := []struct {
expr string
expected int64
}{
{"0x0C & 0x04", 0x04},
{"0x08 | 0x04", 0x0C},
{"0x0C ^ 0x04", 0x08},
{"0x01 << 2", 0x04},
{"0x04 >> 2", 0x01},
{"~0x04", ^0x04},
}
for _, c := range cases {
expr, e := Compile(c.expr)
if e != nil {
t.Errorf("failed to compile expression '%s': %s", c.expr, e.Error())
}
if res := expr.Eval(nil); res.(int64) != c.expected {
t.Errorf("result for expression '%s' is 0x%X, but expected is 0x%X", c.expr, res, c.expected)
}
}
}
package expr
import (
"math"
)
type builtInFunc struct {
minArgs, maxArgs int
call func([]interface{}) interface{}
}
func fnMin(args []interface{}) interface{} {
res := args[0]
for _, arg := range args[1:] {
switch v1 := res.(type) {
case int64:
switch v2 := arg.(type) {
case int64:
if v2 < v1 {
res = v2
}
case float64:
res = math.Min(float64(v1), v2)
default:
panic(ErrorUnsupportedDataType)
}
case float64:
switch v2 := arg.(type) {
case int64:
res = math.Min(v1, float64(v2))
case float64:
res = math.Min(v1, v2)
default:
panic(ErrorUnsupportedDataType)
}
default:
panic(ErrorUnsupportedDataType)
}
}
return res
}
func fnMax(args []interface{}) interface{} {
res := args[0]
for _, arg := range args[1:] {
switch v1 := res.(type) {
case int64:
switch v2 := arg.(type) {
case int64:
if v2 > v1 {
res = v2
}
case float64:
res = math.Max(float64(v1), v2)
default:
panic(ErrorUnsupportedDataType)
}
case float64:
switch v2 := arg.(type) {
case int64:
res = math.Max(v1, float64(v2))
case float64:
res = math.Max(v1, v2)
default:
panic(ErrorUnsupportedDataType)
}
default:
panic(ErrorUnsupportedDataType)
}
}
return res
}
func fnSum(args []interface{}) interface{} {
res := float64(0)
for _, arg := range args {
switch v := arg.(type) {
case int64:
res += float64(v)
case float64:
res += v
default:
panic(ErrorUnsupportedDataType)
}
}
return res
}
func fnAvg(args []interface{}) interface{} {
return fnSum(args).(float64) / float64(len(args))
}
func fnSqrt(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
return math.Sqrt(float64(v))
case float64:
return math.Sqrt(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnFloor(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
return v
case float64:
return math.Floor(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnCeil(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
return v
case float64:
return math.Ceil(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnRound(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
return v
case float64:
return math.Round(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnLog(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
return math.Log(float64(v))
case float64:
return math.Log(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnLog10(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
return math.Log10(float64(v))
case float64:
return math.Log10(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnAbs(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
if v < 0 {
return -v
}
return v
case float64:
return math.Abs(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnIf(args []interface{}) interface{} {
v, ok := args[0].(bool)
if !ok {
panic(ErrorUnsupportedDataType)
}
if v {
return args[1]
} else {
return args[2]
}
}
var funcs = map[string]builtInFunc{
"min": builtInFunc{minArgs: 1, maxArgs: -1, call: fnMin},
"max": builtInFunc{minArgs: 1, maxArgs: -1, call: fnMax},
"sum": builtInFunc{minArgs: 1, maxArgs: -1, call: fnSum},
"avg": builtInFunc{minArgs: 1, maxArgs: -1, call: fnAvg},
"sqrt": builtInFunc{minArgs: 1, maxArgs: 1, call: fnSqrt},
"ceil": builtInFunc{minArgs: 1, maxArgs: 1, call: fnCeil},
"floor": builtInFunc{minArgs: 1, maxArgs: 1, call: fnFloor},
"round": builtInFunc{minArgs: 1, maxArgs: 1, call: fnRound},
"log": builtInFunc{minArgs: 1, maxArgs: 1, call: fnLog},
"log10": builtInFunc{minArgs: 1, maxArgs: 1, call: fnLog10},
"abs": builtInFunc{minArgs: 1, maxArgs: 1, call: fnAbs},
"if": builtInFunc{minArgs: 3, maxArgs: 3, call: fnIf},
}
package expr
import (
"math"
"testing"
)
func TestMax(t *testing.T) {
cases := []struct {
args []interface{}
expected float64
}{
{[]interface{}{int64(1), int64(2), int64(3), int64(4), int64(5)}, 5},
{[]interface{}{int64(1), int64(2), float64(3), int64(4), float64(5)}, 5},
{[]interface{}{int64(-1), int64(-2), float64(-3), int64(-4), float64(-5)}, -1},
{[]interface{}{int64(-1), int64(-1), float64(-1), int64(-1), float64(-1)}, -1},
{[]interface{}{int64(-1), int64(0), float64(-1), int64(-1), float64(-1)}, 0},
}
for _, c := range cases {
r := fnMax(c.args)
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("max(%v) = %v, want %v", c.args, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("max(%v) = %v, want %v", c.args, v, c.expected)
}
default:
t.Errorf("unknown result type max(%v)", c.args)
}
}
}
func TestMin(t *testing.T) {
cases := []struct {
args []interface{}
expected float64
}{
{[]interface{}{int64(1), int64(2), int64(3), int64(4), int64(5)}, 1},
{[]interface{}{int64(5), int64(4), float64(3), int64(2), float64(1)}, 1},
{[]interface{}{int64(-1), int64(-2), float64(-3), int64(-4), float64(-5)}, -5},
{[]interface{}{int64(-1), int64(-1), float64(-1), int64(-1), float64(-1)}, -1},
{[]interface{}{int64(1), int64(0), float64(1), int64(1), float64(1)}, 0},
}
for _, c := range cases {
r := fnMin(c.args)
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("min(%v) = %v, want %v", c.args, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("min(%v) = %v, want %v", c.args, v, c.expected)
}
default:
t.Errorf("unknown result type min(%v)", c.args)
}
}
}
func TestSumAvg(t *testing.T) {
cases := []struct {
args []interface{}
expected float64
}{
{[]interface{}{int64(1)}, 1},
{[]interface{}{int64(1), int64(2), int64(3), int64(4), int64(5)}, 15},
{[]interface{}{int64(5), int64(4), float64(3), int64(2), float64(1)}, 15},
{[]interface{}{int64(-1), int64(-2), float64(-3), int64(-4), float64(-5)}, -15},
{[]interface{}{int64(-1), int64(-1), float64(-1), int64(-1), float64(-1)}, -5},
{[]interface{}{int64(1), int64(0), float64(1), int64(1), float64(1)}, 4},
}
for _, c := range cases {
r := fnSum(c.args)
switch v := r.(type) {
case float64:
if v != c.expected {
t.Errorf("sum(%v) = %v, want %v", c.args, v, c.expected)
}
default:
t.Errorf("unknown result type sum(%v)", c.args)
}
}
for _, c := range cases {
r := fnAvg(c.args)
expected := c.expected / float64(len(c.args))
switch v := r.(type) {
case float64:
if v != expected {
t.Errorf("avg(%v) = %v, want %v", c.args, v, expected)
}
default:
t.Errorf("unknown result type avg(%v)", c.args)
}
}
}
func TestSqrt(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(0), 0},
{int64(1), 1},
{int64(256), 16},
{10.0, math.Sqrt(10)},
{10000.0, math.Sqrt(10000)},
}
for _, c := range cases {
r := fnSqrt([]interface{}{c.arg})
switch v := r.(type) {
case float64:
if v != c.expected {
t.Errorf("sqrt(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type sqrt(%v)", c.arg)
}
}
}
func TestFloor(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(0), 0},
{int64(1), 1},
{int64(-1), -1},
{10.4, 10},
{-10.4, -11},
{10.8, 10},
{-10.8, -11},
}
for _, c := range cases {
r := fnFloor([]interface{}{c.arg})
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("floor(%v) = %v, want %v", c.arg, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("floor(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type floor(%v)", c.arg)
}
}
}
func TestCeil(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(0), 0},
{int64(1), 1},
{int64(-1), -1},
{10.4, 11},
{-10.4, -10},
{10.8, 11},
{-10.8, -10},
}
for _, c := range cases {
r := fnCeil([]interface{}{c.arg})
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("ceil(%v) = %v, want %v", c.arg, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("ceil(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type ceil(%v)", c.arg)
}
}
}
func TestRound(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(0), 0},
{int64(1), 1},
{int64(-1), -1},
{10.4, 10},
{-10.4, -10},
{10.8, 11},
{-10.8, -11},
}
for _, c := range cases {
r := fnRound([]interface{}{c.arg})
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("round(%v) = %v, want %v", c.arg, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("round(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type round(%v)", c.arg)
}
}
}
func TestLog(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(1), math.Log(1)},
{0.1, math.Log(0.1)},
{10.4, math.Log(10.4)},
{10.8, math.Log(10.8)},
}
for _, c := range cases {
r := fnLog([]interface{}{c.arg})
switch v := r.(type) {
case float64:
if v != c.expected {
t.Errorf("log(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type log(%v)", c.arg)
}
}
}
func TestLog10(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(1), math.Log10(1)},
{0.1, math.Log10(0.1)},
{10.4, math.Log10(10.4)},
{10.8, math.Log10(10.8)},
{int64(100), math.Log10(100)},
}
for _, c := range cases {
r := fnLog10([]interface{}{c.arg})
switch v := r.(type) {
case float64:
if v != c.expected {
t.Errorf("log10(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type log10(%v)", c.arg)
}
}
}
func TestAbs(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(1), 1},
{int64(0), 0},
{int64(-1), 1},
{10.4, 10.4},
{-10.4, 10.4},
}
for _, c := range cases {
r := fnAbs([]interface{}{c.arg})
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("abs(%v) = %v, want %v", c.arg, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("abs(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type abs(%v)", c.arg)
}
}
}
func TestIf(t *testing.T) {
cases := []struct {
args []interface{}
expected float64
}{
{[]interface{}{true, int64(10), int64(20)}, 10},
{[]interface{}{false, int64(10), int64(20)}, 20},
{[]interface{}{true, 10.3, 20.6}, 10.3},
{[]interface{}{false, 10.3, 20.6}, 20.6},
{[]interface{}{true, int64(10), 20.6}, 10},
{[]interface{}{false, int64(10), 20.6}, 20.6},
}
for _, c := range cases {
r := fnIf(c.args)
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("if(%v) = %v, want %v", c.args, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("if(%v) = %v, want %v", c.args, v, c.expected)
}
default:
t.Errorf("unknown result type if(%v)", c.args)
}
}
}
package app
import (
"regexp"
"strings"
)
type RouteMatchCriteria struct {
Tag string `yaml:"tag"`
Value string `yaml:"match"`
Re *regexp.Regexp `yaml:"-"`
}
func (c *RouteMatchCriteria) UnmarshalYAML(unmarshal func(interface{}) error) error {
var v map[string]string
if e := unmarshal(&v); e != nil {
return e
}
for k, a := range v {
c.Tag = k
c.Value = a
if strings.HasPrefix(a, "re:") {
re, e := regexp.Compile(a[3:])
if e != nil {
return e
}
c.Re = re
}
}
return nil
}
type Route struct {
Continue bool `yaml:"continue"`
Receiver string `yaml:"receiver"`
GroupWait Duration `yaml:"group_wait"`
GroupInterval Duration `yaml:"group_interval"`
RepeatInterval Duration `yaml:"repeat_interval"`
GroupBy []string `yaml:"group_by"`
Match []RouteMatchCriteria `yaml:"match"`
Routes []*Route `yaml:"routes"`
}
package app
import (
"bytes"
"database/sql"
"encoding/json"
"errors"
"fmt"
"net/http"
"os"
"regexp"
"strings"
"sync"
"sync/atomic"
"text/scanner"
"text/template"
"time"
"github.com/taosdata/alert/app/expr"
"github.com/taosdata/alert/models"
"github.com/taosdata/alert/utils"
"github.com/taosdata/alert/utils/log"
)
type Duration struct{ time.Duration }
func (d Duration) MarshalJSON() ([]byte, error) {
return json.Marshal(d.String())
}
func (d *Duration) doUnmarshal(v interface{}) error {
switch value := v.(type) {
case float64:
*d = Duration{time.Duration(value)}
return nil
case string:
if duration, e := time.ParseDuration(value); e != nil {
return e
} else {
*d = Duration{duration}
}
return nil
default:
return errors.New("invalid duration")
}
}
func (d *Duration) UnmarshalJSON(b []byte) error {
var v interface{}
if e := json.Unmarshal(b, &v); e != nil {
return e
}
return d.doUnmarshal(v)
}
const (
AlertStateWaiting = iota
AlertStatePending
AlertStateFiring
)
type Alert struct {
State uint8 `json:"-"`
LastRefreshAt time.Time `json:"-"`
StartsAt time.Time `json:"startsAt,omitempty"`
EndsAt time.Time `json:"endsAt,omitempty"`
Values map[string]interface{} `json:"values"`
Labels map[string]string `json:"labels"`
Annotations map[string]string `json:"annotations"`
}
func (alert *Alert) doRefresh(firing bool, rule *Rule) bool {
switch {
case (!firing) && (alert.State == AlertStateWaiting):
return false
case (!firing) && (alert.State == AlertStatePending):
alert.State = AlertStateWaiting
return false
case (!firing) && (alert.State == AlertStateFiring):
alert.State = AlertStateWaiting
alert.EndsAt = time.Now()
case firing && (alert.State == AlertStateWaiting):
alert.StartsAt = time.Now()
if rule.For.Nanoseconds() > 0 {
alert.State = AlertStatePending
return false
}
alert.State = AlertStateFiring
case firing && (alert.State == AlertStatePending):
		if time.Since(alert.StartsAt) < rule.For.Duration {
return false
}
alert.StartsAt = alert.StartsAt.Add(rule.For.Duration)
alert.State = AlertStateFiring
case firing && (alert.State == AlertStateFiring):
}
return true
}
func (alert *Alert) refresh(rule *Rule, values map[string]interface{}) {
alert.LastRefreshAt = time.Now()
defer func() {
switch x := recover().(type) {
case nil:
case error:
rule.setState(RuleStateError)
log.Errorf("[%s]: failed to evaluate: %s", rule.Name, x.Error())
default:
rule.setState(RuleStateError)
log.Errorf("[%s]: failed to evaluate: unknown error", rule.Name)
}
}()
alert.Values = values
res := rule.Expr.Eval(func(key string) interface{} {
// ToLower is required as column name in result is in lower case
return alert.Values[strings.ToLower(key)]
})
val, ok := res.(bool)
if !ok {
rule.setState(RuleStateError)
log.Errorf("[%s]: result type is not bool", rule.Name)
return
}
if !alert.doRefresh(val, rule) {
return
}
buf := bytes.Buffer{}
alert.Annotations = map[string]string{}
for k, v := range rule.Annotations {
if e := v.Execute(&buf, alert); e != nil {
log.Errorf("[%s]: failed to generate annotation '%s': %s", rule.Name, k, e.Error())
} else {
alert.Annotations[k] = buf.String()
}
buf.Reset()
}
buf.Reset()
if e := json.NewEncoder(&buf).Encode(alert); e != nil {
log.Errorf("[%s]: failed to serialize alert to JSON: %s", rule.Name, e.Error())
} else {
chAlert <- buf.String()
}
}
const (
RuleStateNormal = iota
RuleStateError
RuleStateDisabled
RuleStateRunning = 0x04
)
type Rule struct {
Name string `json:"name"`
State uint32 `json:"state"`
SQL string `json:"sql"`
GroupByCols []string `json:"-"`
For Duration `json:"for"`
Period Duration `json:"period"`
NextRunTime time.Time `json:"-"`
RawExpr string `json:"expr"`
Expr expr.Expr `json:"-"`
Labels map[string]string `json:"labels"`
RawAnnotations map[string]string `json:"annotations"`
Annotations map[string]*template.Template `json:"-"`
Alerts sync.Map `json:"-"`
}
func (rule *Rule) clone() *Rule {
return &Rule{
Name: rule.Name,
State: RuleStateNormal,
SQL: rule.SQL,
GroupByCols: rule.GroupByCols,
For: rule.For,
Period: rule.Period,
NextRunTime: time.Time{},
RawExpr: rule.RawExpr,
Expr: rule.Expr,
Labels: rule.Labels,
RawAnnotations: rule.RawAnnotations,
Annotations: rule.Annotations,
// don't copy alerts
}
}
func (rule *Rule) setState(s uint32) {
for {
old := atomic.LoadUint32(&rule.State)
new := old&0xffffffc0 | s
if atomic.CompareAndSwapUint32(&rule.State, old, new) {
break
}
}
}
func (rule *Rule) state() uint32 {
	// the mask must mirror setState, which stores the state in the low 6 bits
	return atomic.LoadUint32(&rule.State) & 0x3f
}
func (rule *Rule) isEnabled() bool {
state := atomic.LoadUint32(&rule.State)
return state&RuleStateDisabled == 0
}
func (rule *Rule) setNextRunTime(tm time.Time) {
rule.NextRunTime = tm.Round(rule.Period.Duration)
if rule.NextRunTime.Before(tm) {
rule.NextRunTime = rule.NextRunTime.Add(rule.Period.Duration)
}
}
func parseGroupBy(sql string) (cols []string, err error) {
defer func() {
if e := recover(); e != nil {
err = e.(error)
}
}()
s := scanner.Scanner{
Error: func(s *scanner.Scanner, msg string) {
panic(errors.New(msg))
},
Mode: scanner.ScanIdents | scanner.ScanInts | scanner.ScanFloats,
}
s.Init(strings.NewReader(sql))
if s.Scan() != scanner.Ident || strings.ToLower(s.TokenText()) != "select" {
		err = errors.New("only select statements are allowed")
return
}
hasGroupBy := false
for t := s.Scan(); t != scanner.EOF; t = s.Scan() {
if t != scanner.Ident {
continue
}
if strings.ToLower(s.TokenText()) != "group" {
continue
}
if s.Scan() != scanner.Ident {
continue
}
if strings.ToLower(s.TokenText()) == "by" {
hasGroupBy = true
break
}
}
if !hasGroupBy {
return
}
for {
if s.Scan() != scanner.Ident {
			err = errors.New("SQL statement syntax error")
return
}
col := strings.ToLower(s.TokenText())
cols = append(cols, col)
if s.Scan() != ',' {
break
}
}
return
}
func (rule *Rule) parseGroupBy() error {
	cols, e := parseGroupBy(rule.SQL)
	if e == nil {
		rule.GroupByCols = cols
	}
	return e
}
func (rule *Rule) getAlert(values map[string]interface{}) *Alert {
sb := strings.Builder{}
for _, name := range rule.GroupByCols {
		if value := values[name]; value != nil {
			sb.WriteString(fmt.Sprint(value))
		}
sb.WriteByte('_')
}
var alert *Alert
key := sb.String()
if v, ok := rule.Alerts.Load(key); ok {
alert = v.(*Alert)
}
if alert == nil {
alert = &Alert{Labels: map[string]string{}}
for k, v := range rule.Labels {
alert.Labels[k] = v
}
for _, name := range rule.GroupByCols {
value := values[name]
if value == nil {
alert.Labels[name] = ""
} else {
alert.Labels[name] = fmt.Sprint(value)
}
}
rule.Alerts.Store(key, alert)
}
return alert
}
func (rule *Rule) preRun(tm time.Time) bool {
if tm.Before(rule.NextRunTime) {
return false
}
rule.setNextRunTime(tm)
for {
state := atomic.LoadUint32(&rule.State)
if state != RuleStateNormal {
return false
}
if atomic.CompareAndSwapUint32(&rule.State, state, RuleStateRunning) {
break
}
}
return true
}
func (rule *Rule) run(db *sql.DB) {
	rows, e := db.Query(rule.SQL)
	if e != nil {
		log.Errorf("[%s]: failed to query TDengine: %s", rule.Name, e.Error())
		return
	}
	defer rows.Close()
	cols, e := rows.ColumnTypes()
	if e != nil {
		log.Errorf("[%s]: unable to get column information: %s", rule.Name, e.Error())
		return
	}
	for rows.Next() {
		values := make([]interface{}, 0, len(cols))
		for range cols {
			var v interface{}
			values = append(values, &v)
		}
		if e := rows.Scan(values...); e != nil {
			log.Errorf("[%s]: failed to scan row: %s", rule.Name, e.Error())
			continue
		}
m := make(map[string]interface{})
for i, col := range cols {
name := strings.ToLower(col.Name())
m[name] = *(values[i].(*interface{}))
}
alert := rule.getAlert(m)
alert.refresh(rule, m)
}
now := time.Now()
rule.Alerts.Range(func(k, v interface{}) bool {
alert := v.(*Alert)
if now.Sub(alert.LastRefreshAt) > rule.Period.Duration*10 {
rule.Alerts.Delete(k)
}
return true
})
}
func (rule *Rule) postRun() {
for {
old := atomic.LoadUint32(&rule.State)
new := old & ^uint32(RuleStateRunning)
if atomic.CompareAndSwapUint32(&rule.State, old, new) {
break
}
}
}
func newRule(str string) (*Rule, error) {
rule := Rule{}
e := json.NewDecoder(strings.NewReader(str)).Decode(&rule)
if e != nil {
return nil, e
}
if rule.Period.Nanoseconds() <= 0 {
rule.Period = Duration{time.Minute}
}
rule.setNextRunTime(time.Now())
if rule.For.Nanoseconds() < 0 {
rule.For = Duration{0}
}
if e = rule.parseGroupBy(); e != nil {
return nil, e
}
if expr, e := expr.Compile(rule.RawExpr); e != nil {
return nil, e
} else {
rule.Expr = expr
}
rule.Annotations = map[string]*template.Template{}
for k, v := range rule.RawAnnotations {
v = reValue.ReplaceAllStringFunc(v, func(s string) string {
// as column name in query result is always in lower case,
// we need to convert value reference in annotations to
// lower case
return strings.ToLower(s)
})
text := "{{$labels := .Labels}}{{$values := .Values}}" + v
tmpl, e := template.New(k).Parse(text)
if e != nil {
return nil, e
}
rule.Annotations[k] = tmpl
}
return &rule, nil
}
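`newRule` above prepends `{{$labels := .Labels}}{{$values := .Values}}` to every annotation so templates can refer to query results as `{{$values.colname}}` (lower-cased, since TDengine returns column names in lower case). A self-contained sketch of that rendering step, using only the standard `text/template` package; the `Alert` and `render` names here are illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Alert carries the two maps the annotation templates bind to.
type Alert struct {
	Values map[string]interface{}
	Labels map[string]string
}

// render expands one annotation: a fixed prelude binds $labels and
// $values so the template body can reference them directly.
func render(raw string, a *Alert) (string, error) {
	text := "{{$labels := .Labels}}{{$values := .Values}}" + raw
	tmpl, e := template.New("annotation").Parse(text)
	if e != nil {
		return "", e
	}
	buf := bytes.Buffer{}
	if e := tmpl.Execute(&buf, a); e != nil {
		return "", e
	}
	return buf.String(), nil
}

func main() {
	a := &Alert{
		Values: map[string]interface{}{"id": "c1", "avgspeed": 120.5},
		Labels: map[string]string{"ruleName": "CarTooFast"},
	}
	s, _ := render("car {{$values.id}} is too fast: {{$values.avgspeed}}km/h", a)
	fmt.Println(s)
}
```

The assignment actions in the prelude produce no output themselves, so the rendered string contains only the annotation body.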
const (
batchSize = 1024
)
var (
rules sync.Map
wg sync.WaitGroup
chStop = make(chan struct{})
chAlert = make(chan string, batchSize)
reValue = regexp.MustCompile(`\$values\.[_a-zA-Z0-9]+`)
)
func runRules() {
defer wg.Done()
db, e := sql.Open("taosSql", utils.Cfg.TDengine)
if e != nil {
log.Fatal("failed to connect to TDengine: ", e.Error())
}
defer db.Close()
ticker := time.NewTicker(time.Second)
defer ticker.Stop()
LOOP:
for {
var tm time.Time
select {
case <-chStop:
close(chAlert)
break LOOP
case tm = <-ticker.C:
}
rules.Range(func(k, v interface{}) bool {
rule := v.(*Rule)
if !rule.preRun(tm) {
return true
}
wg.Add(1)
go func(rule *Rule) {
defer wg.Done()
defer rule.postRun()
rule.run(db)
}(rule)
return true
})
}
}
func doPushAlerts(alerts []string) {
defer wg.Done()
if len(utils.Cfg.Receivers.AlertManager) == 0 {
return
}
buf := bytes.Buffer{}
buf.WriteByte('[')
for i, alert := range alerts {
if i > 0 {
buf.WriteByte(',')
}
buf.WriteString(alert)
}
buf.WriteByte(']')
log.Debug(buf.String())
resp, e := http.DefaultClient.Post(utils.Cfg.Receivers.AlertManager, "application/json", &buf)
if e != nil {
log.Errorf("failed to push alerts to downstream: %s", e.Error())
return
}
resp.Body.Close()
}
func pushAlerts() {
defer wg.Done()
ticker := time.NewTicker(time.Millisecond * 100)
defer ticker.Stop()
alerts := make([]string, 0, batchSize)
LOOP:
for {
select {
case alert := <-chAlert:
if utils.Cfg.Receivers.Console {
fmt.Print(alert)
}
if len(alert) == 0 {
if len(alerts) > 0 {
wg.Add(1)
doPushAlerts(alerts)
}
break LOOP
}
if len(alerts) == batchSize {
wg.Add(1)
go doPushAlerts(alerts)
alerts = make([]string, 0, batchSize)
}
alerts = append(alerts, alert)
case <-ticker.C:
if len(alerts) > 0 {
wg.Add(1)
go doPushAlerts(alerts)
alerts = make([]string, 0, batchSize)
}
}
}
}
func loadRuleFromDatabase() error {
allRules, e := models.LoadAllRule()
if e != nil {
log.Error("failed to load rules from database:", e.Error())
return e
}
count := 0
for _, r := range allRules {
rule, e := newRule(r.Content)
if e != nil {
log.Errorf("[%s]: parse failed: %s", r.Name, e.Error())
continue
}
if !r.Enabled {
rule.setState(RuleStateDisabled)
}
rules.Store(rule.Name, rule)
count++
}
log.Infof("total %d rules loaded", count)
return nil
}
func loadRuleFromFile() error {
	f, e := os.Open(utils.Cfg.RuleFile)
	if e != nil {
		log.Error("failed to load rules from file:", e.Error())
		return e
	}
	defer f.Close()
	// decode to raw messages first so each rule goes through newRule,
	// which compiles the expression and parses the group-by columns
	var rawRules []json.RawMessage
	if e = json.NewDecoder(f).Decode(&rawRules); e != nil {
		log.Error("failed to parse rule file:", e.Error())
		return e
	}
	count := 0
	for _, raw := range rawRules {
		rule, e := newRule(string(raw))
		if e != nil {
			log.Errorf("failed to parse rule: %s", e.Error())
			continue
		}
		rules.Store(rule.Name, rule)
		count++
	}
	log.Infof("total %d rules loaded", count)
	return nil
}
func initRule() error {
if len(utils.Cfg.Database) > 0 {
if e := loadRuleFromDatabase(); e != nil {
return e
}
} else {
if e := loadRuleFromFile(); e != nil {
return e
}
}
wg.Add(2)
go runRules()
go pushAlerts()
return nil
}
func uninitRule() error {
close(chStop)
wg.Wait()
return nil
}
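`pushAlerts` above accumulates alerts from a channel into batches of up to `batchSize` and flushes either when a batch fills, when the 100ms ticker fires, or when the channel closes at shutdown. A condensed, ticker-free sketch of that batching logic (the `drainBatches` helper is illustrative, not part of the package):

```go
package main

import "fmt"

// drainBatches collects strings from ch into slices of at most
// batchSize, emitting a final partial batch when the channel closes,
// analogous to the shutdown path of pushAlerts.
func drainBatches(ch <-chan string, batchSize int) [][]string {
	var out [][]string
	batch := make([]string, 0, batchSize)
	for a := range ch {
		batch = append(batch, a)
		if len(batch) == batchSize {
			out = append(out, batch)
			batch = make([]string, 0, batchSize)
		}
	}
	if len(batch) > 0 {
		out = append(out, batch) // final partial batch on shutdown
	}
	return out
}

func main() {
	ch := make(chan string, 8)
	for _, a := range []string{"a1", "a2", "a3"} {
		ch <- a
	}
	close(ch)
	fmt.Println(drainBatches(ch, 2))
}
```

The real code additionally hands each full batch to a goroutine (`go doPushAlerts(...)`) and allocates a fresh slice, so an in-flight POST never shares a backing array with the batch still being filled.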
package app
import (
"fmt"
"testing"
"github.com/taosdata/alert/utils/log"
)
func TestParseGroupBy(t *testing.T) {
cases := []struct {
sql string
cols []string
}{
{
sql: "select * from a",
cols: []string{},
},
{
sql: "select * from a group by abc",
cols: []string{"abc"},
},
{
sql: "select * from a group by abc, def",
cols: []string{"abc", "def"},
},
{
sql: "select * from a Group by abc, def order by abc",
cols: []string{"abc", "def"},
},
}
for _, c := range cases {
cols, e := parseGroupBy(c.sql)
if e != nil {
t.Errorf("failed to parse sql '%s': %s", c.sql, e.Error())
}
for i := range cols {
if i >= len(c.cols) {
t.Errorf("count of group by columns of '%s' is wrong", c.sql)
}
if c.cols[i] != cols[i] {
t.Errorf("wrong group by columns for '%s'", c.sql)
}
}
}
}
func TestManagement(t *testing.T) {
const format = `{"name":"rule%d", "sql":"select count(*) as count from meters", "expr":"count>2"}`
log.Init()
for i := 0; i < 5; i++ {
s := fmt.Sprintf(format, i)
rule, e := newRule(s)
if e != nil {
t.Errorf("failed to create rule: %s", e.Error())
}
e = doUpdateRule(rule, s)
if e != nil {
t.Errorf("failed to add or update rule: %s", e.Error())
}
}
for i := 0; i < 5; i++ {
name := fmt.Sprintf("rule%d", i)
if _, ok := rules.Load(name); !ok {
t.Errorf("rule '%s' does not exist", name)
}
}
name := "rule1"
if e := doDeleteRule(name); e != nil {
t.Errorf("failed to delete rule: %s", e.Error())
}
if _, ok := rules.Load(name); ok {
t.Errorf("rule '%s' should not exist any more", name)
}
}
{
"port": 8100,
"database": "file:alert.db",
"tdengine": "root:taosdata@/tcp(127.0.0.1:0)/",
"log": {
"level": "debug",
"path": ""
},
"receivers": {
"alertManager": "http://127.0.0.1:9093/api/v1/alerts",
"console": true
}
}
#!/bin/bash
#
# This file is used to install TDengine client library on linux systems.
set -e
#set -x
# -----------------------Variables definition---------------------
script_dir=$(dirname $(readlink -f "$0"))
# Dynamic directory
lib_link_dir="/usr/lib"
#install main path
install_main_dir="/usr/local/taos"
# Color setting
RED='\033[0;31m'
GREEN='\033[1;32m'
GREEN_DARK='\033[0;32m'
GREEN_UNDERLINE='\033[4;32m'
NC='\033[0m'
csudo=""
if command -v sudo > /dev/null; then
csudo="sudo"
fi
function clean_driver() {
${csudo} rm -f /usr/lib/libtaos.so || :
}
function install_driver() {
echo -e "${GREEN}Start to install TDengine client driver ...${NC}"
#create install main dir and all sub dir
${csudo} mkdir -p ${install_main_dir}
${csudo} mkdir -p ${install_main_dir}/driver
${csudo} rm -f ${lib_link_dir}/libtaos.* || :
${csudo} cp -rf ${script_dir}/driver/* ${install_main_dir}/driver && ${csudo} chmod 777 ${install_main_dir}/driver/*
${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.so.1
${csudo} ln -s ${lib_link_dir}/libtaos.so.1 ${lib_link_dir}/libtaos.so
echo
echo -e "\033[44;32;1mTDengine client driver is successfully installed!${NC}"
}
install_driver
package main
import (
"context"
"flag"
"fmt"
"io"
"net/http"
"os"
"os/signal"
"path/filepath"
"runtime"
"strconv"
"time"
"github.com/taosdata/alert/app"
"github.com/taosdata/alert/models"
"github.com/taosdata/alert/utils"
"github.com/taosdata/alert/utils/log"
_ "github.com/mattn/go-sqlite3"
_ "github.com/taosdata/driver-go/taosSql"
)
type httpHandler struct {
}
func (h *httpHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
start := time.Now()
path := r.URL.Path
http.DefaultServeMux.ServeHTTP(w, r)
	duration := time.Since(start)
log.Debugf("[%s]\t%s\t%s", r.Method, path, duration)
}
func serveWeb() *http.Server {
log.Info("Listening at port: ", utils.Cfg.Port)
srv := &http.Server{
Addr: ":" + strconv.Itoa(int(utils.Cfg.Port)),
Handler: &httpHandler{},
}
go func() {
if e := srv.ListenAndServe(); e != nil {
log.Error(e.Error())
}
}()
return srv
}
func copyFile(dst, src string) error {
if dst == src {
return nil
}
in, e := os.Open(src)
if e != nil {
return e
}
defer in.Close()
out, e := os.Create(dst)
if e != nil {
return e
}
defer out.Close()
_, e = io.Copy(out, in)
return e
}
func doSetup(cfgPath string) error {
exePath, e := os.Executable()
if e != nil {
fmt.Fprintf(os.Stderr, "failed to get executable path: %s\n", e.Error())
return e
}
if !filepath.IsAbs(cfgPath) {
dir := filepath.Dir(exePath)
cfgPath = filepath.Join(dir, cfgPath)
}
e = copyFile("/etc/taos/alert.cfg", cfgPath)
if e != nil {
fmt.Fprintf(os.Stderr, "failed copy configuration file: %s\n", e.Error())
return e
}
f, e := os.Create("/etc/systemd/system/alert.service")
if e != nil {
fmt.Printf("failed to create alert service: %s\n", e.Error())
return e
}
defer f.Close()
const content = `[Unit]
Description=Alert (TDengine Alert Service)
After=syslog.target
After=network.target
[Service]
RestartSec=2s
Type=simple
WorkingDirectory=/var/lib/taos/
ExecStart=%s -cfg /etc/taos/alert.cfg
Restart=always
[Install]
WantedBy=multi-user.target
`
_, e = fmt.Fprintf(f, content, exePath)
if e != nil {
fmt.Printf("failed to create alert.service: %s\n", e.Error())
return e
}
return nil
}
const version = "TDengine alert v1.0.0"
func main() {
var (
cfgPath string
setup bool
showVersion bool
)
flag.StringVar(&cfgPath, "cfg", "alert.cfg", "path of configuration file")
flag.BoolVar(&setup, "setup", false, "setup the service as a daemon")
flag.BoolVar(&showVersion, "version", false, "show version information")
flag.Parse()
if showVersion {
fmt.Println(version)
return
}
if setup {
if runtime.GOOS == "linux" {
doSetup(cfgPath)
} else {
			fmt.Fprintln(os.Stderr, "daemon mode is only supported on linux.")
}
return
}
if e := utils.LoadConfig(cfgPath); e != nil {
fmt.Fprintln(os.Stderr, "failed to load configuration:", e.Error())
return
}
}
if e := log.Init(); e != nil {
fmt.Fprintln(os.Stderr, "failed to initialize logger:", e.Error())
return
}
defer log.Sync()
if e := models.Init(); e != nil {
log.Fatal("failed to initialize database:", e.Error())
}
if e := app.Init(); e != nil {
log.Fatal("failed to initialize application:", e.Error())
}
// start web server
srv := serveWeb()
// wait for `Ctrl-C` (SIGINT) to exit; note that os.Kill (SIGKILL) cannot actually be trapped
interrupt := make(chan os.Signal, 1) // signal.Notify requires a buffered channel
signal.Notify(interrupt, os.Interrupt)
signal.Notify(interrupt, os.Kill)
<-interrupt
fmt.Println("'Ctrl + C' received, exiting...")
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
srv.Shutdown(ctx)
cancel()
app.Uninit()
models.Uninit()
}
{
"name": "CarTooFast",
"period": "10s",
"sql": "select avg(speed) as avgspeed from test.cars where ts > now - 5m group by id",
"expr": "avgSpeed > 100",
"for": "0s",
"labels": {
"ruleName": "CarTooFast"
},
"annotations": {
"summary": "car {{$values.id}} is too fast, its average speed is {{$values.avgSpeed}}km/h"
}
}
module github.com/taosdata/alert
go 1.14
require (
github.com/jmoiron/sqlx v1.2.0
github.com/mattn/go-sqlite3 v2.0.3+incompatible
github.com/taosdata/driver-go v0.0.0-20200311072652-8c58c512b6ac
go.uber.org/zap v1.14.1
google.golang.org/appengine v1.6.5 // indirect
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c
)
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-sql-driver/mysql v1.4.0 h1:7LxgVwFb2hIQtMm87NdgAVfXjnt4OePseqT1tKx+opk=
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/jmoiron/sqlx v1.2.0 h1:41Ip0zITnmWNR/vHV+S4m+VoUivnWY5E4OJfLZjCJMA=
github.com/jmoiron/sqlx v1.2.0/go.mod h1:1FEQNm3xlJgrMD+FBdI9+xvCksHtbpVBBw5dYhBSsks=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/lib/pq v1.0.0 h1:X5PMW56eZitiTeO7tKzZxFCSpbFZJtkMMooicw2us9A=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/mattn/go-sqlite3 v1.9.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/mattn/go-sqlite3 v2.0.3+incompatible h1:gXHsfypPkaMZrKbD5209QV9jbUTJKjyR5WD3HYQSd+U=
github.com/mattn/go-sqlite3 v2.0.3+incompatible/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/taosdata/driver-go v0.0.0-20200311072652-8c58c512b6ac h1:uZplMwObJj8mfgI4ZvYPNHRn+fNz2leiMPqShsjtEEc=
github.com/taosdata/driver-go v0.0.0-20200311072652-8c58c512b6ac/go.mod h1:TuMZDpnBrjNO07rneM2C5qMYFqIro4aupL2cUOGGo/I=
go.uber.org/atomic v1.5.0 h1:OI5t8sDa1Or+q8AeE+yKeB/SDYioSHAgcVljj9JIETY=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.6.0 h1:Ezj3JGmsOnG1MoRWQkPBsKLe9DwWD9QeXzTRzzldNVk=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/multierr v1.3.0 h1:sFPn2GLc3poCkfrpIXGhBD2X0CMIo4Q/zSULXrj/+uc=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/multierr v1.5.0 h1:KCa4XfM8CWFCpxXRGok+Q0SS/0XBhMDbHHGABQLvD2A=
go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee h1:0mgffUl7nfd+FpvXMVz4IDEaUSmT1ysygQC7qYo7sG4=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.14.0 h1:/pduUoebOeeJzTDFuoMgC6nRkiasr1sBCIEorly7m4o=
go.uber.org/zap v1.14.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
go.uber.org/zap v1.14.1 h1:nYDKopTbvAPq/NrUVZwT15y2lpROBiLLyoRTbXOYWOo=
go.uber.org/zap v1.14.1/go.mod h1:Mb2vm2krFEG5DV0W9qcHBYFtp/Wku1cvYaqPsS/WYfc=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5 h1:hKsoRgsbwY1NafxrwTs+k64bikrLBkAgPir1TNCj3Zs=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.6.5 h1:tycE03LOZYQNhDpS27tcQdAzLCVMaj7QT2SXxebnpCM=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
package models
import (
"fmt"
"strconv"
"time"
"github.com/jmoiron/sqlx"
"github.com/taosdata/alert/utils"
"github.com/taosdata/alert/utils/log"
)
var db *sqlx.DB
func Init() error {
xdb, e := sqlx.Connect("sqlite3", utils.Cfg.Database)
if e != nil {
// without this check, a failed connection left db nil and upgrade() panicked
return e
}
db = xdb
return upgrade()
}
func Uninit() error {
return db.Close()
}
func getStringOption(tx *sqlx.Tx, name string) (string, error) {
const qs = "SELECT * FROM `option` WHERE `name`=?"
var (
e error
o struct {
Name string `db:"name"`
Value string `db:"value"`
}
)
if tx != nil {
e = tx.Get(&o, qs, name)
} else {
e = db.Get(&o, qs, name)
}
if e != nil {
return "", e
}
return o.Value, nil
}
func getIntOption(tx *sqlx.Tx, name string) (int, error) {
s, e := getStringOption(tx, name)
if e != nil {
return 0, e
}
v, e := strconv.ParseInt(s, 10, 64)
return int(v), e
}
func setOption(tx *sqlx.Tx, name string, value interface{}) error {
const qs = "REPLACE INTO `option`(`name`, `value`) VALUES(?, ?);"
var (
e error
sv string
)
switch v := value.(type) {
case time.Time:
sv = v.Format(time.RFC3339)
default:
sv = fmt.Sprint(value)
}
if tx != nil {
_, e = tx.Exec(qs, name, sv)
} else {
_, e = db.Exec(qs, name, sv)
}
return e
}
var upgradeScripts = []struct {
ver int
stmts []string
}{
{
ver: 0,
stmts: []string{
"CREATE TABLE `option`( `name` VARCHAR(63) PRIMARY KEY, `value` VARCHAR(255) NOT NULL) WITHOUT ROWID;",
"CREATE TABLE `rule`( `name` VARCHAR(63) PRIMARY KEY, `enabled` TINYINT(1) NOT NULL, `created_at` DATETIME NOT NULL, `updated_at` DATETIME NOT NULL, `content` TEXT(65535) NOT NULL);",
},
},
}
func upgrade() error {
const dbVersion = "database version"
ver, e := getIntOption(nil, dbVersion)
if e != nil { // treat any error as "schema not yet created"
ver = -1 // set ver to -1 so every upgrade script runs
}
}
tx, e := db.Beginx()
if e != nil {
return e
}
for _, us := range upgradeScripts {
if us.ver <= ver {
continue
}
log.Info("upgrading database to version: ", us.ver)
for _, s := range us.stmts {
if _, e = tx.Exec(s); e != nil {
tx.Rollback()
return e
}
}
ver = us.ver
}
if e = setOption(tx, dbVersion, ver); e != nil {
tx.Rollback()
return e
}
return tx.Commit()
}
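The version gate in `upgrade()` above can be isolated from the database: scripts at or below the stored version are skipped, later ones run in order and bump the version. A minimal stdlib-only sketch of that selection logic, with an in-memory slice standing in for `tx.Exec` (the `migration` type and `applyMigrations` name are introduced here for illustration):

```go
package main

import "fmt"

type migration struct {
	ver   int
	stmts []string
}

// applyMigrations returns the new schema version and the statements that
// would be executed, skipping scripts already covered by `current`.
func applyMigrations(current int, scripts []migration) (int, []string) {
	var executed []string
	for _, s := range scripts {
		if s.ver <= current {
			continue // already applied
		}
		executed = append(executed, s.stmts...)
		current = s.ver
	}
	return current, executed
}

func main() {
	scripts := []migration{
		{ver: 0, stmts: []string{"CREATE TABLE `option` ...;"}},
		{ver: 1, stmts: []string{"ALTER TABLE `rule` ...;"}},
	}
	// a database already at version 0 only runs the version-1 script
	ver, run := applyMigrations(0, scripts)
	fmt.Println(ver, len(run))
}
```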
package models
import "time"
const (
sqlSelectAllRule = "SELECT * FROM `rule`;"
sqlSelectRule = "SELECT * FROM `rule` WHERE `name` = ?;"
sqlInsertRule = "INSERT INTO `rule`(`name`, `enabled`, `created_at`, `updated_at`, `content`) VALUES(:name, :enabled, :created_at, :updated_at, :content);"
sqlUpdateRule = "UPDATE `rule` SET `content` = :content, `updated_at` = :updated_at WHERE `name` = :name;"
sqlEnableRule = "UPDATE `rule` SET `enabled` = :enabled, `updated_at` = :updated_at WHERE `name` = :name;"
sqlDeleteRule = "DELETE FROM `rule` WHERE `name` = ?;"
)
type Rule struct {
Name string `db:"name"`
Enabled bool `db:"enabled"`
CreatedAt time.Time `db:"created_at"`
UpdatedAt time.Time `db:"updated_at"`
Content string `db:"content"`
}
func AddRule(r *Rule) error {
r.CreatedAt = time.Now()
r.Enabled = true
r.UpdatedAt = r.CreatedAt
_, e := db.NamedExec(sqlInsertRule, r)
return e
}
func UpdateRule(name string, content string) error {
r := Rule{
Name: name,
UpdatedAt: time.Now(),
Content: content,
}
_, e := db.NamedExec(sqlUpdateRule, &r)
return e
}
type ruleNotFoundError string

func (e ruleNotFoundError) Error() string { return "rule not found: " + string(e) }

func EnableRule(name string, enabled bool) error {
r := Rule{
Name: name,
Enabled: enabled,
UpdatedAt: time.Now(),
}
res, e := db.NamedExec(sqlEnableRule, &r)
if e != nil {
return e
}
// the old code compared RowsAffected against 1 but then returned the (nil)
// error, silently swallowing the "no such rule" case
if n, e := res.RowsAffected(); e != nil {
return e
} else if n != 1 {
return ruleNotFoundError(name)
}
return nil
}
func DeleteRule(name string) error {
_, e := db.Exec(sqlDeleteRule, name)
return e
}
func GetRuleByName(name string) (*Rule, error) {
r := Rule{}
if e := db.Get(&r, sqlSelectRule, name); e != nil {
return nil, e
}
return &r, nil
}
func LoadAllRule() ([]Rule, error) {
var rules []Rule
if e := db.Select(&rules, sqlSelectAllRule); e != nil {
return nil, e
}
return rules, nil
}
set -e
# release.sh -c [arm | arm64 | x64 | x86]
# -o [linux | darwin | windows]
# set parameters by default value
cpuType=x64 # [arm | arm64 | x64 | x86]
osType=linux # [linux | darwin | windows]
while getopts "h:c:o:" arg
do
case $arg in
c)
#echo "cpuType=$OPTARG"
cpuType=$(echo $OPTARG)
;;
o)
#echo "osType=$OPTARG"
osType=$(echo $OPTARG)
;;
h)
echo "Usage: `basename $0` -c [arm | arm64 | x64 | x86] -o [linux | darwin | windows]"
exit 0
;;
?) #unknown option
echo "unknown argument"
exit 1
;;
esac
done
startdir=$(pwd)
scriptdir=$(dirname $(readlink -f $0))
cd ${scriptdir}/cmd/alert
version=$(grep 'const version =' main.go | awk '{print $NF}')
version=${version%\"}
echo "cpuType=${cpuType}"
echo "osType=${osType}"
echo "version=${version}"
GOOS=${osType} GOARCH=${cpuType} go build
GZIP=-9 tar -zcf ${startdir}/tdengine-alert-${version}-${osType}-${cpuType}.tar.gz alert alert.cfg install_driver.sh driver/
sql connect
sleep 100
sql drop database if exists test
sql create database test
sql use test
print ====== create super table
sql create table cars (ts timestamp, speed int) tags(id int)
print ====== create tables
$i = 0
while $i < 5
$tb = car . $i
sql create table $tb using cars tags( $i )
$i = $i + 1
endw
{
"name": "test1",
"period": "10s",
"sql": "select avg(speed) as avgspeed from test.cars group by id",
"expr": "avgSpeed >= 3",
"for": "0s",
"labels": {
"ruleName": "test1"
},
"annotations": {
"summary": "speed of car(id = {{$labels.id}}) is too high: {{$values.avgSpeed}}"
}
}
sql connect
sleep 100
print ====== insert 10 records to table 0
$i = 10
while $i > 0
$ms = $i . s
sql insert into test.car0 values(now - $ms , 1)
$i = $i - 1
endw
sql connect
sleep 100
print ====== insert another record into table 0
sql insert into test.car0 values(now , 100)
sql connect
sleep 100
print ====== insert 10 records to table 0, 1, 2
$i = 10
while $i > 0
$ms = $i . s
sql insert into test.car0 values(now - $ms , 1)
sql insert into test.car1 values(now - $ms , $i )
sql insert into test.car2 values(now - $ms , 10)
$i = $i - 1
endw
# wait until $1 alerts have been generated, waiting at most $2 seconds
# return 0 if the wait succeeded, 1 on timeout
function waitAlert() {
local i=0
while [ $i -lt $2 ]; do
local c=$(wc -l alert.out | awk '{print $1}')
if [ $c -ge $1 ]; then
return 0
fi
let "i=$i+1"
sleep 1s
done
return 1
}
# prepare environment
kill -INT `ps aux | grep 'alert -cfg' | grep -v grep | awk '{print $2}'`
rm -f alert.db
rm -f alert.out
../cmd/alert/alert -cfg ../cmd/alert/alert.cfg > alert.out &
../../td/debug/build/bin/tsim -c /etc/taos -f ./prepare.sim
# add a rule to alert application
curl -d '@rule.json' http://localhost:8100/api/update-rule
# step 1: add some data that should not trigger an alert
../../td/debug/build/bin/tsim -c /etc/taos -f ./step1.sim
# wait 20 seconds, should not get an alert
waitAlert 1 20
res=$?
if [ $res -eq 0 ]; then
echo 'should not have alerts here'
exit 1
fi
# step 2: trigger an alert
../../td/debug/build/bin/tsim -c /etc/taos -f ./step2.sim
# wait 30 seconds for the alert
waitAlert 1 30
res=$?
if [ $res -eq 1 ]; then
echo 'there should be an alert now'
exit 1
fi
# check whether the generated alert meets the expectation
diff <(uniq alert.out | sed -n 1p | jq -cS 'del(.startsAt, .endsAt)') <(jq -cSn '{"values":{"avgspeed":10,"id":0},"labels":{"id":"0","ruleName":"test1"},"annotations":{"summary":"speed of car(id = 0) is too high: 10"}}')
if [ $? -ne 0 ]; then
echo 'the generated alert does not meet expectation'
exit 1
fi
# step 3: add more data, trigger another 3 alerts
../../td/debug/build/bin/tsim -c /etc/taos -f ./step3.sim
# wait 30 seconds for the alerts
waitAlert 4 30
res=$?
if [ $res -eq 1 ]; then
echo 'there should be 4 alerts now'
exit 1
fi
# check whether the generated alerts meet the expectation
diff <(uniq alert.out | sed -n 2p | jq -cS 'del(.startsAt, .endsAt)') <(jq -cSn '{"annotations":{"summary":"speed of car(id = 0) is too high: 5.714285714285714"},"labels":{"id":"0","ruleName":"test1"},"values":{"avgspeed":5.714285714285714,"id":0}}')
if [ $? -ne 0 ]; then
echo 'the generated alert does not meet expectation'
exit 1
fi
diff <(uniq alert.out | sed -n 3p | jq -cS 'del(.startsAt, .endsAt)') <(jq -cSn '{"annotations":{"summary":"speed of car(id = 1) is too high: 5.5"},"labels":{"id":"1","ruleName":"test1"},"values":{"avgspeed":5.5,"id":1}}')
if [ $? -ne 0 ]; then
echo 'the generated alert does not meet expectation'
exit 1
fi
diff <(uniq alert.out | sed -n 4p | jq -cS 'del(.startsAt, .endsAt)') <(jq -cSn '{"annotations":{"summary":"speed of car(id = 2) is too high: 10"},"labels":{"id":"2","ruleName":"test1"},"values":{"avgspeed":10,"id":2}}')
if [ $? -ne 0 ]; then
echo 'the generated alert does not meet expectation'
exit 1
fi
kill -INT `ps aux | grep 'alert -cfg' | grep -v grep | awk '{print $2}'`
package utils
import (
"encoding/json"
"os"
"gopkg.in/yaml.v3"
)
type Config struct {
Port uint16 `json:"port,omitempty" yaml:"port,omitempty"`
Database string `json:"database,omitempty" yaml:"database,omitempty"`
RuleFile string `json:"ruleFile,omitempty" yaml:"ruleFile,omitempty"`
Log struct {
Level string `json:"level,omitempty" yaml:"level,omitempty"`
Path string `json:"path,omitempty" yaml:"path,omitempty"`
} `json:"log" yaml:"log"`
TDengine string `json:"tdengine,omitempty" yaml:"tdengine,omitempty"`
Receivers struct {
AlertManager string `json:"alertManager,omitempty" yaml:"alertManager,omitempty"`
Console bool `json:"console"`
} `json:"receivers" yaml:"receivers"`
}
var Cfg Config
func LoadConfig(path string) error {
f, e := os.Open(path)
if e != nil {
return e
}
defer f.Close()
e = yaml.NewDecoder(f).Decode(&Cfg)
if e != nil {
// not valid YAML; rewind and try JSON
f.Seek(0, 0)
e = json.NewDecoder(f).Decode(&Cfg)
}
return e
}
package log
import (
"github.com/taosdata/alert/utils"
"go.uber.org/zap"
)
var logger *zap.SugaredLogger
func Init() error {
var cfg zap.Config
if utils.Cfg.Log.Level == "debug" {
cfg = zap.NewDevelopmentConfig()
} else {
cfg = zap.NewProductionConfig()
}
if len(utils.Cfg.Log.Path) > 0 {
cfg.OutputPaths = []string{utils.Cfg.Log.Path}
}
l, e := cfg.Build()
if e != nil {
return e
}
logger = l.Sugar()
return nil
}
// Debug logs a message at debug level using the package logger
func Debug(args ...interface{}) {
logger.Debug(args...)
}
// Debugf logs a formatted message at debug level using the package logger
func Debugf(template string, args ...interface{}) {
logger.Debugf(template, args...)
}
// Info logs a message at info level using the package logger
func Info(args ...interface{}) {
logger.Info(args...)
}
// Infof logs a formatted message at info level using the package logger
func Infof(template string, args ...interface{}) {
logger.Infof(template, args...)
}
// Warn logs a message at warn level using the package logger
func Warn(args ...interface{}) {
logger.Warn(args...)
}
// Warnf logs a formatted message at warn level using the package logger
func Warnf(template string, args ...interface{}) {
logger.Warnf(template, args...)
}
// Error logs a message at error level using the package logger
func Error(args ...interface{}) {
logger.Error(args...)
}
// Errorf logs a formatted message at error level using the package logger
func Errorf(template string, args ...interface{}) {
logger.Errorf(template, args...)
}
// Fatal logs a message at fatal level and then exits the process
func Fatal(args ...interface{}) {
logger.Fatal(args...)
}
// Fatalf logs a formatted message at fatal level and then exits the process
func Fatalf(template string, args ...interface{}) {
logger.Fatalf(template, args...)
}
// Panic logs a message at panic level and then panics
func Panic(args ...interface{}) {
logger.Panic(args...)
}
// Panicf logs a formatted message at panic level and then panics
func Panicf(template string, args ...interface{}) {
logger.Panicf(template, args...)
}
func Sync() error {
return logger.Sync()
}
......@@ -740,7 +740,7 @@ The return value is like:
## Go Connector
TDengine also provides a Go client package named _taosSql_ for users to access TDengine with Go. The package is in _/usr/local/taos/connector/go/src/taosSql_ by default if you installed TDengine. Users can copy the directory _/usr/local/taos/connector/go/src/taosSql_ to the _src_ directory of your project and import the package in the source code for use.
TDengine also provides a Go client package named _taosSql_ for users to access TDengine with Go. The package is in _/usr/local/taos/connector/go/driver-go/taosSql_ by default if you installed TDengine. Users can copy the directory _/usr/local/taos/connector/go/driver-go/taosSql_ to the _src_ directory of your project and import the package in the source code for use.
```Go
import (
......
......@@ -10,6 +10,7 @@ set -e
# -o [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | ...]
# -V [stable | beta]
# -l [full | lite]
# -u [yes | no]
# set parameters by default value
verMode=edge # [cluster, edge]
......@@ -17,8 +18,9 @@ verType=stable # [stable, beta]
cpuType=x64 # [aarch32 | aarch64 | x64 | x86 | mips64 ...]
osType=Linux # [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | ...]
pagMode=full # [full | lite]
cloudVer=no # [yes | no]
while getopts "hv:V:c:o:l:" arg
while getopts "hv:V:c:o:l:u:" arg
do
case $arg in
v)
......@@ -41,8 +43,12 @@ do
#echo "osType=$OPTARG"
osType=$(echo $OPTARG)
;;
u)
#echo "cloudVer=$OPTARG"
cloudVer=$(echo $OPTARG)
;;
h)
echo "Usage: `basename $0` -v [cluster | edge] -c [aarch32 | aarch64 | x64 | x86 | mips64 ...] -o [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | ...] -V [stable | beta] -l [full | lite]"
echo "Usage: `basename $0` -v [cluster | edge] -c [aarch32 | aarch64 | x64 | x86 | mips64 ...] -o [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | ...] -V [stable | beta] -l [full | lite] -u [yes | no]"
exit 0
;;
?) #unknow option
......@@ -52,7 +58,7 @@ do
esac
done
echo "verMode=${verMode} verType=${verType} cpuType=${cpuType} osType=${osType} pagMode=${pagMode}"
echo "verMode=${verMode} verType=${verType} cpuType=${cpuType} osType=${osType} pagMode=${pagMode} cloudVer=${cloudVer}"
curr_dir=$(pwd)
......@@ -204,7 +210,7 @@ if [[ "$cpuType" == "x64" ]] || [[ "$cpuType" == "aarch64" ]] || [[ "$cpuType" =
if [ "$verMode" != "cluster" ]; then
cmake ../ -DCPUTYPE=${cpuType} -DPAGMODE=${pagMode}
else
cmake ../../ -DCPUTYPE=${cpuType}
cmake ../../ -DCPUTYPE=${cpuType} -DCLOUDVER=${cloudVer}
fi
else
echo "input cpuType=${cpuType} error!!!"
......@@ -244,8 +250,8 @@ if [ "$osType" != "Darwin" ]; then
echo "====do tar.gz package for all systems===="
cd ${script_dir}/tools
${csudo} ./makepkg.sh ${compile_dir} ${version} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode}
${csudo} ./makeclient.sh ${compile_dir} ${version} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode}
${csudo} ./makepkg.sh ${compile_dir} ${version} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${cloudVer}
${csudo} ./makeclient.sh ${compile_dir} ${version} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${cloudVer}
else
cd ${script_dir}/tools
./makeclient.sh ${compile_dir} ${version} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType}
......
......@@ -13,6 +13,7 @@ osType=$5
verMode=$6
verType=$7
pagMode=$8
cloudVer=$9
if [ "$osType" != "Darwin" ]; then
script_dir="$(dirname $(readlink -f $0))"
......@@ -122,7 +123,11 @@ fi
cd ${release_dir}
if [ "$verMode" == "cluster" ]; then
pkg_name=${install_dir}-${version}-${osType}-${cpuType}
if [ "$cloudVer" == "yes" ]; then
pkg_name=${install_dir}-cloud-${version}-${osType}-${cpuType}
else
pkg_name=${install_dir}-${version}-${osType}-${cpuType}
fi
elif [ "$verMode" == "edge" ]; then
pkg_name=${install_dir}-${version}-${osType}-${cpuType}
else
......
......@@ -14,6 +14,7 @@ osType=$5
verMode=$6
verType=$7
pagMode=$8
cloudVer=$9
script_dir="$(dirname $(readlink -f $0))"
top_dir="$(readlink -f ${script_dir}/../..)"
......@@ -131,7 +132,11 @@ fi
cd ${release_dir}
if [ "$verMode" == "cluster" ]; then
pkg_name=${install_dir}-${version}-${osType}-${cpuType}
if [ "$cloudVer" == "yes" ]; then
pkg_name=${install_dir}-cloud-${version}-${osType}-${cpuType}
else
pkg_name=${install_dir}-${version}-${osType}-${cpuType}
fi
elif [ "$verMode" == "edge" ]; then
pkg_name=${install_dir}-${version}-${osType}-${cpuType}
else
......
......@@ -197,7 +197,7 @@ typedef struct SDataBlockList {
typedef struct SQueryInfo {
int16_t command; // the command may be different for each subclause, so keep it seperately.
uint16_t type; // query/insert/import type
char intervalTimeUnit;
char slidingTimeUnit;
int64_t etime, stime;
int64_t intervalTime; // aggregation time interval
......
......@@ -347,8 +347,8 @@ void tscProcessAsyncRes(SSchedMsg *pMsg) {
(*pSql->fp)(pSql->param, taosres, code);
if (shouldFree) {
tscFreeSqlObj(pSql);
tscTrace("%p Async sql is automatically freed in async res", pSql);
tscFreeSqlObj(pSql);
}
}
......
......@@ -20,6 +20,7 @@
#include "thistogram.h"
#include "tinterpolation.h"
#include "tlog.h"
#include "tpercentile.h"
#include "tscJoinProcess.h"
#include "tscSyntaxtreefunction.h"
#include "tscompression.h"
......@@ -27,7 +28,6 @@
#include "ttime.h"
#include "ttypes.h"
#include "tutil.h"
#include "tpercentile.h"
#define GET_INPUT_CHAR(x) (((char *)((x)->aInputElemBuf)) + ((x)->startOffset) * ((x)->inputBytes))
#define GET_INPUT_CHAR_INDEX(x, y) (GET_INPUT_CHAR(x) + (y) * (x)->inputBytes)
......@@ -4104,8 +4104,6 @@ static void twa_function(SQLFunctionCtx *pCtx) {
if (pResInfo->superTableQ) {
memcpy(pCtx->aOutputBuf, pInfo, sizeof(STwaInfo));
}
// pCtx->numOfIteratedElems += notNullElems;
}
static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
......@@ -4138,7 +4136,6 @@ static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
pInfo->lastKey = primaryKey[index];
setTWALastVal(pCtx, pData, 0, pInfo);
// pCtx->numOfIteratedElems += 1;
pResInfo->hasResult = DATA_SET_FLAG;
if (pResInfo->superTableQ) {
......@@ -4403,10 +4400,8 @@ static double do_calc_rate(const SRateInfo* pRateInfo) {
}
}
int64_t duration = pRateInfo->lastKey - pRateInfo->firstKey;
duration = (duration + 500) / 1000;
double resultVal = ((double)diff) / duration;
double duration = (pRateInfo->lastKey - pRateInfo->firstKey) / 1000.0;
double resultVal = diff / duration;
pTrace("do_calc_rate() isIRate:%d firstKey:%" PRId64 " lastKey:%" PRId64 " firstValue:%f lastValue:%f CorrectionValue:%f resultVal:%f",
pRateInfo->isIRate, pRateInfo->firstKey, pRateInfo->lastKey, pRateInfo->firstValue, pRateInfo->lastValue, pRateInfo->CorrectionValue, resultVal);
......@@ -4447,62 +4442,191 @@ static void rate_function(SQLFunctionCtx *pCtx) {
TSKEY *primaryKey = pCtx->ptsList;
pTrace("%p rate_function() size:%d, hasNull:%d", pCtx, pCtx->size, pCtx->hasNull);
for (int32_t i = 0; i < pCtx->size; ++i) {
char *pData = GET_INPUT_CHAR_INDEX(pCtx, i);
if (pCtx->hasNull && isNull(pData, pCtx->inputType)) {
pTrace("%p rate_function() index of null data:%d", pCtx, i);
continue;
if (pCtx->order == TSQL_SO_ASC) {
#ifdef NOT_EQUINIX
// prev interpolation exists
if (pCtx->prev.key != -1) {
pRateInfo->firstValue = pCtx->prev.data;
pRateInfo->firstKey = pCtx->prev.key;
pCtx->prev.key = -1; // clear the flag
pTrace("%p get prev interpolation for firstValue:%f firstKey:%" PRId64, pCtx, pRateInfo->firstValue, pRateInfo->firstKey);
if (-DBL_MAX == pRateInfo->lastValue) {
pRateInfo->lastValue = pCtx->prev.data;
pRateInfo->lastKey = pCtx->prev.key;
} else if (pCtx->prev.data < pRateInfo->lastValue) {
pRateInfo->CorrectionValue += pRateInfo->lastValue;
pTrace("CorrectionValue:%f", pRateInfo->CorrectionValue);
pRateInfo->lastValue = pCtx->prev.data;
pRateInfo->lastKey = pCtx->prev.key;
pTrace("lastValue:%f lastKey:%" PRId64, pRateInfo->lastValue, pRateInfo->lastKey);
}
}
notNullElems++;
#endif
double v = 0;
switch (pCtx->inputType) {
case TSDB_DATA_TYPE_TINYINT:
v = (double)GET_INT8_VAL(pData);
break;
case TSDB_DATA_TYPE_SMALLINT:
v = (double)GET_INT16_VAL(pData);
break;
case TSDB_DATA_TYPE_INT:
v = (double)GET_INT32_VAL(pData);
break;
case TSDB_DATA_TYPE_BIGINT:
v = (double)GET_INT64_VAL(pData);
break;
case TSDB_DATA_TYPE_FLOAT:
v = (double)GET_FLOAT_VAL(pData);
break;
case TSDB_DATA_TYPE_DOUBLE:
v = (double)GET_DOUBLE_VAL(pData);
break;
default:
assert(0);
for (int32_t i = 0; i < pCtx->size; ++i) {
char *pData = GET_INPUT_CHAR_INDEX(pCtx, i);
if (pCtx->hasNull && isNull(pData, pCtx->inputType)) {
pTrace("%p rate_function() index of null data:%d", pCtx, i);
continue;
}
notNullElems++;
double v = 0;
switch (pCtx->inputType) {
case TSDB_DATA_TYPE_TINYINT:
v = (double)GET_INT8_VAL(pData);
break;
case TSDB_DATA_TYPE_SMALLINT:
v = (double)GET_INT16_VAL(pData);
break;
case TSDB_DATA_TYPE_INT:
v = (double)GET_INT32_VAL(pData);
break;
case TSDB_DATA_TYPE_BIGINT:
v = (double)GET_INT64_VAL(pData);
break;
case TSDB_DATA_TYPE_FLOAT:
v = (double)GET_FLOAT_VAL(pData);
break;
case TSDB_DATA_TYPE_DOUBLE:
v = (double)GET_DOUBLE_VAL(pData);
break;
default:
assert(0);
}
if ((-DBL_MAX == pRateInfo->firstValue) || (INT64_MIN == pRateInfo->firstKey)) {
pRateInfo->firstValue = v;
pRateInfo->firstKey = primaryKey[i];
pTrace("firstValue:%f firstKey:%" PRId64, pRateInfo->firstValue, pRateInfo->firstKey);
}
if (-DBL_MAX == pRateInfo->lastValue) {
pRateInfo->lastValue = v;
} else if (v < pRateInfo->lastValue) {
pRateInfo->CorrectionValue += pRateInfo->lastValue;
pTrace("CorrectionValue:%f", pRateInfo->CorrectionValue);
}
pRateInfo->lastValue = v;
pRateInfo->lastKey = primaryKey[i];
pTrace("lastValue:%f lastKey:%" PRId64, pRateInfo->lastValue, pRateInfo->lastKey);
}
if ((-DBL_MAX == pRateInfo->firstValue) || (INT64_MIN == pRateInfo->firstKey)) {
pRateInfo->firstValue = v;
pRateInfo->firstKey = primaryKey[i];
if (!pCtx->hasNull) {
assert(pCtx->size == notNullElems);
}
#ifdef NOT_EQUINIX
if (pCtx->next.key != -1) {
if (pCtx->next.data < pRateInfo->lastValue) {
pRateInfo->CorrectionValue += pRateInfo->lastValue;
pTrace("CorrectionValue:%f", pRateInfo->CorrectionValue);
}
pRateInfo->lastValue = pCtx->next.data;
pRateInfo->lastKey = pCtx->next.key;
pCtx->next.key = -1;
pTrace("%p get next interpolation for lastValue:%f lastKey:%" PRId64, pCtx, pRateInfo->lastValue, pRateInfo->lastKey);
}
#endif
} else {
#ifdef NOT_EQUINIX
if (pCtx->next.key != -1) {
pRateInfo->lastValue = pCtx->next.data;
pRateInfo->lastKey = pCtx->next.key;
pCtx->next.key = -1;
pTrace("%p get next interpolation for lastValue:%f lastKey:%" PRId64, pCtx, pRateInfo->lastValue, pRateInfo->lastKey);
if (-DBL_MAX == pRateInfo->firstValue) {
pRateInfo->firstValue = pCtx->next.data;
pRateInfo->firstKey = pCtx->next.key;
} else if (pCtx->next.data > pRateInfo->firstValue) {
pRateInfo->CorrectionValue += pCtx->next.data;
pTrace("CorrectionValue:%f", pRateInfo->CorrectionValue);
pRateInfo->firstValue = pCtx->next.data;
pRateInfo->firstKey = pCtx->next.key;
pTrace("firstValue:%f firstKey:%" PRId64, pRateInfo->firstValue, pRateInfo->firstKey);
}
}
#endif
for (int32_t i = pCtx->size - 1; i >= 0; --i) {
char *pData = GET_INPUT_CHAR_INDEX(pCtx, i);
if (pCtx->hasNull && isNull(pData, pCtx->inputType)) {
pTrace("%p rate_function() index of null data:%d", pCtx, i);
continue;
}
notNullElems++;
double v = 0;
switch (pCtx->inputType) {
case TSDB_DATA_TYPE_TINYINT:
v = (double)GET_INT8_VAL(pData);
break;
case TSDB_DATA_TYPE_SMALLINT:
v = (double)GET_INT16_VAL(pData);
break;
case TSDB_DATA_TYPE_INT:
v = (double)GET_INT32_VAL(pData);
break;
case TSDB_DATA_TYPE_BIGINT:
v = (double)GET_INT64_VAL(pData);
break;
case TSDB_DATA_TYPE_FLOAT:
v = (double)GET_FLOAT_VAL(pData);
break;
case TSDB_DATA_TYPE_DOUBLE:
v = (double)GET_DOUBLE_VAL(pData);
break;
default:
assert(0);
}
if ((-DBL_MAX == pRateInfo->lastValue) || (INT64_MIN == pRateInfo->lastKey)) {
pRateInfo->lastValue = v;
pRateInfo->lastKey = primaryKey[i];
pTrace("firstValue:%f firstKey:%" PRId64, pRateInfo->lastValue, pRateInfo->lastKey);
}
if (-DBL_MAX == pRateInfo->firstValue) {
pRateInfo->firstValue = v;
} else if (v > pRateInfo->firstValue) {
pRateInfo->CorrectionValue += v;
pTrace("CorrectionValue:%f", pRateInfo->CorrectionValue);
}
pRateInfo->firstValue = v;
pRateInfo->firstKey = primaryKey[i];
pTrace("firstValue:%f firstKey:%" PRId64, pRateInfo->firstValue, pRateInfo->firstKey);
}
if (-DBL_MAX == pRateInfo->lastValue) {
pRateInfo->lastValue = v;
} else if (v < pRateInfo->lastValue) {
pRateInfo->CorrectionValue += pRateInfo->lastValue;
pTrace("CorrectionValue:%f", pRateInfo->CorrectionValue);
if (!pCtx->hasNull) {
assert(pCtx->size == notNullElems);
}
pRateInfo->lastValue = v;
pRateInfo->lastKey = primaryKey[i];
pTrace("lastValue:%f lastKey:%" PRId64, pRateInfo->lastValue, pRateInfo->lastKey);
}
if (!pCtx->hasNull) {
assert(pCtx->size == notNullElems);
}
#ifdef NOT_EQUINIX
if (pCtx->prev.key != -1) {
if (pCtx->prev.data > pRateInfo->firstValue) {
pRateInfo->CorrectionValue += pCtx->prev.data;
pTrace("CorrectionValue:%f", pRateInfo->CorrectionValue);
}
pRateInfo->firstValue = pCtx->prev.data;
pRateInfo->firstKey = pCtx->prev.key;
pCtx->prev.key = -1;
pTrace("%p get prev interpolation for firstValue:%f firstKey:%" PRId64, pCtx, pRateInfo->firstValue, pRateInfo->firstKey);
}
#endif
};
SET_VAL(pCtx, notNullElems, 1);
......
@@ -4637,11 +4761,11 @@ static void rate_finalizer(SQLFunctionCtx *pCtx) {
pTrace("%p isIRate:%d firstKey:%" PRId64 " lastKey:%" PRId64 " firstValue:%f lastValue:%f CorrectionValue:%f hasResult:%d",
pCtx, pRateInfo->isIRate, pRateInfo->firstKey, pRateInfo->lastKey, pRateInfo->firstValue, pRateInfo->lastValue, pRateInfo->CorrectionValue, pRateInfo->hasResult);
if (pRateInfo->hasResult != DATA_SET_FLAG) {
if ((pRateInfo->hasResult != DATA_SET_FLAG) || (INT64_MIN == pRateInfo->lastKey) || (INT64_MIN == pRateInfo->firstKey)) {
setNull(pCtx->aOutputBuf, TSDB_DATA_TYPE_DOUBLE, sizeof(double));
return;
}
*(double*)pCtx->aOutputBuf = do_calc_rate(pRateInfo);
pTrace("rate_finalizer() output result:%f", *(double *)pCtx->aOutputBuf);
......
@@ -4668,7 +4792,17 @@ static void irate_function(SQLFunctionCtx *pCtx) {
if (pCtx->size < 1) {
return;
}
#ifdef NOT_EQUINIX
// next interpolation exists
if (pCtx->next.key != -1) {
pRateInfo->lastValue = pCtx->next.data;
pRateInfo->lastKey = pCtx->next.key;
pCtx->next.key = -1; // clear the flag
pTrace("%p irate_function() get next interpolation for lastValue:%f lastKey:%" PRId64, pCtx, pRateInfo->lastValue, pRateInfo->lastKey);
}
#endif
for (int32_t i = pCtx->size - 1; i >= 0; --i) {
char *pData = GET_INPUT_CHAR_INDEX(pCtx, i);
if (pCtx->hasNull && isNull(pData, pCtx->inputType)) {
......
@@ -4702,24 +4836,32 @@ static void irate_function(SQLFunctionCtx *pCtx) {
assert(0);
}
// TODO: compute the result only once if this function is called a single time
if ((INT64_MIN == pRateInfo->lastKey) || (-DBL_MAX == pRateInfo->lastValue)) {
if (1 == notNullElems) {
pRateInfo->lastValue = v;
pRateInfo->lastKey = primaryKey[i];
pTrace("%p irate_function() lastValue:%f lastKey:%" PRId64, pCtx, pRateInfo->lastValue, pRateInfo->lastKey);
continue;
}
if ((INT64_MIN == pRateInfo->firstKey) || (-DBL_MAX == pRateInfo->firstValue)){
pRateInfo->firstValue = v;
pRateInfo->firstKey = primaryKey[i];
pTrace("%p irate_function() firstValue:%f firstKey:%" PRId64, pCtx, pRateInfo->firstValue, pRateInfo->firstKey);
break;
}
pRateInfo->firstValue = v;
pRateInfo->firstKey = primaryKey[i];
pTrace("%p irate_function() firstValue:%f firstKey:%" PRId64, pCtx, pRateInfo->firstValue, pRateInfo->firstKey);
break;
}
#ifdef NOT_EQUINIX
if (pCtx->prev.key != -1) {
if ((INT64_MIN == pRateInfo->firstKey) || (-DBL_MAX == pRateInfo->firstValue)) {
pRateInfo->firstValue = pCtx->prev.data;
pRateInfo->firstKey = pCtx->prev.key;
pCtx->prev.key = -1;
pTrace("%p irate_function() get prev interpolation for firstValue:%f firstKey:%" PRId64, pCtx, pRateInfo->firstValue, pRateInfo->firstKey);
}
}
#endif
SET_VAL(pCtx, notNullElems, 1);
if (notNullElems > 0) {
......
@@ -4803,6 +4945,10 @@ static void do_sumrate_merge(SQLFunctionCtx *pCtx) {
if (pInput->hasResult != DATA_SET_FLAG) {
continue;
} else if (pInput->num == 0) {
if ((INT64_MIN == pInput->lastKey) || (INT64_MIN == pInput->firstKey)) {
continue;
}
pRateInfo->sum += do_calc_rate(pInput);
pRateInfo->num++;
} else {
......
@@ -4843,6 +4989,11 @@ static void sumrate_finalizer(SQLFunctionCtx *pCtx) {
if (pRateInfo->num == 0) {
// from meter
if ((INT64_MIN == pRateInfo->lastKey) || (INT64_MIN == pRateInfo->firstKey)) {
setNull(pCtx->aOutputBuf, TSDB_DATA_TYPE_DOUBLE, sizeof(double));
return;
}
*(double*)pCtx->aOutputBuf = do_calc_rate(pRateInfo);
} else if (pCtx->functionId == TSDB_FUNC_SUM_RATE || pCtx->functionId == TSDB_FUNC_SUM_IRATE) {
*(double*)pCtx->aOutputBuf = pRateInfo->sum;
......
......
@@ -254,7 +254,9 @@ int32_t tsParseOneColumnData(SSchema *pSchema, SSQLToken *pToken, char *payload,
if (pToken->type == TK_NULL) {
*((int32_t *)payload) = TSDB_DATA_FLOAT_NULL;
} else if ((pToken->type == TK_STRING) && (pToken->n != 0) &&
(strncasecmp(TSDB_DATA_NULL_STR_L, pToken->z, pToken->n) == 0)) {
((strncasecmp(TSDB_DATA_NULL_STR_L, pToken->z, pToken->n) == 0)
|| (strncasecmp("nan", pToken->z, pToken->n) == 0)
|| (strncasecmp("-nan", pToken->z, pToken->n) == 0))) {
*((int32_t *)payload) = TSDB_DATA_FLOAT_NULL;
} else {
double dv;
......
@@ -278,8 +280,10 @@ int32_t tsParseOneColumnData(SSchema *pSchema, SSQLToken *pToken, char *payload,
case TSDB_DATA_TYPE_DOUBLE:
if (pToken->type == TK_NULL) {
*((int64_t *)payload) = TSDB_DATA_DOUBLE_NULL;
} else if ((pToken->type == TK_STRING) && (pToken->n != 0) &&
(strncasecmp(TSDB_DATA_NULL_STR_L, pToken->z, pToken->n) == 0)) {
} else if ((pToken->type == TK_STRING) && (pToken->n != 0) &&
((strncasecmp(TSDB_DATA_NULL_STR_L, pToken->z, pToken->n) == 0)
|| (strncasecmp("nan", pToken->z, pToken->n) == 0)
|| (strncasecmp("-nan", pToken->z, pToken->n) == 0))) {
*((int64_t *)payload) = TSDB_DATA_DOUBLE_NULL;
} else {
double dv;
......
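The tsParseOneColumnData hunks above extend the NULL-string check so that the quoted literals "nan" and "-nan" (case-insensitive) are also stored as the float/double NULL sentinel. A minimal, self-contained Go sketch of that token classification (treating TSDB_DATA_NULL_STR_L as "null" is an assumption here, and the real C code does a length-limited strncasecmp rather than a full-string compare):

```go
package main

import (
	"fmt"
	"strings"
)

// isFloatNullToken mirrors the extended condition in the diff: a quoted
// token is treated as NULL if it equals "null", "nan", or "-nan",
// ignoring case. "null" standing in for TSDB_DATA_NULL_STR_L is an
// assumption for this sketch.
func isFloatNullToken(tok string) bool {
	switch strings.ToLower(tok) {
	case "null", "nan", "-nan":
		return true
	}
	return false
}

func main() {
	for _, t := range []string{"NaN", "-NAN", "Null", "3.14"} {
		fmt.Println(t, isFloatNullToken(t))
	}
}
```

Accepting "nan"/"-nan" matters because a client that round-trips a NaN value back into an INSERT statement would otherwise hit a parse error instead of storing NULL.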
......
@@ -292,7 +292,7 @@ void tscKillConnection(STscObj *pObj) {
pthread_mutex_unlock(&pObj->mutex);
taos_close(pObj);
tscTrace("connection:%p is killed", pObj);
taos_close(pObj);
}
......
@@ -598,9 +598,6 @@ int32_t parseIntervalClause(SQueryInfo* pQueryInfo, SQuerySQL* pQuerySql) {
pQueryInfo->intervalTime = pQueryInfo->intervalTime / 1000;
}
/* the parser has already filtered out illegal types, no need to check here */
pQueryInfo->intervalTimeUnit = pQuerySql->interval.z[pQuerySql->interval.n - 1];
// interval cannot be less than 10 milliseconds
if (pQueryInfo->intervalTime < tsMinIntervalTime) {
return invalidSqlErrMsg(pQueryInfo->msg, msg2);
......
@@ -689,10 +686,15 @@ int32_t parseSlidingClause(SQueryInfo* pQueryInfo, SQuerySQL* pQuerySql) {
if (pQueryInfo->slidingTime > pQueryInfo->intervalTime) {
return invalidSqlErrMsg(pQueryInfo->msg, msg1);
}
pQueryInfo->slidingTimeUnit = pQuerySql->sliding.z[pQuerySql->sliding.n - 1];
} else {
pQueryInfo->slidingTime = pQueryInfo->intervalTime;
// the parser has already filtered out illegal types, no need to check here
pQueryInfo->slidingTimeUnit = pQuerySql->interval.z[pQuerySql->interval.n - 1];
}
return TSDB_CODE_SUCCESS;
}
......
@@ -1636,13 +1638,16 @@ int32_t addExprAndResultField(SQueryInfo* pQueryInfo, int32_t colIdx, tSQLExprIt
// set the first column ts for diff query
if (optr == TK_DIFF) {
colIdx += 1;
SColumnIndex indexTS = {.tableIndex = index.tableIndex, .columnIndex = 0};
SColumnIndex indexTS = {.tableIndex = index.tableIndex, .columnIndex = PRIMARYKEY_TIMESTAMP_COL_INDEX};
SSqlExpr* pExpr = tscSqlExprInsert(pQueryInfo, 0, TSDB_FUNC_TS_DUMMY, &indexTS, TSDB_DATA_TYPE_TIMESTAMP,
TSDB_KEYSIZE, TSDB_KEYSIZE);
SColumnList ids = getColumnList(1, 0, 0);
insertResultField(pQueryInfo, 0, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP, aAggs[TSDB_FUNC_TS_DUMMY].aName,
pExpr);
} else if ((optr >= TK_RATE) && (optr <= TK_AVG_IRATE)) {
SColumnIndex index1 = {.tableIndex = index.tableIndex, .columnIndex = PRIMARYKEY_TIMESTAMP_COL_INDEX};
tscColumnBaseInfoInsert(pQueryInfo, &index1);
}
// functions can not be applied to tags
......
......
@@ -446,6 +446,8 @@ int32_t getTimestampInUsFromStrImpl(int64_t val, char unit, int64_t *result) {
break;
case 'a':
break;
case 'u':
return 0;
default: {
;
return -1;
......
......
@@ -233,6 +233,7 @@ void tscCreateLocalReducer(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrd
}
assert(idx >= pReducer->numOfBuffer);
if (idx == 0) {
free(pReducer);
return;
}
......
@@ -324,7 +325,7 @@ void tscCreateLocalReducer(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrd
int64_t stime = (pQueryInfo->stime < pQueryInfo->etime) ? pQueryInfo->stime : pQueryInfo->etime;
int64_t revisedSTime =
taosGetIntervalStartTimestamp(stime, pQueryInfo->intervalTime, pQueryInfo->intervalTimeUnit, prec);
taosGetIntervalStartTimestamp(stime, pQueryInfo->intervalTime, pQueryInfo->slidingTimeUnit, prec);
SInterpolationInfo *pInterpoInfo = &pReducer->interpolationInfo;
taosInitInterpoInfo(pInterpoInfo, pQueryInfo->order.order, revisedSTime, pQueryInfo->groupbyExpr.numOfGroupCols,
......
@@ -613,6 +614,7 @@ int32_t tscLocalReducerEnvCreate(SSqlObj *pSql, tExtMemBuffer ***pMemBuffer, tOr
pSchema = (SSchema *)calloc(1, sizeof(SSchema) * pQueryInfo->exprsInfo.numOfExprs);
if (pSchema == NULL) {
tfree(*pMemBuffer);
tscError("%p failed to allocate memory", pSql);
pRes->code = TSDB_CODE_CLI_OUT_OF_MEMORY;
return pRes->code;
......
@@ -634,15 +636,27 @@ int32_t tscLocalReducerEnvCreate(SSqlObj *pSql, tExtMemBuffer ***pMemBuffer, tOr
}
pModel = createColumnModel(pSchema, pQueryInfo->exprsInfo.numOfExprs, capacity);
if (pModel == NULL) {
goto _error_memory;
}
for (int32_t i = 0; i < pMeterMetaInfo->pMetricMeta->numOfVnodes; ++i) {
(*pMemBuffer)[i] = createExtMemBuffer(nBufferSizes, rlen, pModel);
if ((*pMemBuffer)[i] == NULL) {
for (int32_t j=0; j < i; ++j ) {
destroyExtMemBuffer((*pMemBuffer)[j]);
}
goto _error_memory;
}
(*pMemBuffer)[i]->flushModel = MULTIPLE_APPEND_MODEL;
}
if (createOrderDescriptor(pOrderDesc, pCmd, pModel) != TSDB_CODE_SUCCESS) {
pRes->code = TSDB_CODE_CLI_OUT_OF_MEMORY;
return pRes->code;
for (int32_t i = 0; i < pMeterMetaInfo->pMetricMeta->numOfVnodes; ++i) {
destroyExtMemBuffer((*pMemBuffer)[i]);
}
goto _error_memory;
}
// final result depends on the fields number
......
@@ -685,6 +699,13 @@ int32_t tscLocalReducerEnvCreate(SSqlObj *pSql, tExtMemBuffer ***pMemBuffer, tOr
tfree(pSchema);
return TSDB_CODE_SUCCESS;
_error_memory:
tfree(pSchema);
tfree(*pMemBuffer);
tscError("%p failed to allocate memory", pSql);
pRes->code = TSDB_CODE_CLI_OUT_OF_MEMORY;
return pRes->code;
}
/**
......
@@ -698,7 +719,7 @@ void tscLocalReducerEnvDestroy(tExtMemBuffer **pMemBuffer, tOrderDescriptor *pDe
destroyColumnModel(pFinalModel);
tOrderDescDestroy(pDesc);
for (int32_t i = 0; i < numOfVnodes; ++i) {
pMemBuffer[i] = destoryExtMemBuffer(pMemBuffer[i]);
pMemBuffer[i] = destroyExtMemBuffer(pMemBuffer[i]);
}
tfree(pMemBuffer);
......
@@ -779,7 +800,7 @@ void savePrevRecordAndSetupInterpoInfo(SLocalReducer *pLocalReducer, SQueryInfo
int64_t stime = (pQueryInfo->stime < pQueryInfo->etime) ? pQueryInfo->stime : pQueryInfo->etime;
int64_t revisedSTime =
taosGetIntervalStartTimestamp(stime, pQueryInfo->intervalTime, pQueryInfo->intervalTimeUnit, prec);
taosGetIntervalStartTimestamp(stime, pQueryInfo->intervalTime, pQueryInfo->slidingTimeUnit, prec);
taosInitInterpoInfo(pInterpoInfo, pQueryInfo->order.order, revisedSTime, pQueryInfo->groupbyExpr.numOfGroupCols,
pLocalReducer->rowSize);
......
@@ -923,7 +944,7 @@ static void doInterpolateResult(SSqlObj *pSql, SLocalReducer *pLocalReducer, boo
while (1) {
int32_t remains = taosNumOfRemainPoints(pInterpoInfo);
TSKEY etime = taosGetRevisedEndKey(actualETime, pQueryInfo->order.order, pQueryInfo->intervalTime,
pQueryInfo->intervalTimeUnit, precision);
pQueryInfo->slidingTimeUnit, precision);
int32_t nrows = taosGetNumOfResultWithInterpo(pInterpoInfo, pPrimaryKeys, remains, pQueryInfo->intervalTime, etime,
pLocalReducer->resColModel->capacity);
......
@@ -1275,7 +1296,7 @@ static void resetEnvForNewResultset(SSqlRes *pRes, SSqlCmd *pCmd, SLocalReducer
if (pQueryInfo->interpoType != TSDB_INTERPO_NONE) {
int64_t stime = (pQueryInfo->stime < pQueryInfo->etime) ? pQueryInfo->stime : pQueryInfo->etime;
int64_t newTime =
taosGetIntervalStartTimestamp(stime, pQueryInfo->intervalTime, pQueryInfo->intervalTimeUnit, precision);
taosGetIntervalStartTimestamp(stime, pQueryInfo->intervalTime, pQueryInfo->slidingTimeUnit, precision);
taosInitInterpoInfo(&pLocalReducer->interpolationInfo, pQueryInfo->order.order, newTime,
pQueryInfo->groupbyExpr.numOfGroupCols, pLocalReducer->rowSize);
......
@@ -1305,7 +1326,7 @@ static bool doInterpolationForCurrentGroup(SSqlObj *pSql) {
int32_t remain = taosNumOfRemainPoints(pInterpoInfo);
TSKEY ekey =
taosGetRevisedEndKey(etime, pQueryInfo->order.order, pQueryInfo->intervalTime, pQueryInfo->intervalTimeUnit, p);
taosGetRevisedEndKey(etime, pQueryInfo->order.order, pQueryInfo->intervalTime, pQueryInfo->slidingTimeUnit, p);
int32_t rows = taosGetNumOfResultWithInterpo(pInterpoInfo, (TSKEY *)pLocalReducer->pBufForInterpo, remain,
pQueryInfo->intervalTime, ekey, pLocalReducer->resColModel->capacity);
if (rows > 0) { // do interpo
......
@@ -1338,7 +1359,7 @@ static bool doHandleLastRemainData(SSqlObj *pSql) {
int64_t etime = (pQueryInfo->stime < pQueryInfo->etime) ? pQueryInfo->etime : pQueryInfo->stime;
etime = taosGetRevisedEndKey(etime, pQueryInfo->order.order, pQueryInfo->intervalTime,
pQueryInfo->intervalTimeUnit, precision);
pQueryInfo->slidingTimeUnit, precision);
int32_t rows = taosGetNumOfResultWithInterpo(pInterpoInfo, NULL, 0, pQueryInfo->intervalTime, etime,
pLocalReducer->resColModel->capacity);
if (rows > 0) { // do interpo
......
......
@@ -398,9 +398,9 @@ void *tscProcessMsgFromServer(char *msg, void *ahandle, void *thandle) {
if (pSql->freed || pObj->signature != pObj) {
tscTrace("%p sql is already released or DB connection is closed, freed:%d pObj:%p signature:%p", pSql, pSql->freed,
pObj, pObj->signature);
taosAddConnIntoCache(tscConnCache, pSql->thandle, pSql->ip, pSql->vnode, pObj->user);
//taosAddConnIntoCache(tscConnCache, pSql->thandle, pSql->ip, pSql->vnode, pObj->user);
tscFreeSqlObj(pSql);
return ahandle;
return NULL;
}
SMeterMetaInfo *pMeterMetaInfo = tscGetMeterMetaInfo(pCmd, pCmd->clauseIndex, 0);
......
@@ -600,8 +600,8 @@ void *tscProcessMsgFromServer(char *msg, void *ahandle, void *thandle) {
taos_close(pObj);
tscTrace("%p Async sql close failed connection", pSql);
} else {
tscFreeSqlObj(pSql);
tscTrace("%p Async sql is automatically freed", pSql);
tscFreeSqlObj(pSql);
}
}
}
......
@@ -1681,7 +1681,7 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
}
pQueryMsg->intervalTime = htobe64(pQueryInfo->intervalTime);
pQueryMsg->intervalTimeUnit = pQueryInfo->intervalTimeUnit;
pQueryMsg->slidingTimeUnit = pQueryInfo->slidingTimeUnit;
pQueryMsg->slidingTime = htobe64(pQueryInfo->slidingTime);
if (pQueryInfo->intervalTime < 0) {
......
@@ -2148,7 +2148,12 @@ int32_t tscBuildDropAcctMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
pMsg += sizeof(SDropUserMsg);
pCmd->payloadLen = pMsg - pStart;
pCmd->msgType = TSDB_MSG_TYPE_DROP_USER;
if (pInfo->type == TSDB_SQL_DROP_ACCT) {
pCmd->msgType = TSDB_MSG_TYPE_DROP_ACCT;
} else {
pCmd->msgType = TSDB_MSG_TYPE_DROP_USER;
}
return TSDB_CODE_SUCCESS;
}
......
......
@@ -796,8 +796,8 @@ void taos_free_result_imp(TAOS_RES *res, int keepCmd) {
tscTrace("%p qhandle is null, abort free, fp:%p", pSql, pSql->fp);
if (pSql->fp != NULL) {
pSql->thandle = NULL;
tscFreeSqlObj(pSql);
tscTrace("%p Async SqlObj is freed by app", pSql);
tscFreeSqlObj(pSql);
} else if (keepCmd) {
tscFreeSqlResult(pSql);
} else {
......
......
@@ -374,8 +374,7 @@ static void tscSetNextLaunchTimer(SSqlStream *pStream, SSqlObj *pSql) {
}
static void tscSetSlidingWindowInfo(SSqlObj *pSql, SSqlStream *pStream) {
int64_t minIntervalTime =
(pStream->precision == TSDB_TIME_PRECISION_MICRO) ? tsMinIntervalTime * 1000L : tsMinIntervalTime;
int64_t minIntervalTime = tsMinIntervalTime;
SQueryInfo* pQueryInfo = tscGetQueryInfoDetail(&pSql->cmd, 0);
......
@@ -391,8 +390,7 @@ static void tscSetSlidingWindowInfo(SSqlObj *pSql, SSqlStream *pStream) {
pQueryInfo->slidingTime = pQueryInfo->intervalTime;
}
int64_t minSlidingTime =
(pStream->precision == TSDB_TIME_PRECISION_MICRO) ? tsMinSlidingTime * 1000L : tsMinSlidingTime;
int64_t minSlidingTime = tsMinSlidingTime;
if (pQueryInfo->slidingTime == -1) {
pQueryInfo->slidingTime = pQueryInfo->intervalTime;
......
@@ -582,10 +580,10 @@ void taos_close_stream(TAOS_STREAM *handle) {
tscRemoveFromStreamList(pStream, pSql);
taosTmrStopA(&(pStream->pTimer));
tscTrace("%p stream:%p is closed", pSql, pStream);
tscFreeSqlObj(pSql);
pStream->pSql = NULL;
tscTrace("%p stream:%p is closed", pSql, pStream);
tfree(pStream);
}
}
......
@@ -104,6 +104,7 @@ static SSub* tscCreateSubscription(STscObj* pObj, const char* topic, const char*
return NULL;
}
char* sqlstr = NULL;
SSqlObj* pSql = calloc(1, sizeof(SSqlObj));
if (pSql == NULL) {
globalCode = TSDB_CODE_CLI_OUT_OF_MEMORY;
......
@@ -114,7 +115,7 @@ static SSub* tscCreateSubscription(STscObj* pObj, const char* topic, const char*
pSql->signature = pSql;
pSql->pTscObj = pObj;
char* sqlstr = (char*)malloc(strlen(sql) + 1);
sqlstr = (char*)malloc(strlen(sql) + 1);
if (sqlstr == NULL) {
tscError("failed to allocate sql string for subscription");
goto failed;
......
......
@@ -87,6 +87,7 @@ void tscGetMetricMetaCacheKey(SQueryInfo* pQueryInfo, char* str, uint64_t uid) {
MD5Update(&ctx, (uint8_t*)tmp, keyLen);
char* pStr = base64_encode(ctx.digest, tListLen(ctx.digest));
strcpy(str, pStr);
free(pStr);
}
free(tmp);
......
Subproject commit 8c58c512b6acda8bcdfa48fdc7140227b5221766
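The rate_function/irate_function hunks above track firstValue/lastValue per window and fold the previous value into CorrectionValue whenever a counter's value drops (a counter reset), so do_calc_rate can divide a monotonically corrected delta by the time span. A minimal, self-contained Go sketch of that correction idea (the struct and method names are illustrative, not the engine's actual types):

```go
package main

import "fmt"

// rateInfo mirrors the fields the C rate functions maintain.
type rateInfo struct {
	firstValue, lastValue float64
	firstKey, lastKey     int64 // timestamps in milliseconds
	correction            float64
	initialized           bool
}

// add feeds one (timestamp, value) sample in ascending time order.
// When the counter value drops, the previous value is accumulated into
// the correction term, mimicking how CorrectionValue compensates for
// counter resets.
func (r *rateInfo) add(key int64, v float64) {
	if !r.initialized {
		r.firstValue, r.firstKey = v, key
		r.lastValue, r.lastKey = v, key
		r.initialized = true
		return
	}
	if v < r.lastValue { // counter reset detected
		r.correction += r.lastValue
	}
	r.lastValue, r.lastKey = v, key
}

// rate returns the per-second rate over the observed window, following
// the shape of do_calc_rate: (last - first + correction) / duration.
func (r *rateInfo) rate() float64 {
	dt := float64(r.lastKey-r.firstKey) / 1000.0 // ms -> s
	if dt <= 0 {
		return 0
	}
	return (r.lastValue - r.firstValue + r.correction) / dt
}

func main() {
	r := &rateInfo{}
	r.add(0, 100)
	r.add(1000, 150)
	r.add(2000, 10) // reset: 150 is folded into the correction
	r.add(3000, 60)
	fmt.Println(r.rate())
}
```

With the reset at t=2s, the window spans 100 → 60 but 150 is added back in, so the computed rate stays positive instead of going negative.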
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
import "C"
import (
"context"
"errors"
"database/sql/driver"
"unsafe"
"strconv"
"strings"
"time"
)
type taosConn struct {
taos unsafe.Pointer
affectedRows int
insertId int
cfg *config
status statusFlag
parseTime bool
reset bool // set when the Go SQL package calls ResetSession
}
type taosSqlResult struct {
affectedRows int64
insertId int64
}
func (res *taosSqlResult) LastInsertId() (int64, error) {
return res.insertId, nil
}
func (res *taosSqlResult) RowsAffected() (int64, error) {
return res.affectedRows, nil
}
func (mc *taosConn) Begin() (driver.Tx, error) {
taosLog.Println("taosSql does not support transactions")
return nil, errors.New("taosSql does not support transactions")
}
func (mc *taosConn) Close() (err error) {
if mc.taos == nil {
return errConnNoExist
}
mc.taos_close()
return nil
}
func (mc *taosConn) Prepare(query string) (driver.Stmt, error) {
if mc.taos == nil {
return nil, errInvalidConn
}
stmt := &taosSqlStmt{
mc: mc,
pSql: query,
}
// count the '?' placeholders and save the count in stmt.paramCount
stmt.paramCount = strings.Count(query, "?")
//fmt.Printf("prepare alloc stmt:%p, sql:%s\n", stmt, query)
taosLog.Printf("prepare alloc stmt:%p, sql:%s\n", stmt, query)
return stmt, nil
}
func (mc *taosConn) interpolateParams(query string, args []driver.Value) (string, error) {
// The number of '?' placeholders must equal len(args)
if strings.Count(query, "?") != len(args) {
return "", driver.ErrSkip
}
buf := make([]byte, defaultBufSize)
buf = buf[:0] // clear buf
argPos := 0
for i := 0; i < len(query); i++ {
q := strings.IndexByte(query[i:], '?')
if q == -1 {
buf = append(buf, query[i:]...)
break
}
buf = append(buf, query[i:i+q]...)
i += q
arg := args[argPos]
argPos++
if arg == nil {
buf = append(buf, "NULL"...)
continue
}
switch v := arg.(type) {
case int64:
buf = strconv.AppendInt(buf, v, 10)
case uint64:
// Handle uint64 explicitly because our custom ConvertValue emits unsigned values
buf = strconv.AppendUint(buf, v, 10)
case float64:
buf = strconv.AppendFloat(buf, v, 'g', -1, 64)
case bool:
if v {
buf = append(buf, '1')
} else {
buf = append(buf, '0')
}
case time.Time:
if v.IsZero() {
buf = append(buf, "'0000-00-00'"...)
} else {
v := v.In(mc.cfg.loc)
v = v.Add(time.Nanosecond * 500) // round half-up to microsecond precision
year := v.Year()
year100 := year / 100
year1 := year % 100
month := v.Month()
day := v.Day()
hour := v.Hour()
minute := v.Minute()
second := v.Second()
micro := v.Nanosecond() / 1000
buf = append(buf, []byte{
'\'',
digits10[year100], digits01[year100],
digits10[year1], digits01[year1],
'-',
digits10[month], digits01[month],
'-',
digits10[day], digits01[day],
' ',
digits10[hour], digits01[hour],
':',
digits10[minute], digits01[minute],
':',
digits10[second], digits01[second],
}...)
if micro != 0 {
micro10000 := micro / 10000
micro100 := micro / 100 % 100
micro1 := micro % 100
buf = append(buf, []byte{
'.',
digits10[micro10000], digits01[micro10000],
digits10[micro100], digits01[micro100],
digits10[micro1], digits01[micro1],
}...)
}
buf = append(buf, '\'')
}
case []byte:
if v == nil {
buf = append(buf, "NULL"...)
} else {
buf = append(buf, "_binary'"...)
if mc.status&statusNoBackslashEscapes == 0 {
buf = escapeBytesBackslash(buf, v)
} else {
buf = escapeBytesQuotes(buf, v)
}
buf = append(buf, '\'')
}
case string:
//buf = append(buf, '\'')
if mc.status&statusNoBackslashEscapes == 0 {
buf = escapeStringBackslash(buf, v)
} else {
buf = escapeStringQuotes(buf, v)
}
//buf = append(buf, '\'')
default:
return "", driver.ErrSkip
}
//if len(buf)+4 > mc.maxAllowedPacket {
if len(buf)+4 > maxTaosSqlLen {
return "", driver.ErrSkip
}
}
if argPos != len(args) {
return "", driver.ErrSkip
}
return string(buf), nil
}
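interpolateParams above substitutes the driver arguments into the '?' placeholders on the client side, saving the prepare/close round trips. A simplified, self-contained stand-in for its core scanning loop (string escaping and the maxTaosSqlLen limit of the real method are omitted; the function name here is illustrative):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// interpolate walks the query, copying text up to each '?' and
// appending the next argument in its place, as interpolateParams does.
// Escaping and length checks are omitted for brevity.
func interpolate(query string, args []interface{}) (string, error) {
	if strings.Count(query, "?") != len(args) {
		return "", fmt.Errorf("placeholder/argument count mismatch")
	}
	var buf []byte
	argPos := 0
	for i := 0; i < len(query); i++ {
		q := strings.IndexByte(query[i:], '?')
		if q == -1 {
			buf = append(buf, query[i:]...) // no more placeholders
			break
		}
		buf = append(buf, query[i:i+q]...)
		i += q // i now points at the '?'; the loop's i++ skips it
		switch v := args[argPos].(type) {
		case int64:
			buf = strconv.AppendInt(buf, v, 10)
		case float64:
			buf = strconv.AppendFloat(buf, v, 'g', -1, 64)
		case string:
			buf = append(buf, '\'')
			buf = append(buf, v...)
			buf = append(buf, '\'')
		default:
			return "", fmt.Errorf("unsupported type %T", v)
		}
		argPos++
	}
	return string(buf), nil
}

func main() {
	s, _ := interpolate("insert into t values (?, ?)", []interface{}{int64(1), "abc"})
	fmt.Println(s)
}
```

The count check up front is what lets the real method fall back with driver.ErrSkip, so database/sql can retry via a server-side prepared statement instead.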
func (mc *taosConn) Exec(query string, args []driver.Value) (driver.Result, error) {
if mc.taos == nil {
return nil, driver.ErrBadConn
}
if len(args) != 0 {
if !mc.cfg.interpolateParams {
return nil, driver.ErrSkip
}
// try to interpolate the parameters to save extra roundtrips for preparing and closing a statement
prepared, err := mc.interpolateParams(query, args)
if err != nil {
return nil, err
}
query = prepared
}
mc.affectedRows = 0
mc.insertId = 0
_, err := mc.taosQuery(query)
if err == nil {
return &taosSqlResult{
affectedRows: int64(mc.affectedRows),
insertId: int64(mc.insertId),
}, err
}
return nil, err
}
func (mc *taosConn) Query(query string, args []driver.Value) (driver.Rows, error) {
return mc.query(query, args)
}
func (mc *taosConn) query(query string, args []driver.Value) (*textRows, error) {
if mc.taos == nil {
return nil, driver.ErrBadConn
}
if len(args) != 0 {
if !mc.cfg.interpolateParams {
return nil, driver.ErrSkip
}
// try client-side prepare to reduce roundtrip
prepared, err := mc.interpolateParams(query, args)
if err != nil {
return nil, err
}
query = prepared
}
num_fields, err := mc.taosQuery(query)
if err == nil {
// Read Result
rows := new(textRows)
rows.mc = mc
// Columns field
rows.rs.columns, err = mc.readColumns(num_fields)
return rows, err
}
return nil, err
}
// Ping implements driver.Pinger interface
func (mc *taosConn) Ping(ctx context.Context) (err error) {
if mc.taos != nil {
return nil
}
return errInvalidConn
}
// BeginTx implements driver.ConnBeginTx interface
func (mc *taosConn) BeginTx(ctx context.Context, opts driver.TxOptions) (driver.Tx, error) {
taosLog.Println("taosSql does not support transactions")
return nil, errors.New("taosSql does not support transactions")
}
func (mc *taosConn) QueryContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Rows, error) {
if mc.taos == nil {
return nil, errInvalidConn
}
dargs, err := namedValueToValue(args)
if err != nil {
return nil, err
}
rows, err := mc.query(query, dargs)
if err != nil {
return nil, err
}
return rows, err
}
func (mc *taosConn) ExecContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Result, error) {
if mc.taos == nil {
return nil, errInvalidConn
}
dargs, err := namedValueToValue(args)
if err != nil {
return nil, err
}
return mc.Exec(query, dargs)
}
func (mc *taosConn) PrepareContext(ctx context.Context, query string) (driver.Stmt, error) {
if mc.taos == nil {
return nil, errInvalidConn
}
stmt, err := mc.Prepare(query)
if err != nil {
return nil, err
}
return stmt, nil
}
func (stmt *taosSqlStmt) QueryContext(ctx context.Context, args []driver.NamedValue) (driver.Rows, error) {
if stmt.mc == nil {
return nil, errInvalidConn
}
dargs, err := namedValueToValue(args)
if err != nil {
return nil, err
}
rows, err := stmt.query(dargs)
if err != nil {
return nil, err
}
return rows, err
}
func (stmt *taosSqlStmt) ExecContext(ctx context.Context, args []driver.NamedValue) (driver.Result, error) {
if stmt.mc == nil {
return nil, errInvalidConn
}
dargs, err := namedValueToValue(args)
if err != nil {
return nil, err
}
return stmt.Exec(dargs)
}
func (mc *taosConn) CheckNamedValue(nv *driver.NamedValue) (err error) {
nv.Value, err = converter{}.ConvertValue(nv.Value)
return
}
// ResetSession implements driver.SessionResetter.
// (From Go 1.10)
func (mc *taosConn) ResetSession(ctx context.Context) error {
if mc.taos == nil {
return driver.ErrBadConn
}
mc.reset = true
return nil
}
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
import (
"context"
"database/sql/driver"
)
type connector struct {
cfg *config
}
// Connect implements driver.Connector interface.
// Connect returns a connection to the database.
func (c *connector) Connect(ctx context.Context) (driver.Conn, error) {
var err error
// New taosConn
mc := &taosConn{
cfg: c.cfg,
parseTime: c.cfg.parseTime,
}
// Connect to Server
mc.taos, err = mc.taosConnect(mc.cfg.addr, mc.cfg.user, mc.cfg.passwd, mc.cfg.dbName, mc.cfg.port)
if err != nil {
return nil, err
}
return mc, nil
}
// Driver implements driver.Connector interface.
// Driver returns &taosSQLDriver{}.
func (c *connector) Driver() driver.Driver {
return &taosSQLDriver{}
}
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
const (
timeFormat = "2006-01-02 15:04:05"
maxTaosSqlLen = 65380
defaultBufSize = maxTaosSqlLen + 32
)
type fieldType byte
type fieldFlag uint16
const (
flagNotNULL fieldFlag = 1 << iota
)
type statusFlag uint16
const (
statusInTrans statusFlag = 1 << iota
statusInAutocommit
statusReserved // Not in documentation
statusMoreResultsExists
statusNoGoodIndexUsed
statusNoIndexUsed
statusCursorExists
statusLastRowSent
statusDbDropped
statusNoBackslashEscapes
statusMetadataChanged
statusQueryWasSlow
statusPsOutParams
statusInTransReadonly
statusSessionStateChanged
)
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
import (
"context"
"database/sql"
"database/sql/driver"
)
// taosSQLDriver implements driver.Driver.
// In general the driver is used via the database/sql package.
type taosSQLDriver struct{}
// Open opens a new connection.
// The DSN string is formatted as [user[:password]@][net[(addr)]]/dbname[?param1=value1&...&paramN=valueN].
func (d taosSQLDriver) Open(dsn string) (driver.Conn, error) {
cfg, err := parseDSN(dsn)
if err != nil {
return nil, err
}
c := &connector{
cfg: cfg,
}
return c.Connect(context.Background())
}
func init() {
sql.Register("taosSql", &taosSQLDriver{})
taosLogInit()
}
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
import (
"errors"
"net/url"
"strconv"
"strings"
"time"
)
var (
errInvalidDSNUnescaped = errors.New("invalid DSN: did you forget to escape a param value?")
errInvalidDSNAddr = errors.New("invalid DSN: network address not terminated (missing closing brace)")
errInvalidDSNPort = errors.New("invalid DSN: network port is not a valid number")
errInvalidDSNNoSlash = errors.New("invalid DSN: missing the slash separating the database name")
)
// config is a configuration parsed from a DSN string.
// If a new config is created instead of being parsed from a DSN string,
// the newConfig function should be used, which sets default values.
type config struct {
user string // Username
passwd string // Password (requires User)
net string // Network type
addr string // Network address (requires Net)
port int
dbName string // Database name
params map[string]string // Connection parameters
loc *time.Location // Location for time.Time values
columnsWithAlias bool // Prepend table alias to column names
interpolateParams bool // Interpolate placeholders into query string
parseTime bool // Parse time values to time.Time
}
// newConfig creates a new config and sets default values.
func newConfig() *config {
return &config{
loc: time.UTC,
interpolateParams: true,
parseTime: true,
}
}
// parseDSN parses the DSN string into a config.
func parseDSN(dsn string) (cfg *config, err error) {
taosLog.Println("input dsn:", dsn)
// New config with some default values
cfg = newConfig()
// [user[:password]@][net[(addr)]]/dbname[?param1=value1&paramN=valueN]
// Find the last '/' (since the password or the net addr might contain a '/')
foundSlash := false
for i := len(dsn) - 1; i >= 0; i-- {
if dsn[i] == '/' {
foundSlash = true
var j, k int
// left part is empty if i <= 0
if i > 0 {
// [username[:password]@][protocol[(address)]]
// Find the last '@' in dsn[:i]
for j = i; j >= 0; j-- {
if dsn[j] == '@' {
// username[:password]
// Find the first ':' in dsn[:j]
for k = 0; k < j; k++ {
if dsn[k] == ':' {
cfg.passwd = dsn[k+1 : j]
break
}
}
cfg.user = dsn[:k]
break
}
}
// [protocol[(address)]]
// Find the first '(' in dsn[j+1:i]
for k = j + 1; k < i; k++ {
if dsn[k] == '(' {
// dsn[i-1] must be == ')' if an address is specified
if dsn[i-1] != ')' {
if strings.ContainsRune(dsn[k+1:i], ')') {
return nil, errInvalidDSNUnescaped
}
return nil, errInvalidDSNAddr
}
strs := strings.Split(dsn[k+1:i-1], ":")
if len(strs) == 1 {
return nil, errInvalidDSNAddr
}
cfg.addr = strs[0]
cfg.port, err = strconv.Atoi(strs[1])
if err != nil {
return nil, errInvalidDSNPort
}
break
}
}
cfg.net = dsn[j+1 : k]
}
// dbname[?param1=value1&...&paramN=valueN]
// Find the first '?' in dsn[i+1:]
for j = i + 1; j < len(dsn); j++ {
if dsn[j] == '?' {
if err = parseDSNParams(cfg, dsn[j+1:]); err != nil {
return
}
break
}
}
cfg.dbName = dsn[i+1 : j]
break
}
}
if !foundSlash && len(dsn) > 0 {
return nil, errInvalidDSNNoSlash
}
taosLog.Printf("cfg info: %+v", cfg)
return
}
// parseDSNParams parses the DSN "query string"
// Values must be url.QueryEscape'ed
func parseDSNParams(cfg *config, params string) (err error) {
for _, v := range strings.Split(params, "&") {
param := strings.SplitN(v, "=", 2)
if len(param) != 2 {
continue
}
// cfg params
switch value := param[1]; param[0] {
case "columnsWithAlias":
var isBool bool
cfg.columnsWithAlias, isBool = readBool(value)
if !isBool {
return errors.New("invalid bool value: " + value)
}
// Enable client side placeholder substitution
case "interpolateParams":
var isBool bool
cfg.interpolateParams, isBool = readBool(value)
if !isBool {
return errors.New("invalid bool value: " + value)
}
// Time Location
case "loc":
if value, err = url.QueryUnescape(value); err != nil {
return
}
cfg.loc, err = time.LoadLocation(value)
if err != nil {
return
}
// time.Time parsing
case "parseTime":
var isBool bool
cfg.parseTime, isBool = readBool(value)
if !isBool {
return errors.New("invalid bool value: " + value)
}
default:
// lazy init
if cfg.params == nil {
cfg.params = make(map[string]string)
}
if cfg.params[param[0]], err = url.QueryUnescape(value); err != nil {
return
}
}
}
return
}
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
/*
#cgo CFLAGS : -I/usr/include
#cgo LDFLAGS: -L/usr/lib -ltaos
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <taos.h>
*/
import "C"
import (
"database/sql/driver"
"errors"
"fmt"
"io"
"strconv"
"time"
"unsafe"
)
/******************************************************************************
* Result *
******************************************************************************/
// readColumns reads column metadata of the current result set via the C API.
func (mc *taosConn) readColumns(count int) ([]taosSqlField, error) {
columns := make([]taosSqlField, count)
result := C.taos_use_result(mc.taos)
if result == nil {
return nil, errors.New("invalid result")
}
pFields := (*C.struct_taosField)(C.taos_fetch_fields(result))
// TODO: optimize this conversion of the C fields array
fields := (*[1 << 30]C.struct_taosField)(unsafe.Pointer(pFields))
for i := 0; i < count; i++ {
//columns[i].tableName = ms.taos.
//fmt.Println(reflect.TypeOf(fields[i].name))
var charray []byte
for j := range fields[i].name {
//fmt.Println("fields[i].name[j]: ", fields[i].name[j])
if fields[i].name[j] != 0 {
charray = append(charray, byte(fields[i].name[j]))
} else {
break
}
}
columns[i].name = string(charray)
columns[i].length = (uint32)(fields[i].bytes)
columns[i].fieldType = fieldType(fields[i]._type)
columns[i].flags = 0
// columns[i].decimals = 0
//columns[i].charSet = 0
}
return columns, nil
}
func (rows *taosSqlRows) readRow(dest []driver.Value) error {
mc := rows.mc
if rows.rs.done || mc == nil {
return io.EOF
}
result := C.taos_use_result(mc.taos)
if result == nil {
return errors.New(C.GoString(C.taos_errstr(mc.taos)))
}
//var row *unsafe.Pointer
row := C.taos_fetch_row(result)
if row == nil {
rows.rs.done = true
C.taos_free_result(result)
rows.mc = nil
return io.EOF
}
// because sizeof(void*) == sizeof(int*) == 8 on 64-bit platforms
// note: Go's int is 8 bytes on 64-bit platforms, while C's int is typically 4
for i := range dest {
currentRow := (unsafe.Pointer)(uintptr(*((*int)(unsafe.Pointer(uintptr(unsafe.Pointer(row)) + uintptr(i)*unsafe.Sizeof(int(0)))))))
if currentRow == nil {
dest[i] = nil
continue
}
switch rows.rs.columns[i].fieldType {
case C.TSDB_DATA_TYPE_BOOL:
dest[i] = (*((*byte)(currentRow))) != 0
case C.TSDB_DATA_TYPE_TINYINT:
dest[i] = (int)(*((*byte)(currentRow)))
case C.TSDB_DATA_TYPE_SMALLINT:
dest[i] = (int16)(*((*int16)(currentRow)))
case C.TSDB_DATA_TYPE_INT:
dest[i] = (int)(*((*int32)(currentRow))) // note: int32 of Go <----> int of C
case C.TSDB_DATA_TYPE_BIGINT:
dest[i] = (int64)(*((*int64)(currentRow)))
case C.TSDB_DATA_TYPE_FLOAT:
dest[i] = *((*float32)(currentRow))
case C.TSDB_DATA_TYPE_DOUBLE:
dest[i] = *((*float64)(currentRow))
case C.TSDB_DATA_TYPE_BINARY, C.TSDB_DATA_TYPE_NCHAR:
charLen := rows.rs.columns[i].length
binaryVal := make([]byte, charLen)
for index := uint32(0); index < charLen; index++ {
binaryVal[index] = *((*byte)(unsafe.Pointer(uintptr(currentRow) + uintptr(index))))
}
dest[i] = string(binaryVal)
case C.TSDB_DATA_TYPE_TIMESTAMP:
timestamp := (int64)(*((*int64)(currentRow)))
if mc.cfg.parseTime {
dest[i] = timestampConvertToString(timestamp, int(C.taos_result_precision(result)))
} else {
dest[i] = timestamp
}
default:
fmt.Println("default fieldType: set dest[] to nil")
dest[i] = nil
}
}
return nil
}
// Read result as Field format until all rows or an Error appears
// call this func in conn mode
func (rows *textRows) readRow(dest []driver.Value) error {
return rows.taosSqlRows.readRow(dest)
}
// call this func in stmt mode
func (rows *binaryRows) readRow(dest []driver.Value) error {
return rows.taosSqlRows.readRow(dest)
}
func timestampConvertToString(timestamp int64, precision int) string {
var decimal, sVal, nsVal int64
width := 3
if precision == 0 { // millisecond precision
decimal = timestamp % 1000
sVal = timestamp / 1000
nsVal = decimal * 1000000 // ms -> ns
} else { // microsecond precision
decimal = timestamp % 1000000
sVal = timestamp / 1000000
nsVal = decimal * 1000 // us -> ns
width = 6
}
dateTime := time.Unix(sVal, nsVal)
// format the date-time part only; the fractional part is appended below
strTime := dateTime.Format(timeFormat[:19])
// zero-pad the fraction so that e.g. 5 ms renders as ".005", not ".5"
frac := strconv.Itoa(int(decimal))
for len(frac) < width {
frac = "0" + frac
}
return strTime + "." + frac
}
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
/*
#cgo CFLAGS : -I/usr/include
#cgo LDFLAGS: -L/usr/lib -ltaos
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <taos.h>
*/
import "C"
import (
"database/sql"
"database/sql/driver"
"io"
"math"
"reflect"
)
type taosSqlField struct {
tableName string
name string
length uint32
flags fieldFlag // indicates whether this field can be null
fieldType fieldType
decimals byte
charSet uint8
}
type resultSet struct {
columns []taosSqlField
columnNames []string
done bool
}
type taosSqlRows struct {
mc *taosConn
rs resultSet
}
type binaryRows struct {
taosSqlRows
}
type textRows struct {
taosSqlRows
}
func (rows *taosSqlRows) Columns() []string {
if rows.rs.columnNames != nil {
return rows.rs.columnNames
}
columns := make([]string, len(rows.rs.columns))
if rows.mc != nil && rows.mc.cfg.columnsWithAlias {
for i := range columns {
if tableName := rows.rs.columns[i].tableName; len(tableName) > 0 {
columns[i] = tableName + "." + rows.rs.columns[i].name
} else {
columns[i] = rows.rs.columns[i].name
}
}
} else {
for i := range columns {
columns[i] = rows.rs.columns[i].name
}
}
rows.rs.columnNames = columns
return columns
}
func (rows *taosSqlRows) ColumnTypeDatabaseTypeName(i int) string {
return rows.rs.columns[i].typeDatabaseName()
}
func (rows *taosSqlRows) ColumnTypeLength(i int) (length int64, ok bool) {
return int64(rows.rs.columns[i].length), true
}
func (rows *taosSqlRows) ColumnTypeNullable(i int) (nullable, ok bool) {
return rows.rs.columns[i].flags&flagNotNULL == 0, true
}
func (rows *taosSqlRows) ColumnTypePrecisionScale(i int) (int64, int64, bool) {
column := rows.rs.columns[i]
decimals := int64(column.decimals)
switch column.fieldType {
case C.TSDB_DATA_TYPE_FLOAT:
fallthrough
case C.TSDB_DATA_TYPE_DOUBLE:
if decimals == 0x1f {
return math.MaxInt64, math.MaxInt64, true
}
return math.MaxInt64, decimals, true
}
return 0, 0, false
}
func (rows *taosSqlRows) ColumnTypeScanType(i int) reflect.Type {
return rows.rs.columns[i].scanType()
}
func (rows *taosSqlRows) Close() error {
if rows.mc != nil {
result := C.taos_use_result(rows.mc.taos)
if result != nil {
C.taos_free_result(result)
}
rows.mc = nil
}
return nil
}
func (rows *taosSqlRows) HasNextResultSet() (b bool) {
if rows.mc == nil {
return false
}
return rows.mc.status&statusMoreResultsExists != 0
}
func (rows *taosSqlRows) nextResultSet() (int, error) {
if rows.mc == nil {
return 0, io.EOF
}
// Remove unread packets from stream
if !rows.rs.done {
rows.rs.done = true
}
if !rows.HasNextResultSet() {
rows.mc = nil
return 0, io.EOF
}
rows.rs = resultSet{}
return 0, nil
}
func (rows *taosSqlRows) nextNotEmptyResultSet() (int, error) {
for {
resLen, err := rows.nextResultSet()
if err != nil {
return 0, err
}
if resLen > 0 {
return resLen, nil
}
rows.rs.done = true
}
}
func (rows *binaryRows) NextResultSet() error {
resLen, err := rows.nextNotEmptyResultSet()
if err != nil {
return err
}
rows.rs.columns, err = rows.mc.readColumns(resLen)
return err
}
// stmt.Query returns binary rows; rows are fetched one by one via this func
func (rows *binaryRows) Next(dest []driver.Value) error {
if mc := rows.mc; mc != nil {
// Fetch next row from stream
return rows.readRow(dest)
}
return io.EOF
}
func (rows *textRows) NextResultSet() (err error) {
resLen, err := rows.nextNotEmptyResultSet()
if err != nil {
return err
}
rows.rs.columns, err = rows.mc.readColumns(resLen)
return err
}
// db.Query returns text rows; rows are fetched one by one via this func
func (rows *textRows) Next(dest []driver.Value) error {
if mc := rows.mc; mc != nil {
// Fetch next row from stream
return rows.readRow(dest)
}
return io.EOF
}
func (mf *taosSqlField) typeDatabaseName() string {
//fmt.Println("######## (mf *taosSqlField) typeDatabaseName() mf.fieldType:", mf.fieldType)
switch mf.fieldType {
case C.TSDB_DATA_TYPE_BOOL:
return "BOOL"
case C.TSDB_DATA_TYPE_TINYINT:
return "TINYINT"
case C.TSDB_DATA_TYPE_SMALLINT:
return "SMALLINT"
case C.TSDB_DATA_TYPE_INT:
return "INT"
case C.TSDB_DATA_TYPE_BIGINT:
return "BIGINT"
case C.TSDB_DATA_TYPE_FLOAT:
return "FLOAT"
case C.TSDB_DATA_TYPE_DOUBLE:
return "DOUBLE"
case C.TSDB_DATA_TYPE_BINARY:
return "BINARY"
case C.TSDB_DATA_TYPE_NCHAR:
return "NCHAR"
case C.TSDB_DATA_TYPE_TIMESTAMP:
return "TIMESTAMP"
default:
return ""
}
}
var (
scanTypeFloat32 = reflect.TypeOf(float32(0))
scanTypeFloat64 = reflect.TypeOf(float64(0))
scanTypeInt8 = reflect.TypeOf(int8(0))
scanTypeInt16 = reflect.TypeOf(int16(0))
scanTypeInt32 = reflect.TypeOf(int32(0))
scanTypeInt64 = reflect.TypeOf(int64(0))
scanTypeNullTime = reflect.TypeOf(NullTime{})
scanTypeRawBytes = reflect.TypeOf(sql.RawBytes{})
scanTypeUnknown = reflect.TypeOf(new(interface{}))
)
func (mf *taosSqlField) scanType() reflect.Type {
//fmt.Println("######## (mf *taosSqlField) scanType() mf.fieldType:", mf.fieldType)
switch mf.fieldType {
case C.TSDB_DATA_TYPE_BOOL:
return scanTypeInt8
case C.TSDB_DATA_TYPE_TINYINT:
return scanTypeInt8
case C.TSDB_DATA_TYPE_SMALLINT:
return scanTypeInt16
case C.TSDB_DATA_TYPE_INT:
return scanTypeInt32
case C.TSDB_DATA_TYPE_BIGINT:
return scanTypeInt64
case C.TSDB_DATA_TYPE_FLOAT:
return scanTypeFloat32
case C.TSDB_DATA_TYPE_DOUBLE:
return scanTypeFloat64
case C.TSDB_DATA_TYPE_BINARY:
return scanTypeRawBytes
case C.TSDB_DATA_TYPE_NCHAR:
return scanTypeRawBytes
case C.TSDB_DATA_TYPE_TIMESTAMP:
return scanTypeNullTime
default:
return scanTypeUnknown
}
}
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
import (
"database/sql/driver"
"fmt"
"reflect"
)
type taosSqlStmt struct {
mc *taosConn
id uint32
pSql string
paramCount int
}
func (stmt *taosSqlStmt) Close() error {
return nil
}
func (stmt *taosSqlStmt) NumInput() int {
return stmt.paramCount
}
func (stmt *taosSqlStmt) Exec(args []driver.Value) (driver.Result, error) {
if stmt.mc == nil || stmt.mc.taos == nil {
return nil, errInvalidConn
}
return stmt.mc.Exec(stmt.pSql, args)
}
func (stmt *taosSqlStmt) Query(args []driver.Value) (driver.Rows, error) {
if stmt.mc == nil || stmt.mc.taos == nil {
return nil, errInvalidConn
}
return stmt.query(args)
}
func (stmt *taosSqlStmt) query(args []driver.Value) (*binaryRows, error) {
mc := stmt.mc
if mc == nil || mc.taos == nil {
return nil, errInvalidConn
}
querySql := stmt.pSql
if len(args) != 0 {
if !mc.cfg.interpolateParams {
return nil, driver.ErrSkip
}
// try client-side prepare to reduce roundtrip
prepared, err := mc.interpolateParams(stmt.pSql, args)
if err != nil {
return nil, err
}
querySql = prepared
}
num_fields, err := mc.taosQuery(querySql)
if err == nil {
// Read Result
rows := new(binaryRows)
rows.mc = mc
// Columns field
rows.rs.columns, err = mc.readColumns(num_fields)
return rows, err
}
return nil, err
}
type converter struct{}
// ConvertValue mirrors the reference/default converter in database/sql/driver
// with _one_ exception: we support uint64 values with their high bit set, and
// the default implementation does not. This function should be kept in sync
// with database/sql/driver's defaultConverter.ConvertValue(), except for that
// deliberate difference.
func (c converter) ConvertValue(v interface{}) (driver.Value, error) {
if driver.IsValue(v) {
return v, nil
}
if vr, ok := v.(driver.Valuer); ok {
sv, err := callValuerValue(vr)
if err != nil {
return nil, err
}
if !driver.IsValue(sv) {
return nil, fmt.Errorf("non-Value type %T returned from Value", sv)
}
return sv, nil
}
rv := reflect.ValueOf(v)
switch rv.Kind() {
case reflect.Ptr:
// indirect pointers
if rv.IsNil() {
return nil, nil
} else {
return c.ConvertValue(rv.Elem().Interface())
}
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return rv.Int(), nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return rv.Uint(), nil
case reflect.Float32, reflect.Float64:
return rv.Float(), nil
case reflect.Bool:
return rv.Bool(), nil
case reflect.Slice:
ek := rv.Type().Elem().Kind()
if ek == reflect.Uint8 {
return rv.Bytes(), nil
}
return nil, fmt.Errorf("unsupported type %T, a slice of %s", v, ek)
case reflect.String:
return rv.String(), nil
}
return nil, fmt.Errorf("unsupported type %T, a %s", v, rv.Kind())
}
var valuerReflectType = reflect.TypeOf((*driver.Valuer)(nil)).Elem()
// callValuerValue returns vr.Value(), with one exception:
// If vr.Value is an auto-generated method on a pointer type and the
// pointer is nil, it would panic at runtime in the panicwrap
// method. Treat it like nil instead.
//
// This is so people can implement driver.Value on value types and
// still use nil pointers to those types to mean nil/NULL, just like
// string/*string.
//
// This is an exact copy of the same-named unexported function from the
// database/sql package.
func callValuerValue(vr driver.Valuer) (v driver.Value, err error) {
if rv := reflect.ValueOf(vr); rv.Kind() == reflect.Ptr &&
rv.IsNil() &&
rv.Type().Elem().Implements(valuerReflectType) {
return nil, nil
}
return vr.Value()
}
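The reflection-based kind switch above can be illustrated with a hypothetical standalone helper covering the two cases that differ most from plain value passing: pointers are dereferenced (a nil pointer becomes NULL) and unsigned integers pass through as uint64, high bit included:

```go
package main

import (
	"fmt"
	"reflect"
)

// convert is a simplified sketch of the converter's kind switch; it is not
// the driver's ConvertValue and omits the Valuer and slice handling.
func convert(v interface{}) interface{} {
	rv := reflect.ValueOf(v)
	switch rv.Kind() {
	case reflect.Ptr:
		// indirect pointers; nil maps to SQL NULL
		if rv.IsNil() {
			return nil
		}
		return convert(rv.Elem().Interface())
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
		// unsigned values survive even with the high bit set
		return rv.Uint()
	}
	return v
}

func main() {
	n := uint64(1) << 63 // database/sql's default converter rejects this
	fmt.Println(convert(&n))
	fmt.Println(convert((*uint64)(nil)))
}
```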
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
import (
"bufio"
"errors"
"fmt"
"io"
"log"
"os"
"strings"
)
// Various errors the driver might return.
var (
errInvalidConn = errors.New("invalid connection")
errConnNoExist = errors.New("non-existent connection")
)
var taosLog *log.Logger
// taosLogInit initializes the logger used by this driver for critical errors.
func taosLogInit() {
cfgName := "/etc/taos/taos.cfg"
logNameDefault := "/var/log/taos/taosgo.log"
var logName string
// get log path from cfg file
cfgFile, err := os.OpenFile(cfgName, os.O_RDONLY, 0644)
if err != nil {
fmt.Println(err)
logName = logNameDefault
} else {
// close the file only if it was opened successfully
defer cfgFile.Close()
logName, err = getLogNameFromCfg(cfgFile)
if err != nil {
fmt.Println(err)
logName = logNameDefault
}
}
logFile, err := os.OpenFile(logName, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
fmt.Println(err)
os.Exit(1)
}
taosLog = log.New(logFile, "", log.LstdFlags)
taosLog.SetPrefix("TAOS DRIVER ")
taosLog.SetFlags(log.LstdFlags|log.Lshortfile)
}
func getLogNameFromCfg(f *os.File) (string, error) {
// Create file buf, *Reader
r := bufio.NewReader(f)
for {
//read one line, return to slice b
b, _, err := r.ReadLine()
if err != nil {
if err == io.EOF {
break
}
panic(err)
}
// Remove space of left and right
s := strings.TrimSpace(string(b))
if strings.Index(s, "#") == 0 {
// comment line
continue
}
if len(s) == 0 {
continue
}
var ns string
// If there is a comment at the end of the line, it must be removed
index := strings.Index(s, "#")
if index > 0 {
// Gets the string to the left of the comment to determine whether it is empty
ns = s[:index]
if len(ns) == 0 {
continue
}
} else {
ns = s
}
ss := strings.Fields(ns)
if strings.Compare("logDir", ss[0]) != 0 {
continue
}
if len(ss) < 2 {
break
}
// Add a filename after the path
logName := ss[1] + "/taosgo.log"
return logName, nil
}
return "", errors.New("no config log path, use default")
}
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
/*
#cgo CFLAGS : -I/usr/include
#cgo LDFLAGS: -L/usr/lib -ltaos
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <taos.h>
*/
import "C"
import (
"errors"
"unsafe"
)
func (mc *taosConn) taosConnect(ip, user, pass, db string, port int) (taos unsafe.Pointer, err error) {
cuser := C.CString(user)
cpass := C.CString(pass)
cip := C.CString(ip)
cdb := C.CString(db)
defer C.free(unsafe.Pointer(cip))
defer C.free(unsafe.Pointer(cuser))
defer C.free(unsafe.Pointer(cpass))
defer C.free(unsafe.Pointer(cdb))
taosObj := C.taos_connect(cip, cuser, cpass, cdb, (C.ushort)(port))
if taosObj == nil {
return nil, errors.New("taos_connect() failed")
}
return (unsafe.Pointer)(taosObj), nil
}
func (mc *taosConn) taosQuery(sqlstr string) (int, error) {
//taosLog.Printf("taosQuery() input sql:%s\n", sqlstr)
csqlstr := C.CString(sqlstr)
defer C.free(unsafe.Pointer(csqlstr))
code := int(C.taos_query(mc.taos, csqlstr))
if 0 != code {
mc.taos_error()
errStr := C.GoString(C.taos_errstr(mc.taos))
taosLog.Println("taos_query() failed:", errStr)
taosLog.Printf("taosQuery() input sql:%s\n", sqlstr)
return 0, errors.New(errStr)
}
// read result and save into mc struct
num_fields := int(C.taos_field_count(mc.taos))
if 0 == num_fields { // there are no select and show kinds of commands
mc.affectedRows = int(C.taos_affected_rows(mc.taos))
mc.insertId = 0
}
return num_fields, nil
}
func (mc *taosConn) taos_close() {
C.taos_close(mc.taos)
}
func (mc *taosConn) taos_error() {
// free local resource: allocated memory / metric-meta refcount
//var pRes unsafe.Pointer
pRes := C.taos_use_result(mc.taos)
C.taos_free_result(pRes)
}
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
/*
#cgo CFLAGS : -I/usr/include
#include <stdlib.h>
#cgo LDFLAGS: -L/usr/lib -ltaos
void taosSetAllocMode(int mode, const char* path, _Bool autoDump);
void taosDumpMemoryLeak();
*/
import "C"
import (
"database/sql/driver"
"errors"
"fmt"
"sync/atomic"
"time"
"unsafe"
)
// Returns the bool value of the input.
// The 2nd return value indicates if the input was a valid bool value
func readBool(input string) (value bool, valid bool) {
switch input {
case "1", "true", "TRUE", "True":
return true, true
case "0", "false", "FALSE", "False":
return false, true
}
// Not a valid bool value
return
}
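readBool accepts only the eight literal spellings listed in its switch; anything else returns `valid == false`. A standalone copy to show both outcomes:

```go
package main

import "fmt"

// readBool is copied verbatim from the driver code above.
func readBool(input string) (value bool, valid bool) {
	switch input {
	case "1", "true", "TRUE", "True":
		return true, true
	case "0", "false", "FALSE", "False":
		return false, true
	}
	// not a valid bool value
	return
}

func main() {
	v, ok := readBool("True")
	fmt.Println(v, ok) // recognized spelling
	v, ok = readBool("yes")
	fmt.Println(v, ok) // rejected: valid == false
}
```

This is why a DSN parameter like `parseTime=yes` is reported as "invalid bool value" by parseDSNParams rather than being coerced.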
/******************************************************************************
* Time related utils *
******************************************************************************/
// NullTime represents a time.Time that may be NULL.
// NullTime implements the Scanner interface so
// it can be used as a scan destination:
//
// var nt NullTime
// err := db.QueryRow("SELECT time FROM foo WHERE id=?", id).Scan(&nt)
// ...
// if nt.Valid {
// // use nt.Time
// } else {
// // NULL value
// }
//
// This NullTime implementation is not driver-specific
type NullTime struct {
Time time.Time
Valid bool // Valid is true if Time is not NULL
}
// Scan implements the Scanner interface.
// The value type must be time.Time or string / []byte (formatted time-string),
// otherwise Scan fails.
func (nt *NullTime) Scan(value interface{}) (err error) {
if value == nil {
nt.Time, nt.Valid = time.Time{}, false
return
}
switch v := value.(type) {
case time.Time:
nt.Time, nt.Valid = v, true
return
case []byte:
nt.Time, err = parseDateTime(string(v), time.UTC)
nt.Valid = (err == nil)
return
case string:
nt.Time, err = parseDateTime(v, time.UTC)
nt.Valid = (err == nil)
return
}
nt.Valid = false
return fmt.Errorf("Can't convert %T to time.Time", value)
}
// Value implements the driver Valuer interface.
func (nt NullTime) Value() (driver.Value, error) {
if !nt.Valid {
return nil, nil
}
return nt.Time, nil
}
func parseDateTime(str string, loc *time.Location) (t time.Time, err error) {
base := "0000-00-00 00:00:00.0000000"
switch len(str) {
case 10, 19, 21, 22, 23, 24, 25, 26: // up to "YYYY-MM-DD HH:MM:SS.MMMMMM"
if str == base[:len(str)] {
return
}
t, err = time.Parse(timeFormat[:len(str)], str)
default:
err = fmt.Errorf("invalid time string: %s", str)
return
}
// Adjust location
if err == nil && loc != time.UTC {
y, mo, d := t.Date()
h, mi, s := t.Clock()
t, err = time.Date(y, mo, d, h, mi, s, t.Nanosecond(), loc), nil
}
return
}
// zeroDateTime is used in formatBinaryDateTime to avoid an allocation
// if the DATE or DATETIME has the zero value.
// It must never be changed.
// The current behavior depends on database/sql copying the result.
var zeroDateTime = []byte("0000-00-00 00:00:00.000000")
const digits01 = "0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789"
const digits10 = "0000000000111111111122222222223333333333444444444455555555556666666666777777777788888888889999999999"
/******************************************************************************
* Convert from and to bytes *
******************************************************************************/
func uint64ToBytes(n uint64) []byte {
return []byte{
byte(n),
byte(n >> 8),
byte(n >> 16),
byte(n >> 24),
byte(n >> 32),
byte(n >> 40),
byte(n >> 48),
byte(n >> 56),
}
}
func uint64ToString(n uint64) []byte {
var a [20]byte
i := 20
// U+0030 = 0
// ...
// U+0039 = 9
var q uint64
for n >= 10 {
i--
q = n / 10
a[i] = uint8(n-q*10) + 0x30
n = q
}
i--
a[i] = uint8(n) + 0x30
return a[i:]
}
// treats string value as unsigned integer representation
func stringToInt(b []byte) int {
val := 0
for i := range b {
val *= 10
val += int(b[i] - 0x30)
}
return val
}
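uint64ToString writes digits right-to-left into a fixed 20-byte array (the widest uint64 has 20 digits) and stringToInt reverses it. Standalone copies of the two helpers to show they round-trip:

```go
package main

import "fmt"

// uint64ToString is copied from the driver code above.
func uint64ToString(n uint64) []byte {
	var a [20]byte
	i := 20
	var q uint64
	for n >= 10 {
		i--
		q = n / 10
		a[i] = uint8(n-q*10) + '0' // '0' == 0x30
		n = q
	}
	i--
	a[i] = uint8(n) + '0'
	return a[i:]
}

// stringToInt treats the bytes as an unsigned decimal representation.
func stringToInt(b []byte) int {
	val := 0
	for i := range b {
		val *= 10
		val += int(b[i] - '0')
	}
	return val
}

func main() {
	b := uint64ToString(987654321)
	fmt.Println(string(b), stringToInt(b))
}
```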
// reserveBuffer checks cap(buf) and extends the buffer to len(buf) + appendSize.
// If cap(buf) is not enough, a new, larger buffer is allocated.
func reserveBuffer(buf []byte, appendSize int) []byte {
newSize := len(buf) + appendSize
if cap(buf) < newSize {
// Grow buffer exponentially
newBuf := make([]byte, len(buf)*2+appendSize)
copy(newBuf, buf)
buf = newBuf
}
return buf[:newSize]
}
// escapeBytesBackslash escapes []byte with backslashes (\)
// This escapes the contents of a string (provided as []byte) by adding backslashes before special
// characters, and turning others into specific escape sequences, such as
// turning newlines into \n and null bytes into \0.
func escapeBytesBackslash(buf, v []byte) []byte {
pos := len(buf)
buf = reserveBuffer(buf, len(v)*2)
for _, c := range v {
switch c {
case '\x00':
buf[pos] = '\\'
buf[pos+1] = '0'
pos += 2
case '\n':
buf[pos] = '\\'
buf[pos+1] = 'n'
pos += 2
case '\r':
buf[pos] = '\\'
buf[pos+1] = 'r'
pos += 2
case '\x1a':
buf[pos] = '\\'
buf[pos+1] = 'Z'
pos += 2
case '\'':
buf[pos] = '\\'
buf[pos+1] = '\''
pos += 2
case '"':
buf[pos] = '\\'
buf[pos+1] = '"'
pos += 2
case '\\':
buf[pos] = '\\'
buf[pos+1] = '\\'
pos += 2
default:
buf[pos] = c
pos++
}
}
return buf[:pos]
}
// escapeStringBackslash is similar to escapeBytesBackslash but for string.
func escapeStringBackslash(buf []byte, v string) []byte {
pos := len(buf)
buf = reserveBuffer(buf, len(v)*2)
for i := 0; i < len(v); i++ {
c := v[i]
switch c {
case '\x00':
buf[pos] = '\\'
buf[pos+1] = '0'
pos += 2
case '\n':
buf[pos] = '\\'
buf[pos+1] = 'n'
pos += 2
case '\r':
buf[pos] = '\\'
buf[pos+1] = 'r'
pos += 2
case '\x1a':
buf[pos] = '\\'
buf[pos+1] = 'Z'
pos += 2
//case '\'':
// buf[pos] = '\\'
// buf[pos+1] = '\''
// pos += 2
case '"':
buf[pos] = '\\'
buf[pos+1] = '"'
pos += 2
case '\\':
buf[pos] = '\\'
buf[pos+1] = '\\'
pos += 2
default:
buf[pos] = c
pos++
}
}
return buf[:pos]
}
// escapeBytesQuotes escapes apostrophes in []byte by doubling them up.
// This escapes the contents of a string by doubling up any apostrophes that
// it contains. This is used when the NO_BACKSLASH_ESCAPES SQL_MODE is in
// effect on the server.
func escapeBytesQuotes(buf, v []byte) []byte {
pos := len(buf)
buf = reserveBuffer(buf, len(v)*2)
for _, c := range v {
if c == '\'' {
buf[pos] = '\''
buf[pos+1] = '\''
pos += 2
} else {
buf[pos] = c
pos++
}
}
return buf[:pos]
}
// escapeStringQuotes is similar to escapeBytesQuotes but for string.
func escapeStringQuotes(buf []byte, v string) []byte {
pos := len(buf)
buf = reserveBuffer(buf, len(v)*2)
for i := 0; i < len(v); i++ {
c := v[i]
if c == '\'' {
buf[pos] = '\''
buf[pos+1] = '\''
pos += 2
} else {
buf[pos] = c
pos++
}
}
return buf[:pos]
}
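The quote-doubling escape above turns every apostrophe into two apostrophes. A minimal sketch using append instead of the driver's preallocated buffer (behavior is the same; only the allocation strategy differs):

```go
package main

import "fmt"

// escapeQuotes doubles up apostrophes, as escapeStringQuotes does above.
func escapeQuotes(v string) string {
	var buf []byte
	for i := 0; i < len(v); i++ {
		if v[i] == '\'' {
			buf = append(buf, '\'', '\'')
		} else {
			buf = append(buf, v[i])
		}
	}
	return string(buf)
}

func main() {
	fmt.Println(escapeQuotes("it's a 'test'"))
}
```

The driver's version reserves `len(v)*2` bytes up front because that is the worst case (every byte an apostrophe), avoiding reallocation inside the loop.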
/******************************************************************************
* Sync utils *
******************************************************************************/
// noCopy may be embedded into structs which must not be copied
// after the first use.
//
// See https://github.com/golang/go/issues/8005#issuecomment-190753527
// for details.
type noCopy struct{}
// Lock is a no-op used by -copylocks checker from `go vet`.
func (*noCopy) Lock() {}
// atomicBool is a wrapper around uint32 for usage as a boolean value with
// atomic access.
type atomicBool struct {
_noCopy noCopy
value uint32
}
// IsSet returns whether the current boolean value is true
func (ab *atomicBool) IsSet() bool {
return atomic.LoadUint32(&ab.value) > 0
}
// Set sets the value of the bool regardless of the previous value
func (ab *atomicBool) Set(value bool) {
if value {
atomic.StoreUint32(&ab.value, 1)
} else {
atomic.StoreUint32(&ab.value, 0)
}
}
// TrySet sets the value of the bool and returns whether the value changed
func (ab *atomicBool) TrySet(value bool) bool {
if value {
return atomic.SwapUint32(&ab.value, 1) == 0
}
return atomic.SwapUint32(&ab.value, 0) > 0
}
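TrySet reports whether the stored value actually changed, which makes it usable as a one-shot guard. A standalone copy (without the noCopy embed) showing the semantics:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// atomicBool mirrors the wrapper above, minus the noCopy marker.
type atomicBool struct{ value uint32 }

func (ab *atomicBool) IsSet() bool { return atomic.LoadUint32(&ab.value) > 0 }

func (ab *atomicBool) TrySet(value bool) bool {
	if value {
		// Swap returns the old value; it changed only if the old value was 0
		return atomic.SwapUint32(&ab.value, 1) == 0
	}
	return atomic.SwapUint32(&ab.value, 0) > 0
}

func main() {
	var ab atomicBool
	fmt.Println(ab.TrySet(true)) // first caller wins: value changed
	fmt.Println(ab.TrySet(true)) // already set: no change
	fmt.Println(ab.IsSet())
}
```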
// atomicError is a wrapper for atomically accessed error values
type atomicError struct {
_noCopy noCopy
value atomic.Value
}
// Set sets the error value regardless of the previous value.
// The value must not be nil
func (ae *atomicError) Set(value error) {
ae.value.Store(value)
}
// Value returns the current error value
func (ae *atomicError) Value() error {
if v := ae.value.Load(); v != nil {
// this will panic if the value doesn't implement the error interface
return v.(error)
}
return nil
}
func namedValueToValue(named []driver.NamedValue) ([]driver.Value, error) {
dargs := make([]driver.Value, len(named))
for n, param := range named {
if len(param.Name) > 0 {
// TODO: support the use of Named Parameters #561
return nil, errors.New("taosSql: driver does not support the use of Named Parameters")
}
dargs[n] = param.Value
}
return dargs, nil
}
/******************************************************************************
* Utils for C memory issues debugging *
******************************************************************************/
func SetAllocMode(mode int32, path string) {
cpath := C.CString(path)
defer C.free(unsafe.Pointer(cpath))
C.taosSetAllocMode(C.int(mode), cpath, false)
}
func DumpMemoryLeak() {
C.taosDumpMemoryLeak()
}
@@ -105,7 +105,7 @@ extern SSdbPeer *sdbPeer[];
 #endif
-void *sdbOpenTable(int maxRows, int32_t maxRowSize, char *name, uint8_t keyType, char *directory,
+void *sdbOpenTable(int maxRows, int32_t maxRowSize, char *name, char keyType, char *directory,
                    void *(*appTool)(char, void *, char *, int, int *));
 void *sdbGetRow(void *handle, void *key);
@@ -513,7 +513,7 @@ typedef struct {
 int16_t orderColId;
 int16_t numOfCols; // the number of columns will be load from vnode
-char intervalTimeUnit; // time interval type, for revisement of interval(1d)
+char slidingTimeUnit; // time interval type, for revisement of interval(1d)
 int64_t intervalTime; // time interval for aggregation, in million second
 int64_t slidingTime; // value for sliding window
@@ -136,7 +136,7 @@ tExtMemBuffer *createExtMemBuffer(int32_t inMemSize, int32_t elemSize, SColumnMo
  * @param pMemBuffer
  * @return
  */
-void *destoryExtMemBuffer(tExtMemBuffer *pMemBuffer);
+void *destroyExtMemBuffer(tExtMemBuffer *pMemBuffer);
 /**
  * @param pMemBuffer
......@@ -30,7 +30,7 @@ typedef struct SInterpolationInfo {
char * prevValues; // previous row of data
char * nextValues; // next row of data
int32_t numOfTags;
char ** pTags; // tags value for current interoplation
char ** pTags; // tags value for current interpolation
} SInterpolationInfo;
typedef struct SPoint {
......@@ -83,6 +83,8 @@ int32_t taosDoInterpoResult(SInterpolationInfo *pInterpoInfo, int16_t interpoTyp
int taosDoLinearInterpolation(int32_t type, SPoint *point1, SPoint *point2, SPoint *point);
int taosDoLinearInterpolationD(int32_t type, SPoint* point1, SPoint* point2, SPoint* point);
#ifdef __cplusplus
}
#endif
......
......@@ -103,8 +103,8 @@ extern "C" {
#define TSDB_MAX_ALLOWED_SQL_LEN (8*1024*1024U) // sql length should be less than 6mb
#define TSDB_MAX_BYTES_PER_ROW TSDB_MAX_COLUMNS * 16
#define TSDB_MAX_TAGS_LEN 512
#define TSDB_MAX_TAGS 32
#define TSDB_MAX_TAGS_LEN 2048
#define TSDB_MAX_TAGS 128
#define TSDB_AUTH_LEN 16
#define TSDB_KEY_LEN 16
......@@ -133,7 +133,7 @@ extern "C" {
#define TSDB_DEFAULT_PKT_SIZE 65480 //same as RPC_MAX_UDP_SIZE
#define TSDB_PAYLOAD_SIZE (TSDB_DEFAULT_PKT_SIZE - 100)
#define TSDB_DEFAULT_PAYLOAD_SIZE 1024 // default payload size
#define TSDB_DEFAULT_PAYLOAD_SIZE 4096 // default payload size
#define TSDB_EXTRA_PAYLOAD_SIZE 128 // extra bytes for auth
#define TSDB_SQLCMD_SIZE 1024
#define TSDB_MAX_VNODES 256
......
......@@ -167,6 +167,11 @@ typedef struct SExtTagsInfo {
struct SQLFunctionCtx **pTagCtxList;
} SExtTagsInfo;
typedef struct SBoundaryData {
TSKEY key;
double data;
} SBoundaryData;
// sql function runtime context
typedef struct SQLFunctionCtx {
int32_t startOffset;
......@@ -195,6 +200,8 @@ typedef struct SQLFunctionCtx {
SResultInfo *resultInfo;
SExtTagsInfo tagInfo;
SBoundaryData prev; // this value may be less than or equal to the start time of the time window
SBoundaryData next; // this value may be greater than or equal to the end time of the time window
} SQLFunctionCtx;
typedef struct SQLAggFuncElem {
......
......@@ -141,6 +141,7 @@ static void shellSourceFile(TAOS *con, char *fptr) {
if (wordexp(fptr, &full_path, 0) != 0) {
fprintf(stderr, "ERROR: illegal file name\n");
free(cmd);
return;
}
......@@ -166,6 +167,7 @@ static void shellSourceFile(TAOS *con, char *fptr) {
if (f == NULL) {
fprintf(stderr, "ERROR: failed to open file %s\n", fname);
wordfree(&full_path);
free(cmd);
return;
}
......
......@@ -325,6 +325,7 @@ int taosDumpOut(struct arguments *arguments);
int taosDumpIn(struct arguments *arguments);
void taosDumpCreateDbClause(SDbInfo *dbInfo, bool isDumpProperty, FILE *fp);
int taosDumpDb(SDbInfo *dbInfo, struct arguments *arguments, FILE *fp, TAOS *taosCon);
int32_t taosDumpStable(char *table, FILE *fp, TAOS* taosCon);
void taosDumpCreateTableClause(STableDef *tableDes, int numOfCols, FILE *fp);
void taosDumpCreateMTableClause(STableDef *tableDes, char *metric, int numOfCols, FILE *fp);
int32_t taosDumpTable(char *table, char *metric, struct arguments *arguments, FILE *fp, TAOS* taosCon);
......@@ -616,7 +617,7 @@ int taosDumpOut(struct arguments *arguments) {
int32_t count = 0;
STableRecordInfo tableRecordInfo;
char tmpBuf[TSDB_FILENAME_LEN+1] = {0};
char tmpBuf[TSDB_FILENAME_LEN+9] = {0};
if (arguments->outpath[0] != 0) {
sprintf(tmpBuf, "%s/dbs.sql", arguments->outpath);
} else {
......@@ -927,7 +928,7 @@ void* taosDumpOutWorkThreadFp(void *arg)
STableRecord tableRecord;
int fd;
char tmpFileName[TSDB_FILENAME_LEN + 1] = {0};
char tmpFileName[TSDB_FILENAME_LEN + 128] = {0};
sprintf(tmpFileName, ".tables.tmp.%d", pThread->threadIndex);
fd = open(tmpFileName, O_RDWR | O_CREAT, S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH);
if (fd == -1) {
......@@ -936,7 +937,7 @@ void* taosDumpOutWorkThreadFp(void *arg)
}
FILE *fp = NULL;
memset(tmpFileName, 0, TSDB_FILENAME_LEN);
memset(tmpFileName, 0, TSDB_FILENAME_LEN + 128);
if (tsArguments.outpath[0] != 0) {
sprintf(tmpFileName, "%s/%s.tables.%d.sql", tsArguments.outpath, pThread->dbName, pThread->threadIndex);
......
......@@ -75,6 +75,8 @@ bool httpParseTaosdAuthToken(HttpContext *pContext, char *token, int len) {
unsigned char *base64 = base64_decode(token, len, &outlen);
if (base64 == NULL || outlen == 0) {
httpError("context:%p, fd:%d, ip:%s, taosd token:%s parsed error", pContext, pContext->fd, pContext->ipstr, token);
if (base64)
free(base64);
return false;
}
if (outlen != (TSDB_USER_LEN + TSDB_PASSWORD_LEN)) {
......
......@@ -68,9 +68,7 @@ bool restProcessSqlRequest(HttpContext* pContext, int timestampFmt) {
}
/*
* for async test
* /
// for async test
/*
if (httpCheckUsedbSql(sql)) {
httpSendErrorResp(pContext, HTTP_NO_EXEC_USEDB);
......
......@@ -13,8 +13,8 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef TDENGINE_PLATFORM_LINUX_H
#define TDENGINE_PLATFORM_LINUX_H
#ifndef TDENGINE_PLATFORM_DARWIN_H
#define TDENGINE_PLATFORM_DARWIN_H
#ifdef __cplusplus
extern "C" {
......
......@@ -206,17 +206,20 @@ void *taosInitTcpClient(char *ip, uint16_t port, char *label, int num, void *fp,
if (pthread_mutex_init(&(pTcp->mutex), NULL) < 0) {
tError("%s failed to init TCP mutex, reason:%s", label, strerror(errno));
free(pTcp);
return NULL;
}
if (pthread_cond_init(&(pTcp->fdReady), NULL) != 0) {
tError("%s init TCP condition variable failed, reason:%s\n", label, strerror(errno));
free(pTcp);
return NULL;
}
pTcp->pollFd = epoll_create(10); // size does not matter
if (pTcp->pollFd < 0) {
tError("%s failed to create TCP epoll", label);
free(pTcp);
return NULL;
}
......@@ -226,6 +229,7 @@ void *taosInitTcpClient(char *ip, uint16_t port, char *label, int num, void *fp,
pthread_attr_setdetachstate(&thattr, PTHREAD_CREATE_JOINABLE);
if (pthread_create(&(pTcp->thread), &thattr, taosReadTcpData, (void *)(pTcp)) != 0) {
tError("%s failed to create TCP read data thread, reason:%s", label, strerror(errno));
free(pTcp);
return NULL;
}
......
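The fixes above add a `free(pTcp)` to every error path taken after the allocation, so the object no longer leaks when mutex/condition/epoll/thread setup fails. A common idiom that keeps such paths consistent (and also undoes the earlier `pthread_mutex_init`) is a goto-cleanup ladder. This is a minimal sketch under illustrative names — `FakeTcp` is not the real TDengine structure:

```c
#include <stdlib.h>
#include <pthread.h>

/* Illustrative stand-in for the TCP client object being initialized. */
typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t  fdReady;
} FakeTcp;

/* Every failure after calloc() must release what was acquired so far;
 * the labels unwind in reverse order of acquisition. */
static FakeTcp *initTcp(void) {
    FakeTcp *p = calloc(1, sizeof(*p));
    if (p == NULL) return NULL;
    if (pthread_mutex_init(&p->mutex, NULL) != 0) goto fail_alloc;
    if (pthread_cond_init(&p->fdReady, NULL) != 0) goto fail_mutex;
    return p;

fail_mutex:
    pthread_mutex_destroy(&p->mutex);
fail_alloc:
    free(p);
    return NULL;
}
```

Compared with repeating `free(...); return NULL;` at each check, the ladder makes it harder to miss a resource when a new init step is added.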
......@@ -389,6 +389,7 @@ void *taosInitTcpServer(char *ip, uint16_t port, char *label, int numOfThreads,
pServerObj->pThreadObj = (SThreadObj *)malloc(sizeof(SThreadObj) * (size_t)numOfThreads);
if (pServerObj->pThreadObj == NULL) {
tError("TCP:%s no enough memory", label);
free(pServerObj);
return NULL;
}
memset(pServerObj->pThreadObj, 0, sizeof(SThreadObj) * (size_t)numOfThreads);
......@@ -401,17 +402,23 @@ void *taosInitTcpServer(char *ip, uint16_t port, char *label, int numOfThreads,
if (pthread_mutex_init(&(pThreadObj->threadMutex), NULL) < 0) {
tError("%s failed to init TCP process data mutex, reason:%s", label, strerror(errno));
free(pServerObj->pThreadObj);
free(pServerObj);
return NULL;
}
if (pthread_cond_init(&(pThreadObj->fdReady), NULL) != 0) {
tError("%s init TCP condition variable failed, reason:%s\n", label, strerror(errno));
free(pServerObj->pThreadObj);
free(pServerObj);
return NULL;
}
pThreadObj->pollFd = epoll_create(10); // size does not matter
if (pThreadObj->pollFd < 0) {
tError("%s failed to create TCP epoll", label);
free(pServerObj->pThreadObj);
free(pServerObj);
return NULL;
}
......@@ -419,6 +426,8 @@ void *taosInitTcpServer(char *ip, uint16_t port, char *label, int numOfThreads,
pthread_attr_setdetachstate(&thattr, PTHREAD_CREATE_JOINABLE);
if (pthread_create(&(pThreadObj->thread), &thattr, (void *)taosProcessTcpData, (void *)(pThreadObj)) != 0) {
tError("%s failed to create TCP process data thread, reason:%s", label, strerror(errno));
free(pServerObj->pThreadObj);
free(pServerObj);
return NULL;
}
......@@ -430,6 +439,8 @@ void *taosInitTcpServer(char *ip, uint16_t port, char *label, int numOfThreads,
pthread_attr_setdetachstate(&thattr, PTHREAD_CREATE_JOINABLE);
if (pthread_create(&(pServerObj->thread), &thattr, (void *)taosAcceptTcpConnection, (void *)(pServerObj)) != 0) {
tError("%s failed to create TCP accept thread, reason:%s", label, strerror(errno));
free(pServerObj->pThreadObj);
free(pServerObj);
return NULL;
}
......
......@@ -127,7 +127,7 @@ typedef struct {
} SMnodeStatus;
typedef struct {
uint8_t dbId;
char dbId;
char type;
uint64_t version;
short dataLen;
......
......@@ -289,7 +289,7 @@ sdb_exit1:
return -1;
}
void *sdbOpenTable(int maxRows, int32_t maxRowSize, char *name, uint8_t keyType, char *directory,
void *sdbOpenTable(int maxRows, int32_t maxRowSize, char *name, char keyType, char *directory,
void *(*appTool)(char, void *, char *, int, int *)) {
SSdbTable *pTable = (SSdbTable *)malloc(sizeof(SSdbTable));
if (pTable == NULL) return NULL;
......@@ -310,7 +310,7 @@ void *sdbOpenTable(int maxRows, int32_t maxRowSize, char *name, uint8_t keyType,
pTable->appTool = appTool;
sprintf(pTable->fn, "%s/%s.db", directory, pTable->name);
if (sdbInitIndexFp[keyType] != NULL) pTable->iHandle = (*sdbInitIndexFp[keyType])(maxRows, sizeof(SRowMeta));
if (sdbInitIndexFp[(int)keyType] != NULL) pTable->iHandle = (*sdbInitIndexFp[(int)keyType])(maxRows, sizeof(SRowMeta));
pthread_mutex_init(&pTable->mutex, NULL);
......@@ -812,10 +812,11 @@ void sdbResetTable(SSdbTable *pTable) {
SRowHead *rowHead = NULL;
void * pMetaRow = NULL;
int64_t oldId = pTable->id;
int oldNumOfRows = pTable->numOfRows;
//TODO: check
//int oldNumOfRows = pTable->numOfRows;
if (sdbOpenSdbFile(pTable) < 0) return;
pTable->numOfRows = oldNumOfRows;
//pTable->numOfRows = oldNumOfRows;
total_size = sizeof(SRowHead) + pTable->maxRowSize + sizeof(TSCKSUM);
rowHead = (SRowHead *)malloc(total_size);
......
......@@ -261,7 +261,7 @@ typedef struct SQuery {
TSKEY ekey;
int64_t intervalTime;
int64_t slidingTime; // sliding time for sliding window query
char intervalTimeUnit; // interval data type, used for daytime revise
char slidingTimeUnit; // interval data type, used for daytime revise
int8_t precision;
int16_t numOfOutputCols;
int16_t interpoType;
......
......@@ -85,12 +85,6 @@ typedef enum {
QUERY_NO_DATA_TO_CHECK = 0x8u,
} vnodeQueryStatus;
typedef struct SPointInterpoSupporter {
int32_t numOfCols;
char** pPrevPoint;
char** pNextPoint;
} SPointInterpoSupporter;
typedef struct SBlockInfo {
TSKEY keyFirst;
TSKEY keyLast;
......@@ -285,6 +279,7 @@ void clearClosedTimeWindow(SQueryRuntimeEnv* pRuntimeEnv);
int32_t numOfClosedTimeWindow(SWindowResInfo* pWindowResInfo);
void closeTimeWindow(SWindowResInfo* pWindowResInfo, int32_t slot);
void closeAllTimeWindow(SWindowResInfo* pWindowResInfo);
SWindowResult* getWindowRes(SWindowResInfo* pWindowResInfo, size_t index);
#ifdef __cplusplus
}
......
......@@ -141,6 +141,12 @@ typedef struct SWindowResInfo {
int64_t threshold; // threshold for return completed results.
} SWindowResInfo;
typedef struct SPointInterpoSupporter {
int32_t numOfCols;
char** pPrevPoint;
char** pNextPoint;
} SPointInterpoSupporter;
typedef struct SQueryRuntimeEnv {
SPositionInfo startPos; /* the start position, used for secondary/third iteration */
SPositionInfo endPos; /* the last access position in query, served as the start pos of reversed order query */
......@@ -172,6 +178,10 @@ typedef struct SQueryRuntimeEnv {
bool stableQuery; // is super table query or not
SQueryDiskbasedResultBuf* pResultBuf; // query result buffer based on blocked-wised disk file
bool hasTimeWindow;
char** lastRowInBlock;
bool interpoSearch;
/*
* Temporarily hold the in-memory cache block info during scan cache blocks
* Here we do not use the cache block info from pMeterObj, simple because it may change anytime
......
......@@ -567,6 +567,7 @@ int mgmtCreateMeter(SDbObj *pDb, SCreateTableMsg *pCreate) {
pMetric = mgmtGetMeter(pTagData);
if (pMetric == NULL) {
mError("table:%s, corresponding super table does not exist", pCreate->meterId);
free(pMeter);
return TSDB_CODE_INVALID_TABLE;
}
......
......@@ -1332,36 +1332,37 @@ _rsp:
pRsp = (STaosRsp *)pMsg;
pRsp->code = code;
pMsg += sizeof(STaosRsp);
pConnectRsp = (SConnectRsp *)pRsp->more;
sprintf(pConnectRsp->acctId, "%x", pConn->pAcct->acctId);
strcpy(pConnectRsp->version, version);
pConnectRsp->writeAuth = pConn->writeAuth;
pConnectRsp->superAuth = pConn->superAuth;
pMsg += sizeof(SConnectRsp);
int size;
if (pSdbPublicIpList != NULL && pSdbIpList != NULL) {
size = pSdbPublicIpList->numOfIps * 4 + sizeof(SIpList);
if (pConn->usePublicIp) {
memcpy(pMsg, pSdbPublicIpList, size);
if (code == 0) {
sprintf(pConnectRsp->acctId, "%x", pConn->pAcct->acctId);
strcpy(pConnectRsp->version, version);
pConnectRsp->writeAuth = pConn->writeAuth;
pConnectRsp->superAuth = pConn->superAuth;
pMsg += sizeof(SConnectRsp);
int size;
if (pSdbPublicIpList != NULL && pSdbIpList != NULL) {
size = pSdbPublicIpList->numOfIps * 4 + sizeof(SIpList);
if (pConn->usePublicIp) {
memcpy(pMsg, pSdbPublicIpList, size);
} else {
memcpy(pMsg, pSdbIpList, size);
}
} else {
memcpy(pMsg, pSdbIpList, size);
SIpList tmpIpList;
tmpIpList.numOfIps = 0;
size = tmpIpList.numOfIps * 4 + sizeof(SIpList);
memcpy(pMsg, &tmpIpList, size);
}
} else {
SIpList tmpIpList;
tmpIpList.numOfIps = 0;
size = tmpIpList.numOfIps * 4 + sizeof(SIpList);
memcpy(pMsg, &tmpIpList, size);
}
pMsg += size;
// set the time resolution: millisecond or microsecond
*((uint32_t *)pMsg) = tsTimePrecision;
pMsg += sizeof(uint32_t);
pMsg += size;
if (code != 0) {
// set the time resolution: millisecond or microsecond
*((uint32_t *)pMsg) = tsTimePrecision;
pMsg += sizeof(uint32_t);
} else {
pConnectRsp->writeAuth = 0;
pConnectRsp->superAuth = 0;
pConn->pAcct = NULL;
......
......@@ -711,7 +711,6 @@ static int32_t mgmtFilterMeterByIndex(STabObj* pMetric, tQueryResultset* pRes, c
// failed to build expression, no result, return immediately
if (pExpr == NULL) {
mError("metric:%s, no result returned, error in super table query expression:%s", pMetric->meterId, pCond);
tfree(pCond);
return TSDB_CODE_OPS_NOT_SUPPORT;
} else { // query according to the binary expression
......
......@@ -184,6 +184,12 @@ int mgmtGetUserMeta(SMeterMeta *pMeta, SShowObj *pShow, SConnObj *pConn) {
pSchema[cols].bytes = htons(pShow->bytes[cols]);
cols++;
pShow->bytes[cols] = TSDB_USER_LEN;
pSchema[cols].type = TSDB_DATA_TYPE_BINARY;
strcpy(pSchema[cols].name, "account");
pSchema[cols].bytes = htons(pShow->bytes[cols]);
cols++;
pMeta->numOfColumns = htons(cols);
pShow->numOfColumns = cols;
......@@ -230,6 +236,10 @@ int mgmtRetrieveUsers(SShowObj *pShow, char *data, int rows, SConnObj *pConn) {
*(int64_t *)pWrite = pUser->createdTime;
cols++;
pWrite = data + pShow->offset[cols] * rows + pShow->bytes[cols] * numOfRows;
strcpy(pWrite, pUser->acct);
cols++;
numOfRows++;
}
pShow->numOfReads += numOfRows;
......
......@@ -405,6 +405,12 @@ void *mgmtVgroupActionDelete(void *row, char *str, int size, int *ssize) {
void *mgmtVgroupActionUpdate(void *row, char *str, int size, int *ssize) {
mgmtVgroupActionReset(row, str, size, ssize);
SVgObj *pVgroup = (SVgObj *)row;
if (pVgroup->idPool == NULL) {
mgmtVgroupActionInsert(row, str, size, ssize);
return NULL;
}
int oldTables = taosIdPoolMaxSize(pVgroup->idPool);
SDbObj *pDb = mgmtGetDb(pVgroup->dbName);
......
......@@ -413,7 +413,7 @@ void vnodeRemoveFile(int vnode, int fileId) {
vnodeGetDnameFromLname(headName, dataName, lastName, dHeadName, dDataName, dLastName);
int fd = open(headName, O_RDWR | O_CREAT, S_IRWXU | S_IRWXG | S_IRWXO);
if (fd > 0) {
if (fd >= 0) {
vnodeGetHeadFileHeaderInfo(fd, &headInfo);
atomic_fetch_add_64(&(pVnode->vnodeStatistic.totalStorage), -headInfo.totalStorage);
close(fd);
......
......@@ -80,6 +80,17 @@ static int32_t getGroupResultId(int32_t groupIndex) {
return base + (groupIndex * 10000);
}
static bool needsBoundaryTS(SQuery *pQuery) {
for(int32_t i = 0; i < pQuery->numOfOutputCols; ++i) {
int32_t functionId = pQuery->pSelectExpr[i].pBase.functionId;
if (functionId >= TSDB_FUNC_RATE && functionId <= TSDB_FUNC_AVG_IRATE) {
return true;
}
}
return false;
}
static FORCE_INLINE bool isIntervalQuery(SQuery *pQuery) { return pQuery->intervalTime > 0; }
// check the offset value integrity
......@@ -579,9 +590,9 @@ bool doRevisedResultsByLimit(SQInfo *pQInfo) {
return false;
}
static void setExecParams(SQuery *pQuery, SQLFunctionCtx *pCtx, int64_t StartQueryTimestamp, void *inputData,
static void setExecParams(SQueryRuntimeEnv *pRuntimeEnv, SQLFunctionCtx *pCtx, int64_t StartQueryTimestamp, void *inputData,
char *primaryColumnData, int32_t size, int32_t functionId, SField *pField, bool hasNull,
int32_t blockStatus, void *param, int32_t scanFlag);
void *param);
void createQueryResultInfo(SQuery *pQuery, SWindowResult *pResultRow, bool isSTableQuery, SPosInfo *posInfo);
......@@ -1088,7 +1099,7 @@ bool isCacheBlockValid(SQuery *pQuery, SCacheBlock *pBlock, SMeterObj *pMeterObj
}
SCacheInfo* pCacheInfo = (SCacheInfo*) pMeterObj->pCache;
if (pCacheInfo->commitPoint == pMeterObj->pointsPerBlock && pQuery->slot == pCacheInfo->currentSlot) {
if (pCacheInfo->commitPoint == pMeterObj->pointsPerBlock && pQuery->slot == pCacheInfo->commitSlot) {
dWarn("QInfo:%p vid:%d sid:%d id:%s, cache block is committed, ignore. slot:%d first:%d, last:%d, numOfBlocks:%d",
GET_QINFO_ADDR(pQuery), pMeterObj->vnode, pMeterObj->sid, pMeterObj->meterId, slot, pQuery->firstSlot,
pQuery->currentSlot, pQuery->numOfBlocks);
......@@ -1518,14 +1529,6 @@ static STimeWindow getActiveTimeWindow(SWindowResInfo *pWindowResInfo, int64_t t
w.ekey = w.skey + pQuery->intervalTime - 1;
}
/*
* query border check, skey should not be bounded by the query time range, since the value skey will
* be used as the time window index value. So we only change ekey of time window accordingly.
*/
if (w.ekey > pQuery->ekey && QUERY_IS_ASC_QUERY(pQuery)) {
w.ekey = pQuery->ekey;
}
assert(ts >= w.skey && ts <= w.ekey && w.skey != 0);
return w;
......@@ -1651,8 +1654,12 @@ static void doCheckQueryCompleted(SQueryRuntimeEnv *pRuntimeEnv, TSKEY lastKey,
continue;
}
if ((pResult->window.ekey <= lastKey && QUERY_IS_ASC_QUERY(pQuery)) ||
(pResult->window.skey >= lastKey && !QUERY_IS_ASC_QUERY(pQuery))) {
/*
* when the ekey equals the lastKey of the current block, do NOT close it, since interpolation
* may still be involved.
*/
if ((pResult->window.ekey < lastKey && QUERY_IS_ASC_QUERY(pQuery)) ||
(pResult->window.skey > lastKey && !QUERY_IS_ASC_QUERY(pQuery))) {
closeTimeWindow(pWindowResInfo, i);
} else {
skey = pResult->window.skey;
......@@ -1742,7 +1749,8 @@ static void doBlockwiseApplyFunctions(SQueryRuntimeEnv *pRuntimeEnv, SWindowStat
pCtx[k].size = forwardStep;
pCtx[k].startOffset = (QUERY_IS_ASC_QUERY(pQuery)) ? startPos : startPos - (forwardStep - 1);
if ((aAggs[functionId].nStatus & TSDB_FUNCSTATE_SELECTIVITY) != 0) {
if ((aAggs[functionId].nStatus & TSDB_FUNCSTATE_SELECTIVITY) != 0
|| ((functionId >= TSDB_FUNC_RATE) && (functionId <= TSDB_FUNC_AVG_IRATE))) {
pCtx[k].ptsList = (TSKEY *)((char*)pRuntimeEnv->primaryColBuffer->data + pCtx[k].startOffset * TSDB_KEYSIZE);
}
......@@ -1829,6 +1837,31 @@ static int32_t getNextQualifiedWindow(SQueryRuntimeEnv *pRuntimeEnv, STimeWindow
}
}
static void handleInBlockEkeyInterpolation(SQueryRuntimeEnv* pRuntimeEnv, int32_t endPos,
const TSKEY* primaryKeyCol, STimeWindow* win, SQLFunctionCtx* pCtx) {
// this query time window ended in the current data block
SQuery* pQuery = pRuntimeEnv->pQuery;
TSKEY lastKey = primaryKeyCol[endPos];
TSKEY e = win->skey + pQuery->intervalTime;
TSKEY next = primaryKeyCol[endPos + 1];
// the next key is beyond the query time range
if ((next > pQuery->ekey && QUERY_IS_ASC_QUERY(pQuery)) || (next > pQuery->skey && !QUERY_IS_ASC_QUERY(pQuery))) {
pCtx->next.key = -1;
return;
}
pCtx->next.key = e;
char *d = pCtx->aInputElemBuf + pCtx->inputBytes * endPos;
SPoint point1 = (SPoint){.key = lastKey, .val = d};
SPoint point2 = (SPoint){.key = next, .val = (d + pCtx->inputBytes)};
SPoint point = (SPoint){.key = pCtx->next.key, .val = &pCtx->next.data};
taosDoLinearInterpolationD(pCtx->inputType, &point1, &point2, &point);
}
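The interpolation helpers added in this patch all funnel into `taosDoLinearInterpolationD` with three `SPoint` values: two known points and a target key between them. Below is a minimal double-only sketch of that computation; the real function dispatches on the column type, and the struct here is renamed to avoid clashing with the declarations above:

```c
#include <stdint.h>

typedef int64_t TSKEY;
typedef struct { TSKEY key; void *val; } SPointD;  /* mirrors the SPoint shape used above */

/* Given points p1 and p2 and a target key p->key between them, fill in
 * p->val by linear interpolation (double values only in this sketch). */
static void linearInterpD(const SPointD *p1, const SPointD *p2, SPointD *p) {
    double v1 = *(const double *)p1->val;
    double v2 = *(const double *)p2->val;
    double r  = (double)(p->key - p1->key) / (double)(p2->key - p1->key);
    *(double *)p->val = v1 + (v2 - v1) * r;
}
```

For example, interpolating key 15 between (10, 1.0) and (20, 3.0) yields 2.0, which is what the boundary-key handlers above store into `pCtx->next.data` / `pCtx->prev.data`.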
static TSKEY reviseWindowEkey(SQuery *pQuery, STimeWindow *pWindow) {
TSKEY ekey = -1;
if (QUERY_IS_ASC_QUERY(pQuery)) {
......@@ -1846,6 +1879,257 @@ static TSKEY reviseWindowEkey(SQuery *pQuery, STimeWindow *pWindow) {
return ekey;
}
static void interpolateEndKeyValue(SQueryRuntimeEnv *pRuntimeEnv, SBlockInfo* pBlockInfo, STimeWindow* win,
int32_t endPos, SQLFunctionCtx* pCtx, int32_t index) {
SQuery* pQuery = pRuntimeEnv->pQuery;
TSKEY *primaryKeyCol = (TSKEY *)pRuntimeEnv->primaryColBuffer->data;
// if the current time window extends beyond the whole query time range, do not employ interpolation
if ((win->ekey >= pQuery->ekey && QUERY_IS_ASC_QUERY(pQuery)) ||
(win->ekey >= pQuery->skey && !QUERY_IS_ASC_QUERY(pQuery))) {
pCtx->next.key = -1;
return;
}
if (!QUERY_IS_ASC_QUERY(pQuery) && win->skey >= pBlockInfo->keyLast) {
pCtx->next.key = -1;
return;
}
if (QUERY_IS_ASC_QUERY(pQuery)) {
/*
* the time window closed before the current data block; use interpolation to generate
* the final part of the result. endPos == -1 means this time window ends before the current data block.
*/
if (win->ekey < pBlockInfo->keyFirst) {
assert(endPos == -1);
TSKEY prev = *(int64_t*) pRuntimeEnv->lastRowInBlock[0];
TSKEY next = pBlockInfo->keyFirst;
pCtx->next.key = win->skey + pQuery->intervalTime;
char *d = pCtx->aInputElemBuf;
SPoint point1 = (SPoint){.key = prev, .val = pRuntimeEnv->lastRowInBlock[index]};
SPoint point2 = (SPoint){.key = next, .val = d};
SPoint point = (SPoint){.key = pCtx->next.key, .val = &pCtx->next.data};
taosDoLinearInterpolationD(pCtx->inputType, &point1, &point2, &point);
} else if (win->ekey < pBlockInfo->keyLast) {
handleInBlockEkeyInterpolation(pRuntimeEnv, endPos, primaryKeyCol, win, pCtx);
} else {
//do nothing now, the interpolation will be handled before processing the next data block
assert(win->ekey >= pBlockInfo->keyLast);
pCtx->next.key = -1;
}
} else { // desc order query
// the time window closed before the current data block; use interpolation to generate the final part of the result.
if (win->ekey >= pBlockInfo->keyLast) {
TSKEY prev = pBlockInfo->keyLast;
TSKEY next = *(TSKEY*) pRuntimeEnv->lastRowInBlock[0];
pCtx->next.key = win->skey + pQuery->intervalTime;
char *d = pCtx->aInputElemBuf + (pBlockInfo->size - 1) * pCtx->inputBytes;
SPoint point1 = (SPoint){.key = prev, .val = d};
SPoint point2 = (SPoint){.key = next, .val = pRuntimeEnv->lastRowInBlock[index]};
SPoint point = (SPoint){.key = pCtx->next.key, .val = &pCtx->next.data};
taosDoLinearInterpolationD(pCtx->inputType, &point1, &point2, &point);
} else if (win->ekey < pBlockInfo->keyLast) {
handleInBlockEkeyInterpolation(pRuntimeEnv, endPos, primaryKeyCol, win, pCtx);
} else {
pCtx->next.key = -1;
}
}
}
static void handleInBlockSkeyInterpolation (SQueryRuntimeEnv* pRuntimeEnv, int32_t startPos,
const TSKEY* primaryKeyCol, STimeWindow* win, SQLFunctionCtx* pCtx) {
assert(startPos > 0);
SQuery* pQuery = pRuntimeEnv->pQuery;
TSKEY prev = primaryKeyCol[startPos - 1];
TSKEY next = primaryKeyCol[startPos];
if (!QUERY_IS_ASC_QUERY(pQuery) && prev < pQuery->ekey) {
pCtx->prev.key = -1;
return;
}
pCtx->prev.key = win->skey;
char *d = pCtx->aInputElemBuf + pCtx->inputBytes * (startPos - 1);
SPoint point1 = (SPoint){.key = prev, .val = d};
SPoint point2 = (SPoint){.key = next, .val = (d + pCtx->inputBytes)};
SPoint point = (SPoint){.key = pCtx->prev.key, .val = &pCtx->prev.data};
taosDoLinearInterpolationD(pCtx->inputType, &point1, &point2, &point);
}
static void interpolateStartKeyValue(SQueryRuntimeEnv *pRuntimeEnv, SBlockInfo* pBlockInfo, SWindowResInfo* pWindowResInfo,
STimeWindow* win, int32_t startPos, SQLFunctionCtx* pCtx, int32_t index) {
SQuery* pQuery = pRuntimeEnv->pQuery;
TSKEY *primaryKeyCol = (TSKEY *)pRuntimeEnv->primaryColBuffer->data;
TSKEY skey = primaryKeyCol[startPos];
/*
* no need for start time interpolation when:
* 1. the current window is the first window in either ascending or descending order output
* 2. the time window starts exactly at a timestamp that has data
*/
if (skey == win->skey || win->skey < pWindowResInfo->startTime ||
(win->skey <= pQuery->skey && QUERY_IS_ASC_QUERY(pQuery)) ||
(win->skey <= pQuery->ekey && !QUERY_IS_ASC_QUERY(pQuery))) {
pCtx->prev.key = -1;
return;
}
if (QUERY_IS_ASC_QUERY(pQuery)) {
// the queried time window and the time window of the data block must intersect
// assert(win->ekey >= pBlockInfo->keyFirst && win->skey <= pBlockInfo->keyLast);
/*
* this win should not be the first time window, since it starts at a timestamp
* earlier than the skey of the current data block
*/
if (win->skey < pBlockInfo->keyFirst) {
TSKEY prev = *(TSKEY*) pRuntimeEnv->lastRowInBlock[0];
TSKEY next = pBlockInfo->keyFirst;
pCtx->prev.key = win->skey;
char *d = pCtx->aInputElemBuf;
SPoint point1 = (SPoint){.key = prev, .val = pRuntimeEnv->lastRowInBlock[index]};
SPoint point2 = (SPoint){.key = next, .val = d};
SPoint point = (SPoint){.key = pCtx->prev.key, .val = &pCtx->prev.data};
taosDoLinearInterpolationD(pCtx->inputType, &point1, &point2, &point);
} else {
handleInBlockSkeyInterpolation(pRuntimeEnv, startPos, primaryKeyCol, win, pCtx);
}
} else { // desc order
if (win->skey > pBlockInfo->keyLast) {
// this pBlockInfo is located before the current time window
TSKEY prev = pBlockInfo->keyLast;
TSKEY next = *(TSKEY*) pRuntimeEnv->lastRowInBlock[0];
pCtx->prev.key = win->skey;
char *d = pCtx->aInputElemBuf + (pBlockInfo->size - 1) * pCtx->inputBytes;
SPoint point1 = (SPoint){.key = prev, .val = d};
SPoint point2 = (SPoint){.key = next, .val = pRuntimeEnv->lastRowInBlock[index]};
SPoint point = (SPoint){.key = pCtx->prev.key, .val = &pCtx->prev.data};
taosDoLinearInterpolationD(pCtx->inputType, &point1, &point2, &point);
} else {
// the queried time window and the time window of the data block must intersect
assert(win->ekey >= pBlockInfo->keyFirst && win->skey <= pBlockInfo->keyLast);
if (win->skey >= pBlockInfo->keyFirst) {
// the pBlockInfo intersects the query time window
handleInBlockSkeyInterpolation(pRuntimeEnv, startPos, primaryKeyCol, win, pCtx);
} else {
assert(win->skey < pBlockInfo->keyFirst && win->ekey >= pBlockInfo->keyFirst);
pCtx->prev.key = -1;
}
}
}
}
static void doSetInterpolationDataForTimeWindow(SQueryRuntimeEnv* pRuntimeEnv, SWindowResInfo *pWindowResInfo,
SBlockInfo* pBlockInfo, STimeWindow* win, int32_t startPos, int32_t forwardStep) {
SQuery* pQuery = pRuntimeEnv->pQuery;
int32_t step = GET_FORWARD_DIRECTION_FACTOR(pQuery->order.order);
if (!pRuntimeEnv->interpoSearch) {
return;
}
int32_t s = startPos;
int32_t e = forwardStep * step + startPos - step;
if (!QUERY_IS_ASC_QUERY(pQuery)) {
SWAP(s, e, int32_t);
}
// interpolate for skey value
for(int32_t i = 0; i < pQuery->numOfOutputCols; ++i) {
if ((pQuery->pSelectExpr[i].pBase.functionId < TSDB_FUNC_RATE)
|| (pQuery->pSelectExpr[i].pBase.functionId > TSDB_FUNC_AVG_IRATE)) {
continue;
}
SColIndexEx *pCol = &pQuery->pSelectExpr[i].pBase.colInfo;
interpolateStartKeyValue(pRuntimeEnv, pBlockInfo, pWindowResInfo, win, s, &pRuntimeEnv->pCtx[i], pCol->colIdxInBuf);
}
// interpolate for ekey value
for(int32_t i = 0; i < pQuery->numOfOutputCols; ++i) {
if ((pQuery->pSelectExpr[i].pBase.functionId < TSDB_FUNC_RATE)
|| (pQuery->pSelectExpr[i].pBase.functionId > TSDB_FUNC_AVG_IRATE)) {
continue;
}
SColIndexEx *pCol = &pQuery->pSelectExpr[i].pBase.colInfo;
interpolateEndKeyValue(pRuntimeEnv, pBlockInfo, win, e, &pRuntimeEnv->pCtx[i], pCol->colIdxInBuf);
}
}
static void doInterpolatePrevTimeWindow(SQueryRuntimeEnv* pRuntimeEnv, SWindowResInfo* pWindowResInfo, SBlockInfo* pBlockInfo,
TSKEY ts, int32_t offset, STimeWindow* win) {
// get the current, not-yet-closed time window
SQuery* pQuery = pRuntimeEnv->pQuery;
int32_t slot = pWindowResInfo->curIndex;
if (slot == -1 || !pRuntimeEnv->interpoSearch) {
return;
}
while (slot < pWindowResInfo->size) {
STimeWindow w = getWindowResult(pWindowResInfo, slot)->window;
if (w.skey == win->skey) {
assert(w.ekey == win->ekey);
break;
}
// do not check for the closed time window
SWindowResult* pWindowRes = getWindowRes(pWindowResInfo, slot);
if (pWindowRes->status.closed) {
slot += 1;
continue;
}
// if the current active window is located before the current data block, interpolate its result and close it
assert((w.skey < win->skey && w.ekey < ts && QUERY_IS_ASC_QUERY(pQuery)) ||
(w.skey > win->skey && w.skey > ts && !QUERY_IS_ASC_QUERY(pQuery)));
int32_t forwardStep = 0;
doSetInterpolationDataForTimeWindow(pRuntimeEnv, pWindowResInfo, pBlockInfo, &w, offset, forwardStep);
// set the correct output buffer for the interpolated result. TODO: handle error
if (setWindowOutputBufByKey(pRuntimeEnv, pWindowResInfo, pRuntimeEnv->pMeterObj->sid, &w) != TSDB_CODE_SUCCESS) {
continue;
}
SWindowStatus *pStatus = getTimeWindowResStatus(pWindowResInfo, slot);
doBlockwiseApplyFunctions(pRuntimeEnv, pStatus, &w, pQuery->pos, forwardStep);
closeTimeWindow(pWindowResInfo, slot);
// try next time window
slot += 1;
}
}
/**
*
* @param pRuntimeEnv
......@@ -1872,7 +2156,6 @@ static int32_t blockwiseApplyAllFunctions(SQueryRuntimeEnv *pRuntimeEnv, int32_t
int32_t functionId = pQuery->pSelectExpr[k].pBase.functionId;
SField dummyField = {0};
bool hasNull = hasNullVal(pQuery, k, pBlockInfo, pFields, isDiskFileBlock);
char *dataBlock = getDataBlocks(pRuntimeEnv, &sasArray[k], k, forwardStep);
......@@ -1890,29 +2173,33 @@ static int32_t blockwiseApplyAllFunctions(SQueryRuntimeEnv *pRuntimeEnv, int32_t
}
}
setExecParams(pQuery, &pCtx[k], pQuery->skey, dataBlock, (char *)primaryKeyCol, forwardStep, functionId, tpField,
hasNull, pRuntimeEnv->blockStatus, &sasArray[k], pRuntimeEnv->scanFlag);
setExecParams(pRuntimeEnv, &pCtx[k], pQuery->skey, dataBlock, (char *)primaryKeyCol, forwardStep, functionId, tpField,
hasNull, &sasArray[k]);
}
int32_t step = GET_FORWARD_DIRECTION_FACTOR(pQuery->order.order);
if (isIntervalQuery(pQuery)) {
int32_t offset = GET_COL_DATA_POS(pQuery, 0, step);
TSKEY ts = primaryKeyCol[offset];
STimeWindow win = getActiveTimeWindow(pWindowResInfo, ts, pQuery);
doInterpolatePrevTimeWindow(pRuntimeEnv, pWindowResInfo, pBlockInfo, ts, offset, &win);
if (setWindowOutputBufByKey(pRuntimeEnv, pWindowResInfo, pRuntimeEnv->pMeterObj->sid, &win) != TSDB_CODE_SUCCESS) {
return 0;
}
TSKEY ekey = reviseWindowEkey(pQuery, &win);
forwardStep = getNumOfRowsInTimeWindow(pQuery, pBlockInfo, primaryKeyCol, pQuery->pos, ekey, searchFn, true);
doSetInterpolationDataForTimeWindow(pRuntimeEnv, pWindowResInfo, pBlockInfo, &win, offset, forwardStep);
SWindowStatus *pStatus = getTimeWindowResStatus(pWindowResInfo, curTimeWindow(pWindowResInfo));
doBlockwiseApplyFunctions(pRuntimeEnv, pStatus, &win, pQuery->pos, forwardStep);
int32_t index = pWindowResInfo->curIndex;
int32_t index = pWindowResInfo->curIndex;
STimeWindow nextWin = win;
while (1) {
int32_t startPos =
getNextQualifiedWindow(pRuntimeEnv, &nextWin, pWindowResInfo, pBlockInfo, primaryKeyCol, searchFn);
......@@ -1928,7 +2215,9 @@ static int32_t blockwiseApplyAllFunctions(SQueryRuntimeEnv *pRuntimeEnv, int32_t
ekey = reviseWindowEkey(pQuery, &nextWin);
forwardStep = getNumOfRowsInTimeWindow(pQuery, pBlockInfo, primaryKeyCol, startPos, ekey, searchFn, true);
doSetInterpolationDataForTimeWindow(pRuntimeEnv, pWindowResInfo, pBlockInfo, &nextWin, startPos, forwardStep);
pStatus = getTimeWindowResStatus(pWindowResInfo, curTimeWindow(pWindowResInfo));
doBlockwiseApplyFunctions(pRuntimeEnv, pStatus, &nextWin, startPos, forwardStep);
}
......@@ -1942,6 +2231,9 @@ static int32_t blockwiseApplyAllFunctions(SQueryRuntimeEnv *pRuntimeEnv, int32_t
*/
for (int32_t k = 0; k < pQuery->numOfOutputCols; ++k) {
int32_t functionId = pQuery->pSelectExpr[k].pBase.functionId;
pCtx[k].next.key = -1;
pCtx[k].prev.key = -1;
if (functionNeedToExecute(pRuntimeEnv, &pCtx[k], functionId)) {
aAggs[functionId].xFunction(&pCtx[k]);
}
......@@ -1956,6 +2248,14 @@ static int32_t blockwiseApplyAllFunctions(SQueryRuntimeEnv *pRuntimeEnv, int32_t
if (!isIntervalQuery(pQuery)) {
num = getNumOfResult(pRuntimeEnv) - prevNumOfRes;
}
// save the last row in current data block
for(int32_t i = 0; i < pQuery->numOfCols; ++i) {
SColumnInfo* pColInfo = &pQuery->colList[i].data;
int32_t s = (QUERY_IS_ASC_QUERY(pQuery))? pColInfo->bytes * (pBlockInfo->size - 1) : 0;
memcpy(pRuntimeEnv->lastRowInBlock[i], pRuntimeEnv->colDataBuffer[i]->data + s, pColInfo->bytes);
}
tfree(sasArray);
return (int32_t)num;
......@@ -2166,6 +2466,31 @@ void closeAllTimeWindow(SWindowResInfo *pWindowResInfo) {
}
}
SWindowResult* getWindowRes(SWindowResInfo* pWindowResInfo, size_t index) {
assert(index < pWindowResInfo->size);
return &pWindowResInfo->pResult[index];
}
/*
* remove the result windows after the FIRST time window that spreads beyond the
* last qualified timestamp, in case of a sliding query whose sliding time is not equal to the interval time
*/
void removeRedundantWindow(SWindowResInfo *pWindowResInfo, TSKEY lastKey, int32_t order) {
assert(pWindowResInfo->size >= 0 && pWindowResInfo->capacity >= pWindowResInfo->size);
int32_t i = 0;
while(i < pWindowResInfo->size &&
((pWindowResInfo->pResult[i].window.ekey < lastKey && order == QUERY_ASC_FORWARD_STEP) ||
(pWindowResInfo->pResult[i].window.skey > lastKey && order == QUERY_DESC_FORWARD_STEP))) {
++i;
}
// assert(i < pWindowResInfo->size);
if (i < pWindowResInfo->size) {
pWindowResInfo->size = (i + 1);
}
}
static int32_t setGroupResultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, char *pData, int16_t type, int16_t bytes) {
if (isNull(pData, type)) { // ignore the null value
return -1;
......@@ -2311,10 +2636,9 @@ static int32_t rowwiseApplyAllFunctions(SQueryRuntimeEnv *pRuntimeEnv, int32_t *
bool hasNull = hasNullVal(pQuery, k, pBlockInfo, pFields, isDiskFileBlock);
char *dataBlock = getDataBlocks(pRuntimeEnv, &sasArray[k], k, *forwardStep);
TSKEY ts = pQuery->skey; // QUERY_IS_ASC_QUERY(pQuery) ? pRuntimeEnv->intervalWindow.skey :
// pRuntimeEnv->intervalWindow.ekey;
setExecParams(pQuery, &pCtx[k], ts, dataBlock, (char *)primaryKeyCol, (*forwardStep), functionId, pFields, hasNull,
pRuntimeEnv->blockStatus, &sasArray[k], pRuntimeEnv->scanFlag);
TSKEY ts = pQuery->skey;
setExecParams(pRuntimeEnv, &pCtx[k], ts, dataBlock, (char *)primaryKeyCol, (*forwardStep), functionId, pFields, hasNull,
&sasArray[k]);
}
// set the input column data
......@@ -2340,7 +2664,9 @@ static int32_t rowwiseApplyAllFunctions(SQueryRuntimeEnv *pRuntimeEnv, int32_t *
int32_t j = 0;
TSKEY lastKey = -1;
int32_t lastIndex = -1;
//bool firstAccessedPoint = true;
for (j = 0; j < (*forwardStep); ++j) {
int32_t offset = GET_COL_DATA_POS(pQuery, j, step);
......@@ -2362,8 +2688,31 @@ static int32_t rowwiseApplyAllFunctions(SQueryRuntimeEnv *pRuntimeEnv, int32_t *
// interval window query
if (isIntervalQuery(pQuery)) {
// decide the time window according to the primary timestamp
int64_t ts = primaryKeyCol[offset];
TSKEY ts = primaryKeyCol[offset];
STimeWindow win = getActiveTimeWindow(pWindowResInfo, ts, pQuery);
// if (firstAccessedPoint) {
// doInterpolatePrevTimeWindow(pRuntimeEnv, pWindowResInfo, pBlockInfo, ts, offset, &win);
// firstAccessedPoint = false;
// } else {
// int32_t index = pWindowResInfo->curIndex;
// STimeWindow w = getWindowResult(pWindowResInfo, index)->window;
//
// if (w.skey == win.skey) { // do nothing
// assert(w.ekey == win.ekey);
// } else {
// assert((w.skey < win.skey && w.ekey < ts && QUERY_IS_ASC_QUERY(pQuery)) ||
// (w.skey > win.skey && w.skey > ts && !QUERY_IS_ASC_QUERY(pQuery)));
//
// // set the endkey interpolation for the previous
// for(int32_t i = 0; i < pQuery->numOfOutputCols; ++i) {
// SColIndexEx *pCol = &pQuery->pSelectExpr[i].pBase.colInfo;
//
// interpolateEndKeyValue(pRuntimeEnv, pBlockInfo, win, e, &pRuntimeEnv->pCtx[i], pCol->colIdxInBuf);
// }
//
// }
// }
int32_t ret = setWindowOutputBufByKey(pRuntimeEnv, pWindowResInfo, pRuntimeEnv->pMeterObj->sid, &win);
if (ret != TSDB_CODE_SUCCESS) { // null data, too many state code
......@@ -2377,6 +2726,8 @@ static int32_t rowwiseApplyAllFunctions(SQueryRuntimeEnv *pRuntimeEnv, int32_t *
doRowwiseApplyFunctions(pRuntimeEnv, pStatus, &win, offset);
lastKey = ts;
lastIndex = j;
STimeWindow nextWin = win;
int32_t index = pWindowResInfo->curIndex;
int32_t sid = pRuntimeEnv->pMeterObj->sid;
......@@ -2421,6 +2772,9 @@ static int32_t rowwiseApplyAllFunctions(SQueryRuntimeEnv *pRuntimeEnv, int32_t *
for (int32_t k = 0; k < pQuery->numOfOutputCols; ++k) {
int32_t functionId = pQuery->pSelectExpr[k].pBase.functionId;
pCtx[k].next.key = -1;
pCtx[k].prev.key = -1;
if (functionNeedToExecute(pRuntimeEnv, &pCtx[k], functionId)) {
aAggs[functionId].xFunctionF(&pCtx[k], offset);
}
......@@ -2434,7 +2788,7 @@ static int32_t rowwiseApplyAllFunctions(SQueryRuntimeEnv *pRuntimeEnv, int32_t *
break;
}
}
/*
* pointsOffset is the maximum available space in the result buffer; update the actual forward step for queries
* that require checking the buffer during the loop
......@@ -2445,6 +2799,15 @@ static int32_t rowwiseApplyAllFunctions(SQueryRuntimeEnv *pRuntimeEnv, int32_t *
break;
}
}
// save the last accessed row of current data block for interpolation
int32_t index = GET_COL_DATA_POS(pQuery, lastIndex, step);
for(int32_t i = 0; i < pQuery->numOfCols; ++i) {
SColumnInfo* pColInfo = &pQuery->colList[i].data;
int32_t s = pColInfo->bytes * index;
memcpy(pRuntimeEnv->lastRowInBlock[i], pRuntimeEnv->colDataBuffer[i]->data + s, pColInfo->bytes);
}
free(sasArray);
......@@ -2657,11 +3020,24 @@ int32_t getNextDataFileCompInfo(SQueryRuntimeEnv *pRuntimeEnv, SMeterObj *pMeter
return fileIndex;
}
void setExecParams(SQuery *pQuery, SQLFunctionCtx *pCtx, int64_t startQueryTimestamp, void *inputData,
static void getOneRowFromDataBlock(SQueryRuntimeEnv *pRuntimeEnv, char **dst, int32_t pos) {
SQuery *pQuery = pRuntimeEnv->pQuery;
for (int32_t i = 0; i < pQuery->numOfCols; ++i) {
int32_t bytes = pQuery->colList[i].data.bytes;
memcpy(dst[i], pRuntimeEnv->colDataBuffer[i]->data + pos * bytes, bytes);
}
}
void setExecParams(SQueryRuntimeEnv *pRuntimeEnv, SQLFunctionCtx *pCtx, int64_t startQueryTimestamp, void *inputData,
char *primaryColumnData, int32_t size, int32_t functionId, SField *pField, bool hasNull,
int32_t blockStatus, void *param, int32_t scanFlag) {
void *param) {
SQuery* pQuery = pRuntimeEnv->pQuery;
int32_t startOffset = (QUERY_IS_ASC_QUERY(pQuery)) ? pQuery->pos : pQuery->pos - (size - 1);
int32_t scanFlag = pRuntimeEnv->scanFlag;
int32_t blockStatus = pRuntimeEnv->blockStatus;
pCtx->nStartQueryTimestamp = startQueryTimestamp;
pCtx->scanFlag = scanFlag;
......@@ -2913,6 +3289,12 @@ static void teardownQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv) {
tfree(pRuntimeEnv->pInterpoBuf);
}
for (int32_t i = 0; i < pQuery->numOfCols; ++i) {
tfree(pRuntimeEnv->lastRowInBlock[i]);
}
tfree(pRuntimeEnv->lastRowInBlock);
destroyDiskbasedResultBuf(pRuntimeEnv->pResultBuf);
pRuntimeEnv->pTSBuf = tsBufDestory(pRuntimeEnv->pTSBuf);
}
......@@ -2924,6 +3306,7 @@ static int64_t getOldestKey(int32_t numOfFiles, int64_t fileId, SVnodeCfg *pCfg)
}
bool isQueryKilled(SQuery *pQuery) {
return false;
SQInfo *pQInfo = (SQInfo *)GET_QINFO_ADDR(pQuery);
/*
......@@ -3324,7 +3707,7 @@ void vnodeCheckIfDataExists(SQueryRuntimeEnv *pRuntimeEnv, SMeterObj *pMeterObj,
void doGetAlignedIntervalQueryRangeImpl(SQuery *pQuery, int64_t pKey, int64_t keyFirst, int64_t keyLast,
int64_t *actualSkey, int64_t *actualEkey, int64_t *skey, int64_t *ekey) {
assert(pKey >= keyFirst && pKey <= keyLast);
*skey = taosGetIntervalStartTimestamp(pKey, pQuery->intervalTime, pQuery->intervalTimeUnit, pQuery->precision);
*skey = taosGetIntervalStartTimestamp(pKey, pQuery->slidingTime, pQuery->slidingTimeUnit, pQuery->precision);
if (keyFirst > (INT64_MAX - pQuery->intervalTime)) {
/*
......@@ -3355,13 +3738,62 @@ void doGetAlignedIntervalQueryRangeImpl(SQuery *pQuery, int64_t pKey, int64_t ke
}
}
static void getOneRowFromDataBlock(SQueryRuntimeEnv *pRuntimeEnv, char **dst, int32_t pos) {
SQuery *pQuery = pRuntimeEnv->pQuery;
for (int32_t i = 0; i < pQuery->numOfCols; ++i) {
int32_t bytes = pQuery->colList[i].data.bytes;
memcpy(dst[i], pRuntimeEnv->colDataBuffer[i]->data + pos * bytes, bytes);
static bool loadPrevDataPoint(SQueryRuntimeEnv* pRuntimeEnv, char** result) {
SQuery* pQuery = pRuntimeEnv->pQuery;
SMeterObj* pMeterObj = pRuntimeEnv->pMeterObj;
/* the qualified point is not the first point in data block */
if (pQuery->pos > 0) {
int32_t prevPos = pQuery->pos - 1;
/* save the point that is directly before the specified point */
getOneRowFromDataBlock(pRuntimeEnv, result, prevPos);
} else {
__block_search_fn_t searchFn = vnodeSearchKeyFunc[pMeterObj->searchAlgorithm];
savePointPosition(&pRuntimeEnv->startPos, pQuery->fileId, pQuery->slot, pQuery->pos);
// backwards movement would not set pQuery->pos correctly; we need to set it manually later.
moveToNextBlock(pRuntimeEnv, QUERY_DESC_FORWARD_STEP, searchFn, true);
/*
* no previous data exists.
* reset the status and load the data block that contains the qualified point
*/
if (Q_STATUS_EQUAL(pQuery->over, QUERY_NO_DATA_TO_CHECK)) {
dTrace("QInfo:%p no previous data block, start fileId:%d, slot:%d, pos:%d, qrange:%" PRId64 "-%" PRId64
", out of range",
GET_QINFO_ADDR(pQuery), pRuntimeEnv->startPos.fileId, pRuntimeEnv->startPos.slot,
pRuntimeEnv->startPos.pos, pQuery->skey, pQuery->ekey);
// no result, return immediately
setQueryStatus(pQuery, QUERY_COMPLETED);
return false;
} else { // prev has been located
if (pQuery->fileId >= 0) {
pQuery->pos = pQuery->pBlock[pQuery->slot].numOfPoints - 1;
getOneRowFromDataBlock(pRuntimeEnv, result, pQuery->pos);
qTrace("QInfo:%p get prev data point, fileId:%d, slot:%d, pos:%d, pQuery->pos:%d", GET_QINFO_ADDR(pQuery),
pQuery->fileId, pQuery->slot, pQuery->pos, pQuery->pos);
// restore to the start position
loadRequiredBlockIntoMem(pRuntimeEnv, &pRuntimeEnv->startPos);
} else {
// moveToNextBlock makes sure there is an available cache block, if one exists
assert(vnodeIsDatablockLoaded(pRuntimeEnv, pMeterObj, -1, true) == DISK_BLOCK_NO_NEED_TO_LOAD);
SCacheBlock* pBlock = &pRuntimeEnv->cacheBlock;
pQuery->pos = pBlock->numOfPoints - 1;
getOneRowFromDataBlock(pRuntimeEnv, result, pQuery->pos);
qTrace("QInfo:%p get prev data point, fileId:%d, slot:%d, pos:%d, pQuery->pos:%d", GET_QINFO_ADDR(pQuery),
pQuery->fileId, pQuery->slot, pBlock->numOfPoints - 1, pQuery->pos);
}
}
}
return true;
}
static bool getNeighborPoints(STableQuerySupportObj *pSupporter, SMeterObj *pMeterObj,
......@@ -3381,10 +3813,8 @@ static bool getNeighborPoints(STableQuerySupportObj *pSupporter, SMeterObj *pMet
} else {
assert(QUERY_IS_ASC_QUERY(pQuery));
}
assert(pPointInterpSupporter != NULL && pQuery->skey == pQuery->ekey);
SCacheBlock *pBlock = NULL;
qTrace("QInfo:%p get next data point, fileId:%d, slot:%d, pos:%d", GET_QINFO_ADDR(pQuery), pQuery->fileId,
pQuery->slot, pQuery->pos);
......@@ -3413,55 +3843,9 @@ static bool getNeighborPoints(STableQuerySupportObj *pSupporter, SMeterObj *pMet
}
return true;
}
/* the qualified point is not the first point in data block */
if (pQuery->pos > 0) {
int32_t prevPos = pQuery->pos - 1;
/* save the point that is directly after the specified point */
getOneRowFromDataBlock(pRuntimeEnv, pPointInterpSupporter->pPrevPoint, prevPos);
} else {
__block_search_fn_t searchFn = vnodeSearchKeyFunc[pMeterObj->searchAlgorithm];
savePointPosition(&pRuntimeEnv->startPos, pQuery->fileId, pQuery->slot, pQuery->pos);
// backwards movement would not set the pQuery->pos correct. We need to set it manually later.
moveToNextBlock(pRuntimeEnv, QUERY_DESC_FORWARD_STEP, searchFn, true);
/*
* no previous data exists.
* reset the status and load the data block that contains the qualified point
*/
if (Q_STATUS_EQUAL(pQuery->over, QUERY_NO_DATA_TO_CHECK)) {
dTrace("QInfo:%p no previous data block, start fileId:%d, slot:%d, pos:%d, qrange:%" PRId64 "-%" PRId64
", out of range",
GET_QINFO_ADDR(pQuery), pRuntimeEnv->startPos.fileId, pRuntimeEnv->startPos.slot,
pRuntimeEnv->startPos.pos, pQuery->skey, pQuery->ekey);
// no result, return immediately
setQueryStatus(pQuery, QUERY_COMPLETED);
return false;
} else { // prev has been located
if (pQuery->fileId >= 0) {
pQuery->pos = pQuery->pBlock[pQuery->slot].numOfPoints - 1;
getOneRowFromDataBlock(pRuntimeEnv, pPointInterpSupporter->pPrevPoint, pQuery->pos);
qTrace("QInfo:%p get prev data point, fileId:%d, slot:%d, pos:%d, pQuery->pos:%d", GET_QINFO_ADDR(pQuery),
pQuery->fileId, pQuery->slot, pQuery->pos, pQuery->pos);
} else {
// moveToNextBlock make sure there is a available cache block, if exists
assert(vnodeIsDatablockLoaded(pRuntimeEnv, pMeterObj, -1, true) == DISK_BLOCK_NO_NEED_TO_LOAD);
pBlock = &pRuntimeEnv->cacheBlock;
pQuery->pos = pBlock->numOfPoints - 1;
getOneRowFromDataBlock(pRuntimeEnv, pPointInterpSupporter->pPrevPoint, pQuery->pos);
qTrace("QInfo:%p get prev data point, fileId:%d, slot:%d, pos:%d, pQuery->pos:%d", GET_QINFO_ADDR(pQuery),
pQuery->fileId, pQuery->slot, pBlock->numOfPoints - 1, pQuery->pos);
}
}
}
loadPrevDataPoint(pRuntimeEnv, pPointInterpSupporter->pPrevPoint);
pQuery->skey = *(TSKEY *)pPointInterpSupporter->pPrevPoint[0];
pQuery->ekey = *(TSKEY *)pPointInterpSupporter->pNextPoint[0];
pQuery->lastKey = pQuery->skey;
......@@ -3635,8 +4019,17 @@ bool normalizedFirstQueryRange(bool dataInDisk, bool dataInCache, STableQuerySup
if (key != NULL) {
*key = nextKey;
}
return doGetQueryPos(nextKey, pSupporter, pPointInterpSupporter);
// needs the data before the begin timestamp of query time window
if (nextKey != pQuery->skey) {
if (!pRuntimeEnv->hasTimeWindow) {
pQuery->skey = nextKey; // change the query skey
pQuery->lastKey = pQuery->skey;
}
return true;
} else {
return doGetQueryPos(nextKey, pSupporter, pPointInterpSupporter);
}
}
// set no data in file
......@@ -4223,63 +4616,6 @@ static bool forwardQueryStartPosIfNeeded(SQInfo *pQInfo, STableQuerySupportObj *
}
}
// if (win.ekey <= blockInfo.keyLast) {
// pQuery->limit.offset -= 1;
//
// if (win.ekey == blockInfo.keyLast) {
// moveToNextBlock(pRuntimeEnv, step, searchFn, false);
// if (Q_STATUS_EQUAL(pQuery->over, QUERY_NO_DATA_TO_CHECK)) {
// break;
// }
//
// // next block does not included in time range, abort query
// blockInfo = getBlockInfo(pRuntimeEnv);
// if ((blockInfo.keyFirst > pQuery->ekey && QUERY_IS_ASC_QUERY(pQuery)) ||
// (blockInfo.keyLast < pQuery->ekey && !QUERY_IS_ASC_QUERY(pQuery))) {
// setQueryStatus(pQuery, QUERY_COMPLETED);
// break;
// }
//
// // set the window that start from the next data block
// win = getActiveTimeWindow(pWindowResInfo, blockInfo.keyFirst, pQuery);
// } else {
// // the time window is closed in current data block, load disk file block into memory to
// // check the next time window
// if (IS_DISK_DATA_BLOCK(pQuery)) {
// getTimestampInDiskBlock(pRuntimeEnv, 0);
// }
//
// STimeWindow nextWin = win;
// int32_t startPos =
// getNextQualifiedWindow(pRuntimeEnv, &nextWin, pWindowResInfo, &blockInfo, primaryKey, searchFn);
//
// if (startPos < 0) { // failed to find the qualified time window
// assert((nextWin.skey > pQuery->ekey && QUERY_IS_ASC_QUERY(pQuery)) ||
// (nextWin.ekey < pQuery->ekey && !QUERY_IS_ASC_QUERY(pQuery)));
//
// setQueryStatus(pQuery, QUERY_COMPLETED);
// break;
// } else { // set the abort info
// pQuery->pos = startPos;
// pQuery->lastKey = primaryKey[startPos];
// win = nextWin;
// }
// }
//
// continue;
// }
//
// moveToNextBlock(pRuntimeEnv, step, searchFn, false);
// if (Q_STATUS_EQUAL(pQuery->over, QUERY_NO_DATA_TO_CHECK)) {
// break;
// }
//
// blockInfo = getBlockInfo(pRuntimeEnv);
// if ((blockInfo.keyFirst > pQuery->ekey && QUERY_IS_ASC_QUERY(pQuery)) ||
// (blockInfo.keyLast < pQuery->ekey && !QUERY_IS_ASC_QUERY(pQuery))) {
// setQueryStatus(pQuery, QUERY_COMPLETED);
// break;
// }
}
if (Q_STATUS_EQUAL(pQuery->over, QUERY_NO_DATA_TO_CHECK | QUERY_COMPLETED) || pQuery->limit.offset > 0) {
......@@ -4468,7 +4804,7 @@ void pointInterpSupporterSetData(SQInfo *pQInfo, SPointInterpoSupporter *pPointI
}
void pointInterpSupporterInit(SQuery *pQuery, SPointInterpoSupporter *pInterpoSupport) {
if (isPointInterpoQuery(pQuery)) {
if (isPointInterpoQuery(pQuery) || needsBoundaryTS(pQuery)) {
pInterpoSupport->pPrevPoint = malloc(pQuery->numOfCols * POINTER_BYTES);
pInterpoSupport->pNextPoint = malloc(pQuery->numOfCols * POINTER_BYTES);
......@@ -4547,12 +4883,15 @@ static int32_t allocateRuntimeEnvBuf(SQueryRuntimeEnv *pRuntimeEnv, SMeterObj *p
SQuery *pQuery = pRuntimeEnv->pQuery;
// To make sure the start position of each buffer is aligned to 4 bytes on 32-bit ARM systems.
pRuntimeEnv->lastRowInBlock = calloc(pQuery->numOfCols, POINTER_BYTES);
for (int32_t i = 0; i < pQuery->numOfCols; ++i) {
int32_t bytes = pQuery->colList[i].data.bytes;
pRuntimeEnv->colDataBuffer[i] = calloc(1, sizeof(SData) + EXTRA_BYTES + pMeterObj->pointsPerFileBlock * bytes);
if (pRuntimeEnv->colDataBuffer[i] == NULL) {
goto _error_clean;
}
pRuntimeEnv->lastRowInBlock[i] = calloc(1, bytes);
}
// record the maximum column width among columns of this meter/metric
......@@ -4666,7 +5005,9 @@ int32_t vnodeQueryTablePrepare(SQInfo *pQInfo, SMeterObj *pMeterObj, STableQuery
SQueryRuntimeEnv *pRuntimeEnv = &pSupporter->runtimeEnv;
pRuntimeEnv->pQuery = pQuery;
pRuntimeEnv->pMeterObj = pMeterObj;
pRuntimeEnv->hasTimeWindow = !notHasQueryTimeRange(pQuery);
pRuntimeEnv->interpoSearch = needsBoundaryTS(pQuery);
if ((code = allocateRuntimeEnvBuf(pRuntimeEnv, pMeterObj)) != TSDB_CODE_SUCCESS) {
return code;
}
......@@ -4721,9 +5062,9 @@ int32_t vnodeQueryTablePrepare(SQInfo *pQInfo, SMeterObj *pMeterObj, STableQuery
/* query on single table */
pSupporter->numOfMeters = 1;
setQueryStatus(pQuery, QUERY_NOT_COMPLETED);
SPointInterpoSupporter interpInfo = {0};
pointInterpSupporterInit(pQuery, &interpInfo);
SPointInterpoSupporter interpoSupporter = {0};
pointInterpSupporterInit(pQuery, &interpoSupporter);
/*
* in case of last_row query without query range, we set the query timestamp to
......@@ -4731,11 +5072,11 @@ int32_t vnodeQueryTablePrepare(SQInfo *pQInfo, SMeterObj *pMeterObj, STableQuery
*/
if (isFirstLastRowQuery(pQuery) && notHasQueryTimeRange(pQuery)) {
if (!normalizeUnBoundLastRowQuery(pSupporter, &interpInfo)) {
if (!normalizeUnBoundLastRowQuery(pSupporter, &interpoSupporter)) {
sem_post(&pQInfo->dataReady);
pQInfo->over = 1;
pointInterpSupporterDestroy(&interpInfo);
pointInterpSupporterDestroy(&interpoSupporter);
return TSDB_CODE_SUCCESS;
}
} else { // find the skey and ekey in case of sliding query
......@@ -4749,23 +5090,34 @@ int32_t vnodeQueryTablePrepare(SQInfo *pQInfo, SMeterObj *pMeterObj, STableQuery
}
int64_t skey = 0;
if ((normalizedFirstQueryRange(dataInDisk, dataInCache, pSupporter, &interpInfo, &skey) == false) ||
if ((normalizedFirstQueryRange(dataInDisk, dataInCache, pSupporter, &interpoSupporter, &skey) == false) ||
(isFixedOutputQuery(pQuery) && !isTopBottomQuery(pQuery) && (pQuery->limit.offset > 0)) ||
(isTopBottomQuery(pQuery) && pQuery->limit.offset >= pQuery->pSelectExpr[1].pBase.arg[0].argValue.i64)) {
sem_post(&pQInfo->dataReady);
pQInfo->over = 1;
pointInterpSupporterDestroy(&interpInfo);
pointInterpSupporterDestroy(&interpoSupporter);
return TSDB_CODE_SUCCESS;
}
pQuery->skey = skey;
if (!QUERY_IS_ASC_QUERY(pQuery)) {
win.skey = minKey;
win.ekey = skey;
pQuery->ekey = minKey;
} else {
win.skey = skey;
win.ekey = pQuery->ekey;
}
// empty result
if (QUERY_IS_ASC_QUERY(pQuery) && win.skey > win.ekey) {
sem_post(&pQInfo->dataReady);
pQInfo->over = 1;
pointInterpSupporterDestroy(&interpoSupporter);
return TSDB_CODE_SUCCESS;
}
TSKEY skey1, ekey1;
TSKEY windowSKey = 0, windowEKey = 0;
......@@ -4784,13 +5136,13 @@ int32_t vnodeQueryTablePrepare(SQInfo *pQInfo, SMeterObj *pMeterObj, STableQuery
pQuery->over = QUERY_NOT_COMPLETED;
} else {
int64_t ekey = 0;
if ((normalizedFirstQueryRange(dataInDisk, dataInCache, pSupporter, &interpInfo, &ekey) == false) ||
if ((normalizedFirstQueryRange(dataInDisk, dataInCache, pSupporter, &interpoSupporter, &ekey) == false) ||
(isFixedOutputQuery(pQuery) && !isTopBottomQuery(pQuery) && (pQuery->limit.offset > 0)) ||
(isTopBottomQuery(pQuery) && pQuery->limit.offset >= pQuery->pSelectExpr[1].pBase.arg[0].argValue.i64)) {
sem_post(&pQInfo->dataReady);
pQInfo->over = 1;
pointInterpSupporterDestroy(&interpInfo);
pointInterpSupporterDestroy(&interpoSupporter);
return TSDB_CODE_SUCCESS;
}
}
......@@ -4800,14 +5152,14 @@ int32_t vnodeQueryTablePrepare(SQInfo *pQInfo, SMeterObj *pMeterObj, STableQuery
* here we set the value for before and after the specified time into the
* parameter for interpolation query
*/
pointInterpSupporterSetData(pQInfo, &interpInfo);
pointInterpSupporterDestroy(&interpInfo);
pointInterpSupporterSetData(pQInfo, &interpoSupporter);
pointInterpSupporterDestroy(&interpoSupporter);
if (!forwardQueryStartPosIfNeeded(pQInfo, pSupporter, dataInDisk, dataInCache)) {
return TSDB_CODE_SUCCESS;
}
int64_t rs = taosGetIntervalStartTimestamp(pSupporter->rawSKey, pQuery->intervalTime, pQuery->intervalTimeUnit,
int64_t rs = taosGetIntervalStartTimestamp(pSupporter->rawSKey, pQuery->intervalTime, pQuery->slidingTimeUnit,
pQuery->precision);
taosInitInterpoInfo(&pRuntimeEnv->interpoInfo, pQuery->order.order, rs, 0, 0);
allocMemForInterpo(pSupporter, pQuery, pMeterObj);
......@@ -4888,6 +5240,7 @@ int32_t vnodeSTableQueryPrepare(SQInfo *pQInfo, SQuery *pQuery, void *param) {
pSupporter->rawEKey = pQuery->ekey;
pSupporter->rawSKey = pQuery->skey;
pQuery->lastKey = pQuery->skey;
pRuntimeEnv->interpoSearch = needsBoundaryTS(pQuery);
// create runtime environment
SColumnModel *pTagSchemaInfo = pSupporter->pSidSet->pColumnModel;
......@@ -4943,7 +5296,7 @@ int32_t vnodeSTableQueryPrepare(SQInfo *pQInfo, SQuery *pQuery, void *param) {
}
TSKEY revisedStime = taosGetIntervalStartTimestamp(pSupporter->rawSKey, pQuery->intervalTime,
pQuery->intervalTimeUnit, pQuery->precision);
pQuery->slidingTimeUnit, pQuery->precision);
taosInitInterpoInfo(&pRuntimeEnv->interpoInfo, pQuery->order.order, revisedStime, 0, 0);
pRuntimeEnv->stableQuery = true;
......@@ -5389,6 +5742,8 @@ static int64_t doScanAllDataBlocks(SQueryRuntimeEnv *pRuntimeEnv) {
if (isIntervalQuery(pQuery) && IS_MASTER_SCAN(pRuntimeEnv)) {
if (Q_STATUS_EQUAL(pQuery->over, QUERY_COMPLETED | QUERY_NO_DATA_TO_CHECK)) {
closeAllTimeWindow(&pRuntimeEnv->windowResInfo);
removeRedundantWindow(&pRuntimeEnv->windowResInfo, pQuery->lastKey - step, step);
pRuntimeEnv->windowResInfo.curIndex = pRuntimeEnv->windowResInfo.size - 1;
} else if (Q_STATUS_EQUAL(pQuery->over, QUERY_RESBUF_FULL)) { // check if window needs to be closed
SBlockInfo blockInfo = getBlockInfo(pRuntimeEnv);
......@@ -5448,7 +5803,6 @@ void vnodeSetTagValueInParam(tSidSet *pSidSet, SQueryRuntimeEnv *pRuntimeEnv, SM
}
// set the join tag for first column
SSqlFuncExprMsg *pFuncMsg = &pQuery->pSelectExpr[0].pBase;
if (pFuncMsg->functionId == TSDB_FUNC_TS && pFuncMsg->colInfo.colIdx == PRIMARYKEY_TIMESTAMP_COL_INDEX &&
pRuntimeEnv->pTSBuf != NULL) {
assert(pFuncMsg->numOfParams == 1);
......@@ -5474,9 +5828,6 @@ static void doMerge(SQueryRuntimeEnv *pRuntimeEnv, int64_t timestamp, SWindowRes
pCtx[i].hasNull = true;
pCtx[i].nStartQueryTimestamp = timestamp;
pCtx[i].aInputElemBuf = getPosInResultPage(pRuntimeEnv, i, pWindowRes);
// pCtx[i].aInputElemBuf = ((char *)inputSrc->data) +
// ((int32_t)pRuntimeEnv->offset[i] * pRuntimeEnv->numOfRowsPerPage) +
// pCtx[i].outputBytes * inputIdx;
// in case of tag column, the tag information should be extracted from input buffer
if (functionId == TSDB_FUNC_TAG_DUMMY || functionId == TSDB_FUNC_TAG) {
......@@ -5837,6 +6188,7 @@ int32_t doMergeMetersResultsToGroupRes(STableQuerySupportObj *pSupporter, SQuery
} else { // copy data to disk buffer
if (buffer[0]->numOfElems == pQuery->pointsToRead) {
if (flushFromResultBuf(pSupporter, pQuery, pRuntimeEnv) != TSDB_CODE_SUCCESS) {
tfree(pTree);
return -1;
}
......@@ -7171,7 +7523,6 @@ void setIntervalQueryRange(SMeterQueryInfo *pMeterQueryInfo, STableQuerySupportO
doGetAlignedIntervalQueryRangeImpl(pQuery, win.skey, win.skey, win.ekey, &skey1, &ekey1, &windowSKey, &windowEKey);
pWindowResInfo->startTime = windowSKey; // windowSKey may be 0 in case of 1970 timestamp
// assert(pWindowResInfo->startTime > 0);
if (pWindowResInfo->prevSKey == 0) {
if (QUERY_IS_ASC_QUERY(pQuery)) {
......@@ -7476,6 +7827,8 @@ void stableApplyFunctionsOnBlock(STableQuerySupportObj *pSupporter, SMeterDataIn
updateWindowResNumOfRes(pRuntimeEnv, pMeterDataInfo);
updatelastkey(pQuery, pMeterQueryInfo);
doCheckQueryCompleted(pRuntimeEnv, pMeterQueryInfo->lastKey, pWindowResInfo);
}
// we need to split the result into different packages.
......@@ -7535,7 +7888,7 @@ bool vnodeHasRemainResults(void *handle) {
// query has completed
if (Q_STATUS_EQUAL(pQuery->over, QUERY_COMPLETED | QUERY_NO_DATA_TO_CHECK)) {
TSKEY ekey = taosGetRevisedEndKey(pSupporter->rawEKey, pQuery->order.order, pQuery->intervalTime,
pQuery->intervalTimeUnit, pQuery->precision);
pQuery->slidingTimeUnit, pQuery->precision);
int32_t numOfTotal = taosGetNumOfResultWithInterpo(pInterpoInfo, (TSKEY *)pRuntimeEnv->pInterpoBuf[0]->data,
remain, pQuery->intervalTime, ekey, pQuery->pointsToRead);
return numOfTotal > 0;
......@@ -7646,7 +7999,7 @@ int32_t vnodeQueryResultInterpolate(SQInfo *pQInfo, tFilePage **pDst, tFilePage
numOfRows = taosNumOfRemainPoints(&pRuntimeEnv->interpoInfo);
TSKEY ekey = taosGetRevisedEndKey(pSupporter->rawEKey, pQuery->order.order, pQuery->intervalTime,
pQuery->intervalTimeUnit, pQuery->precision);
pQuery->slidingTimeUnit, pQuery->precision);
int32_t numOfFinalRows = taosGetNumOfResultWithInterpo(&pRuntimeEnv->interpoInfo, (TSKEY *)pDataSrc[0]->data,
numOfRows, pQuery->intervalTime, ekey, pQuery->pointsToRead);
......
......@@ -269,7 +269,7 @@ static SQInfo *vnodeAllocateQInfoEx(SQueryMeterMsg *pQueryMsg, SSqlGroupbyExpr *
pQuery->intervalTime = pQueryMsg->intervalTime;
pQuery->slidingTime = pQueryMsg->slidingTime;
pQuery->interpoType = pQueryMsg->interpoType;
pQuery->intervalTimeUnit = pQueryMsg->intervalTimeUnit;
pQuery->slidingTimeUnit = pQueryMsg->slidingTimeUnit;
pQInfo->query.pointsToRead = vnodeList[pMeterObj->vnode].cfg.rowsInFileBlock;
......@@ -649,18 +649,28 @@ void *vnodeQueryOnSingleTable(SMeterObj **pMetersObj, SSqlGroupbyExpr *pGroupbyE
}
STableQuerySupportObj *pSupporter = (STableQuerySupportObj *)calloc(1, sizeof(STableQuerySupportObj));
if (pSupporter == NULL) {
*code = TSDB_CODE_SERV_OUT_OF_MEMORY;
goto _error;
}
pSupporter->numOfMeters = 1;
pSupporter->pMetersHashTable = taosInitHashTable(pSupporter->numOfMeters, taosIntHash_32, false);
taosAddToHashTable(pSupporter->pMetersHashTable, (const char*) &pMetersObj[0]->sid, sizeof(pMeterObj[0].sid),
(char *)&pMetersObj[0], POINTER_BYTES);
pSupporter->pSidSet = NULL;
pSupporter->subgroupIdx = -1;
pSupporter->pMeterSidExtInfo = NULL;
pQInfo->pTableQuerySupporter = pSupporter;
pSupporter->pMetersHashTable = taosInitHashTable(pSupporter->numOfMeters, taosIntHash_32, false);
if (pSupporter->pMetersHashTable == NULL) {
*code = TSDB_CODE_SERV_OUT_OF_MEMORY;
goto _error;
}
if (taosAddToHashTable(pSupporter->pMetersHashTable, (const char*) &pMetersObj[0]->sid, sizeof(pMeterObj[0].sid),
(char *)&pMetersObj[0], POINTER_BYTES) != 0) {
*code = TSDB_CODE_APP_ERROR;
goto _error;
}
STSBuf *pTSBuf = NULL;
if (pQueryMsg->tsLen > 0) {
// open new file to save the result
......
......@@ -769,7 +769,7 @@ int tsDecompressTimestampImp(const char *const input, const int nelements, char
delta_of_delta = 0;
} else {
if (is_bigendian()) {
memcpy(&dd1 + LONG_BYTES - nbytes, input + ipos, nbytes);
memcpy(((char *)(&dd1)) + LONG_BYTES - nbytes, input + ipos, nbytes);
} else {
memcpy(&dd1, input + ipos, nbytes);
}
......@@ -794,7 +794,7 @@ int tsDecompressTimestampImp(const char *const input, const int nelements, char
delta_of_delta = 0;
} else {
if (is_bigendian()) {
memcpy(&dd2 + LONG_BYTES - nbytes, input + ipos, nbytes);
memcpy(((char *)(&dd2)) + LONG_BYTES - nbytes, input + ipos, nbytes);
} else {
memcpy(&dd2, input + ipos, nbytes);
}
......
......@@ -80,7 +80,7 @@ tExtMemBuffer* createExtMemBuffer(int32_t inMemSize, int32_t elemSize, SColumnMo
return pMemBuffer;
}
void* destoryExtMemBuffer(tExtMemBuffer *pMemBuffer) {
void* destroyExtMemBuffer(tExtMemBuffer *pMemBuffer) {
if (pMemBuffer == NULL) {
return NULL;
}
......@@ -914,6 +914,7 @@ void tColModelDisplay(SColumnModel *pModel, void *pData, int32_t numOfRows, int3
char buf[4096] = {0};
taosUcs4ToMbs(val, pModel->pFields[j].field.bytes, buf);
printf("%s\t", buf);
break;
}
case TSDB_DATA_TYPE_BINARY: {
printBinaryData(val, pModel->pFields[j].field.bytes);
......@@ -965,6 +966,7 @@ void tColModelDisplayEx(SColumnModel *pModel, void *pData, int32_t numOfRows, in
char buf[128] = {0};
taosUcs4ToMbs(val, pModel->pFields[j].field.bytes, buf);
printf("%s\t", buf);
break;
}
case TSDB_DATA_TYPE_BINARY: {
printBinaryDataEx(val, pModel->pFields[j].field.bytes, &param[j]);
......
......@@ -155,11 +155,11 @@ char tsSocketType[4] = "udp";
// time precision, millisecond by default
int tsTimePrecision = TSDB_TIME_PRECISION_MILLI;
// 10 ms for sliding time, the value will changed in case of time precision changed
int tsMinSlidingTime = 10;
// 1 us for sliding time, the value will change in case the time precision changes
int tsMinSlidingTime = 1;
// 10 ms for interval time range, changed accordingly
int tsMinIntervalTime = 10;
// 1 us for interval time range, changed accordingly
int tsMinIntervalTime = 1;
// 20sec, the maximum value of stream computing delay, changed accordingly
int tsMaxStreamComputDelay = 20000;
......@@ -631,10 +631,10 @@ static void doInitGlobalConfig() {
tsInitConfigOption(cfg++, "minSlidingTime", &tsMinSlidingTime, TSDB_CFG_VTYPE_INT,
TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_SHOW,
10, 1000000, 0, TSDB_CFG_UTYPE_MS);
1, 1000000000, 0, TSDB_CFG_UTYPE_MS);
tsInitConfigOption(cfg++, "minIntervalTime", &tsMinIntervalTime, TSDB_CFG_VTYPE_INT,
TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_SHOW,
10, 1000000, 0, TSDB_CFG_UTYPE_MS);
1, 1000000000, 0, TSDB_CFG_UTYPE_MS);
tsInitConfigOption(cfg++, "maxStreamCompDelay", &tsMaxStreamComputDelay, TSDB_CFG_VTYPE_INT,
TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_SHOW,
10, 1000000000, 0, TSDB_CFG_UTYPE_MS);
......
......@@ -22,12 +22,12 @@
#define INTERPOL_IS_ASC_INTERPOL(interp) ((interp)->order == TSQL_SO_ASC)
int64_t taosGetIntervalStartTimestamp(int64_t startTime, int64_t timeRange, char intervalTimeUnit, int16_t precision) {
int64_t taosGetIntervalStartTimestamp(int64_t startTime, int64_t timeRange, char slidingTimeUnit, int16_t precision) {
if (timeRange == 0) {
return startTime;
}
if (intervalTimeUnit == 'a' || intervalTimeUnit == 'm' || intervalTimeUnit == 's' || intervalTimeUnit == 'h') {
if (slidingTimeUnit == 'a' || slidingTimeUnit == 'm' || slidingTimeUnit == 's' || slidingTimeUnit == 'h' || slidingTimeUnit == 'u') {
return (startTime / timeRange) * timeRange;
} else {
/*
......@@ -95,11 +95,11 @@ void taosInterpoSetStartInfo(SInterpolationInfo* pInterpoInfo, int32_t numOfRawD
pInterpoInfo->numOfRawDataInRows = numOfRawDataInRows;
}
TSKEY taosGetRevisedEndKey(TSKEY ekey, int32_t order, int32_t timeInterval, int8_t intervalTimeUnit, int8_t precision) {
TSKEY taosGetRevisedEndKey(TSKEY ekey, int32_t order, int32_t timeInterval, int8_t slidingTimeUnit, int8_t precision) {
if (order == TSQL_SO_ASC) {
return ekey;
} else {
return taosGetIntervalStartTimestamp(ekey, timeInterval, intervalTimeUnit, precision);
return taosGetIntervalStartTimestamp(ekey, timeInterval, slidingTimeUnit, precision);
}
}
......@@ -191,6 +191,49 @@ int taosDoLinearInterpolation(int32_t type, SPoint* point1, SPoint* point2, SPoi
return 0;
}
int taosDoLinearInterpolationD(int32_t type, SPoint* point1, SPoint* point2, SPoint* point) {
switch (type) {
case TSDB_DATA_TYPE_INT: {
*(double*) point->val = doLinearInterpolationImpl(*(int32_t*)point1->val, *(int32_t*)point2->val, point1->key,
point2->key, point->key);
break;
}
case TSDB_DATA_TYPE_FLOAT: {
*(double*)point->val =
doLinearInterpolationImpl(*(float*)point1->val, *(float*)point2->val, point1->key, point2->key, point->key);
break;
};
case TSDB_DATA_TYPE_DOUBLE: {
*(double*)point->val =
doLinearInterpolationImpl(*(double*)point1->val, *(double*)point2->val, point1->key, point2->key, point->key);
break;
};
case TSDB_DATA_TYPE_TIMESTAMP:
case TSDB_DATA_TYPE_BIGINT: {
*(double*)point->val = doLinearInterpolationImpl(*(int64_t*)point1->val, *(int64_t*)point2->val, point1->key,
point2->key, point->key);
break;
}
case TSDB_DATA_TYPE_SMALLINT: {
*(double*)point->val = doLinearInterpolationImpl(*(int16_t*)point1->val, *(int16_t*)point2->val, point1->key,
point2->key, point->key);
break;
}
case TSDB_DATA_TYPE_TINYINT: {
*(double*)point->val =
doLinearInterpolationImpl(*(int8_t*)point1->val, *(int8_t*)point2->val, point1->key, point2->key, point->key);
break;
}
default: {
// TODO: Deal with interpolation with bool and strings and timestamp
return -1;
}
}
return 0;
}
static char* getPos(char* data, int32_t bytes, int32_t index) { return data + index * bytes; }
static void setTagsValueInInterpolation(tFilePage** data, char** pTags, SColumnModel* pModel, int32_t order,
......
@@ -32,7 +32,7 @@ tExtMemBuffer *releaseBucketsExceptFor(tMemBucket *pMemBucket, int16_t segIdx, i
pBuffer = pSeg->pBuffer[j];
} else {
if (pSeg->pBuffer && pSeg->pBuffer[j]) {
pSeg->pBuffer[j] = destroyExtMemBuffer(pSeg->pBuffer[j]);
}
}
}
@@ -338,7 +338,7 @@ void tMemBucketDestroy(tMemBucket *pBucket) {
for (int32_t j = 0; j < pSeg->numOfSlots; ++j) {
if (pSeg->pBuffer[j] != NULL) {
pSeg->pBuffer[j] = destroyExtMemBuffer(pSeg->pBuffer[j]);
}
}
tfree(pSeg->pBuffer);
@@ -588,7 +588,7 @@ void releaseBucket(tMemBucket *pMemBucket, int32_t segIdx, int32_t slotIdx) {
return;
}
pSeg->pBuffer[slotIdx] = destroyExtMemBuffer(pSeg->pBuffer[slotIdx]);
}
////////////////////////////////////////////////////////////////////////////////////////////
@@ -853,7 +853,7 @@ double getPercentileImpl(tMemBucket *pMemBucket, int32_t count, double fraction)
tMemBucketSegment *pSeg = &pMemBucket->pSegs[tt];
for (int32_t ttx = 0; ttx < pSeg->numOfSlots; ++ttx) {
if (pSeg->pBuffer && pSeg->pBuffer[ttx]) {
pSeg->pBuffer[ttx] = destroyExtMemBuffer(pSeg->pBuffer[ttx]);
}
}
}
......
@@ -378,6 +378,8 @@ static int32_t getTimestampInUsFromStrImpl(int64_t val, char unit, int64_t* resu
break;
case 'a':
break;
case 'u':
return 0;
default: {
return -1;
......
@@ -510,7 +510,7 @@ uint32_t tSQLGetToken(char* z, uint32_t* tokenType) {
/* here is the 1a/2s/3m/9y */
if ((z[i] == 'a' || z[i] == 's' || z[i] == 'm' || z[i] == 'h' || z[i] == 'd' || z[i] == 'n' || z[i] == 'y' ||
z[i] == 'w' || z[i] == 'A' || z[i] == 'S' || z[i] == 'M' || z[i] == 'H' || z[i] == 'D' || z[i] == 'N' ||
z[i] == 'Y' || z[i] == 'W' || z[i] == 'u' || z[i] == 'U') &&
(isIdChar[(uint8_t)z[i + 1]] == 0)) {
*tokenType = TK_VARIABLE;
i += 1;
......
// TAOS standard API example. The same syntax as MySQL, but only a subset.
// to compile: gcc -o prepare prepare.c -ltaos
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "taos.h"  // TAOS header file
void taosMsleep(int mseconds);
int main(int argc, char *argv[])
{
TAOS *taos;
TAOS_RES *result;
TAOS_STMT *stmt;
// connect to server
if (argc < 2) {
printf("please input server ip \n");
return 0;
}
// init TAOS
taos_init();
taos = taos_connect(argv[1], "root", "taosdata", NULL, 0);
if (taos == NULL) {
printf("failed to connect to db, reason:%s\n", taos_errstr(taos));
exit(1);
}
taos_query(taos, "drop database demo");
if (taos_query(taos, "create database demo") != 0) {
printf("failed to create database, reason:%s\n", taos_errstr(taos));
exit(1);
}
taos_query(taos, "use demo");
// create table
const char* sql = "create table m1 (ts timestamp, b bool, v1 tinyint, v2 smallint, v4 int, v8 bigint, f4 float, f8 double, bin binary(40), blob nchar(10))";
if (taos_query(taos, sql) != 0) {
printf("failed to create table, reason:%s\n", taos_errstr(taos));
exit(1);
}
// sleep for one second to make sure table is created on data node
// taosMsleep(1000);
// insert 10 records
struct {
int64_t ts;
int8_t b;
int8_t v1;
int16_t v2;
int32_t v4;
int64_t v8;
float f4;
double f8;
char bin[40];
char blob[80];
} v = {0};
stmt = taos_stmt_init(taos);
TAOS_BIND params[10];
params[0].buffer_type = TSDB_DATA_TYPE_TIMESTAMP;
params[0].buffer_length = sizeof(v.ts);
params[0].buffer = &v.ts;
params[0].length = &params[0].buffer_length;
params[0].is_null = NULL;
params[1].buffer_type = TSDB_DATA_TYPE_BOOL;
params[1].buffer_length = sizeof(v.b);
params[1].buffer = &v.b;
params[1].length = &params[1].buffer_length;
params[1].is_null = NULL;
params[2].buffer_type = TSDB_DATA_TYPE_TINYINT;
params[2].buffer_length = sizeof(v.v1);
params[2].buffer = &v.v1;
params[2].length = &params[2].buffer_length;
params[2].is_null = NULL;
params[3].buffer_type = TSDB_DATA_TYPE_SMALLINT;
params[3].buffer_length = sizeof(v.v2);
params[3].buffer = &v.v2;
params[3].length = &params[3].buffer_length;
params[3].is_null = NULL;
params[4].buffer_type = TSDB_DATA_TYPE_INT;
params[4].buffer_length = sizeof(v.v4);
params[4].buffer = &v.v4;
params[4].length = &params[4].buffer_length;
params[4].is_null = NULL;
params[5].buffer_type = TSDB_DATA_TYPE_BIGINT;
params[5].buffer_length = sizeof(v.v8);
params[5].buffer = &v.v8;
params[5].length = &params[5].buffer_length;
params[5].is_null = NULL;
params[6].buffer_type = TSDB_DATA_TYPE_FLOAT;
params[6].buffer_length = sizeof(v.f4);
params[6].buffer = &v.f4;
params[6].length = &params[6].buffer_length;
params[6].is_null = NULL;
params[7].buffer_type = TSDB_DATA_TYPE_DOUBLE;
params[7].buffer_length = sizeof(v.f8);
params[7].buffer = &v.f8;
params[7].length = &params[7].buffer_length;
params[7].is_null = NULL;
params[8].buffer_type = TSDB_DATA_TYPE_BINARY;
params[8].buffer_length = sizeof(v.bin);
params[8].buffer = v.bin;
params[8].length = &params[8].buffer_length;
params[8].is_null = NULL;
strcpy(v.blob, "一二三四五六七八九十");
params[9].buffer_type = TSDB_DATA_TYPE_NCHAR;
params[9].buffer_length = strlen(v.blob);
params[9].buffer = v.blob;
params[9].length = &params[9].buffer_length;
params[9].is_null = NULL;
int is_null = 1;
sql = "insert into m1 values(?,?,?,?,?,?,?,?,?,?)";
taos_stmt_prepare(stmt, sql, 0);
v.ts = 0;
for (int i = 0; i < 10; ++i) {
for (int j = 1; j < 10; ++j) {
params[j].is_null = ((i == j) ? &is_null : 0);
}
v.b = (int8_t)i % 2;
v.v1 = (int8_t)i;
v.v2 = (int16_t)(i * 2);
v.v4 = (int32_t)(i * 4);
v.v8 = (int64_t)(i * 8);
v.f4 = (float)(i * 40);
v.f8 = (double)(i * 80);
for (int j = 0; j < sizeof(v.bin) - 1; ++j) {
v.bin[j] = (char)(i + '0');
}
taos_stmt_bind_param(stmt, params);
taos_stmt_add_batch(stmt);
}
if (taos_stmt_execute(stmt) != 0) {
printf("failed to execute insert statement.\n");
exit(1);
}
taos_stmt_close(stmt);
// query the records
stmt = taos_stmt_init(taos);
taos_stmt_prepare(stmt, "SELECT * FROM m1 WHERE v1 > ? AND v2 < ?", 0);
v.v1 = 5;
v.v2 = 15;
taos_stmt_bind_param(stmt, params + 2);
if (taos_stmt_execute(stmt) != 0) {
printf("failed to execute select statement.\n");
exit(1);
}
result = taos_stmt_use_result(stmt);
TAOS_ROW row;
int rows = 0;
int num_fields = taos_num_fields(result);
TAOS_FIELD *fields = taos_fetch_fields(result);
char temp[256];
// fetch the records row by row
while ((row = taos_fetch_row(result))) {
rows++;
taos_print_row(temp, row, fields, num_fields);
printf("%s\n", temp);
}
taos_free_result(result);
taos_stmt_close(stmt);
return getchar();
}