Unverified commit 1634038a, authored by Tao Liu, committed by GitHub

Merge pull request #1644 from taosdata/develop_old

Develop old
[submodule "src/connector/go"]
path = src/connector/go
url = https://github.com/taosdata/driver-go
[![Build Status](https://travis-ci.org/taosdata/TDengine.svg?branch=develop)](https://travis-ci.org/taosdata/TDengine)
[![Build status](https://ci.appveyor.com/api/projects/status/kf3pwh2or5afsgl9/branch/develop?svg=true)](https://ci.appveyor.com/project/sangshuduo/tdengine-2n8ge/branch/develop)
[![TDengine](TDenginelogo.png)](https://www.taosdata.com)
# What is TDengine?
......
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
vendor/
# Project specific files
cmd/alert/alert
cmd/alert/alert.log
*.db
*.gz
# Alert
The Alert application reads data from [TDengine](https://www.taosdata.com/), evaluates it against predefined rules to generate alerts, and pushes the alerts to downstream applications like [AlertManager](https://github.com/prometheus/alertmanager).
## Install
### From Binary
Precompiled binaries are available on the [taosdata website](https://www.taosdata.com/en/getting-started/); download and unpack the package with the shell command below.
```
$ tar -xzf tdengine-alert-$version-$OS-$ARCH.tar.gz
```
If no TDengine server or client is installed, run the command below to install the required driver library:
```
$ ./install_driver.sh
```
### From Source Code
Two prerequisites are required to install from source.
1. A TDengine server or client must be installed.
2. The latest [Go](https://golang.org) toolchain must be installed.
With both prerequisites in place, follow the steps below to build the application:
```
$ mkdir taosdata
$ cd taosdata
$ git clone https://github.com/taosdata/tdengine.git
$ cd tdengine/alert/cmd/alert
$ go build
```
If `go build` fails because some of the dependency packages cannot be downloaded, follow the instructions at [goproxy.io](https://goproxy.io) to configure `GOPROXY`, then run `go build` again.
## Configure
The configuration file of the Alert application uses standard `json` format. Its default content is shown below; adjust it to your actual deployment.
```json
{
"port": 8100,
"database": "file:alert.db",
"tdengine": "root:taosdata@/tcp(127.0.0.1:0)/",
"log": {
"level": "production",
"path": "alert.log"
},
"receivers": {
"alertManager": "http://127.0.0.1:9093/api/v1/alerts",
"console": true
}
}
```
The use of each configuration item is:
* **port**: the port of the `http` service that lets other applications manage rules through the `restful API`.
* **database**: rules are stored in a `sqlite` database; this is the path of the database file (if the file does not exist, the alert application creates it automatically).
* **tdengine**: the connection string of the `TDengine` server. Note that in most cases the database information should be put in a rule, so it should NOT be included here (see the sketch after this list).
* **log > level**: log level, either `production` or `debug`.
* **log > path**: log output file path.
* **receivers > alertManager**: the alert application pushes alerts to `AlertManager` at this URL.
* **receivers > console**: whether to print alerts to the console (stdout).
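As a reference for the `tdengine` connection string, here is a minimal Go sketch of how the alert application consumes it: the string is passed straight to `database/sql` with the `taosSql` driver, exactly as `runRules` does later in this repository. The driver import path is an assumption based on the driver-go repository referenced in `.gitmodules`; the DSN is the default one from the configuration above.

```go
package main

import (
	"database/sql"
	"fmt"

	// Assumption: this is the v1 import path of the TDengine Go driver referenced
	// in .gitmodules above; newer driver-go releases use a versioned path.
	_ "github.com/taosdata/driver-go/taosSql"
)

func main() {
	// DSN copied from the default configuration above; the database name is
	// intentionally left out, since each rule specifies its own database in its SQL.
	db, err := sql.Open("taosSql", "root:taosdata@/tcp(127.0.0.1:0)/")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// sql.Open does not connect; Ping forces an actual connection attempt.
	if err := db.Ping(); err != nil {
		fmt.Println("cannot reach TDengine:", err)
		return
	}
	fmt.Println("connected to TDengine")
}
```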
When the configuration file is ready, the alert application can be started with the command below (`alert.cfg` is the path of the configuration file):
```
$ ./alert -cfg alert.cfg
```
## Prepare an alert rule
Technically, an alert can be described as: query and filter recent data from `TDengine`, compute a boolean value from that data according to a formula, and trigger an alert if the boolean value stays `true` for a certain duration.
This is a rule example in `json` format:
```json
{
"name": "rule1",
"sql": "select sum(col1) as sumCol1 from test.meters where ts > now - 1h group by areaid",
"expr": "sumCol1 > 10",
"for": "10m",
"period": "1m",
"labels": {
"ruleName": "rule1"
},
"annotations": {
"summary": "sum of rule {{$labels.ruleName}} of area {{$values.areaid}} is {{$values.sumCol1}}"
}
}
```
The fields of the rule are explained below:
* **name**: the name of the rule, must be unique.
* **sql**: the `sql` statement used to query data from `TDengine`; columns of the query result are used in later processing, so give the column an alias if an aggregation function is used.
* **expr**: an expression whose result is a boolean value. Arithmetic and logical calculations can be included in the expression, and built-in functions (see below) are also supported. Alerts are only triggered when the expression evaluates to `true`.
* **for**: a duration whose default value is zero seconds. When `expr` has evaluated to `true` for at least this duration, an alert is triggered.
* **period**: the interval at which the alert application checks the rule, default is 1 minute.
* **labels**: a label list; labels are used to generate alert information. Note that if the `sql` statement includes a `group by` clause, the `group by` columns are inserted into this list automatically.
* **annotations**: templates for the alert message in [go template](https://golang.org/pkg/text/template) syntax; labels can be referenced as `$labels.<label name>` and columns of the query result as `$values.<column name>` (see the rendering sketch after this list).
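To make the annotation syntax concrete, below is a minimal, self-contained Go sketch of how such a template is rendered. It reuses the prefix the alert application prepends to every annotation (`{{$labels := .Labels}}{{$values := .Values}}`, see `newRule` later in this repository); the label and value contents are made-up sample data, and value references use lower-case column names because the application lower-cases them before rendering.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

func main() {
	// Hypothetical data standing in for one alert: Labels come from the rule (plus
	// the group-by columns), Values holds one row of the query result with column
	// names in lower case.
	alert := struct {
		Labels map[string]string
		Values map[string]interface{}
	}{
		Labels: map[string]string{"ruleName": "rule1", "areaid": "110"},
		Values: map[string]interface{}{"sumcol1": 12.5, "areaid": "110"},
	}

	// The alert application prepends these two variable assignments to every
	// annotation, which is what makes $labels.<label name> and
	// $values.<column name> available.
	raw := "sum of rule {{$labels.ruleName}} of area {{$values.areaid}} is {{$values.sumcol1}}"
	text := "{{$labels := .Labels}}{{$values := .Values}}" + raw

	tmpl := template.Must(template.New("summary").Parse(text))
	buf := bytes.Buffer{}
	if err := tmpl.Execute(&buf, alert); err != nil {
		panic(err)
	}
	fmt.Println(buf.String())
	// Prints: sum of rule rule1 of area 110 is 12.5
}
```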
### Operators
The operators that can be used in the `expr` field of a rule are listed below; `()` can be used to change the precedence when the default order does not meet your requirement.
<table>
<thead>
<tr> <td>Operator</td><td>Unary/Binary</td><td>Precedence</td><td>Effect</td> </tr>
</thead>
<tbody>
<tr> <td>~</td><td>Unary</td><td>6</td><td>Bitwise Not</td> </tr>
<tr> <td>!</td><td>Unary</td><td>6</td><td>Logical Not</td> </tr>
<tr> <td>+</td><td>Unary</td><td>6</td><td>Positive Sign</td> </tr>
<tr> <td>-</td><td>Unary</td><td>6</td><td>Negative Sign</td> </tr>
<tr> <td>*</td><td>Binary</td><td>5</td><td>Multiplication</td> </tr>
<tr> <td>/</td><td>Binary</td><td>5</td><td>Division</td> </tr>
<tr> <td>%</td><td>Binary</td><td>5</td><td>Modulus</td> </tr>
<tr> <td><<</td><td>Binary</td><td>5</td><td>Bitwise Left Shift</td> </tr>
<tr> <td>>></td><td>Binary</td><td>5</td><td>Bitwise Right Shift</td> </tr>
<tr> <td>&</td><td>Binary</td><td>5</td><td>Bitwise And</td> </tr>
<tr> <td>+</td><td>Binary</td><td>4</td><td>Addition</td> </tr>
<tr> <td>-</td><td>Binary</td><td>4</td><td>Subtraction</td> </tr>
<tr> <td>|</td><td>Binary</td><td>4</td><td>Bitwise Or</td> </tr>
<tr> <td>^</td><td>Binary</td><td>4</td><td>Bitwise Xor</td> </tr>
<tr> <td>==</td><td>Binary</td><td>3</td><td>Equal</td> </tr>
<tr> <td>!=</td><td>Binary</td><td>3</td><td>Not Equal</td> </tr>
<tr> <td><</td><td>Binary</td><td>3</td><td>Less Than</td> </tr>
<tr> <td><=</td><td>Binary</td><td>3</td><td>Less Than or Equal</td> </tr>
<tr> <td>></td><td>Binary</td><td>3</td><td>Greater Than</td> </tr>
<tr> <td>>=</td><td>Binary</td><td>3</td><td>Greater Than or Equal</td> </tr>
<tr> <td>&&</td><td>Binary</td><td>2</td><td>Logical And</td> </tr>
<tr> <td>||</td><td>Binary</td><td>1</td><td>Logical Or</td> </tr>
</tbody>
</table>
### Built-in Functions
Built-in functions can be used in the `expr` field of a rule.
* **min**: returns the minimum of its arguments, for example: `min(1, 2, 3)` returns `1`.
* **max**: returns the maximum of its arguments, for example: `max(1, 2, 3)` returns `3`.
* **sum**: returns the sum of its arguments, for example: `sum(1, 2, 3)` returns `6`.
* **avg**: returns the average of its arguments, for example: `avg(1, 2, 3)` returns `2`.
* **sqrt**: returns the square root of its argument, for example: `sqrt(9)` returns `3`.
* **ceil**: returns the smallest integer greater than or equal to its argument, for example: `ceil(9.1)` returns `10`.
* **floor**: returns the largest integer less than or equal to its argument, for example: `floor(9.9)` returns `9`.
* **round**: rounds its argument to the nearest integer, for example: `round(9.9)` returns `10` and `round(9.1)` returns `9`.
* **log**: returns the natural logarithm of its argument, for example: `log(10)` returns `2.302585`.
* **log10**: returns the base-10 logarithm of its argument, for example: `log10(10)` returns `1`.
* **abs**: returns the absolute value of its argument, for example: `abs(-1)` returns `1`.
* **if**: returns its second argument if the first argument is `true`, and its third argument otherwise, for example: `if(true, 10, 100)` returns `10` and `if(false, 10, 100)` returns `100`.
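As a concrete illustration, the sketch below compiles and evaluates an expression that combines several of these functions, using the `expr` package included later in this repository (its `Compile` function and the `Expr.Eval` method). The variable names `a`, `b`, `c` and their values are invented for the example.

```go
package main

import (
	"fmt"

	"github.com/taosdata/alert/app/expr"
)

func main() {
	// env supplies the column values of one query result row; the names and
	// numbers here are made up for the example.
	env := func(name string) interface{} {
		vars := map[string]interface{}{
			"a": int64(3),
			"b": 4.5,
			"c": int64(49),
		}
		return vars[name]
	}

	// avg(3, 4.5) = 3.75, which is not greater than sqrt(49) = 7, but
	// max(3, 4.5) = 4.5 is >= 4, so the whole expression evaluates to true.
	e, err := expr.Compile("avg(a, b) > sqrt(c) || max(a, b) >= 4")
	if err != nil {
		panic(err)
	}
	fmt.Println(e.Eval(env)) // true
}
```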
## Rule Management
* Add / Update
  * API address: http://\<server\>:\<port\>/api/update-rule
  * Method: POST
  * Body: the rule
  * Example: curl -d '@rule.json' http://localhost:8100/api/update-rule
* Delete
  * API address: http://\<server\>:\<port\>/api/delete-rule?name=\<rule name\>
  * Method: DELETE
  * Example: curl -X DELETE http://localhost:8100/api/delete-rule?name=rule1
* Enable / Disable
  * API address: http://\<server\>:\<port\>/api/enable-rule?name=\<rule name\>&enable=[true | false]
  * Method: POST
  * Example: curl -X POST 'http://localhost:8100/api/enable-rule?name=rule1&enable=true'
* Retrieve rule list
  * API address: http://\<server\>:\<port\>/api/list-rule
  * Method: GET
  * Example: curl http://localhost:8100/api/list-rule
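For programmatic access, the same `update-rule` endpoint can be called from Go. The sketch below is equivalent to the curl example above; `rule.json` and the localhost address are just the values used in that example.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Read the rule definition, then POST it to the alert application,
	// mirroring: curl -d '@rule.json' http://localhost:8100/api/update-rule
	data, err := os.ReadFile("rule.json")
	if err != nil {
		panic(err)
	}
	resp, err := http.Post("http://localhost:8100/api/update-rule",
		"application/json", bytes.NewReader(data))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```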
# Alert
The alert monitoring application reads data from [TDengine](https://www.taosdata.com/), computes and generates alerts according to predefined rules, and pushes them to [AlertManager](https://github.com/prometheus/alertmanager) or other downstream applications.
## Installation
### Using Precompiled Binaries
You can download the latest package from the [TAOS Data](https://www.taosdata.com/cn/getting-started/) website. After downloading, unpack it with the following command:
```
$ tar -xzf tdengine-alert-$version-$OS-$ARCH.tar.gz
```
If you have not installed a TDengine server or client before, install the TDengine driver library with the following command:
```
$ ./install_driver.sh
```
### Building from Source
Building from source requires a TDengine server or client to be installed in advance on the machine used for compilation; if you have not installed one yet, refer to the TDengine documentation.
The alert monitoring application is developed in the [Go language](https://golang.org), so please install the latest Go toolchain.
```
$ mkdir taosdata
$ cd taosdata
$ git clone https://github.com/taosdata/tdengine.git
$ cd tdengine/alert/cmd/alert
$ go build
```
If `go build` fails because some packages cannot be downloaded, configure `GOPROXY` as described at [goproxy.io](https://goproxy.io), then run `go build` again.
## Configuration
The configuration file of the alert monitoring application uses standard `json` format. The default content is shown below; adjust it to your actual situation.
```json
{
"port": 8100,
"database": "file:alert.db",
"tdengine": "root:taosdata@/tcp(127.0.0.1:0)/",
"log": {
"level": "production",
"path": "alert.log"
},
"receivers": {
"alertManager": "http://127.0.0.1:9093/api/v1/alerts",
"console": true
}
}
```
Where:
* **port**: the alert monitoring application supports managing rules through a `restful API`; this parameter configures the listening port of the `http` service.
* **database**: the alert monitoring application stores rules in a `sqlite` database; this parameter specifies the path of the database file (you do not need to create the file in advance; if it does not exist, the application creates it automatically).
* **tdengine**: the connection string of `TDengine`. In general, the database information should be specified in the alert rules, so it should **not** be included here.
* **log > level**: the log level, either `production` or `debug`.
* **log > path**: the path of the log file.
* **receivers > alertManager**: the alert monitoring application pushes alerts to `AlertManager`; specify the receiving address of `AlertManager` here.
* **receivers > console**: whether to also print alerts to the console (stdout).
With the configuration file ready, start the alert monitoring application with the command below (`alert.cfg` is the path of the configuration file):
```
$ ./alert -cfg alert.cfg
```
## Writing Alert Rules
Technically, an alert can be described as: query recent data matching certain filter conditions from `TDengine`, compute a result from that data according to a predefined method, and trigger an alert when the result meets some condition and stays that way for a certain duration.
From this description it is easy to see most of the information an alert rule needs to contain. Below is a complete alert rule in standard `json` format:
```json
{
"name": "rule1",
"sql": "select sum(col1) as sumCol1 from test.meters where ts > now - 1h group by areaid",
"expr": "sumCol1 > 10",
"for": "10m",
"period": "1m",
"labels": {
"ruleName": "rule1"
},
"annotations": {
"summary": "sum of rule {{$labels.ruleName}} of area {{$values.areaid}} is {{$values.sumCol1}}"
}
}
```
Where:
* **name**: a unique name for the rule.
* **sql**: the `sql` statement used to query data from `TDengine`; columns of the query result are used in later computation, so if an aggregation function is used, give that column an alias.
* **expr**: an expression whose result is a boolean value. It supports arithmetic and logical operations, includes several built-in functions, and can reference columns of the query result. When the expression evaluates to `true`, the rule enters the alerting state.
* **for**: an alert is triggered when the expression has evaluated to `true` continuously for longer than this duration; before that, the alert is in the "pending" state. The default is 0, meaning an alert is triggered as soon as the expression evaluates to `true`.
* **period**: the interval at which the rule is checked; the default is 1 minute.
* **labels**: a user-defined list of labels; labels can be referenced when generating the alert message. If the `sql` statement contains a `group by` clause, all grouping columns are automatically added to this label list.
* **annotations**: defines the alert message using [go template](https://golang.org/pkg/text/template) syntax; labels can be referenced as `$labels.<label name>` and query result columns as `$values.<column name>`.
### Operators
The operators supported in the `expr` field are listed below; you can use `()` to change the order of evaluation.
<table>
<thead>
<tr> <td>Operator</td><td>Unary/Binary</td><td>Precedence</td><td>Effect</td> </tr>
</thead>
<tbody>
<tr> <td>~</td><td>Unary</td><td>6</td><td>Bitwise Not</td> </tr>
<tr> <td>!</td><td>Unary</td><td>6</td><td>Logical Not</td> </tr>
<tr> <td>+</td><td>Unary</td><td>6</td><td>Positive Sign</td> </tr>
<tr> <td>-</td><td>Unary</td><td>6</td><td>Negative Sign</td> </tr>
<tr> <td>*</td><td>Binary</td><td>5</td><td>Multiplication</td> </tr>
<tr> <td>/</td><td>Binary</td><td>5</td><td>Division</td> </tr>
<tr> <td>%</td><td>Binary</td><td>5</td><td>Modulus (remainder)</td> </tr>
<tr> <td><<</td><td>Binary</td><td>5</td><td>Bitwise Left Shift</td> </tr>
<tr> <td>>></td><td>Binary</td><td>5</td><td>Bitwise Right Shift</td> </tr>
<tr> <td>&</td><td>Binary</td><td>5</td><td>Bitwise And</td> </tr>
<tr> <td>+</td><td>Binary</td><td>4</td><td>Addition</td> </tr>
<tr> <td>-</td><td>Binary</td><td>4</td><td>Subtraction</td> </tr>
<tr> <td>|</td><td>Binary</td><td>4</td><td>Bitwise Or</td> </tr>
<tr> <td>^</td><td>Binary</td><td>4</td><td>Bitwise Xor</td> </tr>
<tr> <td>==</td><td>Binary</td><td>3</td><td>Equal</td> </tr>
<tr> <td>!=</td><td>Binary</td><td>3</td><td>Not Equal</td> </tr>
<tr> <td><</td><td>Binary</td><td>3</td><td>Less Than</td> </tr>
<tr> <td><=</td><td>Binary</td><td>3</td><td>Less Than or Equal</td> </tr>
<tr> <td>></td><td>Binary</td><td>3</td><td>Greater Than</td> </tr>
<tr> <td>>=</td><td>Binary</td><td>3</td><td>Greater Than or Equal</td> </tr>
<tr> <td>&&</td><td>Binary</td><td>2</td><td>Logical And</td> </tr>
<tr> <td>||</td><td>Binary</td><td>1</td><td>Logical Or</td> </tr>
</tbody>
</table>
### Built-in Functions
The following built-in functions are currently supported and can be used in the `expr` field of a rule:
* **min**: returns the minimum of its arguments, for example `min(1, 2, 3)` returns `1`.
* **max**: returns the maximum of its arguments, for example `max(1, 2, 3)` returns `3`.
* **sum**: returns the sum of its arguments, for example `sum(1, 2, 3)` returns `6`.
* **avg**: returns the arithmetic mean of its arguments, for example `avg(1, 2, 3)` returns `2`.
* **sqrt**: returns the square root of its argument, for example `sqrt(9)` returns `3`.
* **ceil**: rounds its argument up to an integer, for example `ceil(9.1)` returns `10`.
* **floor**: rounds its argument down to an integer, for example `floor(9.9)` returns `9`.
* **round**: rounds its argument to the nearest integer, for example `round(9.9)` returns `10` and `round(9.1)` returns `9`.
* **log**: returns the natural logarithm of its argument, for example `log(10)` returns `2.302585`.
* **log10**: returns the base-10 logarithm of its argument, for example `log10(10)` returns `1`.
* **abs**: returns the absolute value of its argument, for example `abs(-1)` returns `1`.
* **if**: returns its second argument if the first argument is `true`, and its third argument otherwise, for example `if(true, 10, 100)` returns `10` and `if(false, 10, 100)` returns `100`.
## Rule Management
* Add or update
  * API address: http://\<server\>:\<port\>/api/update-rule
  * Method: POST
  * Body: the rule definition
  * Example: curl -d '@rule.json' http://localhost:8100/api/update-rule
* Delete
  * API address: http://\<server\>:\<port\>/api/delete-rule?name=\<rule name\>
  * Method: DELETE
  * Example: curl -X DELETE http://localhost:8100/api/delete-rule?name=rule1
* Suspend / Resume
  * API address: http://\<server\>:\<port\>/api/enable-rule?name=\<rule name\>&enable=[true | false]
  * Method: POST
  * Example: curl -X POST 'http://localhost:8100/api/enable-rule?name=rule1&enable=true'
* List rules
  * API address: http://\<server\>:\<port\>/api/list-rule
  * Method: GET
  * Example: curl http://localhost:8100/api/list-rule
package app
import (
"encoding/json"
"io/ioutil"
"net/http"
"strings"
"time"
"github.com/taosdata/alert/models"
"github.com/taosdata/alert/utils"
"github.com/taosdata/alert/utils/log"
)
func Init() error {
if e := initRule(); e != nil {
return e
}
http.HandleFunc("/api/list-rule", onListRule)
http.HandleFunc("/api/list-alert", onListAlert)
http.HandleFunc("/api/update-rule", onUpdateRule)
http.HandleFunc("/api/enable-rule", onEnableRule)
http.HandleFunc("/api/delete-rule", onDeleteRule)
return nil
}
func Uninit() error {
uninitRule()
return nil
}
func onListRule(w http.ResponseWriter, r *http.Request) {
var res []*Rule
rules.Range(func(k, v interface{}) bool {
res = append(res, v.(*Rule))
return true
})
w.Header().Add("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(res)
}
func onListAlert(w http.ResponseWriter, r *http.Request) {
var alerts []*Alert
rn := r.URL.Query().Get("rule")
rules.Range(func(k, v interface{}) bool {
if len(rn) > 0 && rn != k.(string) {
return true
}
rule := v.(*Rule)
rule.Alerts.Range(func(k, v interface{}) bool {
alert := v.(*Alert)
// TODO: not go-routine safe
if alert.State != AlertStateWaiting {
alerts = append(alerts, v.(*Alert))
}
return true
})
return true
})
w.Header().Add("Content-Type", "application/json; charset=utf-8")
json.NewEncoder(w).Encode(alerts)
}
func onUpdateRule(w http.ResponseWriter, r *http.Request) {
data, e := ioutil.ReadAll(r.Body)
if e != nil {
log.Error("failed to read request body: ", e.Error())
w.WriteHeader(http.StatusBadRequest)
return
}
rule, e := newRule(string(data))
if e != nil {
log.Error("failed to parse rule: ", e.Error())
w.WriteHeader(http.StatusBadRequest)
return
}
if e = doUpdateRule(rule, string(data)); e != nil {
w.WriteHeader(http.StatusInternalServerError)
}
}
func doUpdateRule(rule *Rule, ruleStr string) error {
if _, ok := rules.Load(rule.Name); ok {
if len(utils.Cfg.Database) > 0 {
e := models.UpdateRule(rule.Name, ruleStr)
if e != nil {
log.Errorf("[%s]: update failed: %s", rule.Name, e.Error())
return e
}
}
log.Infof("[%s]: update succeeded.", rule.Name)
} else {
if len(utils.Cfg.Database) > 0 {
e := models.AddRule(&models.Rule{
Name: rule.Name,
Content: ruleStr,
})
if e != nil {
log.Errorf("[%s]: add failed: %s", rule.Name, e.Error())
return e
}
}
log.Infof("[%s]: add succeeded.", rule.Name)
}
rules.Store(rule.Name, rule)
return nil
}
func onEnableRule(w http.ResponseWriter, r *http.Request) {
var rule *Rule
name := r.URL.Query().Get("name")
enable := strings.ToLower(r.URL.Query().Get("enable")) == "true"
if x, ok := rules.Load(name); ok {
rule = x.(*Rule)
} else {
w.WriteHeader(http.StatusNotFound)
return
}
if rule.isEnabled() == enable {
return
}
if len(utils.Cfg.Database) > 0 {
if e := models.EnableRule(name, enable); e != nil {
if enable {
log.Errorf("[%s]: enable failed: ", name, e.Error())
} else {
log.Errorf("[%s]: disable failed: ", name, e.Error())
}
w.WriteHeader(http.StatusInternalServerError)
return
}
}
if enable {
rule = rule.clone()
rule.setNextRunTime(time.Now())
rules.Store(rule.Name, rule)
log.Infof("[%s]: enable succeeded.", name)
} else {
rule.setState(RuleStateDisabled)
log.Infof("[%s]: disable succeeded.", name)
}
}
func onDeleteRule(w http.ResponseWriter, r *http.Request) {
name := r.URL.Query().Get("name")
if len(name) == 0 {
return
}
if e := doDeleteRule(name); e != nil {
w.WriteHeader(http.StatusInternalServerError)
}
}
func doDeleteRule(name string) error {
if len(utils.Cfg.Database) > 0 {
if e := models.DeleteRule(name); e != nil {
log.Errorf("[%s]: delete failed: %s", name, e.Error())
return e
}
}
rules.Delete(name)
log.Infof("[%s]: delete succeeded.", name)
return nil
}
package expr
import (
"errors"
"io"
"math"
"strconv"
"strings"
"text/scanner"
)
var (
// compile errors
ErrorExpressionSyntax = errors.New("expression syntax error")
ErrorUnrecognizedFunction = errors.New("unrecognized function")
ErrorArgumentCount = errors.New("too many/few arguments")
ErrorInvalidFloat = errors.New("invalid float")
ErrorInvalidInteger = errors.New("invalid integer")
// eval errors
ErrorUnsupportedDataType = errors.New("unsupported data type")
ErrorInvalidOperationFloat = errors.New("invalid operation for float")
ErrorInvalidOperationInteger = errors.New("invalid operation for integer")
ErrorInvalidOperationBoolean = errors.New("invalid operation for boolean")
ErrorOnlyIntegerAllowed = errors.New("only integers are allowed")
ErrorDataTypeMismatch = errors.New("data type mismatch")
)
// binary operator precedence
// 5 * / % << >> &
// 4 + - | ^
// 3 == != < <= > >=
// 2 &&
// 1 ||
const (
opOr = -(iota + 1000) // ||
opAnd // &&
opEqual // ==
opNotEqual // !=
opGTE // >=
opLTE // <=
opLeftShift // <<
opRightShift // >>
)
type lexer struct {
scan scanner.Scanner
tok rune
}
func (l *lexer) init(src io.Reader) {
l.scan.Error = func(s *scanner.Scanner, msg string) {
panic(errors.New(msg))
}
l.scan.Mode = scanner.ScanIdents | scanner.ScanInts | scanner.ScanFloats | scanner.ScanStrings
l.scan.Init(src)
l.tok = l.next()
}
func (l *lexer) next() rune {
l.tok = l.scan.Scan()
switch l.tok {
case '|':
if l.scan.Peek() == '|' {
l.tok = opOr
l.scan.Scan()
}
case '&':
if l.scan.Peek() == '&' {
l.tok = opAnd
l.scan.Scan()
}
case '=':
if l.scan.Peek() == '=' {
l.tok = opEqual
l.scan.Scan()
} else {
// TODO: error
}
case '!':
if l.scan.Peek() == '=' {
l.tok = opNotEqual
l.scan.Scan()
} else {
// TODO: error
}
case '<':
if tok := l.scan.Peek(); tok == '<' {
l.tok = opLeftShift
l.scan.Scan()
} else if tok == '=' {
l.tok = opLTE
l.scan.Scan()
}
case '>':
if tok := l.scan.Peek(); tok == '>' {
l.tok = opRightShift
l.scan.Scan()
} else if tok == '=' {
l.tok = opGTE
l.scan.Scan()
}
}
return l.tok
}
func (l *lexer) token() rune {
return l.tok
}
func (l *lexer) text() string {
switch l.tok {
case opOr:
return "||"
case opAnd:
return "&&"
case opEqual:
return "=="
case opNotEqual:
return "!="
case opLeftShift:
return "<<"
case opLTE:
return "<="
case opRightShift:
return ">>"
case opGTE:
return ">="
default:
return l.scan.TokenText()
}
}
type Expr interface {
Eval(env func(string) interface{}) interface{}
}
type unaryExpr struct {
op rune
subExpr Expr
}
func (ue *unaryExpr) Eval(env func(string) interface{}) interface{} {
val := ue.subExpr.Eval(env)
switch v := val.(type) {
case float64:
if ue.op != '-' {
panic(ErrorInvalidOperationFloat)
}
return -v
case int64:
switch ue.op {
case '-':
return -v
case '~':
return ^v
default:
panic(ErrorInvalidOperationInteger)
}
case bool:
if ue.op != '!' {
panic(ErrorInvalidOperationBoolean)
}
return !v
default:
panic(ErrorUnsupportedDataType)
}
}
type binaryExpr struct {
op rune
lhs Expr
rhs Expr
}
func (be *binaryExpr) Eval(env func(string) interface{}) interface{} {
lval := be.lhs.Eval(env)
rval := be.rhs.Eval(env)
switch be.op {
case '*':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv * rv
case int64:
return lv * float64(rv)
case bool:
panic(ErrorInvalidOperationBoolean)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) * rv
case int64:
return lv * rv
case bool:
panic(ErrorInvalidOperationBoolean)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case '/':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
if rv == 0 {
return math.Inf(int(lv))
} else {
return lv / rv
}
case int64:
if rv == 0 {
return math.Inf(int(lv))
} else {
return lv / float64(rv)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case int64:
switch rv := rval.(type) {
case float64:
if rv == 0 {
return math.Inf(int(lv))
} else {
return float64(lv) / rv
}
case int64:
if rv == 0 {
return math.Inf(int(lv))
} else {
return lv / rv
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case '%':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return math.Mod(lv, rv)
case int64:
return math.Mod(lv, float64(rv))
case bool:
panic(ErrorInvalidOperationBoolean)
}
case int64:
switch rv := rval.(type) {
case float64:
return math.Mod(float64(lv), rv)
case int64:
if rv == 0 {
return math.Inf(int(lv))
} else {
return lv % rv
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case opLeftShift:
switch lv := lval.(type) {
case int64:
switch rv := rval.(type) {
case int64:
return lv << rv
default:
panic(ErrorOnlyIntegerAllowed)
}
default:
panic(ErrorOnlyIntegerAllowed)
}
case opRightShift:
switch lv := lval.(type) {
case int64:
switch rv := rval.(type) {
case int64:
return lv >> rv
default:
panic(ErrorOnlyIntegerAllowed)
}
default:
panic(ErrorOnlyIntegerAllowed)
}
case '&':
switch lv := lval.(type) {
case int64:
switch rv := rval.(type) {
case int64:
return lv & rv
default:
panic(ErrorOnlyIntegerAllowed)
}
default:
panic(ErrorOnlyIntegerAllowed)
}
case '+':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv + rv
case int64:
return lv + float64(rv)
case bool:
panic(ErrorInvalidOperationBoolean)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) + rv
case int64:
return lv + rv
case bool:
panic(ErrorInvalidOperationBoolean)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case '-':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv - rv
case int64:
return lv - float64(rv)
case bool:
panic(ErrorInvalidOperationBoolean)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) - rv
case int64:
return lv - rv
case bool:
panic(ErrorInvalidOperationBoolean)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case '|':
switch lv := lval.(type) {
case int64:
switch rv := rval.(type) {
case int64:
return lv | rv
default:
panic(ErrorOnlyIntegerAllowed)
}
default:
panic(ErrorOnlyIntegerAllowed)
}
case '^':
switch lv := lval.(type) {
case int64:
switch rv := rval.(type) {
case int64:
return lv ^ rv
default:
panic(ErrorOnlyIntegerAllowed)
}
default:
panic(ErrorOnlyIntegerAllowed)
}
case opEqual:
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv == rv
case int64:
return lv == float64(rv)
case bool:
panic(ErrorDataTypeMismatch)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) == rv
case int64:
return lv == rv
case bool:
panic(ErrorDataTypeMismatch)
}
case bool:
switch rv := rval.(type) {
case bool:
return lv == rv
default:
// a boolean can only be compared with another boolean
panic(ErrorDataTypeMismatch)
}
}
case opNotEqual:
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv != rv
case int64:
return lv != float64(rv)
case bool:
panic(ErrorDataTypeMismatch)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) != rv
case int64:
return lv != rv
case bool:
panic(ErrorDataTypeMismatch)
}
case bool:
switch rv := rval.(type) {
case bool:
return lv != rv
default:
// a boolean can only be compared with another boolean
panic(ErrorDataTypeMismatch)
}
}
case '<':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv < rv
case int64:
return lv < float64(rv)
case bool:
panic(ErrorDataTypeMismatch)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) < rv
case int64:
return lv < rv
case bool:
panic(ErrorDataTypeMismatch)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case opLTE:
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv <= rv
case int64:
return lv <= float64(rv)
case bool:
panic(ErrorDataTypeMismatch)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) <= rv
case int64:
return lv <= rv
case bool:
panic(ErrorDataTypeMismatch)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case '>':
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv > rv
case int64:
return lv > float64(rv)
case bool:
panic(ErrorDataTypeMismatch)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) > rv
case int64:
return lv > rv
case bool:
panic(ErrorDataTypeMismatch)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case opGTE:
switch lv := lval.(type) {
case float64:
switch rv := rval.(type) {
case float64:
return lv >= rv
case int64:
return lv >= float64(rv)
case bool:
panic(ErrorDataTypeMismatch)
}
case int64:
switch rv := rval.(type) {
case float64:
return float64(lv) >= rv
case int64:
return lv >= rv
case bool:
panic(ErrorDataTypeMismatch)
}
case bool:
panic(ErrorInvalidOperationBoolean)
}
case opAnd:
switch lv := lval.(type) {
case bool:
switch rv := rval.(type) {
case bool:
return lv && rv
default:
panic(ErrorOnlyIntegerAllowed)
}
default:
panic(ErrorOnlyIntegerAllowed)
}
case opOr:
switch lv := lval.(type) {
case bool:
switch rv := rval.(type) {
case bool:
return lv || rv
default:
panic(ErrorOnlyIntegerAllowed)
}
default:
panic(ErrorOnlyIntegerAllowed)
}
}
return nil
}
type funcExpr struct {
name string
args []Expr
}
func (fe *funcExpr) Eval(env func(string) interface{}) interface{} {
argv := make([]interface{}, 0, len(fe.args))
for _, arg := range fe.args {
argv = append(argv, arg.Eval(env))
}
return funcs[fe.name].call(argv)
}
type floatExpr struct {
val float64
}
func (fe *floatExpr) Eval(env func(string) interface{}) interface{} {
return fe.val
}
type intExpr struct {
val int64
}
func (ie *intExpr) Eval(env func(string) interface{}) interface{} {
return ie.val
}
type boolExpr struct {
val bool
}
func (be *boolExpr) Eval(env func(string) interface{}) interface{} {
return be.val
}
type stringExpr struct {
val string
}
func (se *stringExpr) Eval(env func(string) interface{}) interface{} {
return se.val
}
type varExpr struct {
name string
}
func (ve *varExpr) Eval(env func(string) interface{}) interface{} {
return env(ve.name)
}
func Compile(src string) (expr Expr, err error) {
defer func() {
switch x := recover().(type) {
case nil:
case error:
err = x
default:
}
}()
lexer := lexer{}
lexer.init(strings.NewReader(src))
expr = parseBinary(&lexer, 0)
if lexer.token() != scanner.EOF {
panic(ErrorExpressionSyntax)
}
return expr, nil
}
func precedence(op rune) int {
switch op {
case opOr:
return 1
case opAnd:
return 2
case opEqual, opNotEqual, '<', '>', opGTE, opLTE:
return 3
case '+', '-', '|', '^':
return 4
case '*', '/', '%', opLeftShift, opRightShift, '&':
return 5
}
return 0
}
// binary = unary ('+' binary)*
func parseBinary(lexer *lexer, lastPrec int) Expr {
lhs := parseUnary(lexer)
for {
op := lexer.token()
prec := precedence(op)
if prec <= lastPrec {
break
}
lexer.next() // consume operator
rhs := parseBinary(lexer, prec)
lhs = &binaryExpr{op: op, lhs: lhs, rhs: rhs}
}
return lhs
}
// unary = '+|-' expr | primary
func parseUnary(lexer *lexer) Expr {
flag := false
for tok := lexer.token(); ; tok = lexer.next() {
if tok == '-' {
flag = !flag
} else if tok != '+' {
break
}
}
if flag {
return &unaryExpr{op: '-', subExpr: parsePrimary(lexer)}
}
flag = false
for tok := lexer.token(); tok == '!'; tok = lexer.next() {
flag = !flag
}
if flag {
return &unaryExpr{op: '!', subExpr: parsePrimary(lexer)}
}
flag = false
for tok := lexer.token(); tok == '~'; tok = lexer.next() {
flag = !flag
}
if flag {
return &unaryExpr{op: '~', subExpr: parsePrimary(lexer)}
}
return parsePrimary(lexer)
}
// primary = id
// | id '(' expr ',' ... ',' expr ')'
// | num
// | '(' expr ')'
func parsePrimary(lexer *lexer) Expr {
switch lexer.token() {
case '+', '-', '!', '~':
return parseUnary(lexer)
case '(':
lexer.next() // consume '('
node := parseBinary(lexer, 0)
if lexer.token() != ')' {
panic(ErrorExpressionSyntax)
}
lexer.next() // consume ')'
return node
case scanner.Ident:
id := strings.ToLower(lexer.text())
if lexer.next() != '(' {
if id == "true" {
return &boolExpr{val: true}
} else if id == "false" {
return &boolExpr{val: false}
} else {
return &varExpr{name: id}
}
}
node := funcExpr{name: id}
for lexer.next() != ')' {
arg := parseBinary(lexer, 0)
node.args = append(node.args, arg)
if lexer.token() != ',' {
break
}
}
if lexer.token() != ')' {
panic(ErrorExpressionSyntax)
}
if fn, ok := funcs[id]; !ok {
panic(ErrorUnrecognizedFunction)
} else if fn.minArgs >= 0 && len(node.args) < fn.minArgs {
panic(ErrorArgumentCount)
} else if fn.maxArgs >= 0 && len(node.args) > fn.maxArgs {
panic(ErrorArgumentCount)
}
lexer.next() // consume it
return &node
case scanner.Int:
val, e := strconv.ParseInt(lexer.text(), 0, 64)
if e != nil {
panic(ErrorInvalidInteger)
}
lexer.next()
return &intExpr{val: val}
case scanner.Float:
val, e := strconv.ParseFloat(lexer.text(), 64)
if e != nil {
panic(ErrorInvalidFloat)
}
lexer.next()
return &floatExpr{val: val}
case scanner.String:
panic(errors.New("strings are not allowed in expression at present"))
val := lexer.text()
lexer.next()
return &stringExpr{val: val}
default:
panic(ErrorExpressionSyntax)
}
}
package expr
import "testing"
func TestIntArithmetic(t *testing.T) {
cases := []struct {
expr string
expected int64
}{
{"+10", 10},
{"-10", -10},
{"3 + 4 + 5 + 6 * 7 + 8", 62},
{"3 + 4 + (5 + 6) * 7 + 8", 92},
{"3 + 4 + (5 + 6) * 7 / 11 + 8", 22},
{"3 + 4 + -5 * 6 / 7 % 8", 3},
{"10 - 5", 5},
}
for _, c := range cases {
expr, e := Compile(c.expr)
if e != nil {
t.Errorf("failed to compile expression '%s': %s", c.expr, e.Error())
}
if res := expr.Eval(nil); res.(int64) != c.expected {
t.Errorf("result for expression '%s' is %v, but expected is %v", c.expr, res, c.expected)
}
}
}
func TestFloatArithmetic(t *testing.T) {
cases := []struct {
expr string
expected float64
}{
{"+10.5", 10.5},
{"-10.5", -10.5},
{"3.1 + 4.2 + 5 + 6 * 7 + 8", 62.3},
{"3.1 + 4.2 + (5 + 6) * 7 + 8.3", 92.6},
{"3.1 + 4.2 + (5.1 + 5.9) * 7 / 11 + 8", 22.3},
{"3.3 + 4.2 - 4.0 * 7.5 / 3", -2.5},
{"3.3 + 4.2 - 4 * 7.0 / 2", -6.5},
{"3.5/2.0", 1.75},
{"3.5/2", 1.75},
{"7 / 3.5", 2},
{"3.5 % 2.0", 1.5},
{"3.5 % 2", 1.5},
{"7 % 2.5", 2},
{"7.3 - 2", 5.3},
{"7 - 2.3", 4.7},
{"1 + 1.5", 2.5},
}
for _, c := range cases {
expr, e := Compile(c.expr)
if e != nil {
t.Errorf("failed to compile expression '%s': %s", c.expr, e.Error())
}
if res := expr.Eval(nil); res.(float64) != c.expected {
t.Errorf("result for expression '%s' is %v, but expected is %v", c.expr, res, c.expected)
}
}
}
func TestVariable(t *testing.T) {
variables := map[string]interface{}{
"a": int64(6),
"b": int64(7),
}
env := func(key string) interface{} {
return variables[key]
}
cases := []struct {
expr string
expected int64
}{
{"3 + 4 + (+5) + a * b + 8", 62},
{"3 + 4 + (5 + a) * b + 8", 92},
{"3 + 4 + (5 + a) * b / 11 + 8", 22},
}
for _, c := range cases {
expr, e := Compile(c.expr)
if e != nil {
t.Errorf("failed to compile expression '%s': %s", c.expr, e.Error())
}
if res := expr.Eval(env); res.(int64) != c.expected {
t.Errorf("result for expression '%s' is %v, but expected is %v", c.expr, res, c.expected)
}
}
}
func TestFunction(t *testing.T) {
variables := map[string]interface{}{
"a": int64(6),
"b": 7.0,
}
env := func(key string) interface{} {
return variables[key]
}
cases := []struct {
expr string
expected float64
}{
{"sum(3, 4, 5, a * b, 8)", 62},
{"sum(3, 4, (5 + a) * b, 8)", 92},
{"sum(3, 4, (5 + a) * b / 11, 8)", 22},
}
for _, c := range cases {
expr, e := Compile(c.expr)
if e != nil {
t.Errorf("failed to compile expression '%s': %s", c.expr, e.Error())
}
if res := expr.Eval(env); res.(float64) != c.expected {
t.Errorf("result for expression '%s' is %v, but expected is %v", c.expr, res, c.expected)
}
}
}
func TestLogical(t *testing.T) {
cases := []struct {
expr string
expected bool
}{
{"true", true},
{"false", false},
{"true == true", true},
{"true == false", false},
{"true != true", false},
{"true != false", true},
{"5 > 3", true},
{"5 < 3", false},
{"5.2 > 3", true},
{"5.2 < 3", false},
{"5 > 3.1", true},
{"5 < 3.1", false},
{"5.1 > 3.3", true},
{"5.1 < 3.3", false},
{"5 >= 3", true},
{"5 <= 3", false},
{"5.2 >= 3", true},
{"5.2 <= 3", false},
{"5 >= 3.1", true},
{"5 <= 3.1", false},
{"5.1 >= 3.3", true},
{"5.1 <= 3.3", false},
{"5 != 3", true},
{"5.2 != 3.2", true},
{"5.2 != 3", true},
{"5 != 3.2", true},
{"5 == 3", false},
{"5.2 == 3.2", false},
{"5.2 == 3", false},
{"5 == 3.2", false},
{"!(5 > 3)", false},
{"5>3 && 3>1", true},
{"5<3 || 3<1", false},
{"4<=4 || 3<1", true},
{"4<4 || 3>=1", true},
}
for _, c := range cases {
expr, e := Compile(c.expr)
if e != nil {
t.Errorf("failed to compile expression '%s': %s", c.expr, e.Error())
}
if res := expr.Eval(nil); res.(bool) != c.expected {
t.Errorf("result for expression '%s' is %v, but expected is %v", c.expr, res, c.expected)
}
}
}
func TestBitwise(t *testing.T) {
cases := []struct {
expr string
expected int64
}{
{"0x0C & 0x04", 0x04},
{"0x08 | 0x04", 0x0C},
{"0x0C ^ 0x04", 0x08},
{"0x01 << 2", 0x04},
{"0x04 >> 2", 0x01},
{"~0x04", ^0x04},
}
for _, c := range cases {
expr, e := Compile(c.expr)
if e != nil {
t.Errorf("failed to compile expression '%s': %s", c.expr, e.Error())
}
if res := expr.Eval(nil); res.(int64) != c.expected {
t.Errorf("result for expression '%s' is 0x%X, but expected is 0x%X", c.expr, res, c.expected)
}
}
}
package expr
import (
"math"
)
type builtInFunc struct {
minArgs, maxArgs int
call func([]interface{}) interface{}
}
func fnMin(args []interface{}) interface{} {
res := args[0]
for _, arg := range args[1:] {
switch v1 := res.(type) {
case int64:
switch v2 := arg.(type) {
case int64:
if v2 < v1 {
res = v2
}
case float64:
res = math.Min(float64(v1), v2)
default:
panic(ErrorUnsupportedDataType)
}
case float64:
switch v2 := arg.(type) {
case int64:
res = math.Min(v1, float64(v2))
case float64:
res = math.Min(v1, v2)
default:
panic(ErrorUnsupportedDataType)
}
default:
panic(ErrorUnsupportedDataType)
}
}
return res
}
func fnMax(args []interface{}) interface{} {
res := args[0]
for _, arg := range args[1:] {
switch v1 := res.(type) {
case int64:
switch v2 := arg.(type) {
case int64:
if v2 > v1 {
res = v2
}
case float64:
res = math.Max(float64(v1), v2)
default:
panic(ErrorUnsupportedDataType)
}
case float64:
switch v2 := arg.(type) {
case int64:
res = math.Max(v1, float64(v2))
case float64:
res = math.Max(v1, v2)
default:
panic(ErrorUnsupportedDataType)
}
default:
panic(ErrorUnsupportedDataType)
}
}
return res
}
func fnSum(args []interface{}) interface{} {
res := float64(0)
for _, arg := range args {
switch v := arg.(type) {
case int64:
res += float64(v)
case float64:
res += v
default:
panic(ErrorUnsupportedDataType)
}
}
return res
}
func fnAvg(args []interface{}) interface{} {
return fnSum(args).(float64) / float64(len(args))
}
func fnSqrt(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
return math.Sqrt(float64(v))
case float64:
return math.Sqrt(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnFloor(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
return v
case float64:
return math.Floor(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnCeil(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
return v
case float64:
return math.Ceil(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnRound(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
return v
case float64:
return math.Round(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnLog(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
return math.Log(float64(v))
case float64:
return math.Log(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnLog10(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
return math.Log10(float64(v))
case float64:
return math.Log10(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnAbs(args []interface{}) interface{} {
switch v := args[0].(type) {
case int64:
if v < 0 {
return -v
}
return v
case float64:
return math.Abs(v)
default:
panic(ErrorUnsupportedDataType)
}
}
func fnIf(args []interface{}) interface{} {
v, ok := args[0].(bool)
if !ok {
panic(ErrorUnsupportedDataType)
}
if v {
return args[1]
} else {
return args[2]
}
}
var funcs = map[string]builtInFunc{
"min": builtInFunc{minArgs: 1, maxArgs: -1, call: fnMin},
"max": builtInFunc{minArgs: 1, maxArgs: -1, call: fnMax},
"sum": builtInFunc{minArgs: 1, maxArgs: -1, call: fnSum},
"avg": builtInFunc{minArgs: 1, maxArgs: -1, call: fnAvg},
"sqrt": builtInFunc{minArgs: 1, maxArgs: 1, call: fnSqrt},
"ceil": builtInFunc{minArgs: 1, maxArgs: 1, call: fnCeil},
"floor": builtInFunc{minArgs: 1, maxArgs: 1, call: fnFloor},
"round": builtInFunc{minArgs: 1, maxArgs: 1, call: fnRound},
"log": builtInFunc{minArgs: 1, maxArgs: 1, call: fnLog},
"log10": builtInFunc{minArgs: 1, maxArgs: 1, call: fnLog10},
"abs": builtInFunc{minArgs: 1, maxArgs: 1, call: fnAbs},
"if": builtInFunc{minArgs: 3, maxArgs: 3, call: fnIf},
}
package expr
import (
"math"
"testing"
)
func TestMax(t *testing.T) {
cases := []struct {
args []interface{}
expected float64
}{
{[]interface{}{int64(1), int64(2), int64(3), int64(4), int64(5)}, 5},
{[]interface{}{int64(1), int64(2), float64(3), int64(4), float64(5)}, 5},
{[]interface{}{int64(-1), int64(-2), float64(-3), int64(-4), float64(-5)}, -1},
{[]interface{}{int64(-1), int64(-1), float64(-1), int64(-1), float64(-1)}, -1},
{[]interface{}{int64(-1), int64(0), float64(-1), int64(-1), float64(-1)}, 0},
}
for _, c := range cases {
r := fnMax(c.args)
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("max(%v) = %v, want %v", c.args, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("max(%v) = %v, want %v", c.args, v, c.expected)
}
default:
t.Errorf("unknown result type max(%v)", c.args)
}
}
}
func TestMin(t *testing.T) {
cases := []struct {
args []interface{}
expected float64
}{
{[]interface{}{int64(1), int64(2), int64(3), int64(4), int64(5)}, 1},
{[]interface{}{int64(5), int64(4), float64(3), int64(2), float64(1)}, 1},
{[]interface{}{int64(-1), int64(-2), float64(-3), int64(-4), float64(-5)}, -5},
{[]interface{}{int64(-1), int64(-1), float64(-1), int64(-1), float64(-1)}, -1},
{[]interface{}{int64(1), int64(0), float64(1), int64(1), float64(1)}, 0},
}
for _, c := range cases {
r := fnMin(c.args)
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("min(%v) = %v, want %v", c.args, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("min(%v) = %v, want %v", c.args, v, c.expected)
}
default:
t.Errorf("unknown result type min(%v)", c.args)
}
}
}
func TestSumAvg(t *testing.T) {
cases := []struct {
args []interface{}
expected float64
}{
{[]interface{}{int64(1)}, 1},
{[]interface{}{int64(1), int64(2), int64(3), int64(4), int64(5)}, 15},
{[]interface{}{int64(5), int64(4), float64(3), int64(2), float64(1)}, 15},
{[]interface{}{int64(-1), int64(-2), float64(-3), int64(-4), float64(-5)}, -15},
{[]interface{}{int64(-1), int64(-1), float64(-1), int64(-1), float64(-1)}, -5},
{[]interface{}{int64(1), int64(0), float64(1), int64(1), float64(1)}, 4},
}
for _, c := range cases {
r := fnSum(c.args)
switch v := r.(type) {
case float64:
if v != c.expected {
t.Errorf("sum(%v) = %v, want %v", c.args, v, c.expected)
}
default:
t.Errorf("unknown result type sum(%v)", c.args)
}
}
for _, c := range cases {
r := fnAvg(c.args)
expected := c.expected / float64(len(c.args))
switch v := r.(type) {
case float64:
if v != expected {
t.Errorf("avg(%v) = %v, want %v", c.args, v, expected)
}
default:
t.Errorf("unknown result type avg(%v)", c.args)
}
}
}
func TestSqrt(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(0), 0},
{int64(1), 1},
{int64(256), 16},
{10.0, math.Sqrt(10)},
{10000.0, math.Sqrt(10000)},
}
for _, c := range cases {
r := fnSqrt([]interface{}{c.arg})
switch v := r.(type) {
case float64:
if v != c.expected {
t.Errorf("sqrt(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type sqrt(%v)", c.arg)
}
}
}
func TestFloor(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(0), 0},
{int64(1), 1},
{int64(-1), -1},
{10.4, 10},
{-10.4, -11},
{10.8, 10},
{-10.8, -11},
}
for _, c := range cases {
r := fnFloor([]interface{}{c.arg})
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("floor(%v) = %v, want %v", c.arg, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("floor(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type floor(%v)", c.arg)
}
}
}
func TestCeil(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(0), 0},
{int64(1), 1},
{int64(-1), -1},
{10.4, 11},
{-10.4, -10},
{10.8, 11},
{-10.8, -10},
}
for _, c := range cases {
r := fnCeil([]interface{}{c.arg})
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("ceil(%v) = %v, want %v", c.arg, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("ceil(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type ceil(%v)", c.arg)
}
}
}
func TestRound(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(0), 0},
{int64(1), 1},
{int64(-1), -1},
{10.4, 10},
{-10.4, -10},
{10.8, 11},
{-10.8, -11},
}
for _, c := range cases {
r := fnRound([]interface{}{c.arg})
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("round(%v) = %v, want %v", c.arg, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("round(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type round(%v)", c.arg)
}
}
}
func TestLog(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(1), math.Log(1)},
{0.1, math.Log(0.1)},
{10.4, math.Log(10.4)},
{10.8, math.Log(10.8)},
}
for _, c := range cases {
r := fnLog([]interface{}{c.arg})
switch v := r.(type) {
case float64:
if v != c.expected {
t.Errorf("log(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type log(%v)", c.arg)
}
}
}
func TestLog10(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(1), math.Log10(1)},
{0.1, math.Log10(0.1)},
{10.4, math.Log10(10.4)},
{10.8, math.Log10(10.8)},
{int64(100), math.Log10(100)},
}
for _, c := range cases {
r := fnLog10([]interface{}{c.arg})
switch v := r.(type) {
case float64:
if v != c.expected {
t.Errorf("log10(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type log10(%v)", c.arg)
}
}
}
func TestAbs(t *testing.T) {
cases := []struct {
arg interface{}
expected float64
}{
{int64(1), 1},
{int64(0), 0},
{int64(-1), 1},
{10.4, 10.4},
{-10.4, 10.4},
}
for _, c := range cases {
r := fnAbs([]interface{}{c.arg})
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("abs(%v) = %v, want %v", c.arg, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("abs(%v) = %v, want %v", c.arg, v, c.expected)
}
default:
t.Errorf("unknown result type abs(%v)", c.arg)
}
}
}
func TestIf(t *testing.T) {
cases := []struct {
args []interface{}
expected float64
}{
{[]interface{}{true, int64(10), int64(20)}, 10},
{[]interface{}{false, int64(10), int64(20)}, 20},
{[]interface{}{true, 10.3, 20.6}, 10.3},
{[]interface{}{false, 10.3, 20.6}, 20.6},
{[]interface{}{true, int64(10), 20.6}, 10},
{[]interface{}{false, int64(10), 20.6}, 20.6},
}
for _, c := range cases {
r := fnIf(c.args)
switch v := r.(type) {
case int64:
if v != int64(c.expected) {
t.Errorf("if(%v) = %v, want %v", c.args, v, int64(c.expected))
}
case float64:
if v != c.expected {
t.Errorf("if(%v) = %v, want %v", c.args, v, c.expected)
}
default:
t.Errorf("unknown result type if(%v)", c.args)
}
}
}
package app
import (
"regexp"
"strings"
)
type RouteMatchCriteria struct {
Tag string `yaml:"tag"`
Value string `yaml:"match"`
Re *regexp.Regexp `yaml:"-"`
}
func (c *RouteMatchCriteria) UnmarshalYAML(unmarshal func(interface{}) error) error {
var v map[string]string
if e := unmarshal(&v); e != nil {
return e
}
for k, a := range v {
c.Tag = k
c.Value = a
if strings.HasPrefix(a, "re:") {
re, e := regexp.Compile(a[3:])
if e != nil {
return e
}
c.Re = re
}
}
return nil
}
type Route struct {
Continue bool `yaml:"continue"`
Receiver string `yaml:"receiver"`
GroupWait Duration `yaml:"group_wait"`
GroupInterval Duration `yaml:"group_interval"`
RepeatInterval Duration `yaml:"repeat_interval"`
GroupBy []string `yaml:"group_by"`
Match []RouteMatchCriteria `yaml:"match"`
Routes []*Route `yaml:"routes"`
}
package app
import (
"bytes"
"database/sql"
"encoding/json"
"errors"
"fmt"
"net/http"
"os"
"regexp"
"strings"
"sync"
"sync/atomic"
"text/scanner"
"text/template"
"time"
"github.com/taosdata/alert/app/expr"
"github.com/taosdata/alert/models"
"github.com/taosdata/alert/utils"
"github.com/taosdata/alert/utils/log"
)
type Duration struct{ time.Duration }
func (d Duration) MarshalJSON() ([]byte, error) {
return json.Marshal(d.String())
}
func (d *Duration) doUnmarshal(v interface{}) error {
switch value := v.(type) {
case float64:
*d = Duration{time.Duration(value)}
return nil
case string:
if duration, e := time.ParseDuration(value); e != nil {
return e
} else {
*d = Duration{duration}
}
return nil
default:
return errors.New("invalid duration")
}
}
func (d *Duration) UnmarshalJSON(b []byte) error {
var v interface{}
if e := json.Unmarshal(b, &v); e != nil {
return e
}
return d.doUnmarshal(v)
}
const (
AlertStateWaiting = iota
AlertStatePending
AlertStateFiring
)
type Alert struct {
State uint8 `json:"-"`
LastRefreshAt time.Time `json:"-"`
StartsAt time.Time `json:"startsAt,omitempty"`
EndsAt time.Time `json:"endsAt,omitempty"`
Values map[string]interface{} `json:"values"`
Labels map[string]string `json:"labels"`
Annotations map[string]string `json:"annotations"`
}
func (alert *Alert) doRefresh(firing bool, rule *Rule) bool {
switch {
case (!firing) && (alert.State == AlertStateWaiting):
return false
case (!firing) && (alert.State == AlertStatePending):
alert.State = AlertStateWaiting
return false
case (!firing) && (alert.State == AlertStateFiring):
alert.State = AlertStateWaiting
alert.EndsAt = time.Now()
case firing && (alert.State == AlertStateWaiting):
alert.StartsAt = time.Now()
if rule.For.Nanoseconds() > 0 {
alert.State = AlertStatePending
return false
}
alert.State = AlertStateFiring
case firing && (alert.State == AlertStatePending):
if time.Now().Sub(alert.StartsAt) < rule.For.Duration {
return false
}
alert.StartsAt = alert.StartsAt.Add(rule.For.Duration)
alert.State = AlertStateFiring
case firing && (alert.State == AlertStateFiring):
}
return true
}
func (alert *Alert) refresh(rule *Rule, values map[string]interface{}) {
alert.LastRefreshAt = time.Now()
defer func() {
switch x := recover().(type) {
case nil:
case error:
rule.setState(RuleStateError)
log.Errorf("[%s]: failed to evaluate: %s", rule.Name, x.Error())
default:
rule.setState(RuleStateError)
log.Errorf("[%s]: failed to evaluate: unknown error", rule.Name)
}
}()
alert.Values = values
res := rule.Expr.Eval(func(key string) interface{} {
// ToLower is required as column name in result is in lower case
return alert.Values[strings.ToLower(key)]
})
val, ok := res.(bool)
if !ok {
rule.setState(RuleStateError)
log.Errorf("[%s]: result type is not bool", rule.Name)
return
}
if !alert.doRefresh(val, rule) {
return
}
buf := bytes.Buffer{}
alert.Annotations = map[string]string{}
for k, v := range rule.Annotations {
if e := v.Execute(&buf, alert); e != nil {
log.Errorf("[%s]: failed to generate annotation '%s': %s", rule.Name, k, e.Error())
} else {
alert.Annotations[k] = buf.String()
}
buf.Reset()
}
buf.Reset()
if e := json.NewEncoder(&buf).Encode(alert); e != nil {
log.Errorf("[%s]: failed to serialize alert to JSON: %s", rule.Name, e.Error())
} else {
chAlert <- buf.String()
}
}
const (
RuleStateNormal = iota
RuleStateError
RuleStateDisabled
RuleStateRunning = 0x04
)
type Rule struct {
Name string `json:"name"`
State uint32 `json:"state"`
SQL string `json:"sql"`
GroupByCols []string `json:"-"`
For Duration `json:"for"`
Period Duration `json:"period"`
NextRunTime time.Time `json:"-"`
RawExpr string `json:"expr"`
Expr expr.Expr `json:"-"`
Labels map[string]string `json:"labels"`
RawAnnotations map[string]string `json:"annotations"`
Annotations map[string]*template.Template `json:"-"`
Alerts sync.Map `json:"-"`
}
func (rule *Rule) clone() *Rule {
return &Rule{
Name: rule.Name,
State: RuleStateNormal,
SQL: rule.SQL,
GroupByCols: rule.GroupByCols,
For: rule.For,
Period: rule.Period,
NextRunTime: time.Time{},
RawExpr: rule.RawExpr,
Expr: rule.Expr,
Labels: rule.Labels,
RawAnnotations: rule.RawAnnotations,
Annotations: rule.Annotations,
// don't copy alerts
}
}
func (rule *Rule) setState(s uint32) {
for {
old := atomic.LoadUint32(&rule.State)
new := old&0xffffffc0 | s
if atomic.CompareAndSwapUint32(&rule.State, old, new) {
break
}
}
}
func (rule *Rule) state() uint32 {
return atomic.LoadUint32(&rule.State) & 0xffffffc0
}
func (rule *Rule) isEnabled() bool {
state := atomic.LoadUint32(&rule.State)
return state&RuleStateDisabled == 0
}
func (rule *Rule) setNextRunTime(tm time.Time) {
rule.NextRunTime = tm.Round(rule.Period.Duration)
if rule.NextRunTime.Before(tm) {
rule.NextRunTime = rule.NextRunTime.Add(rule.Period.Duration)
}
}
func parseGroupBy(sql string) (cols []string, err error) {
defer func() {
if e := recover(); e != nil {
err = e.(error)
}
}()
s := scanner.Scanner{
Error: func(s *scanner.Scanner, msg string) {
panic(errors.New(msg))
},
Mode: scanner.ScanIdents | scanner.ScanInts | scanner.ScanFloats,
}
s.Init(strings.NewReader(sql))
if s.Scan() != scanner.Ident || strings.ToLower(s.TokenText()) != "select" {
err = errors.New("only select statement is allowed.")
return
}
hasGroupBy := false
for t := s.Scan(); t != scanner.EOF; t = s.Scan() {
if t != scanner.Ident {
continue
}
if strings.ToLower(s.TokenText()) != "group" {
continue
}
if s.Scan() != scanner.Ident {
continue
}
if strings.ToLower(s.TokenText()) == "by" {
hasGroupBy = true
break
}
}
if !hasGroupBy {
return
}
for {
if s.Scan() != scanner.Ident {
err = errors.New("SQL statement syntax error.")
return
}
col := strings.ToLower(s.TokenText())
cols = append(cols, col)
if s.Scan() != ',' {
break
}
}
return
}
func (rule *Rule) parseGroupBy() (err error) {
cols, e := parseGroupBy(rule.SQL)
if e == nil {
rule.GroupByCols = cols
}
return e
}
func (rule *Rule) getAlert(values map[string]interface{}) *Alert {
sb := strings.Builder{}
for _, name := range rule.GroupByCols {
if value := values[name]; value != nil {
sb.WriteString(fmt.Sprint(value))
}
sb.WriteByte('_')
}
var alert *Alert
key := sb.String()
if v, ok := rule.Alerts.Load(key); ok {
alert = v.(*Alert)
}
if alert == nil {
alert = &Alert{Labels: map[string]string{}}
for k, v := range rule.Labels {
alert.Labels[k] = v
}
for _, name := range rule.GroupByCols {
value := values[name]
if value == nil {
alert.Labels[name] = ""
} else {
alert.Labels[name] = fmt.Sprint(value)
}
}
rule.Alerts.Store(key, alert)
}
return alert
}
func (rule *Rule) preRun(tm time.Time) bool {
if tm.Before(rule.NextRunTime) {
return false
}
rule.setNextRunTime(tm)
for {
state := atomic.LoadUint32(&rule.State)
if state != RuleStateNormal {
return false
}
if atomic.CompareAndSwapUint32(&rule.State, state, RuleStateRunning) {
break
}
}
return true
}
func (rule *Rule) run(db *sql.DB) {
rows, e := db.Query(rule.SQL)
if e != nil {
log.Errorf("[%s]: failed to query TDengine: %s", rule.Name, e.Error())
return
}
defer rows.Close()
cols, e := rows.ColumnTypes()
if e != nil {
log.Errorf("[%s]: unable to get column information: %s", rule.Name, e.Error())
return
}
for rows.Next() {
values := make([]interface{}, 0, len(cols))
for range cols {
var v interface{}
values = append(values, &v)
}
if e := rows.Scan(values...); e != nil {
log.Errorf("[%s]: failed to scan row: %s", rule.Name, e.Error())
continue
}
m := make(map[string]interface{})
for i, col := range cols {
name := strings.ToLower(col.Name())
m[name] = *(values[i].(*interface{}))
}
alert := rule.getAlert(m)
alert.refresh(rule, m)
}
now := time.Now()
rule.Alerts.Range(func(k, v interface{}) bool {
alert := v.(*Alert)
if now.Sub(alert.LastRefreshAt) > rule.Period.Duration*10 {
rule.Alerts.Delete(k)
}
return true
})
}
func (rule *Rule) postRun() {
for {
old := atomic.LoadUint32(&rule.State)
new := old & ^uint32(RuleStateRunning)
if atomic.CompareAndSwapUint32(&rule.State, old, new) {
break
}
}
}
func newRule(str string) (*Rule, error) {
rule := Rule{}
e := json.NewDecoder(strings.NewReader(str)).Decode(&rule)
if e != nil {
return nil, e
}
if rule.Period.Nanoseconds() <= 0 {
rule.Period = Duration{time.Minute}
}
rule.setNextRunTime(time.Now())
if rule.For.Nanoseconds() < 0 {
rule.For = Duration{0}
}
if e = rule.parseGroupBy(); e != nil {
return nil, e
}
if expr, e := expr.Compile(rule.RawExpr); e != nil {
return nil, e
} else {
rule.Expr = expr
}
rule.Annotations = map[string]*template.Template{}
for k, v := range rule.RawAnnotations {
v = reValue.ReplaceAllStringFunc(v, func(s string) string {
// as column name in query result is always in lower case,
// we need to convert value reference in annotations to
// lower case
return strings.ToLower(s)
})
text := "{{$labels := .Labels}}{{$values := .Values}}" + v
tmpl, e := template.New(k).Parse(text)
if e != nil {
return nil, e
}
rule.Annotations[k] = tmpl
}
return &rule, nil
}
const (
batchSize = 1024
)
var (
rules sync.Map
wg sync.WaitGroup
chStop = make(chan struct{})
chAlert = make(chan string, batchSize)
reValue = regexp.MustCompile(`\$values\.[_a-zA-Z0-9]+`)
)
func runRules() {
defer wg.Done()
db, e := sql.Open("taosSql", utils.Cfg.TDengine)
if e != nil {
log.Fatal("failed to connect to TDengine: ", e.Error())
}
defer db.Close()
ticker := time.NewTicker(time.Second)
defer ticker.Stop()
LOOP:
for {
var tm time.Time
select {
case <-chStop:
close(chAlert)
break LOOP
case tm = <-ticker.C:
}
rules.Range(func(k, v interface{}) bool {
rule := v.(*Rule)
if !rule.preRun(tm) {
return true
}
wg.Add(1)
go func(rule *Rule) {
defer wg.Done()
defer rule.postRun()
rule.run(db)
}(rule)
return true
})
}
}
func doPushAlerts(alerts []string) {
defer wg.Done()
if len(utils.Cfg.Receivers.AlertManager) == 0 {
return
}
buf := bytes.Buffer{}
buf.WriteByte('[')
for i, alert := range alerts {
if i > 0 {
buf.WriteByte(',')
}
buf.WriteString(alert)
}
buf.WriteByte(']')
log.Debug(buf.String())
resp, e := http.DefaultClient.Post(utils.Cfg.Receivers.AlertManager, "application/json", &buf)
if e != nil {
log.Errorf("failed to push alerts to downstream: %s", e.Error())
return
}
resp.Body.Close()
}
func pushAlerts() {
defer wg.Done()
ticker := time.NewTicker(time.Millisecond * 100)
defer ticker.Stop()
alerts := make([]string, 0, batchSize)
LOOP:
for {
select {
case alert := <-chAlert:
if utils.Cfg.Receivers.Console {
fmt.Print(alert)
}
if len(alert) == 0 {
if len(alerts) > 0 {
wg.Add(1)
doPushAlerts(alerts)
}
break LOOP
}
if len(alerts) == batchSize {
wg.Add(1)
go doPushAlerts(alerts)
alerts = make([]string, 0, batchSize)
}
alerts = append(alerts, alert)
case <-ticker.C:
if len(alerts) > 0 {
wg.Add(1)
go doPushAlerts(alerts)
alerts = make([]string, 0, batchSize)
}
}
}
}
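// loadRuleFromDatabase loads and parses all rules stored in the sqlite database;
// entries that fail to parse are skipped and disabled rules are marked accordingly.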
func loadRuleFromDatabase() error {
allRules, e := models.LoadAllRule()
if e != nil {
log.Error("failed to load rules from database:", e.Error())
return e
}
count := 0
for _, r := range allRules {
rule, e := newRule(r.Content)
if e != nil {
log.Errorf("[%s]: parse failed: %s", r.Name, e.Error())
continue
}
if !r.Enabled {
rule.setState(RuleStateDisabled)
}
rules.Store(rule.Name, rule)
count++
}
log.Infof("total %d rules loaded", count)
return nil
}
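// loadRuleFromFile loads rule definitions from the JSON rule file configured by `ruleFile`.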
func loadRuleFromFile() error {
f, e := os.Open(utils.Cfg.RuleFile)
if e != nil {
log.Error("failed to load rules from file:", e.Error())
return e
}
defer f.Close()
var allRules []Rule
e = json.NewDecoder(f).Decode(&allRules)
if e != nil {
log.Error("failed to parse rule file:", e.Error())
return e
}
for i := 0; i < len(allRules); i++ {
rule := &allRules[i]
rules.Store(rule.Name, rule)
}
log.Infof("total %d rules loaded", len(allRules))
return nil
}
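// initRule loads rules from the database when one is configured (otherwise from the
// rule file) and starts the scheduling and alert-pushing goroutines.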
func initRule() error {
if len(utils.Cfg.Database) > 0 {
if e := loadRuleFromDatabase(); e != nil {
return e
}
} else {
if e := loadRuleFromFile(); e != nil {
return e
}
}
wg.Add(2)
go runRules()
go pushAlerts()
return nil
}
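// uninitRule signals the background goroutines to stop and waits for them to finish.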
func uninitRule() error {
close(chStop)
wg.Wait()
return nil
}
package app
import (
"fmt"
"testing"
"github.com/taosdata/alert/utils/log"
)
func TestParseGroupBy(t *testing.T) {
cases := []struct {
sql string
cols []string
}{
{
sql: "select * from a",
cols: []string{},
},
{
sql: "select * from a group by abc",
cols: []string{"abc"},
},
{
sql: "select * from a group by abc, def",
cols: []string{"abc", "def"},
},
{
sql: "select * from a Group by abc, def order by abc",
cols: []string{"abc", "def"},
},
}
for _, c := range cases {
cols, e := parseGroupBy(c.sql)
if e != nil {
t.Errorf("failed to parse sql '%s': %s", c.sql, e.Error())
}
for i := range cols {
if i >= len(c.cols) {
t.Errorf("count of group by columns of '%s' is wrong", c.sql)
}
if c.cols[i] != cols[i] {
t.Errorf("wrong group by columns for '%s'", c.sql)
}
}
}
}
func TestManagement(t *testing.T) {
const format = `{"name":"rule%d", "sql":"select count(*) as count from meters", "expr":"count>2"}`
log.Init()
for i := 0; i < 5; i++ {
s := fmt.Sprintf(format, i)
rule, e := newRule(s)
if e != nil {
t.Errorf("failed to create rule: %s", e.Error())
}
e = doUpdateRule(rule, s)
if e != nil {
t.Errorf("failed to add or update rule: %s", e.Error())
}
}
for i := 0; i < 5; i++ {
name := fmt.Sprintf("rule%d", i)
if _, ok := rules.Load(name); !ok {
t.Errorf("rule '%s' does not exist", name)
}
}
name := "rule1"
if e := doDeleteRule(name); e != nil {
t.Errorf("failed to delete rule: %s", e.Error())
}
if _, ok := rules.Load(name); ok {
t.Errorf("rule '%s' should not exist any more", name)
}
}
{
"port": 8100,
"database": "file:alert.db",
"tdengine": "root:taosdata@/tcp(127.0.0.1:0)/",
"log": {
"level": "debug",
"path": ""
},
"receivers": {
"alertManager": "http://127.0.0.1:9093/api/v1/alerts",
"console": true
}
}
#!/bin/bash
#
# This file is used to install the TDengine client library on Linux systems.
set -e
#set -x
# -----------------------Variables definition---------------------
script_dir=$(dirname $(readlink -f "$0"))
# Dynamic directory
lib_link_dir="/usr/lib"
#install main path
install_main_dir="/usr/local/taos"
# Color setting
RED='\033[0;31m'
GREEN='\033[1;32m'
GREEN_DARK='\033[0;32m'
GREEN_UNDERLINE='\033[4;32m'
NC='\033[0m'
csudo=""
if command -v sudo > /dev/null; then
csudo="sudo"
fi
function clean_driver() {
${csudo} rm -f /usr/lib/libtaos.so || :
}
function install_driver() {
echo -e "${GREEN}Start to install TDengine client driver ...${NC}"
#create install main dir and all sub dir
${csudo} mkdir -p ${install_main_dir}
${csudo} mkdir -p ${install_main_dir}/driver
${csudo} rm -f ${lib_link_dir}/libtaos.* || :
${csudo} cp -rf ${script_dir}/driver/* ${install_main_dir}/driver && ${csudo} chmod 777 ${install_main_dir}/driver/*
${csudo} ln -s ${install_main_dir}/driver/libtaos.* ${lib_link_dir}/libtaos.so.1
${csudo} ln -s ${lib_link_dir}/libtaos.so.1 ${lib_link_dir}/libtaos.so
echo
echo -e "\033[44;32;1mTDengine client driver is successfully installed!${NC}"
}
install_driver
package main
import (
"context"
"flag"
"fmt"
"io"
"net/http"
"os"
"os/signal"
"path/filepath"
"runtime"
"strconv"
"time"
"github.com/taosdata/alert/app"
"github.com/taosdata/alert/models"
"github.com/taosdata/alert/utils"
"github.com/taosdata/alert/utils/log"
_ "github.com/mattn/go-sqlite3"
_ "github.com/taosdata/driver-go/taosSql"
)
type httpHandler struct {
}
func (h *httpHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
start := time.Now()
path := r.URL.Path
http.DefaultServeMux.ServeHTTP(w, r)
	duration := time.Since(start)
log.Debugf("[%s]\t%s\t%s", r.Method, path, duration)
}
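// serveWeb starts the HTTP server for the restful rule-management API in a background
// goroutine and returns it so that it can be shut down gracefully later.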
func serveWeb() *http.Server {
log.Info("Listening at port: ", utils.Cfg.Port)
srv := &http.Server{
Addr: ":" + strconv.Itoa(int(utils.Cfg.Port)),
Handler: &httpHandler{},
}
go func() {
if e := srv.ListenAndServe(); e != nil {
log.Error(e.Error())
}
}()
return srv
}
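// copyFile copies src to dst; it is a no-op when both paths are identical.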
func copyFile(dst, src string) error {
if dst == src {
return nil
}
in, e := os.Open(src)
if e != nil {
return e
}
defer in.Close()
out, e := os.Create(dst)
if e != nil {
return e
}
defer out.Close()
_, e = io.Copy(out, in)
return e
}
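// doSetup copies the configuration file to /etc/taos/alert.cfg and writes a systemd
// unit file so that the alert application can run as a service (Linux only).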
func doSetup(cfgPath string) error {
exePath, e := os.Executable()
if e != nil {
fmt.Fprintf(os.Stderr, "failed to get executable path: %s\n", e.Error())
return e
}
if !filepath.IsAbs(cfgPath) {
dir := filepath.Dir(exePath)
cfgPath = filepath.Join(dir, cfgPath)
}
e = copyFile("/etc/taos/alert.cfg", cfgPath)
if e != nil {
fmt.Fprintf(os.Stderr, "failed copy configuration file: %s\n", e.Error())
return e
}
f, e := os.Create("/etc/systemd/system/alert.service")
if e != nil {
fmt.Printf("failed to create alert service: %s\n", e.Error())
return e
}
defer f.Close()
const content = `[Unit]
Description=Alert (TDengine Alert Service)
After=syslog.target
After=network.target
[Service]
RestartSec=2s
Type=simple
WorkingDirectory=/var/lib/taos/
ExecStart=%s -cfg /etc/taos/alert.cfg
Restart=always
[Install]
WantedBy=multi-user.target
`
_, e = fmt.Fprintf(f, content, exePath)
if e != nil {
fmt.Printf("failed to create alert.service: %s\n", e.Error())
return e
}
return nil
}
const version = "TDengine alert v1.0.0"
func main() {
var (
cfgPath string
setup bool
showVersion bool
)
flag.StringVar(&cfgPath, "cfg", "alert.cfg", "path of configuration file")
flag.BoolVar(&setup, "setup", false, "setup the service as a daemon")
flag.BoolVar(&showVersion, "version", false, "show version information")
flag.Parse()
if showVersion {
fmt.Println(version)
return
}
if setup {
if runtime.GOOS == "linux" {
doSetup(cfgPath)
} else {
fmt.Fprintln(os.Stderr, "can only run as a daemon mode in linux.")
}
return
}
if e := utils.LoadConfig(cfgPath); e != nil {
fmt.Fprintln(os.Stderr, "failed to load configuration")
return
}
if e := log.Init(); e != nil {
fmt.Fprintln(os.Stderr, "failed to initialize logger:", e.Error())
return
}
defer log.Sync()
if e := models.Init(); e != nil {
log.Fatal("failed to initialize database:", e.Error())
}
if e := app.Init(); e != nil {
log.Fatal("failed to initialize application:", e.Error())
}
// start web server
srv := serveWeb()
	// wait for `Ctrl-C` or `Kill` to exit; note that `Kill` cannot be caught on Windows
	interrupt := make(chan os.Signal, 1) // buffered, as required by signal.Notify
signal.Notify(interrupt, os.Interrupt)
signal.Notify(interrupt, os.Kill)
<-interrupt
fmt.Println("'Ctrl + C' received, exiting...")
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
srv.Shutdown(ctx)
cancel()
app.Uninit()
models.Uninit()
}
{
"name": "CarTooFast",
"period": "10s",
"sql": "select avg(speed) as avgspeed from test.cars where ts > now - 5m group by id",
"expr": "avgSpeed > 100",
"for": "0s",
"labels": {
"ruleName": "CarTooFast"
},
"annotations": {
"summary": "car {{$values.id}} is too fast, its average speed is {{$values.avgSpeed}}km/h"
}
}
\ No newline at end of file
module github.com/taosdata/alert
go 1.14
require (
github.com/jmoiron/sqlx v1.2.0
github.com/mattn/go-sqlite3 v2.0.3+incompatible
github.com/taosdata/driver-go v0.0.0-20200311072652-8c58c512b6ac
go.uber.org/zap v1.14.1
google.golang.org/appengine v1.6.5 // indirect
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c
)
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-sql-driver/mysql v1.4.0 h1:7LxgVwFb2hIQtMm87NdgAVfXjnt4OePseqT1tKx+opk=
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/jmoiron/sqlx v1.2.0 h1:41Ip0zITnmWNR/vHV+S4m+VoUivnWY5E4OJfLZjCJMA=
github.com/jmoiron/sqlx v1.2.0/go.mod h1:1FEQNm3xlJgrMD+FBdI9+xvCksHtbpVBBw5dYhBSsks=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/lib/pq v1.0.0 h1:X5PMW56eZitiTeO7tKzZxFCSpbFZJtkMMooicw2us9A=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/mattn/go-sqlite3 v1.9.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/mattn/go-sqlite3 v2.0.3+incompatible h1:gXHsfypPkaMZrKbD5209QV9jbUTJKjyR5WD3HYQSd+U=
github.com/mattn/go-sqlite3 v2.0.3+incompatible/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/taosdata/driver-go v0.0.0-20200311072652-8c58c512b6ac h1:uZplMwObJj8mfgI4ZvYPNHRn+fNz2leiMPqShsjtEEc=
github.com/taosdata/driver-go v0.0.0-20200311072652-8c58c512b6ac/go.mod h1:TuMZDpnBrjNO07rneM2C5qMYFqIro4aupL2cUOGGo/I=
go.uber.org/atomic v1.5.0 h1:OI5t8sDa1Or+q8AeE+yKeB/SDYioSHAgcVljj9JIETY=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.6.0 h1:Ezj3JGmsOnG1MoRWQkPBsKLe9DwWD9QeXzTRzzldNVk=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/multierr v1.3.0 h1:sFPn2GLc3poCkfrpIXGhBD2X0CMIo4Q/zSULXrj/+uc=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
go.uber.org/multierr v1.5.0 h1:KCa4XfM8CWFCpxXRGok+Q0SS/0XBhMDbHHGABQLvD2A=
go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee h1:0mgffUl7nfd+FpvXMVz4IDEaUSmT1ysygQC7qYo7sG4=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.14.0 h1:/pduUoebOeeJzTDFuoMgC6nRkiasr1sBCIEorly7m4o=
go.uber.org/zap v1.14.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
go.uber.org/zap v1.14.1 h1:nYDKopTbvAPq/NrUVZwT15y2lpROBiLLyoRTbXOYWOo=
go.uber.org/zap v1.14.1/go.mod h1:Mb2vm2krFEG5DV0W9qcHBYFtp/Wku1cvYaqPsS/WYfc=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5 h1:hKsoRgsbwY1NafxrwTs+k64bikrLBkAgPir1TNCj3Zs=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.6.5 h1:tycE03LOZYQNhDpS27tcQdAzLCVMaj7QT2SXxebnpCM=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
package models
import (
"fmt"
"strconv"
"time"
"github.com/jmoiron/sqlx"
"github.com/taosdata/alert/utils"
"github.com/taosdata/alert/utils/log"
)
var db *sqlx.DB
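// Init opens the sqlite database configured by `database` and upgrades its schema if necessary.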
func Init() error {
xdb, e := sqlx.Connect("sqlite3", utils.Cfg.Database)
if e == nil {
db = xdb
}
return upgrade()
}
func Uninit() error {
db.Close()
return nil
}
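// getStringOption reads a single value from the `option` table, using tx when it is not nil.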
func getStringOption(tx *sqlx.Tx, name string) (string, error) {
const qs = "SELECT * FROM `option` WHERE `name`=?"
var (
e error
o struct {
Name string `db:"name"`
Value string `db:"value"`
}
)
if tx != nil {
e = tx.Get(&o, qs, name)
} else {
e = db.Get(&o, qs, name)
}
if e != nil {
return "", e
}
return o.Value, nil
}
func getIntOption(tx *sqlx.Tx, name string) (int, error) {
s, e := getStringOption(tx, name)
if e != nil {
return 0, e
}
v, e := strconv.ParseInt(s, 10, 64)
return int(v), e
}
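// setOption inserts or replaces a value in the `option` table; time.Time values are stored in RFC3339 format.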
func setOption(tx *sqlx.Tx, name string, value interface{}) error {
const qs = "REPLACE INTO `option`(`name`, `value`) VALUES(?, ?);"
var (
e error
sv string
)
switch v := value.(type) {
case time.Time:
sv = v.Format(time.RFC3339)
default:
sv = fmt.Sprint(value)
}
if tx != nil {
_, e = tx.Exec(qs, name, sv)
} else {
_, e = db.Exec(qs, name, sv)
}
return e
}
var upgradeScripts = []struct {
ver int
stmts []string
}{
{
ver: 0,
stmts: []string{
"CREATE TABLE `option`( `name` VARCHAR(63) PRIMARY KEY, `value` VARCHAR(255) NOT NULL) WITHOUT ROWID;",
"CREATE TABLE `rule`( `name` VARCHAR(63) PRIMARY KEY, `enabled` TINYINT(1) NOT NULL, `created_at` DATETIME NOT NULL, `updated_at` DATETIME NOT NULL, `content` TEXT(65535) NOT NULL);",
},
},
}
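// upgrade applies any pending schema upgrade scripts inside a transaction and stores the new database version.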
func upgrade() error {
const dbVersion = "database version"
ver, e := getIntOption(nil, dbVersion)
	if e != nil { // treat any error as "schema not yet created"
		ver = -1 // run all upgrade scripts in that case
}
tx, e := db.Beginx()
if e != nil {
return e
}
for _, us := range upgradeScripts {
if us.ver <= ver {
continue
}
log.Info("upgrading database to version: ", us.ver)
for _, s := range us.stmts {
if _, e = tx.Exec(s); e != nil {
tx.Rollback()
return e
}
}
ver = us.ver
}
if e = setOption(tx, dbVersion, ver); e != nil {
tx.Rollback()
return e
}
return tx.Commit()
}
package models
import "time"
const (
sqlSelectAllRule = "SELECT * FROM `rule`;"
sqlSelectRule = "SELECT * FROM `rule` WHERE `name` = ?;"
sqlInsertRule = "INSERT INTO `rule`(`name`, `enabled`, `created_at`, `updated_at`, `content`) VALUES(:name, :enabled, :created_at, :updated_at, :content);"
sqlUpdateRule = "UPDATE `rule` SET `content` = :content, `updated_at` = :updated_at WHERE `name` = :name;"
sqlEnableRule = "UPDATE `rule` SET `enabled` = :enabled, `updated_at` = :updated_at WHERE `name` = :name;"
sqlDeleteRule = "DELETE FROM `rule` WHERE `name` = ?;"
)
type Rule struct {
Name string `db:"name"`
Enabled bool `db:"enabled"`
CreatedAt time.Time `db:"created_at"`
UpdatedAt time.Time `db:"updated_at"`
Content string `db:"content"`
}
func AddRule(r *Rule) error {
r.CreatedAt = time.Now()
r.Enabled = true
r.UpdatedAt = r.CreatedAt
_, e := db.NamedExec(sqlInsertRule, r)
return e
}
func UpdateRule(name string, content string) error {
r := Rule{
Name: name,
UpdatedAt: time.Now(),
Content: content,
}
_, e := db.NamedExec(sqlUpdateRule, &r)
return e
}
func EnableRule(name string, enabled bool) error {
r := Rule{
Name: name,
Enabled: enabled,
UpdatedAt: time.Now(),
}
	if res, e := db.NamedExec(sqlEnableRule, &r); e != nil {
		return e
	} else if n, e := res.RowsAffected(); e != nil {
		return e
	} else if n != 1 {
		return fmt.Errorf("rule '%s' does not exist", name)
	}
return nil
}
func DeleteRule(name string) error {
_, e := db.Exec(sqlDeleteRule, name)
return e
}
func GetRuleByName(name string) (*Rule, error) {
r := Rule{}
if e := db.Get(&r, sqlSelectRule, name); e != nil {
return nil, e
}
return &r, nil
}
func LoadAllRule() ([]Rule, error) {
var rules []Rule
if e := db.Select(&rules, sqlSelectAllRule); e != nil {
return nil, e
}
return rules, nil
}
set -e
# release.sh -c [arm | arm64 | x64 | x86]
# -o [linux | darwin | windows]
# set parameters by default value
cpuType=x64 # [arm | arm64 | x64 | x86]
osType=linux # [linux | darwin | windows]
while getopts "h:c:o:" arg
do
case $arg in
c)
#echo "cpuType=$OPTARG"
cpuType=$(echo $OPTARG)
;;
o)
#echo "osType=$OPTARG"
osType=$(echo $OPTARG)
;;
h)
echo "Usage: `basename $0` -c [arm | arm64 | x64 | x86] -o [linux | darwin | windows]"
exit 0
;;
?) #unknown option
echo "unknown argument"
exit 1
;;
esac
done
startdir=$(pwd)
scriptdir=$(dirname $(readlink -f $0))
cd ${scriptdir}/cmd/alert
version=$(grep 'const version =' main.go | awk '{print $NF}')
version=${version%\"}
echo "cpuType=${cpuType}"
echo "osType=${osType}"
echo "version=${version}"
GOOS=${osType} GOARCH=${cpuType} go build
GZIP=-9 tar -zcf ${startdir}/tdengine-alert-${version}-${osType}-${cpuType}.tar.gz alert alert.cfg install_driver.sh driver/
sql connect
sleep 100
sql drop database if exists test
sql create database test
sql use test
print ====== create super table
sql create table cars (ts timestamp, speed int) tags(id int)
print ====== create tables
$i = 0
while $i < 5
$tb = car . $i
sql create table $tb using cars tags( $i )
$i = $i + 1
endw
{
"name": "test1",
"period": "10s",
"sql": "select avg(speed) as avgspeed from test.cars group by id",
"expr": "avgSpeed >= 3",
"for": "0s",
"labels": {
"ruleName": "test1"
},
"annotations": {
"summary": "speed of car(id = {{$labels.id}}) is too high: {{$values.avgSpeed}}"
}
}
\ No newline at end of file
sql connect
sleep 100
print ====== insert 10 records to table 0
$i = 10
while $i > 0
$ms = $i . s
sql insert into test.car0 values(now - $ms , 1)
$i = $i - 1
endw
sql connect
sleep 100
print ====== insert another record into table 0
sql insert into test.car0 values(now , 100)
sql connect
sleep 100
print ====== insert 10 records to table 0, 1, 2
$i = 10
while $i > 0
$ms = $i . s
sql insert into test.car0 values(now - $ms , 1)
sql insert into test.car1 values(now - $ms , $i )
sql insert into test.car2 values(now - $ms , 10)
$i = $i - 1
endw
# wait until $1 alerts are generated, waiting at most $2 seconds
# return 0 if wait succeeded, 1 if wait timeout
function waitAlert() {
local i=0
while [ $i -lt $2 ]; do
local c=$(wc -l alert.out | awk '{print $1}')
if [ $c -ge $1 ]; then
return 0
fi
let "i=$i+1"
sleep 1s
done
return 1
}
# prepare environment
kill -INT `ps aux | grep 'alert -cfg' | grep -v grep | awk '{print $2}'`
rm -f alert.db
rm -f alert.out
../cmd/alert/alert -cfg ../cmd/alert/alert.cfg > alert.out &
../../td/debug/build/bin/tsim -c /etc/taos -f ./prepare.sim
# add a rule to alert application
curl -d '@rule.json' http://localhost:8100/api/update-rule
# step 1: add some data but not trigger an alert
../../td/debug/build/bin/tsim -c /etc/taos -f ./step1.sim
# wait 20 seconds, should not get an alert
waitAlert 1 20
res=$?
if [ $res -eq 0 ]; then
echo 'should not have alerts here'
exit 1
fi
# step 2: trigger an alert
../../td/debug/build/bin/tsim -c /etc/taos -f ./step2.sim
# wait 30 seconds for the alert
waitAlert 1 30
res=$?
if [ $res -eq 1 ]; then
echo 'there should be an alert now'
exit 1
fi
# check whether the generated alert meets the expectation
diff <(uniq alert.out | sed -n 1p | jq -cS 'del(.startsAt, .endsAt)') <(jq -cSn '{"values":{"avgspeed":10,"id":0},"labels":{"id":"0","ruleName":"test1"},"annotations":{"summary":"speed of car(id = 0) is too high: 10"}}')
if [ $? -ne 0 ]; then
echo 'the generated alert does not meet expectation'
exit 1
fi
# step 3: add more data, trigger another 3 alerts
../../td/debug/build/bin/tsim -c /etc/taos -f ./step3.sim
# wait 30 seconds for the alerts
waitAlert 4 30
res=$?
if [ $res -eq 1 ]; then
echo 'there should be 4 alerts now'
exit 1
fi
# check whether the generated alerts meet the expectation
diff <(uniq alert.out | sed -n 2p | jq -cS 'del(.startsAt, .endsAt)') <(jq -cSn '{"annotations":{"summary":"speed of car(id = 0) is too high: 5.714285714285714"},"labels":{"id":"0","ruleName":"test1"},"values":{"avgspeed":5.714285714285714,"id":0}}')
if [ $? -ne 0 ]; then
echo 'the generated alert does not meet expectation'
exit 1
fi
diff <(uniq alert.out | sed -n 3p | jq -cS 'del(.startsAt, .endsAt)') <(jq -cSn '{"annotations":{"summary":"speed of car(id = 1) is too high: 5.5"},"labels":{"id":"1","ruleName":"test1"},"values":{"avgspeed":5.5,"id":1}}')
if [ $? -ne 0 ]; then
echo 'the generated alert does not meet expectation'
exit 1
fi
diff <(uniq alert.out | sed -n 4p | jq -cS 'del(.startsAt, .endsAt)') <(jq -cSn '{"annotations":{"summary":"speed of car(id = 2) is too high: 10"},"labels":{"id":"2","ruleName":"test1"},"values":{"avgspeed":10,"id":2}}')
if [ $? -ne 0 ]; then
echo 'the generated alert does not meet expectation'
exit 1
fi
kill -INT `ps aux | grep 'alert -cfg' | grep -v grep | awk '{print $2}'`
package utils
import (
"encoding/json"
"os"
"gopkg.in/yaml.v3"
)
type Config struct {
Port uint16 `json:"port,omitempty" yaml:"port,omitempty"`
Database string `json:"database,omitempty" yaml:"database,omitempty"`
RuleFile string `json:"ruleFile,omitempty" yaml:"ruleFile,omitempty"`
Log struct {
Level string `json:"level,omitempty" yaml:"level,omitempty"`
Path string `json:"path,omitempty" yaml:"path,omitempty"`
} `json:"log" yaml:"log"`
TDengine string `json:"tdengine,omitempty" yaml:"tdengine,omitempty"`
Receivers struct {
AlertManager string `json:"alertManager,omitempty" yaml:"alertManager,omitempty"`
Console bool `json:"console"`
} `json:"receivers" yaml:"receivers"`
}
var Cfg Config
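// LoadConfig reads the configuration file at path, trying YAML first and falling back to JSON.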
func LoadConfig(path string) error {
f, e := os.Open(path)
if e != nil {
return e
}
defer f.Close()
e = yaml.NewDecoder(f).Decode(&Cfg)
if e != nil {
f.Seek(0, 0)
e = json.NewDecoder(f).Decode(&Cfg)
}
return e
}
package log
import (
"github.com/taosdata/alert/utils"
"go.uber.org/zap"
)
var logger *zap.SugaredLogger
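// Init builds the package-level zap logger according to the log level and path in utils.Cfg.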
func Init() error {
var cfg zap.Config
if utils.Cfg.Log.Level == "debug" {
cfg = zap.NewDevelopmentConfig()
} else {
cfg = zap.NewProductionConfig()
}
if len(utils.Cfg.Log.Path) > 0 {
cfg.OutputPaths = []string{utils.Cfg.Log.Path}
}
l, e := cfg.Build()
if e != nil {
return e
}
logger = l.Sugar()
return nil
}
// Debug package logger
func Debug(args ...interface{}) {
logger.Debug(args...)
}
// Debugf package logger
func Debugf(template string, args ...interface{}) {
logger.Debugf(template, args...)
}
// Info package logger
func Info(args ...interface{}) {
logger.Info(args...)
}
// Infof package logger
func Infof(template string, args ...interface{}) {
logger.Infof(template, args...)
}
// Warn package logger
func Warn(args ...interface{}) {
logger.Warn(args...)
}
// Warnf package logger
func Warnf(template string, args ...interface{}) {
logger.Warnf(template, args...)
}
// Error package logger
func Error(args ...interface{}) {
logger.Error(args...)
}
// Errorf package logger
func Errorf(template string, args ...interface{}) {
logger.Errorf(template, args...)
}
// Fatal package logger
func Fatal(args ...interface{}) {
logger.Fatal(args...)
}
// Fatalf package logger
func Fatalf(template string, args ...interface{}) {
logger.Fatalf(template, args...)
}
// Panic package logger
func Panic(args ...interface{}) {
logger.Panic(args...)
}
// Panicf package logger
func Panicf(template string, args ...interface{}) {
logger.Panicf(template, args...)
}
func Sync() error {
return logger.Sync()
}
...@@ -740,7 +740,7 @@ The return value is like:
## Go Connector
-TDengine also provides a Go client package named _taosSql_ for users to access TDengine with Go. The package is in _/usr/local/taos/connector/go/src/taosSql_ by default if you installed TDengine. Users can copy the directory _/usr/local/taos/connector/go/src/taosSql_ to the _src_ directory of your project and import the package in the source code for use.
+TDengine also provides a Go client package named _taosSql_ for users to access TDengine with Go. The package is in _/usr/local/taos/connector/go/driver-go/taosSql_ by default if you installed TDengine. Users can copy the directory _/usr/local/taos/connector/go/driver-go/taosSql_ to the _src_ directory of your project and import the package in the source code for use.
```Go
import (
......
...@@ -10,6 +10,7 @@ set -e ...@@ -10,6 +10,7 @@ set -e
# -o [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | ...] # -o [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | ...]
# -V [stable | beta] # -V [stable | beta]
# -l [full | lite] # -l [full | lite]
# -u [yes | no]
# set parameters by default value # set parameters by default value
verMode=edge # [cluster, edge] verMode=edge # [cluster, edge]
...@@ -17,8 +18,9 @@ verType=stable # [stable, beta] ...@@ -17,8 +18,9 @@ verType=stable # [stable, beta]
cpuType=x64 # [aarch32 | aarch64 | x64 | x86 | mips64 ...] cpuType=x64 # [aarch32 | aarch64 | x64 | x86 | mips64 ...]
osType=Linux # [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | ...] osType=Linux # [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | ...]
pagMode=full # [full | lite] pagMode=full # [full | lite]
cloudVer=no # [yes | no]
while getopts "hv:V:c:o:l:" arg while getopts "hv:V:c:o:l:u:" arg
do do
case $arg in case $arg in
v) v)
...@@ -41,8 +43,12 @@ do ...@@ -41,8 +43,12 @@ do
#echo "osType=$OPTARG" #echo "osType=$OPTARG"
osType=$(echo $OPTARG) osType=$(echo $OPTARG)
;; ;;
u)
#echo "cloudVer=$OPTARG"
cloudVer=$(echo $OPTARG)
;;
h) h)
echo "Usage: `basename $0` -v [cluster | edge] -c [aarch32 | aarch64 | x64 | x86 | mips64 ...] -o [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | ...] -V [stable | beta] -l [full | lite]" echo "Usage: `basename $0` -v [cluster | edge] -c [aarch32 | aarch64 | x64 | x86 | mips64 ...] -o [Linux | Kylin | Alpine | Raspberrypi | Darwin | Windows | ...] -V [stable | beta] -l [full | lite] -u [yes | no]"
exit 0 exit 0
;; ;;
?) #unknow option ?) #unknow option
...@@ -52,7 +58,7 @@ do ...@@ -52,7 +58,7 @@ do
esac esac
done done
echo "verMode=${verMode} verType=${verType} cpuType=${cpuType} osType=${osType} pagMode=${pagMode}" echo "verMode=${verMode} verType=${verType} cpuType=${cpuType} osType=${osType} pagMode=${pagMode} cloudVer=${cloudVer}"
curr_dir=$(pwd) curr_dir=$(pwd)
...@@ -204,7 +210,7 @@ if [[ "$cpuType" == "x64" ]] || [[ "$cpuType" == "aarch64" ]] || [[ "$cpuType" = ...@@ -204,7 +210,7 @@ if [[ "$cpuType" == "x64" ]] || [[ "$cpuType" == "aarch64" ]] || [[ "$cpuType" =
if [ "$verMode" != "cluster" ]; then if [ "$verMode" != "cluster" ]; then
cmake ../ -DCPUTYPE=${cpuType} -DPAGMODE=${pagMode} cmake ../ -DCPUTYPE=${cpuType} -DPAGMODE=${pagMode}
else else
cmake ../../ -DCPUTYPE=${cpuType} cmake ../../ -DCPUTYPE=${cpuType} -DCLOUDVER=${cloudVer}
fi fi
else else
echo "input cpuType=${cpuType} error!!!" echo "input cpuType=${cpuType} error!!!"
...@@ -244,8 +250,8 @@ if [ "$osType" != "Darwin" ]; then ...@@ -244,8 +250,8 @@ if [ "$osType" != "Darwin" ]; then
echo "====do tar.gz package for all systems====" echo "====do tar.gz package for all systems===="
cd ${script_dir}/tools cd ${script_dir}/tools
${csudo} ./makepkg.sh ${compile_dir} ${version} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${csudo} ./makepkg.sh ${compile_dir} ${version} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${cloudVer}
${csudo} ./makeclient.sh ${compile_dir} ${version} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${csudo} ./makeclient.sh ${compile_dir} ${version} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${cloudVer}
else else
cd ${script_dir}/tools cd ${script_dir}/tools
./makeclient.sh ${compile_dir} ${version} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ./makeclient.sh ${compile_dir} ${version} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType}
......
...@@ -13,6 +13,7 @@ osType=$5 ...@@ -13,6 +13,7 @@ osType=$5
verMode=$6 verMode=$6
verType=$7 verType=$7
pagMode=$8 pagMode=$8
cloudVer=$9
if [ "$osType" != "Darwin" ]; then if [ "$osType" != "Darwin" ]; then
script_dir="$(dirname $(readlink -f $0))" script_dir="$(dirname $(readlink -f $0))"
...@@ -122,7 +123,11 @@ fi ...@@ -122,7 +123,11 @@ fi
cd ${release_dir} cd ${release_dir}
if [ "$verMode" == "cluster" ]; then if [ "$verMode" == "cluster" ]; then
if [ "$cloudVer" == "yes" ]; then
pkg_name=${install_dir}-cloud-${version}-${osType}-${cpuType}
else
pkg_name=${install_dir}-${version}-${osType}-${cpuType} pkg_name=${install_dir}-${version}-${osType}-${cpuType}
fi
elif [ "$verMode" == "edge" ]; then elif [ "$verMode" == "edge" ]; then
pkg_name=${install_dir}-${version}-${osType}-${cpuType} pkg_name=${install_dir}-${version}-${osType}-${cpuType}
else else
......
...@@ -14,6 +14,7 @@ osType=$5 ...@@ -14,6 +14,7 @@ osType=$5
verMode=$6 verMode=$6
verType=$7 verType=$7
pagMode=$8 pagMode=$8
cloudVer=$9
script_dir="$(dirname $(readlink -f $0))" script_dir="$(dirname $(readlink -f $0))"
top_dir="$(readlink -f ${script_dir}/../..)" top_dir="$(readlink -f ${script_dir}/../..)"
...@@ -131,7 +132,11 @@ fi ...@@ -131,7 +132,11 @@ fi
cd ${release_dir} cd ${release_dir}
if [ "$verMode" == "cluster" ]; then if [ "$verMode" == "cluster" ]; then
if [ "$cloudVer" == "yes" ]; then
pkg_name=${install_dir}-cloud-${version}-${osType}-${cpuType}
else
pkg_name=${install_dir}-${version}-${osType}-${cpuType} pkg_name=${install_dir}-${version}-${osType}-${cpuType}
fi
elif [ "$verMode" == "edge" ]; then elif [ "$verMode" == "edge" ]; then
pkg_name=${install_dir}-${version}-${osType}-${cpuType} pkg_name=${install_dir}-${version}-${osType}-${cpuType}
else else
......
...@@ -197,7 +197,7 @@ typedef struct SDataBlockList { ...@@ -197,7 +197,7 @@ typedef struct SDataBlockList {
typedef struct SQueryInfo { typedef struct SQueryInfo {
int16_t command; // the command may be different for each subclause, so keep it seperately. int16_t command; // the command may be different for each subclause, so keep it seperately.
uint16_t type; // query/insert/import type uint16_t type; // query/insert/import type
char intervalTimeUnit; char slidingTimeUnit;
int64_t etime, stime; int64_t etime, stime;
int64_t intervalTime; // aggregation time interval int64_t intervalTime; // aggregation time interval
......
...@@ -347,8 +347,8 @@ void tscProcessAsyncRes(SSchedMsg *pMsg) { ...@@ -347,8 +347,8 @@ void tscProcessAsyncRes(SSchedMsg *pMsg) {
(*pSql->fp)(pSql->param, taosres, code); (*pSql->fp)(pSql->param, taosres, code);
if (shouldFree) { if (shouldFree) {
tscFreeSqlObj(pSql);
tscTrace("%p Async sql is automatically freed in async res", pSql); tscTrace("%p Async sql is automatically freed in async res", pSql);
tscFreeSqlObj(pSql);
} }
} }
......
...@@ -20,6 +20,7 @@ ...@@ -20,6 +20,7 @@
#include "thistogram.h" #include "thistogram.h"
#include "tinterpolation.h" #include "tinterpolation.h"
#include "tlog.h" #include "tlog.h"
#include "tpercentile.h"
#include "tscJoinProcess.h" #include "tscJoinProcess.h"
#include "tscSyntaxtreefunction.h" #include "tscSyntaxtreefunction.h"
#include "tscompression.h" #include "tscompression.h"
...@@ -27,7 +28,6 @@ ...@@ -27,7 +28,6 @@
#include "ttime.h" #include "ttime.h"
#include "ttypes.h" #include "ttypes.h"
#include "tutil.h" #include "tutil.h"
#include "tpercentile.h"
#define GET_INPUT_CHAR(x) (((char *)((x)->aInputElemBuf)) + ((x)->startOffset) * ((x)->inputBytes)) #define GET_INPUT_CHAR(x) (((char *)((x)->aInputElemBuf)) + ((x)->startOffset) * ((x)->inputBytes))
#define GET_INPUT_CHAR_INDEX(x, y) (GET_INPUT_CHAR(x) + (y) * (x)->inputBytes) #define GET_INPUT_CHAR_INDEX(x, y) (GET_INPUT_CHAR(x) + (y) * (x)->inputBytes)
...@@ -4104,8 +4104,6 @@ static void twa_function(SQLFunctionCtx *pCtx) { ...@@ -4104,8 +4104,6 @@ static void twa_function(SQLFunctionCtx *pCtx) {
if (pResInfo->superTableQ) { if (pResInfo->superTableQ) {
memcpy(pCtx->aOutputBuf, pInfo, sizeof(STwaInfo)); memcpy(pCtx->aOutputBuf, pInfo, sizeof(STwaInfo));
} }
// pCtx->numOfIteratedElems += notNullElems;
} }
static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) { static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
...@@ -4138,7 +4136,6 @@ static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) { ...@@ -4138,7 +4136,6 @@ static void twa_function_f(SQLFunctionCtx *pCtx, int32_t index) {
pInfo->lastKey = primaryKey[index]; pInfo->lastKey = primaryKey[index];
setTWALastVal(pCtx, pData, 0, pInfo); setTWALastVal(pCtx, pData, 0, pInfo);
// pCtx->numOfIteratedElems += 1;
pResInfo->hasResult = DATA_SET_FLAG; pResInfo->hasResult = DATA_SET_FLAG;
if (pResInfo->superTableQ) { if (pResInfo->superTableQ) {
...@@ -4403,10 +4400,8 @@ static double do_calc_rate(const SRateInfo* pRateInfo) { ...@@ -4403,10 +4400,8 @@ static double do_calc_rate(const SRateInfo* pRateInfo) {
} }
} }
int64_t duration = pRateInfo->lastKey - pRateInfo->firstKey; double duration = (pRateInfo->lastKey - pRateInfo->firstKey) / 1000.0;
duration = (duration + 500) / 1000; double resultVal = diff / duration;
double resultVal = ((double)diff) / duration;
pTrace("do_calc_rate() isIRate:%d firstKey:%" PRId64 " lastKey:%" PRId64 " firstValue:%f lastValue:%f CorrectionValue:%f resultVal:%f", pTrace("do_calc_rate() isIRate:%d firstKey:%" PRId64 " lastKey:%" PRId64 " firstValue:%f lastValue:%f CorrectionValue:%f resultVal:%f",
pRateInfo->isIRate, pRateInfo->firstKey, pRateInfo->lastKey, pRateInfo->firstValue, pRateInfo->lastValue, pRateInfo->CorrectionValue, resultVal); pRateInfo->isIRate, pRateInfo->firstKey, pRateInfo->lastKey, pRateInfo->firstValue, pRateInfo->lastValue, pRateInfo->CorrectionValue, resultVal);
...@@ -4448,6 +4443,30 @@ static void rate_function(SQLFunctionCtx *pCtx) { ...@@ -4448,6 +4443,30 @@ static void rate_function(SQLFunctionCtx *pCtx) {
pTrace("%p rate_function() size:%d, hasNull:%d", pCtx, pCtx->size, pCtx->hasNull); pTrace("%p rate_function() size:%d, hasNull:%d", pCtx, pCtx->size, pCtx->hasNull);
if (pCtx->order == TSQL_SO_ASC) {
#ifdef NOT_EQUINIX
// prev interpolation exists
if (pCtx->prev.key != -1) {
pRateInfo->firstValue = pCtx->prev.data;
pRateInfo->firstKey = pCtx->prev.key;
pCtx->prev.key = -1; // clear the flag
pTrace("%p get prev interpolation for firstValue:%f firstKey:%" PRId64, pCtx, pRateInfo->firstValue, pRateInfo->firstKey);
if (-DBL_MAX == pRateInfo->lastValue) {
pRateInfo->lastValue = pCtx->prev.data;
pRateInfo->lastKey = pCtx->prev.key;
} else if (pCtx->prev.data < pRateInfo->lastValue) {
pRateInfo->CorrectionValue += pRateInfo->lastValue;
pTrace("CorrectionValue:%f", pRateInfo->CorrectionValue);
pRateInfo->lastValue = pCtx->prev.data;
pRateInfo->lastKey = pCtx->prev.key;
pTrace("lastValue:%f lastKey:%" PRId64, pRateInfo->lastValue, pRateInfo->lastKey);
}
}
#endif
for (int32_t i = 0; i < pCtx->size; ++i) { for (int32_t i = 0; i < pCtx->size; ++i) {
char *pData = GET_INPUT_CHAR_INDEX(pCtx, i); char *pData = GET_INPUT_CHAR_INDEX(pCtx, i);
if (pCtx->hasNull && isNull(pData, pCtx->inputType)) { if (pCtx->hasNull && isNull(pData, pCtx->inputType)) {
...@@ -4504,6 +4523,111 @@ static void rate_function(SQLFunctionCtx *pCtx) { ...@@ -4504,6 +4523,111 @@ static void rate_function(SQLFunctionCtx *pCtx) {
assert(pCtx->size == notNullElems); assert(pCtx->size == notNullElems);
} }
#ifdef NOT_EQUINIX
if (pCtx->next.key != -1) {
if (pCtx->next.data < pRateInfo->lastValue) {
pRateInfo->CorrectionValue += pRateInfo->lastValue;
pTrace("CorrectionValue:%f", pRateInfo->CorrectionValue);
}
pRateInfo->lastValue = pCtx->next.data;
pRateInfo->lastKey = pCtx->next.key;
pCtx->next.key = -1;
pTrace("%p get next interpolation for lastValue:%f lastKey:%" PRId64, pCtx, pRateInfo->lastValue, pRateInfo->lastKey);
}
#endif
} else {
#ifdef NOT_EQUINIX
if (pCtx->next.key != -1) {
pRateInfo->lastValue = pCtx->next.data;
pRateInfo->lastKey = pCtx->next.key;
pCtx->next.key = -1;
pTrace("%p get next interpolation for lastValue:%f lastKey:%" PRId64, pCtx, pRateInfo->lastValue, pRateInfo->lastKey);
if (-DBL_MAX == pRateInfo->firstValue) {
pRateInfo->firstValue = pCtx->next.data;
pRateInfo->firstKey = pCtx->next.key;
} else if (pCtx->next.data > pRateInfo->firstValue) {
pRateInfo->CorrectionValue += pCtx->next.data;
pTrace("CorrectionValue:%f", pRateInfo->CorrectionValue);
pRateInfo->firstValue = pCtx->next.data;
pRateInfo->firstKey = pCtx->next.key;
pTrace("firstValue:%f firstKey:%" PRId64, pRateInfo->firstValue, pRateInfo->firstKey);
}
}
#endif
for (int32_t i = pCtx->size - 1; i >= 0; --i) {
char *pData = GET_INPUT_CHAR_INDEX(pCtx, i);
if (pCtx->hasNull && isNull(pData, pCtx->inputType)) {
pTrace("%p rate_function() index of null data:%d", pCtx, i);
continue;
}
notNullElems++;
double v = 0;
switch (pCtx->inputType) {
case TSDB_DATA_TYPE_TINYINT:
v = (double)GET_INT8_VAL(pData);
break;
case TSDB_DATA_TYPE_SMALLINT:
v = (double)GET_INT16_VAL(pData);
break;
case TSDB_DATA_TYPE_INT:
v = (double)GET_INT32_VAL(pData);
break;
case TSDB_DATA_TYPE_BIGINT:
v = (double)GET_INT64_VAL(pData);
break;
case TSDB_DATA_TYPE_FLOAT:
v = (double)GET_FLOAT_VAL(pData);
break;
case TSDB_DATA_TYPE_DOUBLE:
v = (double)GET_DOUBLE_VAL(pData);
break;
default:
assert(0);
}
if ((-DBL_MAX == pRateInfo->lastValue) || (INT64_MIN == pRateInfo->lastKey)) {
pRateInfo->lastValue = v;
pRateInfo->lastKey = primaryKey[i];
pTrace("firstValue:%f firstKey:%" PRId64, pRateInfo->lastValue, pRateInfo->lastKey);
}
if (-DBL_MAX == pRateInfo->firstValue) {
pRateInfo->firstValue = v;
} else if (v > pRateInfo->firstValue) {
pRateInfo->CorrectionValue += v;
pTrace("CorrectionValue:%f", pRateInfo->CorrectionValue);
}
pRateInfo->firstValue = v;
pRateInfo->firstKey = primaryKey[i];
pTrace("firstValue:%f firstKey:%" PRId64, pRateInfo->firstValue, pRateInfo->firstKey);
}
if (!pCtx->hasNull) {
assert(pCtx->size == notNullElems);
}
#ifdef NOT_EQUINIX
if (pCtx->prev.key != -1) {
if (pCtx->prev.data > pRateInfo->firstValue) {
pRateInfo->CorrectionValue += pCtx->prev.data;
pTrace("CorrectionValue:%f", pRateInfo->CorrectionValue);
}
pRateInfo->firstValue = pCtx->prev.data;
pRateInfo->firstKey = pCtx->prev.key;
pCtx->prev.key = -1;
pTrace("%p get prev interpolation for firstValue:%f firstKey:%" PRId64, pCtx, pRateInfo->firstValue, pRateInfo->firstKey);
}
#endif
};
SET_VAL(pCtx, notNullElems, 1); SET_VAL(pCtx, notNullElems, 1);
if (notNullElems > 0) { if (notNullElems > 0) {
...@@ -4637,7 +4761,7 @@ static void rate_finalizer(SQLFunctionCtx *pCtx) { ...@@ -4637,7 +4761,7 @@ static void rate_finalizer(SQLFunctionCtx *pCtx) {
pTrace("%p isIRate:%d firstKey:%" PRId64 " lastKey:%" PRId64 " firstValue:%f lastValue:%f CorrectionValue:%f hasResult:%d", pTrace("%p isIRate:%d firstKey:%" PRId64 " lastKey:%" PRId64 " firstValue:%f lastValue:%f CorrectionValue:%f hasResult:%d",
pCtx, pRateInfo->isIRate, pRateInfo->firstKey, pRateInfo->lastKey, pRateInfo->firstValue, pRateInfo->lastValue, pRateInfo->CorrectionValue, pRateInfo->hasResult); pCtx, pRateInfo->isIRate, pRateInfo->firstKey, pRateInfo->lastKey, pRateInfo->firstValue, pRateInfo->lastValue, pRateInfo->CorrectionValue, pRateInfo->hasResult);
if (pRateInfo->hasResult != DATA_SET_FLAG) { if ((pRateInfo->hasResult != DATA_SET_FLAG) || (INT64_MIN == pRateInfo->lastKey) || (INT64_MIN == pRateInfo->firstKey)) {
setNull(pCtx->aOutputBuf, TSDB_DATA_TYPE_DOUBLE, sizeof(double)); setNull(pCtx->aOutputBuf, TSDB_DATA_TYPE_DOUBLE, sizeof(double));
return; return;
} }
...@@ -4669,6 +4793,16 @@ static void irate_function(SQLFunctionCtx *pCtx) { ...@@ -4669,6 +4793,16 @@ static void irate_function(SQLFunctionCtx *pCtx) {
return; return;
} }
#ifdef NOT_EQUINIX
// next interpolation exists
if (pCtx->next.key != -1) {
pRateInfo->lastValue = pCtx->next.data;
pRateInfo->lastKey = pCtx->next.key;
pCtx->next.key = -1; // clear the flag
pTrace("%p irate_function() get next interpolation for lastValue:%f lastKey:%" PRId64, pCtx, pRateInfo->lastValue, pRateInfo->lastKey);
}
#endif
for (int32_t i = pCtx->size - 1; i >= 0; --i) { for (int32_t i = pCtx->size - 1; i >= 0; --i) {
char *pData = GET_INPUT_CHAR_INDEX(pCtx, i); char *pData = GET_INPUT_CHAR_INDEX(pCtx, i);
if (pCtx->hasNull && isNull(pData, pCtx->inputType)) { if (pCtx->hasNull && isNull(pData, pCtx->inputType)) {
...@@ -4702,8 +4836,7 @@ static void irate_function(SQLFunctionCtx *pCtx) { ...@@ -4702,8 +4836,7 @@ static void irate_function(SQLFunctionCtx *pCtx) {
assert(0); assert(0);
} }
// TODO: calc once if only call this function once ???? if (1 == notNullElems) {
if ((INT64_MIN == pRateInfo->lastKey) || (-DBL_MAX == pRateInfo->lastValue)) {
pRateInfo->lastValue = v; pRateInfo->lastValue = v;
pRateInfo->lastKey = primaryKey[i]; pRateInfo->lastKey = primaryKey[i];
...@@ -4711,14 +4844,23 @@ static void irate_function(SQLFunctionCtx *pCtx) { ...@@ -4711,14 +4844,23 @@ static void irate_function(SQLFunctionCtx *pCtx) {
continue; continue;
} }
if ((INT64_MIN == pRateInfo->firstKey) || (-DBL_MAX == pRateInfo->firstValue)){
pRateInfo->firstValue = v; pRateInfo->firstValue = v;
pRateInfo->firstKey = primaryKey[i]; pRateInfo->firstKey = primaryKey[i];
pTrace("%p irate_function() firstValue:%f firstKey:%" PRId64, pCtx, pRateInfo->firstValue, pRateInfo->firstKey); pTrace("%p irate_function() firstValue:%f firstKey:%" PRId64, pCtx, pRateInfo->firstValue, pRateInfo->firstKey);
break; break;
} }
#ifdef NOT_EQUINIX
if (pCtx->prev.key != -1) {
if ((INT64_MIN == pRateInfo->firstKey) || (-DBL_MAX == pRateInfo->firstValue)) {
pRateInfo->firstValue = pCtx->prev.data;
pRateInfo->firstKey = pCtx->prev.key;
pCtx->prev.key = -1;
pTrace("%p irate_function() get prev interpolation for firstValue:%f firstKey:%" PRId64, pCtx, pRateInfo->firstValue, pRateInfo->firstKey);
}
} }
#endif
SET_VAL(pCtx, notNullElems, 1); SET_VAL(pCtx, notNullElems, 1);
...@@ -4803,6 +4945,10 @@ static void do_sumrate_merge(SQLFunctionCtx *pCtx) { ...@@ -4803,6 +4945,10 @@ static void do_sumrate_merge(SQLFunctionCtx *pCtx) {
if (pInput->hasResult != DATA_SET_FLAG) { if (pInput->hasResult != DATA_SET_FLAG) {
continue; continue;
} else if (pInput->num == 0) { } else if (pInput->num == 0) {
if ((INT64_MIN == pInput->lastKey) || (INT64_MIN == pInput->firstKey)) {
continue;
}
pRateInfo->sum += do_calc_rate(pInput); pRateInfo->sum += do_calc_rate(pInput);
pRateInfo->num++; pRateInfo->num++;
} else { } else {
...@@ -4843,6 +4989,11 @@ static void sumrate_finalizer(SQLFunctionCtx *pCtx) { ...@@ -4843,6 +4989,11 @@ static void sumrate_finalizer(SQLFunctionCtx *pCtx) {
if (pRateInfo->num == 0) { if (pRateInfo->num == 0) {
// from meter // from meter
if ((INT64_MIN == pRateInfo->lastKey) || (INT64_MIN == pRateInfo->firstKey)) {
setNull(pCtx->aOutputBuf, TSDB_DATA_TYPE_DOUBLE, sizeof(double));
return;
}
*(double*)pCtx->aOutputBuf = do_calc_rate(pRateInfo); *(double*)pCtx->aOutputBuf = do_calc_rate(pRateInfo);
} else if (pCtx->functionId == TSDB_FUNC_SUM_RATE || pCtx->functionId == TSDB_FUNC_SUM_IRATE) { } else if (pCtx->functionId == TSDB_FUNC_SUM_RATE || pCtx->functionId == TSDB_FUNC_SUM_IRATE) {
*(double*)pCtx->aOutputBuf = pRateInfo->sum; *(double*)pCtx->aOutputBuf = pRateInfo->sum;
......
...@@ -254,7 +254,9 @@ int32_t tsParseOneColumnData(SSchema *pSchema, SSQLToken *pToken, char *payload, ...@@ -254,7 +254,9 @@ int32_t tsParseOneColumnData(SSchema *pSchema, SSQLToken *pToken, char *payload,
if (pToken->type == TK_NULL) { if (pToken->type == TK_NULL) {
*((int32_t *)payload) = TSDB_DATA_FLOAT_NULL; *((int32_t *)payload) = TSDB_DATA_FLOAT_NULL;
} else if ((pToken->type == TK_STRING) && (pToken->n != 0) && } else if ((pToken->type == TK_STRING) && (pToken->n != 0) &&
(strncasecmp(TSDB_DATA_NULL_STR_L, pToken->z, pToken->n) == 0)) { ((strncasecmp(TSDB_DATA_NULL_STR_L, pToken->z, pToken->n) == 0)
|| (strncasecmp("nan", pToken->z, pToken->n) == 0)
|| (strncasecmp("-nan", pToken->z, pToken->n) == 0))) {
*((int32_t *)payload) = TSDB_DATA_FLOAT_NULL; *((int32_t *)payload) = TSDB_DATA_FLOAT_NULL;
} else { } else {
double dv; double dv;
...@@ -279,7 +281,9 @@ int32_t tsParseOneColumnData(SSchema *pSchema, SSQLToken *pToken, char *payload, ...@@ -279,7 +281,9 @@ int32_t tsParseOneColumnData(SSchema *pSchema, SSQLToken *pToken, char *payload,
if (pToken->type == TK_NULL) { if (pToken->type == TK_NULL) {
*((int64_t *)payload) = TSDB_DATA_DOUBLE_NULL; *((int64_t *)payload) = TSDB_DATA_DOUBLE_NULL;
} else if ((pToken->type == TK_STRING) && (pToken->n != 0) && } else if ((pToken->type == TK_STRING) && (pToken->n != 0) &&
(strncasecmp(TSDB_DATA_NULL_STR_L, pToken->z, pToken->n) == 0)) { ((strncasecmp(TSDB_DATA_NULL_STR_L, pToken->z, pToken->n) == 0)
|| (strncasecmp("nan", pToken->z, pToken->n) == 0)
|| (strncasecmp("-nan", pToken->z, pToken->n) == 0))) {
*((int64_t *)payload) = TSDB_DATA_DOUBLE_NULL; *((int64_t *)payload) = TSDB_DATA_DOUBLE_NULL;
} else { } else {
double dv; double dv;
......
...@@ -292,7 +292,7 @@ void tscKillConnection(STscObj *pObj) { ...@@ -292,7 +292,7 @@ void tscKillConnection(STscObj *pObj) {
pthread_mutex_unlock(&pObj->mutex); pthread_mutex_unlock(&pObj->mutex);
taos_close(pObj);
tscTrace("connection:%p is killed", pObj); tscTrace("connection:%p is killed", pObj);
taos_close(pObj);
} }
...@@ -598,9 +598,6 @@ int32_t parseIntervalClause(SQueryInfo* pQueryInfo, SQuerySQL* pQuerySql) { ...@@ -598,9 +598,6 @@ int32_t parseIntervalClause(SQueryInfo* pQueryInfo, SQuerySQL* pQuerySql) {
pQueryInfo->intervalTime = pQueryInfo->intervalTime / 1000; pQueryInfo->intervalTime = pQueryInfo->intervalTime / 1000;
} }
/* parser has filter the illegal type, no need to check here */
pQueryInfo->intervalTimeUnit = pQuerySql->interval.z[pQuerySql->interval.n - 1];
// interval cannot be less than 10 milliseconds // interval cannot be less than 10 milliseconds
if (pQueryInfo->intervalTime < tsMinIntervalTime) { if (pQueryInfo->intervalTime < tsMinIntervalTime) {
return invalidSqlErrMsg(pQueryInfo->msg, msg2); return invalidSqlErrMsg(pQueryInfo->msg, msg2);
...@@ -689,8 +686,13 @@ int32_t parseSlidingClause(SQueryInfo* pQueryInfo, SQuerySQL* pQuerySql) { ...@@ -689,8 +686,13 @@ int32_t parseSlidingClause(SQueryInfo* pQueryInfo, SQuerySQL* pQuerySql) {
if (pQueryInfo->slidingTime > pQueryInfo->intervalTime) { if (pQueryInfo->slidingTime > pQueryInfo->intervalTime) {
return invalidSqlErrMsg(pQueryInfo->msg, msg1); return invalidSqlErrMsg(pQueryInfo->msg, msg1);
} }
pQueryInfo->slidingTimeUnit = pQuerySql->sliding.z[pQuerySql->sliding.n - 1];
} else { } else {
pQueryInfo->slidingTime = pQueryInfo->intervalTime; pQueryInfo->slidingTime = pQueryInfo->intervalTime;
// parser has filter the illegal type, no need to check here
pQueryInfo->slidingTimeUnit = pQuerySql->interval.z[pQuerySql->interval.n - 1];
} }
return TSDB_CODE_SUCCESS; return TSDB_CODE_SUCCESS;
...@@ -1636,13 +1638,16 @@ int32_t addExprAndResultField(SQueryInfo* pQueryInfo, int32_t colIdx, tSQLExprIt ...@@ -1636,13 +1638,16 @@ int32_t addExprAndResultField(SQueryInfo* pQueryInfo, int32_t colIdx, tSQLExprIt
// set the first column ts for diff query // set the first column ts for diff query
if (optr == TK_DIFF) { if (optr == TK_DIFF) {
colIdx += 1; colIdx += 1;
SColumnIndex indexTS = {.tableIndex = index.tableIndex, .columnIndex = 0}; SColumnIndex indexTS = {.tableIndex = index.tableIndex, .columnIndex = PRIMARYKEY_TIMESTAMP_COL_INDEX};
SSqlExpr* pExpr = tscSqlExprInsert(pQueryInfo, 0, TSDB_FUNC_TS_DUMMY, &indexTS, TSDB_DATA_TYPE_TIMESTAMP, SSqlExpr* pExpr = tscSqlExprInsert(pQueryInfo, 0, TSDB_FUNC_TS_DUMMY, &indexTS, TSDB_DATA_TYPE_TIMESTAMP,
TSDB_KEYSIZE, TSDB_KEYSIZE); TSDB_KEYSIZE, TSDB_KEYSIZE);
SColumnList ids = getColumnList(1, 0, 0); SColumnList ids = getColumnList(1, 0, 0);
insertResultField(pQueryInfo, 0, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP, aAggs[TSDB_FUNC_TS_DUMMY].aName, insertResultField(pQueryInfo, 0, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP, aAggs[TSDB_FUNC_TS_DUMMY].aName,
pExpr); pExpr);
} else if ((optr >= TK_RATE) && (optr <= TK_AVG_IRATE)) {
SColumnIndex index1 = {.tableIndex = index.tableIndex, .columnIndex = PRIMARYKEY_TIMESTAMP_COL_INDEX};
tscColumnBaseInfoInsert(pQueryInfo, &index1);
} }
// functions can not be applied to tags // functions can not be applied to tags
......
...@@ -446,6 +446,8 @@ int32_t getTimestampInUsFromStrImpl(int64_t val, char unit, int64_t *result) { ...@@ -446,6 +446,8 @@ int32_t getTimestampInUsFromStrImpl(int64_t val, char unit, int64_t *result) {
break; break;
case 'a': case 'a':
break; break;
case 'u':
return 0;
default: { default: {
; ;
return -1; return -1;
......
...@@ -233,6 +233,7 @@ void tscCreateLocalReducer(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrd ...@@ -233,6 +233,7 @@ void tscCreateLocalReducer(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrd
} }
assert(idx >= pReducer->numOfBuffer); assert(idx >= pReducer->numOfBuffer);
if (idx == 0) { if (idx == 0) {
free(pReducer);
return; return;
} }
...@@ -324,7 +325,7 @@ void tscCreateLocalReducer(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrd ...@@ -324,7 +325,7 @@ void tscCreateLocalReducer(tExtMemBuffer **pMemBuffer, int32_t numOfBuffer, tOrd
int64_t stime = (pQueryInfo->stime < pQueryInfo->etime) ? pQueryInfo->stime : pQueryInfo->etime; int64_t stime = (pQueryInfo->stime < pQueryInfo->etime) ? pQueryInfo->stime : pQueryInfo->etime;
int64_t revisedSTime = int64_t revisedSTime =
taosGetIntervalStartTimestamp(stime, pQueryInfo->intervalTime, pQueryInfo->intervalTimeUnit, prec); taosGetIntervalStartTimestamp(stime, pQueryInfo->intervalTime, pQueryInfo->slidingTimeUnit, prec);
SInterpolationInfo *pInterpoInfo = &pReducer->interpolationInfo; SInterpolationInfo *pInterpoInfo = &pReducer->interpolationInfo;
   taosInitInterpoInfo(pInterpoInfo, pQueryInfo->order.order, revisedSTime, pQueryInfo->groupbyExpr.numOfGroupCols,

@@ -613,6 +614,7 @@ int32_t tscLocalReducerEnvCreate(SSqlObj *pSql, tExtMemBuffer ***pMemBuffer, tOr
   pSchema = (SSchema *)calloc(1, sizeof(SSchema) * pQueryInfo->exprsInfo.numOfExprs);
   if (pSchema == NULL) {
+    tfree(*pMemBuffer);
     tscError("%p failed to allocate memory", pSql);
     pRes->code = TSDB_CODE_CLI_OUT_OF_MEMORY;
     return pRes->code;

@@ -634,15 +636,27 @@ int32_t tscLocalReducerEnvCreate(SSqlObj *pSql, tExtMemBuffer ***pMemBuffer, tOr
   }
   pModel = createColumnModel(pSchema, pQueryInfo->exprsInfo.numOfExprs, capacity);
+  if (pModel == NULL) {
+    goto _error_memory;
+  }
   for (int32_t i = 0; i < pMeterMetaInfo->pMetricMeta->numOfVnodes; ++i) {
     (*pMemBuffer)[i] = createExtMemBuffer(nBufferSizes, rlen, pModel);
+    if ((*pMemBuffer)[i] == NULL) {
+      for (int32_t j = 0; j < i; ++j) {
+        destroyExtMemBuffer((*pMemBuffer)[j]);
+      }
+      goto _error_memory;
+    }
     (*pMemBuffer)[i]->flushModel = MULTIPLE_APPEND_MODEL;
   }
   if (createOrderDescriptor(pOrderDesc, pCmd, pModel) != TSDB_CODE_SUCCESS) {
-    pRes->code = TSDB_CODE_CLI_OUT_OF_MEMORY;
-    return pRes->code;
+    for (int32_t i = 0; i < pMeterMetaInfo->pMetricMeta->numOfVnodes; ++i) {
+      destroyExtMemBuffer((*pMemBuffer)[i]);
+    }
+    goto _error_memory;
   }
   // final result depends on the fields number

@@ -685,6 +699,13 @@ int32_t tscLocalReducerEnvCreate(SSqlObj *pSql, tExtMemBuffer ***pMemBuffer, tOr
   tfree(pSchema);
   return TSDB_CODE_SUCCESS;
+
+_error_memory:
+  tfree(pSchema);
+  tfree(*pMemBuffer);
+  tscError("%p failed to allocate memory", pSql);
+  pRes->code = TSDB_CODE_CLI_OUT_OF_MEMORY;
+  return pRes->code;
 }

 /**

@@ -698,7 +719,7 @@ void tscLocalReducerEnvDestroy(tExtMemBuffer **pMemBuffer, tOrderDescriptor *pDe
   destroyColumnModel(pFinalModel);
   tOrderDescDestroy(pDesc);
   for (int32_t i = 0; i < numOfVnodes; ++i) {
-    pMemBuffer[i] = destoryExtMemBuffer(pMemBuffer[i]);
+    pMemBuffer[i] = destroyExtMemBuffer(pMemBuffer[i]);
   }
   tfree(pMemBuffer);

@@ -779,7 +800,7 @@ void savePrevRecordAndSetupInterpoInfo(SLocalReducer *pLocalReducer, SQueryInfo
   int64_t stime = (pQueryInfo->stime < pQueryInfo->etime) ? pQueryInfo->stime : pQueryInfo->etime;
   int64_t revisedSTime =
-      taosGetIntervalStartTimestamp(stime, pQueryInfo->intervalTime, pQueryInfo->intervalTimeUnit, prec);
+      taosGetIntervalStartTimestamp(stime, pQueryInfo->intervalTime, pQueryInfo->slidingTimeUnit, prec);
   taosInitInterpoInfo(pInterpoInfo, pQueryInfo->order.order, revisedSTime, pQueryInfo->groupbyExpr.numOfGroupCols,
                       pLocalReducer->rowSize);

@@ -923,7 +944,7 @@ static void doInterpolateResult(SSqlObj *pSql, SLocalReducer *pLocalReducer, boo
   while (1) {
     int32_t remains = taosNumOfRemainPoints(pInterpoInfo);
     TSKEY   etime = taosGetRevisedEndKey(actualETime, pQueryInfo->order.order, pQueryInfo->intervalTime,
-                                         pQueryInfo->intervalTimeUnit, precision);
+                                         pQueryInfo->slidingTimeUnit, precision);
     int32_t nrows = taosGetNumOfResultWithInterpo(pInterpoInfo, pPrimaryKeys, remains, pQueryInfo->intervalTime, etime,
                                                   pLocalReducer->resColModel->capacity);

@@ -1275,7 +1296,7 @@ static void resetEnvForNewResultset(SSqlRes *pRes, SSqlCmd *pCmd, SLocalReducer
   if (pQueryInfo->interpoType != TSDB_INTERPO_NONE) {
     int64_t stime = (pQueryInfo->stime < pQueryInfo->etime) ? pQueryInfo->stime : pQueryInfo->etime;
     int64_t newTime =
-        taosGetIntervalStartTimestamp(stime, pQueryInfo->intervalTime, pQueryInfo->intervalTimeUnit, precision);
+        taosGetIntervalStartTimestamp(stime, pQueryInfo->intervalTime, pQueryInfo->slidingTimeUnit, precision);
     taosInitInterpoInfo(&pLocalReducer->interpolationInfo, pQueryInfo->order.order, newTime,
                         pQueryInfo->groupbyExpr.numOfGroupCols, pLocalReducer->rowSize);

@@ -1305,7 +1326,7 @@ static bool doInterpolationForCurrentGroup(SSqlObj *pSql) {
   int32_t remain = taosNumOfRemainPoints(pInterpoInfo);
   TSKEY   ekey =
-      taosGetRevisedEndKey(etime, pQueryInfo->order.order, pQueryInfo->intervalTime, pQueryInfo->intervalTimeUnit, p);
+      taosGetRevisedEndKey(etime, pQueryInfo->order.order, pQueryInfo->intervalTime, pQueryInfo->slidingTimeUnit, p);
   int32_t rows = taosGetNumOfResultWithInterpo(pInterpoInfo, (TSKEY *)pLocalReducer->pBufForInterpo, remain,
                                                pQueryInfo->intervalTime, ekey, pLocalReducer->resColModel->capacity);
   if (rows > 0) {  // do interpo

@@ -1338,7 +1359,7 @@ static bool doHandleLastRemainData(SSqlObj *pSql) {
   int64_t etime = (pQueryInfo->stime < pQueryInfo->etime) ? pQueryInfo->etime : pQueryInfo->stime;
   etime = taosGetRevisedEndKey(etime, pQueryInfo->order.order, pQueryInfo->intervalTime,
-                               pQueryInfo->intervalTimeUnit, precision);
+                               pQueryInfo->slidingTimeUnit, precision);
   int32_t rows = taosGetNumOfResultWithInterpo(pInterpoInfo, NULL, 0, pQueryInfo->intervalTime, etime,
                                                pLocalReducer->resColModel->capacity);
   if (rows > 0) {  // do interpo
@@ -398,9 +398,9 @@ void *tscProcessMsgFromServer(char *msg, void *ahandle, void *thandle) {
   if (pSql->freed || pObj->signature != pObj) {
     tscTrace("%p sql is already released or DB connection is closed, freed:%d pObj:%p signature:%p", pSql, pSql->freed,
              pObj, pObj->signature);
-    taosAddConnIntoCache(tscConnCache, pSql->thandle, pSql->ip, pSql->vnode, pObj->user);
+    //taosAddConnIntoCache(tscConnCache, pSql->thandle, pSql->ip, pSql->vnode, pObj->user);
     tscFreeSqlObj(pSql);
-    return ahandle;
+    return NULL;
   }
   SMeterMetaInfo *pMeterMetaInfo = tscGetMeterMetaInfo(pCmd, pCmd->clauseIndex, 0);

@@ -600,8 +600,8 @@ void *tscProcessMsgFromServer(char *msg, void *ahandle, void *thandle) {
       taos_close(pObj);
       tscTrace("%p Async sql close failed connection", pSql);
     } else {
-      tscFreeSqlObj(pSql);
       tscTrace("%p Async sql is automatically freed", pSql);
+      tscFreeSqlObj(pSql);
     }
   }
 }

@@ -1681,7 +1681,7 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
   }
   pQueryMsg->intervalTime = htobe64(pQueryInfo->intervalTime);
-  pQueryMsg->intervalTimeUnit = pQueryInfo->intervalTimeUnit;
+  pQueryMsg->slidingTimeUnit = pQueryInfo->slidingTimeUnit;
   pQueryMsg->slidingTime = htobe64(pQueryInfo->slidingTime);
   if (pQueryInfo->intervalTime < 0) {

@@ -2148,7 +2148,12 @@ int32_t tscBuildDropAcctMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
   pMsg += sizeof(SDropUserMsg);
   pCmd->payloadLen = pMsg - pStart;
+  if (pInfo->type == TSDB_SQL_DROP_ACCT) {
+    pCmd->msgType = TSDB_MSG_TYPE_DROP_ACCT;
+  } else {
     pCmd->msgType = TSDB_MSG_TYPE_DROP_USER;
+  }
   return TSDB_CODE_SUCCESS;
 }
@@ -796,8 +796,8 @@ void taos_free_result_imp(TAOS_RES *res, int keepCmd) {
     tscTrace("%p qhandle is null, abort free, fp:%p", pSql, pSql->fp);
     if (pSql->fp != NULL) {
       pSql->thandle = NULL;
-      tscFreeSqlObj(pSql);
       tscTrace("%p Async SqlObj is freed by app", pSql);
+      tscFreeSqlObj(pSql);
     } else if (keepCmd) {
       tscFreeSqlResult(pSql);
     } else {
@@ -374,8 +374,7 @@ static void tscSetNextLaunchTimer(SSqlStream *pStream, SSqlObj *pSql) {
 }

 static void tscSetSlidingWindowInfo(SSqlObj *pSql, SSqlStream *pStream) {
-  int64_t minIntervalTime =
-      (pStream->precision == TSDB_TIME_PRECISION_MICRO) ? tsMinIntervalTime * 1000L : tsMinIntervalTime;
+  int64_t minIntervalTime = tsMinIntervalTime;
   SQueryInfo* pQueryInfo = tscGetQueryInfoDetail(&pSql->cmd, 0);

@@ -391,8 +390,7 @@ static void tscSetSlidingWindowInfo(SSqlObj *pSql, SSqlStream *pStream) {
     pQueryInfo->slidingTime = pQueryInfo->intervalTime;
   }
-  int64_t minSlidingTime =
-      (pStream->precision == TSDB_TIME_PRECISION_MICRO) ? tsMinSlidingTime * 1000L : tsMinSlidingTime;
+  int64_t minSlidingTime = tsMinSlidingTime;
   if (pQueryInfo->slidingTime == -1) {
     pQueryInfo->slidingTime = pQueryInfo->intervalTime;

@@ -582,10 +580,10 @@ void taos_close_stream(TAOS_STREAM *handle) {
     tscRemoveFromStreamList(pStream, pSql);
     taosTmrStopA(&(pStream->pTimer));
+    tscTrace("%p stream:%p is closed", pSql, pStream);
     tscFreeSqlObj(pSql);
     pStream->pSql = NULL;
-    tscTrace("%p stream:%p is closed", pSql, pStream);
     tfree(pStream);
   }
 }
@@ -104,6 +104,7 @@ static SSub* tscCreateSubscription(STscObj* pObj, const char* topic, const char*
     return NULL;
   }
+  char* sqlstr = NULL;
   SSqlObj* pSql = calloc(1, sizeof(SSqlObj));
   if (pSql == NULL) {
     globalCode = TSDB_CODE_CLI_OUT_OF_MEMORY;

@@ -114,7 +115,7 @@ static SSub* tscCreateSubscription(STscObj* pObj, const char* topic, const char*
   pSql->signature = pSql;
   pSql->pTscObj = pObj;
-  char* sqlstr = (char*)malloc(strlen(sql) + 1);
+  sqlstr = (char*)malloc(strlen(sql) + 1);
   if (sqlstr == NULL) {
     tscError("failed to allocate sql string for subscription");
     goto failed;
@@ -87,6 +87,7 @@ void tscGetMetricMetaCacheKey(SQueryInfo* pQueryInfo, char* str, uint64_t uid) {
     MD5Update(&ctx, (uint8_t*)tmp, keyLen);
     char* pStr = base64_encode(ctx.digest, tListLen(ctx.digest));
     strcpy(str, pStr);
+    free(pStr);
   }
   free(tmp);
Subproject commit 8c58c512b6acda8bcdfa48fdc7140227b5221766
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
import "C"
import (
"context"
"errors"
"database/sql/driver"
"unsafe"
"strconv"
"strings"
"time"
)
type taosConn struct {
	taos         unsafe.Pointer // native TAOS* handle returned by taos_connect
	affectedRows int            // rows affected by the most recent statement without a result set
	insertId     int            // always zero: TDengine has no auto-increment insert id
	cfg          *config        // parsed DSN configuration
	status       statusFlag
	parseTime    bool
	reset        bool // set when the Go SQL package calls ResetSession
}
type taosSqlResult struct {
affectedRows int64
insertId int64
}
func (res *taosSqlResult) LastInsertId() (int64, error) {
return res.insertId, nil
}
func (res *taosSqlResult) RowsAffected() (int64, error) {
return res.affectedRows, nil
}
func (mc *taosConn) Begin() (driver.Tx, error) {
taosLog.Println("taosSql not support transaction")
return nil, errors.New("taosSql not support transaction")
}
func (mc *taosConn) Close() (err error) {
if mc.taos == nil {
return errConnNoExist
}
mc.taos_close()
return nil
}
func (mc *taosConn) Prepare(query string) (driver.Stmt, error) {
if mc.taos == nil {
return nil, errInvalidConn
}
stmt := &taosSqlStmt{
mc: mc,
pSql: query,
}
// find ? count and save to stmt.paramCount
stmt.paramCount = strings.Count(query, "?")
//fmt.Printf("prepare alloc stmt:%p, sql:%s\n", stmt, query)
taosLog.Printf("prepare alloc stmt:%p, sql:%s\n", stmt, query)
return stmt, nil
}
func (mc *taosConn) interpolateParams(query string, args []driver.Value) (string, error) {
// Number of ? should be same to len(args)
if strings.Count(query, "?") != len(args) {
return "", driver.ErrSkip
}
buf := make([]byte, defaultBufSize)
buf = buf[:0] // clear buf
argPos := 0
for i := 0; i < len(query); i++ {
q := strings.IndexByte(query[i:], '?')
if q == -1 {
buf = append(buf, query[i:]...)
break
}
buf = append(buf, query[i:i+q]...)
i += q
arg := args[argPos]
argPos++
if arg == nil {
buf = append(buf, "NULL"...)
continue
}
switch v := arg.(type) {
case int64:
buf = strconv.AppendInt(buf, v, 10)
case uint64:
// Handle uint64 explicitly because our custom ConvertValue emits unsigned values
buf = strconv.AppendUint(buf, v, 10)
case float64:
buf = strconv.AppendFloat(buf, v, 'g', -1, 64)
case bool:
if v {
buf = append(buf, '1')
} else {
buf = append(buf, '0')
}
case time.Time:
if v.IsZero() {
buf = append(buf, "'0000-00-00'"...)
} else {
v := v.In(mc.cfg.loc)
v = v.Add(time.Nanosecond * 500) // To round under microsecond
year := v.Year()
year100 := year / 100
year1 := year % 100
month := v.Month()
day := v.Day()
hour := v.Hour()
minute := v.Minute()
second := v.Second()
micro := v.Nanosecond() / 1000
buf = append(buf, []byte{
'\'',
digits10[year100], digits01[year100],
digits10[year1], digits01[year1],
'-',
digits10[month], digits01[month],
'-',
digits10[day], digits01[day],
' ',
digits10[hour], digits01[hour],
':',
digits10[minute], digits01[minute],
':',
digits10[second], digits01[second],
}...)
if micro != 0 {
micro10000 := micro / 10000
micro100 := micro / 100 % 100
micro1 := micro % 100
buf = append(buf, []byte{
'.',
digits10[micro10000], digits01[micro10000],
digits10[micro100], digits01[micro100],
digits10[micro1], digits01[micro1],
}...)
}
buf = append(buf, '\'')
}
case []byte:
if v == nil {
buf = append(buf, "NULL"...)
} else {
buf = append(buf, "_binary'"...)
if mc.status&statusNoBackslashEscapes == 0 {
buf = escapeBytesBackslash(buf, v)
} else {
buf = escapeBytesQuotes(buf, v)
}
buf = append(buf, '\'')
}
case string:
//buf = append(buf, '\'')
if mc.status&statusNoBackslashEscapes == 0 {
buf = escapeStringBackslash(buf, v)
} else {
buf = escapeStringQuotes(buf, v)
}
//buf = append(buf, '\'')
default:
return "", driver.ErrSkip
}
//if len(buf)+4 > mc.maxAllowedPacket {
if len(buf)+4 > maxTaosSqlLen {
return "", driver.ErrSkip
}
}
if argPos != len(args) {
return "", driver.ErrSkip
}
return string(buf), nil
}
func (mc *taosConn) Exec(query string, args []driver.Value) (driver.Result, error) {
if mc.taos == nil {
return nil, driver.ErrBadConn
}
if len(args) != 0 {
if !mc.cfg.interpolateParams {
return nil, driver.ErrSkip
}
// try to interpolate the parameters to save extra roundtrips for preparing and closing a statement
prepared, err := mc.interpolateParams(query, args)
if err != nil {
return nil, err
}
query = prepared
}
mc.affectedRows = 0
mc.insertId = 0
_, err := mc.taosQuery(query)
if err == nil {
return &taosSqlResult{
affectedRows: int64(mc.affectedRows),
insertId: int64(mc.insertId),
}, err
}
return nil, err
}
func (mc *taosConn) Query(query string, args []driver.Value) (driver.Rows, error) {
return mc.query(query, args)
}
func (mc *taosConn) query(query string, args []driver.Value) (*textRows, error) {
if mc.taos == nil {
return nil, driver.ErrBadConn
}
if len(args) != 0 {
if !mc.cfg.interpolateParams {
return nil, driver.ErrSkip
}
// try client-side prepare to reduce roundtrip
prepared, err := mc.interpolateParams(query, args)
if err != nil {
return nil, err
}
query = prepared
}
num_fields, err := mc.taosQuery(query)
if err == nil {
// Read Result
rows := new(textRows)
rows.mc = mc
// Columns field
rows.rs.columns, err = mc.readColumns(num_fields)
return rows, err
}
return nil, err
}
// Ping implements driver.Pinger interface
func (mc *taosConn) Ping(ctx context.Context) (err error) {
if mc.taos != nil {
return nil
}
return errInvalidConn
}
// BeginTx implements driver.ConnBeginTx interface
func (mc *taosConn) BeginTx(ctx context.Context, opts driver.TxOptions) (driver.Tx, error) {
taosLog.Println("taosSql not support transaction")
return nil, errors.New("taosSql not support transaction")
}
func (mc *taosConn) QueryContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Rows, error) {
if mc.taos == nil {
return nil, errInvalidConn
}
dargs, err := namedValueToValue(args)
if err != nil {
return nil, err
}
rows, err := mc.query(query, dargs)
if err != nil {
return nil, err
}
return rows, err
}
func (mc *taosConn) ExecContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Result, error) {
if mc.taos == nil {
return nil, errInvalidConn
}
dargs, err := namedValueToValue(args)
if err != nil {
return nil, err
}
return mc.Exec(query, dargs)
}
func (mc *taosConn) PrepareContext(ctx context.Context, query string) (driver.Stmt, error) {
if mc.taos == nil {
return nil, errInvalidConn
}
stmt, err := mc.Prepare(query)
if err != nil {
return nil, err
}
return stmt, nil
}
func (stmt *taosSqlStmt) QueryContext(ctx context.Context, args []driver.NamedValue) (driver.Rows, error) {
if stmt.mc == nil {
return nil, errInvalidConn
}
dargs, err := namedValueToValue(args)
if err != nil {
return nil, err
}
rows, err := stmt.query(dargs)
if err != nil {
return nil, err
}
return rows, err
}
func (stmt *taosSqlStmt) ExecContext(ctx context.Context, args []driver.NamedValue) (driver.Result, error) {
if stmt.mc == nil {
return nil, errInvalidConn
}
dargs, err := namedValueToValue(args)
if err != nil {
return nil, err
}
return stmt.Exec(dargs)
}
func (mc *taosConn) CheckNamedValue(nv *driver.NamedValue) (err error) {
nv.Value, err = converter{}.ConvertValue(nv.Value)
return
}
// ResetSession implements driver.SessionResetter.
// (From Go 1.10)
func (mc *taosConn) ResetSession(ctx context.Context) error {
if mc.taos == nil {
return driver.ErrBadConn
}
mc.reset = true
return nil
}
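
As the comments in `Exec` and `query` note, when parameter interpolation is enabled the driver substitutes the `?` placeholders into the SQL text on the client side, saving the extra round trips of preparing and closing a server-side statement. The snippet below is a minimal standalone sketch of that idea only; `naiveInterpolate` is a hypothetical helper written for illustration, not the driver's `interpolateParams` method, and it omits the escaping, time formatting, and length checks that the real code performs.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// naiveInterpolate replaces each '?' in order with a literal rendering of the
// corresponding argument. Illustration only: no escaping or type validation.
func naiveInterpolate(query string, args ...interface{}) string {
	var b strings.Builder
	argPos := 0
	for i := 0; i < len(query); i++ {
		if query[i] == '?' && argPos < len(args) {
			switch v := args[argPos].(type) {
			case int64:
				b.WriteString(strconv.FormatInt(v, 10))
			case string:
				b.WriteString("'" + v + "'")
			default:
				b.WriteString(fmt.Sprint(v))
			}
			argPos++
			continue
		}
		b.WriteByte(query[i])
	}
	return b.String()
}

func main() {
	fmt.Println(naiveInterpolate("INSERT INTO d1001 VALUES (?, ?)", int64(1588000000000), "ok"))
	// Output: INSERT INTO d1001 VALUES (1588000000000, 'ok')
}
```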
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
import (
"context"
"database/sql/driver"
)
type connector struct {
cfg *config
}
// Connect implements driver.Connector interface.
// Connect returns a connection to the database.
func (c *connector) Connect(ctx context.Context) (driver.Conn, error) {
var err error
// New taosConn
mc := &taosConn{
cfg: c.cfg,
parseTime: c.cfg.parseTime,
}
// Connect to Server
mc.taos, err = mc.taosConnect(mc.cfg.addr, mc.cfg.user, mc.cfg.passwd, mc.cfg.dbName, mc.cfg.port)
if err != nil {
return nil, err
}
return mc, nil
}
// Driver implements driver.Connector interface.
// Driver returns &taosSQLDriver{}.
func (c *connector) Driver() driver.Driver {
return &taosSQLDriver{}
}
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
const (
timeFormat = "2006-01-02 15:04:05"
maxTaosSqlLen = 65380
defaultBufSize = maxTaosSqlLen + 32
)
type fieldType byte
type fieldFlag uint16
const (
flagNotNULL fieldFlag = 1 << iota
)
type statusFlag uint16
const (
statusInTrans statusFlag = 1 << iota
statusInAutocommit
statusReserved // Not in documentation
statusMoreResultsExists
statusNoGoodIndexUsed
statusNoIndexUsed
statusCursorExists
statusLastRowSent
statusDbDropped
statusNoBackslashEscapes
statusMetadataChanged
statusQueryWasSlow
statusPsOutParams
statusInTransReadonly
statusSessionStateChanged
)
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
import (
"context"
"database/sql"
"database/sql/driver"
)
// taosSQLDriver implements driver.Driver so the driver can be registered with database/sql.
// In general the driver is used via the database/sql package rather than directly.
type taosSQLDriver struct{}
// Open opens a new connection to the database.
// The DSN string is parsed by parseDSN.
func (d taosSQLDriver) Open(dsn string) (driver.Conn, error) {
cfg, err := parseDSN(dsn)
if err != nil {
return nil, err
}
c := &connector{
cfg: cfg,
}
return c.Connect(context.Background())
}
func init() {
sql.Register("taosSql", &taosSQLDriver{})
taosLogInit()
}
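
With the driver registered under the name `taosSql`, applications talk to it through the standard `database/sql` API. The example below is a usage sketch, not part of the driver: the import path is assumed from the driver-go submodule URL, and the DSN, port, table name, and schema are placeholders to adjust for a real deployment. Note that `Begin` and `BeginTx` return an error, so the driver cannot be used inside `database/sql` transactions.

```go
package main

import (
	"database/sql"
	"log"

	// Import path assumed from the driver-go submodule; adjust to your module setup.
	_ "github.com/taosdata/driver-go/taosSql"
)

func main() {
	// Hypothetical DSN of the form user:password@/tcp(host:port)/dbname.
	db, err := sql.Open("taosSql", "root:taosdata@/tcp(127.0.0.1:6030)/test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec("CREATE TABLE IF NOT EXISTS t1 (ts TIMESTAMP, v INT)"); err != nil {
		log.Fatal(err)
	}

	rows, err := db.Query("SELECT ts, v FROM t1")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var ts string // scanned as text here; time.Time scanning depends on the parseTime setting
		var v int
		if err := rows.Scan(&ts, &v); err != nil {
			log.Fatal(err)
		}
		log.Println(ts, v)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```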
(Five file diffs are collapsed on the original page and not shown here.)
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
package taosSql
/*
#cgo CFLAGS : -I/usr/include
#cgo LDFLAGS: -L/usr/lib -ltaos
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <taos.h>
*/
import "C"
import (
"errors"
"unsafe"
)
func (mc *taosConn) taosConnect(ip, user, pass, db string, port int) (taos unsafe.Pointer, err error) {
cuser := C.CString(user)
cpass := C.CString(pass)
cip := C.CString(ip)
cdb := C.CString(db)
defer C.free(unsafe.Pointer(cip))
defer C.free(unsafe.Pointer(cuser))
defer C.free(unsafe.Pointer(cpass))
defer C.free(unsafe.Pointer(cdb))
taosObj := C.taos_connect(cip, cuser, cpass, cdb, (C.ushort)(port))
if taosObj == nil {
return nil, errors.New("taos_connect() fail!")
}
return (unsafe.Pointer)(taosObj), nil
}
func (mc *taosConn) taosQuery(sqlstr string) (int, error) {
//taosLog.Printf("taosQuery() input sql:%s\n", sqlstr)
csqlstr := C.CString(sqlstr)
defer C.free(unsafe.Pointer(csqlstr))
code := int(C.taos_query(mc.taos, csqlstr))
if 0 != code {
mc.taos_error()
errStr := C.GoString(C.taos_errstr(mc.taos))
taosLog.Println("taos_query() failed:", errStr)
taosLog.Printf("taosQuery() input sql:%s\n", sqlstr)
return 0, errors.New(errStr)
}
// read result and save into mc struct
num_fields := int(C.taos_field_count(mc.taos))
	if 0 == num_fields { // no result fields: not a SELECT/SHOW style statement
mc.affectedRows = int(C.taos_affected_rows(mc.taos))
mc.insertId = 0
}
return num_fields, nil
}
func (mc *taosConn) taos_close() {
C.taos_close(mc.taos)
}
func (mc *taosConn) taos_error() {
	// free local resource: allocated memory / metric-meta refcnt
//var pRes unsafe.Pointer
pRes := C.taos_use_result(mc.taos)
C.taos_free_result(pRes)
}
(One file diff is collapsed on the original page and not shown here.)
@@ -105,7 +105,7 @@ extern SSdbPeer *sdbPeer[];
 #endif

-void *sdbOpenTable(int maxRows, int32_t maxRowSize, char *name, uint8_t keyType, char *directory,
+void *sdbOpenTable(int maxRows, int32_t maxRowSize, char *name, char keyType, char *directory,
                    void *(*appTool)(char, void *, char *, int, int *));

 void *sdbGetRow(void *handle, void *key);
(Thirty file diffs are collapsed on the original page and not shown here.)