taosdata / TDengine
Commit 64c15fa7
Authored Aug 30, 2021 by Cary Xu

Merge branch 'develop' into feature/TD-6117

Parents: 482566ad 8c96539b
Showing 62 changed files with 6229 additions and 4805 deletions (+6229 -4805).
Jenkinsfile  +6 -6
deps/TSZ  +1 -1
documentation20/cn/08.connector/01.java/docs.md  +1 -1
documentation20/cn/08.connector/docs.md  +38 -35
packaging/cfg/taos.cfg  +3 -0
packaging/tools/make_install.sh  +50 -46
src/client/inc/tscUtil.h  +3 -1
src/client/inc/tsclient.h  +1 -0
src/client/src/tscAsync.c  +0 -9
src/client/src/tscGlobalmerge.c  +190 -201
src/client/src/tscParseInsert.c  +1 -0
src/client/src/tscParseLineProtocol.c  +298 -262
src/client/src/tscPrepare.c  +2 -2
src/client/src/tscSQLParser.c  +54 -80
src/client/src/tscServer.c  +45 -210
src/client/src/tscSubquery.c  +13 -4
src/client/src/tscUtil.c  +86 -28
src/common/inc/tglobal.h  +1 -0
src/common/src/tglobal.c  +11 -0
src/connector/go  +1 -1
src/connector/hivemq-tdengine-extension  +1 -1
src/connector/jdbc/pom.xml  +0 -1
src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBError.java  +2 -2
src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBJNIConnector.java  +6 -12
src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/UseNowInsertTimestampTest.java  +84 -0
src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulConnectionTest.java  +26 -23
src/inc/taoserror.h  +6 -0
src/inc/ttokendef.h  +120 -119
src/kit/taosdemo/taosdemo.c  +1069 -300
src/kit/taosdump/taosdump.c  +34 -24
src/mnode/src/mnodeTable.c  +13 -0
src/plugins/http/CMakeLists.txt  +1 -0
src/plugins/http/src/httpRestHandle.c  +9 -0
src/query/inc/qExecutor.h  +26 -17
src/query/inc/qSqlparser.h  +1 -0
src/query/inc/qTableMeta.h  +1 -0
src/query/inc/sql.y  +15 -7
src/query/src/qAggMain.c  +25 -0
src/query/src/qExecutor.c  +161 -96
src/query/src/qFill.c  +1 -1
src/query/src/qSqlParser.c  +1 -1
src/query/src/sql.c  +1804 -2585
src/tsdb/src/tsdbRead.c  +4 -4
src/util/inc/tutil.h  +1 -0
src/util/src/terror.c  +6 -0
src/util/src/ttokenizer.c  +1 -0
src/util/src/tutil.c  +23 -1
tests/examples/c/schemaless.c  +2 -116
tests/pytest/fulltest.sh  +5 -0
tests/pytest/functions/function_interp.py  +9 -9
tests/pytest/query/last_row_cache.py  +3 -3
tests/pytest/query/query.py  +15 -0
tests/pytest/query/queryDiffColsOr.py  +545 -0
tests/pytest/query/queryLike.py  +128 -2
tests/pytest/restful/restful_bind_db1.py  +123 -0
tests/pytest/restful/restful_bind_db2.py  +133 -0
tests/pytest/tools/schemalessInsertPerformance.py  +301 -0
tests/pytest/util/common.py  +2 -2
tests/pytest/util/dnodes.py  +1 -1
tests/script/general/http/restful_dbname.sim  +124 -0
tests/script/general/parser/columnValue_float.sim  +4 -4
tests/script/general/parser/interp_test.sim  +588 -587
Jenkinsfile

```diff
@@ -265,12 +265,12 @@ pipeline {
         }
     }
     timeout(time: 60, unit: 'MINUTES'){
+        sh '''
+        cd ${WKC}/tests/pytest
+        rm -rf /var/lib/taos/*
+        rm -rf /var/log/taos/*
+        ./handle_crash_gen_val_log.sh
+        '''
-        // sh '''
-        // cd ${WKC}/tests/pytest
-        // rm -rf /var/lib/taos/*
-        // rm -rf /var/log/taos/*
-        // ./handle_crash_gen_val_log.sh
-        // '''
         sh '''
         cd ${WKC}/tests/pytest
         rm -rf /var/lib/taos/*
```

...
deps/TSZ @ ceda5bf9 ... 0ca5b15a

```diff
-Subproject commit ceda5bf9fcd7836509ac97dcc0056b3f1dd48cc5
+Subproject commit 0ca5b15a8eac40327dd737be52c926fa5675712c
```
documentation20/cn/08.connector/01.java/docs.md

@@ -46,7 +46,7 @@ TDengine's JDBC driver implementation stays as consistent as possible with relational-database drivers

```
    </tr>
    </table>
```

-Note: unlike the JNI mode, the RESTful interface is stateless. When using JDBC-RESTful, the database name of every table or super table must be specified in the SQL statement. For example:
+Note: unlike the JNI mode, the RESTful interface is stateless. When using JDBC-RESTful, the database name of every table or super table must be specified in the SQL statement. (Starting with TDengine 2.1.8.0, the default database for the current SQL statement can also be specified in the RESTful URL.) For example:

```sql
INSERT INTO test.t1 USING test.weather(ts, temperature) TAGS('beijing') VALUES(now, 24.6);
```

...
documentation20/cn/08.connector/docs.md

@@ -654,22 +654,23 @@ conn.close()

To support development on all kinds of platforms, TDengine provides an API that conforms to REST design standards, i.e. the RESTful API. To minimize the learning cost, and unlike the RESTful API designs of other databases, TDengine operates the database directly through the SQL statement contained in the BODY of an HTTP POST request; nothing but a URL is needed. See the [video tutorial](https://www.taosdata.com/blog/2020/11/11/1965.html) for an introduction to the RESTful connector.

Note: one difference from the native connectors is that the RESTful interface is stateless, so the `USE db_name` command has no effect; all references to table names and super-table names must carry the database-name prefix. (Since version 2.1.8.0, a db_name can be specified in the RESTful URL; if the SQL statement then carries no database-name prefix, the db_name given in the URL is used.)
### Installation

The RESTful interface does not depend on any TDengine library, so the client does not need to install any TDengine library; it is enough that the client's development language supports the HTTP protocol.

### Verification

With the TDengine server installed, the interface can be verified as follows. The example below uses the curl tool (make sure it is installed) in an Ubuntu environment to check that the RESTful interface works.

The following example lists all databases; replace h1.taosdata.com and 6041 (the default) with the FQDN and port of the actual running TDengine service:

```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' h1.taosdata.com:6041/rest/sql
```

A return value like the following indicates that verification passed:

```json
{
...
@@ -682,22 +683,23 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' h1.taos
}
```
### Using the RESTful connector

#### HTTP request format

```
http://<fqdn>:<port>/rest/sql/[db_name]
```

Parameters:

- fqdn: the FQDN or IP address of any host in the cluster
- port: the httpPort setting in the configuration file, 6041 by default
- db_name: optional; the default database name for the SQL statement being executed (supported since version 2.1.8.0)

For example, http://h1.taos.com:6041/rest/sql/test is a URL pointing to h1.taos.com:6041, with the default database name set to test.

The Header of the HTTP request must carry authentication information. TDengine supports Basic authentication and a custom authentication mechanism; later versions will provide a standard, secure digital-signature mechanism for identity verification.

- Custom authentication information looks as follows (<token> is described later)

...
@@ -711,25 +713,25 @@ Authorization: Taosd <TOKEN>

```
Authorization: Basic <TOKEN>
```

The BODY of the HTTP request is a complete SQL statement. Data tables in the SQL statement should carry the database prefix, e.g. \<db_name>.\<tb_name>. If a table name carries no database prefix and no database name is specified in the URL, the system returns an error, because the HTTP module is only a simple forwarder and has no notion of a current DB.

Use curl to issue an HTTP request with custom authentication as follows:

```bash
curl -H 'Authorization: Basic <TOKEN>' -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
```

or

```bash
curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
```

Here `TOKEN` is the Base64-encoded form of `{username}:{password}`; for example, `root:taosdata` encodes to `cm9vdDp0YW9zZGF0YQ==`
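The Basic token above can be reproduced in a few lines; a minimal sketch using only the standard library:

```python
import base64

def make_basic_token(username: str, password: str) -> str:
    """Base64-encode '{username}:{password}' for the Authorization header."""
    raw = f"{username}:{password}".encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

# The default credentials used throughout the documentation:
token = make_basic_token("root", "taosdata")
print(token)                              # cm9vdDp0YW9zZGF0YQ==
print(f"Authorization: Basic {token}")
```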
### HTTP response format

The return value is in JSON format, as follows:

```json
{
...
@@ -747,9 +749,9 @@ curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql
```

Notes:

- status: tells whether the operation succeeded or failed.
- head: the table definition; if no result set is returned, there is only a single "affected_rows" column. (Since version 2.0.17.0, relying on the head field to determine column data types is discouraged; column_meta is recommended instead. head may be removed from the return value in a future release.)
- column_meta: added in version 2.0.17.0 to describe the data type of each column in data. Each column is described by three values: column name, column type, and type length. For example, `["current",6,4]` means the column is named "current", its type is 6 (float), and its type length is 4, i.e. a float represented in 4 bytes. If the column type is binary or nchar, the type length is the maximum content length the column can store, not the length of the data in this particular response; for nchar it is the number of Unicode characters that can be stored, not bytes.
- data: the returned data, presented row by row; if no result set is returned, it contains only [[affected_rows]]. The column order of each row in data matches the column order described in column_meta exactly.
- rows: the total number of rows.

column_meta column types:

...
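The column_meta contract above can be exercised on a hand-written response; a minimal sketch (the sample payload and the type-code map beyond float=6 are illustrative assumptions, not output from a live server):

```python
# Partial map of TDengine column-type codes; only float (6) is confirmed by
# the documentation above, timestamp (9) is assumed for illustration.
TYPE_NAMES = {6: "float", 9: "timestamp"}

def describe_columns(response: dict) -> list:
    """Render each column_meta entry as '<name>:<type>(<len>)'."""
    return [
        f"{name}:{TYPE_NAMES.get(code, code)}({length})"
        for name, code, length in response["column_meta"]
    ]

# Illustrative response shaped like the documented format.
resp = {
    "status": "succ",
    "column_meta": [["ts", 9, 8], ["current", 6, 4]],
    "data": [["2021-08-30 00:00:00.000", 10.3]],
    "rows": 1,
}
print(describe_columns(resp))  # ['ts:timestamp(8)', 'current:float(4)']
```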
@@ -766,13 +768,13 @@ column_meta column types:

### Custom authorization code

The HTTP request must carry the authorization code `<TOKEN>`, used for identity verification. The authorization code is normally provided by the administrator and can be obtained simply by sending an `HTTP GET` request, as follows:

```bash
curl http://<fqdn>:<port>/rest/login/<username>/<password>
```

Here `fqdn` is the FQDN or IP address of the TDengine database, port is the port of the TDengine service, `username` is the database user name, and `password` is the database password. The return value is in `JSON` format, with fields as follows:

- status: the status flag of the request result

...
@@ -798,7 +800,7 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata

### Usage examples

- Query all records of table d1001 in the demo database:

```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sql
```

...

@@ -818,7 +820,7 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001

```json
{
...
}
```

- Create the database demo:

```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'create database demo' 192.168.0.1:6041/rest/sql
```

...
@@ -837,9 +839,9 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'create database demo' 19

### Other usage

#### Result set as Unix timestamps

When the HTTP request URL uses `sqlt`, timestamps in the returned result set are expressed in Unix timestamp format, for example:

```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sqlt
```

...

@@ -860,9 +862,9 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001

```json
{
...
}
```

#### Result set as UTC time strings

When the HTTP request URL uses `sqlutc`, timestamps in the returned result set are expressed as UTC time strings, for example:

```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.t1' 192.168.0.1:6041/rest/sqlutc
```

...

@@ -884,13 +886,14 @@ When the HTTP request URL uses `sqlutc`, timestamps in the result set use UTC time

### Important configuration items

Only the configuration parameters related to the RESTful interface are listed below; see the notes in the configuration file for the other system parameters. (Note: after a configuration change, the taosd service must be restarted for it to take effect.)

- The port on which the RESTful service is offered, bound to 6041 by default (the actual value is serverPort + 11, so it can be changed by changing the serverPort setting).
- httpMaxThreads: the number of threads to start, 2 by default (since version 2.0.17.0, the default is half the number of CPU cores, rounded down).
- restfulRowLimit: the maximum number of rows in a returned (JSON) result set, 10240 by default.
- httpEnableCompress: whether compression is supported, off by default; TDengine currently supports only the gzip format.
- httpDebugFlag: log switch, 131 by default. 131: errors and alarms only; 135: debug information; 143: very detailed debug information.
- httpDbNameMandatory: whether a default database name must be specified in the RESTful URL. 0 by default, i.e. the check is disabled. If set to 1, every RESTful URL must carry a default database name; otherwise any SQL statement, whether or not it needs to specify a database, is rejected with an execution error.
## <a class="anchor" id="csharp"></a>CSharp Connector
...
...
packaging/cfg/taos.cfg

@@ -194,6 +194,9 @@ keepColumnName 1

```diff
 # maximum number of rows returned by the restful interface
 # restfulRowLimit 10240

+# database name must be specified in restful interface if the following parameter is set, off by default
+# httpDbNameMandatory 1

 # The following parameter is used to limit the maximum number of lines in log files.
 # max number of lines per log filters
 # numOfLogLines 10000000
```

...
packaging/tools/make_install.sh

@@ -19,6 +19,7 @@ else

```bash
fi

# Dynamic directory
data_dir="/var/lib/taos"

if [ "$osType" != "Darwin" ]; then
```

@@ -29,25 +30,32 @@ fi

```bash
data_link_dir="/usr/local/taos/data"
log_link_dir="/usr/local/taos/log"

if [ "$osType" != "Darwin" ]; then
    cfg_install_dir="/etc/taos"
else
    cfg_install_dir="/usr/local/Cellar/tdengine/${verNumber}/taos"
fi

if [ "$osType" != "Darwin" ]; then
    bin_link_dir="/usr/bin"
    lib_link_dir="/usr/lib"
    lib64_link_dir="/usr/lib64"
    inc_link_dir="/usr/include"
else
    bin_link_dir="/usr/local/bin"
    lib_link_dir="/usr/local/lib"
    inc_link_dir="/usr/local/include"
fi

#install main path
if [ "$osType" != "Darwin" ]; then
    install_main_dir="/usr/local/taos"
else
    install_main_dir="/usr/local/Cellar/tdengine/${verNumber}"
fi

# old bin dir
if [ "$osType" != "Darwin" ]; then
    bin_dir="/usr/local/taos/bin"
else
    bin_dir="/usr/local/Cellar/tdengine/${verNumber}/bin"
fi

service_config_dir="/etc/systemd/system"
```

@@ -59,12 +67,11 @@ GREEN_UNDERLINE='\033[4;32m'

```bash
NC='\033[0m'

csudo=""

if [ "$osType" != "Darwin" ]; then
    if command -v sudo > /dev/null; then
        csudo="sudo"
    fi
    initd_mod=0
    service_mod=2
    if pidof systemd &> /dev/null; then
```

@@ -137,17 +144,15 @@ function install_main_path() {

```bash
function install_bin() {
    # Remove links
    if [ "$osType" != "Darwin" ]; then
        ${csudo} rm -f ${bin_link_dir}/taos        || :
        ${csudo} rm -f ${bin_link_dir}/taosd       || :
        ${csudo} rm -f ${bin_link_dir}/taosdemo    || :
        ${csudo} rm -f ${bin_link_dir}/perfMonitor || :
        ${csudo} rm -f ${bin_link_dir}/taosdump    || :
        ${csudo} rm -f ${bin_link_dir}/set_core    || :
        ${csudo} rm -f ${bin_link_dir}/rmtaos      || :
    fi

    ${csudo} cp -r ${binary_dir}/build/bin/* ${install_main_dir}/bin
    ${csudo} cp -r ${script_dir}/taosd-dump-cfg.gdb ${install_main_dir}/bin
```

@@ -162,9 +167,8 @@ function install_bin() {

```bash
    ${csudo} chmod 0555 ${install_main_dir}/bin/*

    #Make link
    if [ "$osType" != "Darwin" ]; then
        [ -x ${install_main_dir}/bin/taos ]     && ${csudo} ln -s ${install_main_dir}/bin/taos ${bin_link_dir}/taos         || :
        [ -x ${install_main_dir}/bin/taosd ]    && ${csudo} ln -s ${install_main_dir}/bin/taosd ${bin_link_dir}/taosd       || :
        [ -x ${install_main_dir}/bin/taosdump ] && ${csudo} ln -s ${install_main_dir}/bin/taosdump ${bin_link_dir}/taosdump || :
        [ -x ${install_main_dir}/bin/taosdemo ] && ${csudo} ln -s ${install_main_dir}/bin/taosdemo ${bin_link_dir}/taosdemo || :
```

@@ -174,8 +178,6 @@ function install_bin() {

```bash
    if [ "$osType" != "Darwin" ]; then
        [ -x ${install_main_dir}/bin/remove.sh ] && ${csudo} ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/rmtaos || :
    else
        [ -x ${install_main_dir}/bin/remove_client.sh ] && ${csudo} ln -s ${install_main_dir}/bin/remove_client.sh ${bin_link_dir}/rmtaos || :
    fi
}
```

@@ -222,7 +224,7 @@ function install_jemalloc() {

```bash
    fi
    if [ -d /etc/ld.so.conf.d ]; then
        echo "/usr/local/lib" | ${csudo} tee /etc/ld.so.conf.d/jemalloc.conf
        ${csudo} ldconfig
    else
        echo "/etc/ld.so.conf.d not found!"
```

@@ -248,8 +250,6 @@ function install_lib() {

```bash
    fi
    else
        ${csudo} cp -Rf ${binary_dir}/build/lib/libtaos.* ${install_main_dir}/driver && ${csudo} chmod 777 ${install_main_dir}/driver/*
        ${csudo} ln -sf ${install_main_dir}/driver/libtaos.1.dylib ${lib_link_dir}/libtaos.1.dylib
        ${csudo} ln -sf ${lib_link_dir}/libtaos.1.dylib ${lib_link_dir}/libtaos.dylib
    fi

    install_jemalloc
```

@@ -261,10 +261,14 @@ function install_lib() {

```bash
function install_header() {
    if [ "$osType" != "Darwin" ]; then
        ${csudo} rm -f ${inc_link_dir}/taos.h ${inc_link_dir}/taoserror.h || :
    fi

    ${csudo} cp -f ${source_dir}/src/inc/taos.h ${source_dir}/src/inc/taoserror.h ${install_main_dir}/include &&
        ${csudo} chmod 644 ${install_main_dir}/include/*

    if [ "$osType" != "Darwin" ]; then
        ${csudo} ln -s ${install_main_dir}/include/taos.h ${inc_link_dir}/taos.h
        ${csudo} ln -s ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h
    fi
}

function install_config() {
```

@@ -272,29 +276,30 @@ function install_config() {

```bash
    if [ ! -f ${cfg_install_dir}/taos.cfg ]; then
        ${csudo} mkdir -p ${cfg_install_dir}
        [ -f ${script_dir}/../cfg/taos.cfg ] && ${csudo} cp ${script_dir}/../cfg/taos.cfg ${cfg_install_dir}
        ${csudo} chmod 644 ${cfg_install_dir}/*
    fi

    ${csudo} cp -f ${script_dir}/../cfg/taos.cfg ${install_main_dir}/cfg/taos.cfg.org

    if [ "$osType" != "Darwin" ]; then
        ${csudo} ln -s ${cfg_install_dir}/taos.cfg ${install_main_dir}/cfg
    fi
}

function install_log() {
    if [ "$osType" != "Darwin" ]; then
        ${csudo} rm -rf ${log_dir} || :
        ${csudo} mkdir -p ${log_dir} && ${csudo} chmod 777 ${log_dir}
    else
        mkdir -p ${log_dir} && chmod 777 ${log_dir}
    fi
    ${csudo} ln -s ${log_dir} ${install_main_dir}/log
}

function install_data() {
    if [ "$osType" != "Darwin" ]; then
        ${csudo} mkdir -p ${data_dir}
        ${csudo} ln -s ${data_dir} ${install_main_dir}/data
    fi
}

function install_connector() {
```

@@ -309,7 +314,6 @@ function install_connector() {

```bash
        echo "WARNING: go connector not found, please check if want to use it!"
    fi
    ${csudo} cp -rf ${source_dir}/src/connector/python ${install_main_dir}/connector
    ${csudo} cp ${binary_dir}/build/lib/*.jar ${install_main_dir}/connector &> /dev/null &&
        ${csudo} chmod 777 ${install_main_dir}/connector/*.jar || echo &> /dev/null
}
```

@@ -495,12 +499,12 @@ function install_TDengine() {

```bash
    if [ "$osType" != "Darwin" ]; then
        install_data
    fi
    install_log
    install_header
    install_lib
    install_connector
    install_examples
    install_bin

    if [ "$osType" != "Darwin" ]; then
```
...
...
src/client/inc/tscUtil.h

@@ -36,7 +36,7 @@ extern "C" {

```diff
   (((metaInfo)->pTableMeta != NULL) && ((metaInfo)->pTableMeta->tableType == TSDB_CHILD_TABLE))

 #define UTIL_TABLE_IS_NORMAL_TABLE(metaInfo) \
-  (!(UTIL_TABLE_IS_SUPER_TABLE(metaInfo) || UTIL_TABLE_IS_CHILD_TABLE(metaInfo)))
+  (!(UTIL_TABLE_IS_SUPER_TABLE(metaInfo) || UTIL_TABLE_IS_CHILD_TABLE(metaInfo) || UTIL_TABLE_IS_TMP_TABLE(metaInfo)))

 #define UTIL_TABLE_IS_TMP_TABLE(metaInfo) \
   (((metaInfo)->pTableMeta != NULL) && ((metaInfo)->pTableMeta->tableType == TSDB_TEMP_TABLE))
```

@@ -365,6 +365,8 @@ STblCond* tsGetTableFilter(SArray* filters, uint64_t uid, int16_t idx);

```diff
 void tscRemoveCachedTableMeta(STableMetaInfo* pTableMetaInfo, uint64_t id);

+char* cloneCurrentDBName(SSqlObj* pSql);

 #ifdef __cplusplus
 }
 #endif
```
...
...
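The macro change above tightens what counts as a "normal" table. The classification can be modeled in a few lines; a minimal sketch, where the type labels are simplified stand-ins for the real TSDB table-type constants:

```python
# Simplified stand-ins for the table-type codes tested by the macros.
SUPER, CHILD, NORMAL, TEMP = "super", "child", "normal", "temp"

def is_normal_table(table_type: str) -> bool:
    """Mirror of UTIL_TABLE_IS_NORMAL_TABLE after the change: a table is
    'normal' only if it is neither a super, child, nor temp table."""
    return table_type not in (SUPER, CHILD, TEMP)

print(is_normal_table(NORMAL))  # True
print(is_normal_table(TEMP))    # False: temp tables are now excluded
```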
src/client/inc/tsclient.h

@@ -492,6 +492,7 @@ bool tscHasReachLimitation(SQueryInfo *pQueryInfo, SSqlRes *pRes);

```diff
 void tscSetBoundColumnInfo(SParsedDataColInfo* pColInfo, SSchema* pSchema, int32_t numOfCols);

 char* tscGetErrorMsgPayload(SSqlCmd* pCmd);
+int32_t tscErrorMsgWithCode(int32_t code, char* dstBuffer, const char* errMsg, const char* sql);
 int32_t tscInvalidOperationMsg(char* msg, const char* additionalInfo, const char* sql);
 int32_t tscSQLSyntaxErrMsg(char* msg, const char* additionalInfo, const char* sql);
```
...
...
src/client/src/tscAsync.c

@@ -363,15 +363,6 @@ void tscTableMetaCallBack(void *param, TAOS_RES *res, int code) {

```diff
   }

-  if (TSDB_QUERY_HAS_TYPE(pCmd->insertParam.insertType, TSDB_QUERY_TYPE_STMT_INSERT)) {  // stmt insert
-    STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
-    code = tscGetTableMeta(pSql, pTableMetaInfo);
-    if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
-      taosReleaseRef(tscObjRef, pSql->self);
-      return;
-    } else {
-      assert(code == TSDB_CODE_SUCCESS);
-    }
-    (*pSql->fp)(pSql->param, pSql, code);
-  } else if (TSDB_QUERY_HAS_TYPE(pCmd->insertParam.insertType, TSDB_QUERY_TYPE_FILE_INSERT)) {  // file insert
+  if (TSDB_QUERY_HAS_TYPE(pCmd->insertParam.insertType, TSDB_QUERY_TYPE_FILE_INSERT)) {  // file insert
     tscImportDataFromFile(pSql);
```
...
...
src/client/src/tscGlobalmerge.c
浏览文件 @
64c15fa7
...
...
@@ -35,6 +35,7 @@ typedef struct SCompareParam {
static
bool
needToMerge
(
SSDataBlock
*
pBlock
,
SArray
*
columnIndexList
,
int32_t
index
,
char
**
buf
)
{
int32_t
ret
=
0
;
size_t
size
=
taosArrayGetSize
(
columnIndexList
);
if
(
size
>
0
)
{
ret
=
compare_aRv
(
pBlock
,
columnIndexList
,
(
int32_t
)
size
,
index
,
buf
,
TSDB_ORDER_ASC
);
...
...
@@ -564,9 +565,11 @@ static void savePrevOrderColumns(char** prevRow, SArray* pColumnList, SSDataBloc
(
*
hasPrev
)
=
true
;
}
// tsdb_func_tag function only produce one row of result. Therefore, we need to copy the
// output value to multiple rows
static
void
setTagValueForMultipleRows
(
SQLFunctionCtx
*
pCtx
,
int32_t
numOfOutput
,
int32_t
numOfRows
)
{
if
(
numOfRows
<=
1
)
{
return
;
return
;
}
for
(
int32_t
k
=
0
;
k
<
numOfOutput
;
++
k
)
{
...
...
@@ -574,31 +577,20 @@ static void setTagValueForMultipleRows(SQLFunctionCtx* pCtx, int32_t numOfOutput
continue
;
}
int32_t
inc
=
numOfRows
-
1
;
// tsdb_func_tag function only produce one row of result
char
*
src
=
pCtx
[
k
].
pOutput
;
char
*
dst
=
pCtx
[
k
].
pOutput
+
pCtx
[
k
].
outputBytes
;
for
(
int32_t
i
=
0
;
i
<
inc
;
++
i
)
{
pCtx
[
k
].
pOutput
+=
pCtx
[
k
].
outputBytes
;
memcpy
(
pCtx
[
k
].
pOutput
,
src
,
(
size_t
)
pCtx
[
k
].
outputBytes
);
// Let's start from the second row, as the first row has result value already.
for
(
int32_t
i
=
1
;
i
<
numOfRows
;
++
i
)
{
memcpy
(
dst
,
src
,
(
size_t
)
pCtx
[
k
].
outputBytes
);
dst
+=
pCtx
[
k
].
outputBytes
;
}
}
}
static
void
doExecuteFinalMerge
(
SOperatorInfo
*
pOperator
,
int32_t
numOfExpr
,
SSDataBlock
*
pBlock
)
{
SMultiwayMergeInfo
*
pInfo
=
pOperator
->
info
;
SQLFunctionCtx
*
pCtx
=
pInfo
->
binfo
.
pCtx
;
char
**
add
=
calloc
(
pBlock
->
info
.
numOfCols
,
POINTER_BYTES
);
for
(
int32_t
i
=
0
;
i
<
pBlock
->
info
.
numOfCols
;
++
i
)
{
add
[
i
]
=
pCtx
[
i
].
pInput
;
pCtx
[
i
].
size
=
1
;
}
for
(
int32_t
i
=
0
;
i
<
pBlock
->
info
.
rows
;
++
i
)
{
if
(
pInfo
->
hasPrev
)
{
if
(
needToMerge
(
pBlock
,
pInfo
->
orderColumnList
,
i
,
pInfo
->
prevRow
))
{
static
void
doMergeResultImpl
(
SMultiwayMergeInfo
*
pInfo
,
SQLFunctionCtx
*
pCtx
,
int32_t
numOfExpr
,
int32_t
rowIndex
,
char
**
pDataPtr
)
{
for
(
int32_t
j
=
0
;
j
<
numOfExpr
;
++
j
)
{
pCtx
[
j
].
pInput
=
add
[
j
]
+
pCtx
[
j
].
inputBytes
*
i
;
pCtx
[
j
].
pInput
=
pDataPtr
[
j
]
+
pCtx
[
j
].
inputBytes
*
rowIndex
;
}
for
(
int32_t
j
=
0
;
j
<
numOfExpr
;
++
j
)
{
...
...
@@ -609,16 +601,15 @@ static void doExecuteFinalMerge(SOperatorInfo* pOperator, int32_t numOfExpr, SSD
if
(
functionId
<
0
)
{
SUdfInfo
*
pUdfInfo
=
taosArrayGet
(
pInfo
->
udfInfo
,
-
1
*
functionId
-
1
);
doInvokeUdf
(
pUdfInfo
,
&
pCtx
[
j
],
0
,
TSDB_UDF_FUNC_MERGE
);
continue
;
}
}
else
{
aAggs
[
functionId
].
mergeFunc
(
&
pCtx
[
j
]);
}
}
else
{
for
(
int32_t
j
=
0
;
j
<
numOfExpr
;
++
j
)
{
// TODO refactor
}
}
static
void
doFinalizeResultImpl
(
SMultiwayMergeInfo
*
pInfo
,
SQLFunctionCtx
*
pCtx
,
int32_t
numOfExpr
)
{
for
(
int32_t
j
=
0
;
j
<
numOfExpr
;
++
j
)
{
int32_t
functionId
=
pCtx
[
j
].
functionId
;
if
(
functionId
==
TSDB_FUNC_TAG_DUMMY
||
functionId
==
TSDB_FUNC_TS_DUMMY
)
{
continue
;
...
...
@@ -626,15 +617,30 @@ static void doExecuteFinalMerge(SOperatorInfo* pOperator, int32_t numOfExpr, SSD
if
(
functionId
<
0
)
{
SUdfInfo
*
pUdfInfo
=
taosArrayGet
(
pInfo
->
udfInfo
,
-
1
*
functionId
-
1
);
doInvokeUdf
(
pUdfInfo
,
&
pCtx
[
j
],
0
,
TSDB_UDF_FUNC_FINALIZE
);
continue
;
}
else
{
aAggs
[
functionId
].
xFinalize
(
&
pCtx
[
j
]);
}
}
}
aAggs
[
functionId
].
xFinalize
(
&
pCtx
[
j
]);
static
void
doExecuteFinalMerge
(
SOperatorInfo
*
pOperator
,
int32_t
numOfExpr
,
SSDataBlock
*
pBlock
)
{
SMultiwayMergeInfo
*
pInfo
=
pOperator
->
info
;
SQLFunctionCtx
*
pCtx
=
pInfo
->
binfo
.
pCtx
;
char
**
addrPtr
=
calloc
(
pBlock
->
info
.
numOfCols
,
POINTER_BYTES
);
for
(
int32_t
i
=
0
;
i
<
pBlock
->
info
.
numOfCols
;
++
i
)
{
addrPtr
[
i
]
=
pCtx
[
i
].
pInput
;
pCtx
[
i
].
size
=
1
;
}
for
(
int32_t
i
=
0
;
i
<
pBlock
->
info
.
rows
;
++
i
)
{
if
(
pInfo
->
hasPrev
)
{
if
(
needToMerge
(
pBlock
,
pInfo
->
orderColumnList
,
i
,
pInfo
->
prevRow
))
{
doMergeResultImpl
(
pInfo
,
pCtx
,
numOfExpr
,
i
,
addrPtr
);
}
else
{
doFinalizeResultImpl
(
pInfo
,
pCtx
,
numOfExpr
);
int32_t
numOfRows
=
getNumOfResult
(
pOperator
->
pRuntimeEnv
,
pInfo
->
binfo
.
pCtx
,
pOperator
->
numOfOutput
);
setTagValueForMultipleRows
(
pCtx
,
pOperator
->
numOfOutput
,
numOfRows
);
...
...
@@ -655,48 +661,10 @@ static void doExecuteFinalMerge(SOperatorInfo* pOperator, int32_t numOfExpr, SSD
aAggs
[
pCtx
[
j
].
functionId
].
init
(
&
pCtx
[
j
],
pCtx
[
j
].
resultInfo
);
}
for
(
int32_t
j
=
0
;
j
<
numOfExpr
;
++
j
)
{
pCtx
[
j
].
pInput
=
add
[
j
]
+
pCtx
[
j
].
inputBytes
*
i
;
}
for
(
int32_t
j
=
0
;
j
<
numOfExpr
;
++
j
)
{
int32_t
functionId
=
pCtx
[
j
].
functionId
;
if
(
functionId
==
TSDB_FUNC_TAG_DUMMY
||
functionId
==
TSDB_FUNC_TS_DUMMY
)
{
continue
;
}
if
(
functionId
<
0
)
{
SUdfInfo
*
pUdfInfo
=
taosArrayGet
(
pInfo
->
udfInfo
,
-
1
*
functionId
-
1
);
doInvokeUdf
(
pUdfInfo
,
&
pCtx
[
j
],
0
,
TSDB_UDF_FUNC_MERGE
);
continue
;
}
aAggs
[
functionId
].
mergeFunc
(
&
pCtx
[
j
]);
}
doMergeResultImpl
(
pInfo
,
pCtx
,
numOfExpr
,
i
,
addrPtr
);
}
}
else
{
for
(
int32_t
j
=
0
;
j
<
numOfExpr
;
++
j
)
{
pCtx
[
j
].
pInput
=
add
[
j
]
+
pCtx
[
j
].
inputBytes
*
i
;
}
for
(
int32_t
j
=
0
;
j
<
numOfExpr
;
++
j
)
{
int32_t
functionId
=
pCtx
[
j
].
functionId
;
if
(
functionId
==
TSDB_FUNC_TAG_DUMMY
||
functionId
==
TSDB_FUNC_TS_DUMMY
)
{
continue
;
}
if
(
functionId
<
0
)
{
SUdfInfo
*
pUdfInfo
=
taosArrayGet
(
pInfo
->
udfInfo
,
-
1
*
functionId
-
1
);
doInvokeUdf
(
pUdfInfo
,
&
pCtx
[
j
],
0
,
TSDB_UDF_FUNC_MERGE
);
continue
;
}
aAggs
[
functionId
].
mergeFunc
(
&
pCtx
[
j
]);
}
doMergeResultImpl
(
pInfo
,
pCtx
,
numOfExpr
,
i
,
addrPtr
);
}
savePrevOrderColumns
(
pInfo
->
prevRow
,
pInfo
->
orderColumnList
,
pBlock
,
i
,
&
pInfo
->
hasPrev
);
...
...
@@ -704,11 +672,11 @@ static void doExecuteFinalMerge(SOperatorInfo* pOperator, int32_t numOfExpr, SSD
{
for
(
int32_t
i
=
0
;
i
<
pBlock
->
info
.
numOfCols
;
++
i
)
{
pCtx
[
i
].
pInput
=
add
[
i
];
pCtx
[
i
].
pInput
=
add
rPtr
[
i
];
}
}
tfree
(
add
);
tfree
(
add
rPtr
);
}
static
bool
isAllSourcesCompleted
(
SGlobalMerger
*
pMerger
)
{
...
...
@@ -816,6 +784,8 @@ SSDataBlock* doMultiwayMergeSort(void* param, bool* newgroup) {
SLocalDataSource
*
pOneDataSrc
=
pMerger
->
pLocalDataSrc
[
pTree
->
pNode
[
0
].
index
];
bool
sameGroup
=
true
;
if
(
pInfo
->
hasPrev
)
{
// todo refactor extract method
int32_t
numOfCols
=
(
int32_t
)
taosArrayGetSize
(
pInfo
->
orderColumnList
);
// if this row belongs to current result set group
...
...
@@ -955,9 +925,10 @@ SSDataBlock* doGlobalAggregate(void* param, bool* newgroup) {
break
;
}
bool
sameGroup
=
true
;
if
(
pAggInfo
->
hasGroupColData
)
{
bool
sameGroup
=
isSameGroup
(
pAggInfo
->
groupColumnList
,
pBlock
,
pAggInfo
->
currentGroupColData
);
if
(
!
sameGroup
)
{
sameGroup
=
isSameGroup
(
pAggInfo
->
groupColumnList
,
pBlock
,
pAggInfo
->
currentGroupColData
);
if
(
!
sameGroup
&&
!
pAggInfo
->
multiGroupResults
)
{
*
newgroup
=
true
;
pAggInfo
->
hasDataBlockForNewGroup
=
true
;
pAggInfo
->
pExistBlock
=
pBlock
;
...
...
@@ -976,26 +947,10 @@ SSDataBlock* doGlobalAggregate(void* param, bool* newgroup) {
}
if
(
handleData
)
{
// data in current group is all handled
for
(
int32_t
j
=
0
;
j
<
pOperator
->
numOfOutput
;
++
j
)
{
int32_t
functionId
=
pAggInfo
->
binfo
.
pCtx
[
j
].
functionId
;
if
(
functionId
==
TSDB_FUNC_TAG_DUMMY
||
functionId
==
TSDB_FUNC_TS_DUMMY
)
{
continue
;
}
if
(
functionId
<
0
)
{
SUdfInfo
*
pUdfInfo
=
taosArrayGet
(
pAggInfo
->
udfInfo
,
-
1
*
functionId
-
1
);
doInvokeUdf
(
pUdfInfo
,
&
pAggInfo
->
binfo
.
pCtx
[
j
],
0
,
TSDB_UDF_FUNC_FINALIZE
);
continue
;
}
aAggs
[
functionId
].
xFinalize
(
&
pAggInfo
->
binfo
.
pCtx
[
j
]);
}
doFinalizeResultImpl
(
pAggInfo
,
pAggInfo
->
binfo
.
pCtx
,
pOperator
->
numOfOutput
);
int32_t
numOfRows
=
getNumOfResult
(
pOperator
->
pRuntimeEnv
,
pAggInfo
->
binfo
.
pCtx
,
pOperator
->
numOfOutput
);
pAggInfo
->
binfo
.
pRes
->
info
.
rows
+=
numOfRows
;
pAggInfo
->
binfo
.
pRes
->
info
.
rows
+=
numOfRows
;
setTagValueForMultipleRows
(
pAggInfo
->
binfo
.
pCtx
,
pOperator
->
numOfOutput
,
numOfRows
);
}
...
...
@@ -1019,71 +974,127 @@ SSDataBlock* doGlobalAggregate(void* param, bool* newgroup) {
  return (pRes->info.rows != 0) ? pRes : NULL;
}

-static SSDataBlock* skipGroupBlock(SOperatorInfo* pOperator, bool* newgroup) {
-  SSLimitOperatorInfo* pInfo = pOperator->info;
-  assert(pInfo->currentGroupOffset >= 0);
+static void doHandleDataInCurrentGroup(SSLimitOperatorInfo* pInfo, SSDataBlock* pBlock, int32_t rowIndex) {
+  if (pInfo->currentOffset > 0) {
+    pInfo->currentOffset -= 1;
+  } else {
+    // discard the data rows in current group
+    if (pInfo->limit.limit < 0 || (pInfo->limit.limit >= 0 && pInfo->rowsTotal < pInfo->limit.limit)) {
+      size_t num1 = taosArrayGetSize(pInfo->pRes->pDataBlock);
+      for (int32_t i = 0; i < num1; ++i) {
+        SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, i);
+        SColumnInfoData* pDstInfoData = taosArrayGet(pInfo->pRes->pDataBlock, i);

-  SSDataBlock* pBlock = NULL;
-  if (pInfo->currentGroupOffset == 0) {
-    publishOperatorProfEvent(pOperator->upstream[0], QUERY_PROF_BEFORE_OPERATOR_EXEC);
-    pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
-    publishOperatorProfEvent(pOperator->upstream[0], QUERY_PROF_AFTER_OPERATOR_EXEC);
-    if (pBlock == NULL) {
-      setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
-      pOperator->status = OP_EXEC_DONE;
+        SColumnInfo* pColInfo = &pColInfoData->info;
+        char* pSrc = rowIndex * pColInfo->bytes + (char*)pColInfoData->pData;
+        char* pDst = (char*)pDstInfoData->pData + (pInfo->pRes->info.rows * pColInfo->bytes);
+        memcpy(pDst, pSrc, pColInfo->bytes);
+      }

-    if (*newgroup == false && pInfo->limit.limit > 0 && pInfo->rowsTotal >= pInfo->limit.limit) {
-      while ((*newgroup) == false) {  // ignore the remain blocks
-        publishOperatorProfEvent(pOperator->upstream[0], QUERY_PROF_BEFORE_OPERATOR_EXEC);
-        pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
-        publishOperatorProfEvent(pOperator->upstream[0], QUERY_PROF_AFTER_OPERATOR_EXEC);
-        if (pBlock == NULL) {
-          setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
-          pOperator->status = OP_EXEC_DONE;
-          return NULL;
+      pInfo->rowsTotal += 1;
+      pInfo->pRes->info.rows += 1;
    }
  }
}

+static void ensureOutputBuf(SSLimitOperatorInfo* pInfo, SSDataBlock* pResultBlock, int32_t numOfRows) {
+  if (pInfo->capacity < pResultBlock->info.rows + numOfRows) {
+    int32_t total = pResultBlock->info.rows + numOfRows;
+    size_t num = taosArrayGetSize(pResultBlock->pDataBlock);
+    for (int32_t i = 0; i < num; ++i) {
+      SColumnInfoData* pInfoData = taosArrayGet(pResultBlock->pDataBlock, i);
+      char* tmp = realloc(pInfoData->pData, total * pInfoData->info.bytes);
+      if (tmp != NULL) {
+        pInfoData->pData = tmp;
+      } else {
+        // todo handle the malloc failure
+      }
-      return pBlock;
+    }
+
+    pInfo->capacity = total;
+    pInfo->threshold = (int64_t)(total * 0.8);
+  }
+}

-    publishOperatorProfEvent(pOperator->upstream[0], QUERY_PROF_BEFORE_OPERATOR_EXEC);
-    pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
-    publishOperatorProfEvent(pOperator->upstream[0], QUERY_PROF_AFTER_OPERATOR_EXEC);
+enum {
+  BLOCK_NEW_GROUP = 1,
+  BLOCK_NO_GROUP = 2,
+  BLOCK_SAME_GROUP = 3,
+};

-    if (pBlock == NULL) {
-      setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
-      pOperator->status = OP_EXEC_DONE;
-      return NULL;
-    }
+static int32_t doSlimitImpl(SOperatorInfo* pOperator, SSLimitOperatorInfo* pInfo, SSDataBlock* pBlock) {
+  int32_t rowIndex = 0;

-  while (1) {
-    if (*newgroup) {
-      pInfo->currentGroupOffset -= 1;
-      *newgroup = false;
+  while (rowIndex < pBlock->info.rows) {
+    int32_t numOfCols = (int32_t)taosArrayGetSize(pInfo->orderColumnList);
+
+    bool samegroup = true;
+    if (pInfo->hasPrev) {
+      for (int32_t i = 0; i < numOfCols; ++i) {
+        SColIndex* pIndex = taosArrayGet(pInfo->orderColumnList, i);
+        SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, pIndex->colIndex);
+
+        SColumnInfo* pColInfo = &pColInfoData->info;
+        char* d = rowIndex * pColInfo->bytes + (char*)pColInfoData->pData;
+        int32_t ret = columnValueAscendingComparator(pInfo->prevRow[i], d, pColInfo->type, pColInfo->bytes);
+        if (ret != 0) {  // it is a new group
+          samegroup = false;
+          break;
+        }
+      }
+    }

-    while ((*newgroup) == false) {
-      publishOperatorProfEvent(pOperator->upstream[0], QUERY_PROF_BEFORE_OPERATOR_EXEC);
-      pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
-      publishOperatorProfEvent(pOperator->upstream[0], QUERY_PROF_AFTER_OPERATOR_EXEC);
+    if (!samegroup || !pInfo->hasPrev) {
+      pInfo->ignoreCurrentGroup = false;
+      savePrevOrderColumns(pInfo->prevRow, pInfo->orderColumnList, pBlock, rowIndex, &pInfo->hasPrev);

-      if (pBlock == NULL) {
+      pInfo->currentOffset = pInfo->limit.offset;  // reset the offset value for a new group
+      pInfo->rowsTotal = 0;
+
+      if (pInfo->currentGroupOffset > 0) {
+        pInfo->ignoreCurrentGroup = true;
+        pInfo->currentGroupOffset -= 1;  // now we are in the next group data
+        rowIndex += 1;
+        continue;
+      }

+      // A new group has arrived according to the result rows, and the group limitation has already reached.
+      // Let's jump out of current loop and return immediately.
+      if (pInfo->slimit.limit >= 0 && pInfo->groupTotal >= pInfo->slimit.limit) {
-        setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
-        pOperator->status = OP_EXEC_DONE;
-        return NULL;
+        return BLOCK_NO_GROUP;
      }

+      pInfo->groupTotal += 1;
+
+      // data in current group not allowed, return if current result does not belong to the previous group. And there
+      // are results exists in current SSDataBlock
+      if (!pInfo->multigroupResult && !samegroup && pInfo->pRes->info.rows > 0) {
+        return BLOCK_NEW_GROUP;
+      }
+
+      // now we have got the first data block of the next group.
+      if (pInfo->currentGroupOffset == 0) {
-        return pBlock;
+        doHandleDataInCurrentGroup(pInfo, pBlock, rowIndex);
+      }
+    } else {  // handle the offset in the same group
+      // All the data in current group needs to be discarded, due to the limit parameter in the SQL statement
+      if (pInfo->ignoreCurrentGroup) {
+        rowIndex += 1;
+        continue;
+      }
+
+      doHandleDataInCurrentGroup(pInfo, pBlock, rowIndex);
+    }
+
+    rowIndex += 1;
+  }

-  return NULL;
+  return BLOCK_SAME_GROUP;
}

SSDataBlock* doSLimit(void* param, bool* newgroup) {
...
...
@@ -1093,63 +1104,41 @@ SSDataBlock* doSLimit(void* param, bool* newgroup) {
  }

  SSLimitOperatorInfo *pInfo = pOperator->info;
+  pInfo->pRes->info.rows = 0;

-  SSDataBlock* pBlock = NULL;
-  while (1) {
-    pBlock = skipGroupBlock(pOperator, newgroup);
-    if (pBlock == NULL) {
-      setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
-      pOperator->status = OP_EXEC_DONE;
-      return NULL;
-    }
-
-    if (*newgroup) {  // a new group arrives
-      pInfo->groupTotal += 1;
-      pInfo->rowsTotal = 0;
-      pInfo->currentOffset = pInfo->limit.offset;
-    }
-
-    assert(pInfo->currentGroupOffset == 0);
+  if (pInfo->pPrevBlock != NULL) {
+    ensureOutputBuf(pInfo, pInfo->pRes, pInfo->pPrevBlock->info.rows);
+    int32_t ret = doSlimitImpl(pOperator, pInfo, pInfo->pPrevBlock);
+    assert(ret != BLOCK_NEW_GROUP);

-    if (pInfo->currentOffset >= pBlock->info.rows) {
-      pInfo->currentOffset -= pBlock->info.rows;
-    } else {
-      if (pInfo->currentOffset == 0) {
-        break;
+    pInfo->pPrevBlock = NULL;
  }

-      int32_t remain = (int32_t)(pBlock->info.rows - pInfo->currentOffset);
-      pBlock->info.rows = remain;
-      assert(pInfo->currentGroupOffset >= 0);
-
-      // move the remain rows of this data block to the front.
-      for (int32_t i = 0; i < pBlock->info.numOfCols; ++i) {
-        SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, i);
+  while (1) {
+    publishOperatorProfEvent(pOperator->upstream[0], QUERY_PROF_BEFORE_OPERATOR_EXEC);
+    SSDataBlock* pBlock = pOperator->upstream[0]->exec(pOperator->upstream[0], newgroup);
+    publishOperatorProfEvent(pOperator->upstream[0], QUERY_PROF_AFTER_OPERATOR_EXEC);

-        int16_t bytes = pColInfoData->info.bytes;
-        memmove(pColInfoData->pData, pColInfoData->pData + bytes * pInfo->currentOffset, remain * bytes);
+    if (pBlock == NULL) {
+      return pInfo->pRes->info.rows == 0 ? NULL : pInfo->pRes;
      }

-      pInfo->currentOffset = 0;
-      break;
-    }
-  }
+    ensureOutputBuf(pInfo, pInfo->pRes, pBlock->info.rows);
+    int32_t ret = doSlimitImpl(pOperator, pInfo, pBlock);
+    if (ret == BLOCK_NEW_GROUP) {
+      pInfo->pPrevBlock = pBlock;
+      return pInfo->pRes;
+    }

-  if (pInfo->slimit.limit > 0 && pInfo->groupTotal > pInfo->slimit.limit) {  // reach the group limit, abort
-    return NULL;
-  }
+    if (pOperator->status == OP_EXEC_DONE) {
+      return pInfo->pRes->info.rows == 0 ? NULL : pInfo->pRes;
+    }

-  if (pInfo->limit.limit > 0 && (pInfo->rowsTotal + pBlock->info.rows >= pInfo->limit.limit)) {
-    pBlock->info.rows = (int32_t)(pInfo->limit.limit - pInfo->rowsTotal);
-    pInfo->rowsTotal = pInfo->limit.limit;
-
-    if (pInfo->slimit.limit > 0 && pInfo->groupTotal >= pInfo->slimit.limit) {
-      pOperator->status = OP_EXEC_DONE;
+    // now the number of rows in current group is enough, let's return to the invoke function
+    if (pInfo->pRes->info.rows > pInfo->threshold) {
+      return pInfo->pRes;
    }
-
-    // setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
-  } else {
-    pInfo->rowsTotal += pBlock->info.rows;
  }
-
-  return pBlock;
}
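The refactor above funnels per-group OFFSET/LIMIT bookkeeping (`currentOffset`, `rowsTotal`, `limit.limit`) through `doHandleDataInCurrentGroup`. The following is a minimal standalone sketch of that bookkeeping only; the `GroupLimit`/`keep_row` names are illustrative and not part of the TDengine code base:

```c
#include <assert.h>
#include <stdint.h>

// Hypothetical model of per-group OFFSET/LIMIT: skip `offset` rows of the
// current group, then keep at most `limit` rows (-1 means unlimited).
typedef struct {
  int64_t offset;     // remaining rows to skip in the current group
  int64_t limit;      // max rows to keep per group, -1 for unlimited
  int64_t rowsTotal;  // rows already kept in the current group
} GroupLimit;

// Returns 1 if the next row of the current group should be copied to the
// output block, mirroring the branch structure of doHandleDataInCurrentGroup.
static int keep_row(GroupLimit *g) {
  if (g->offset > 0) {
    g->offset -= 1;  // still consuming the per-group offset
    return 0;
  }
  if (g->limit >= 0 && g->rowsTotal >= g->limit) {
    return 0;        // group quota already reached, discard the row
  }
  g->rowsTotal += 1;
  return 1;
}

// Count how many of n rows survive for one group.
static int kept_in_group(int n, int64_t offset, int64_t limit) {
  GroupLimit g = {offset, limit, 0};
  int kept = 0;
  for (int i = 0; i < n; ++i) kept += keep_row(&g);
  return kept;
}
```

With `offset = 2, limit = 3` a ten-row group contributes three rows, which matches the intent of `LIMIT 3 OFFSET 2` applied inside each group.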
src/client/src/tscParseInsert.c  (view file @ 64c15fa7)
...
...
@@ -1777,6 +1777,7 @@ static void parseFileSendDataBlock(void *param, TAOS_RES *tres, int32_t numOfRow
  }

_error:
  pParentSql->res.code = code;
  tfree(tokenBuf);
  tfree(line);
  taos_free_result(pSql);
...
...
src/client/src/tscParseLineProtocol.c  (diff collapsed)
src/client/src/tscPrepare.c
...
...
@@ -1540,6 +1540,8 @@ int taos_stmt_prepare(TAOS_STMT* stmt, const char* sql, unsigned long length) {
  pRes->qId = 0;
  pRes->numOfRows = 1;

+  registerSqlObj(pSql);
  strtolower(pSql->sqlstr, sql);

  tscDebugL("0x%" PRIx64 " SQL: %s", pSql->self, pSql->sqlstr);
...
...
@@ -1549,8 +1551,6 @@ int taos_stmt_prepare(TAOS_STMT* stmt, const char* sql, unsigned long length) {
    pSql->cmd.insertParam.numOfParams = 0;
    pSql->cmd.batchSize = 0;

-    registerSqlObj(pSql);
    int32_t ret = stmtParseInsertTbTags(pSql, pStmt);
    if (ret != TSDB_CODE_SUCCESS) {
      STMT_RET(ret);
...
...
src/client/src/tscSQLParser.c
...
...
@@ -40,7 +40,6 @@
#include "qScript.h"
#include "ttype.h"
#include "qFilter.h"
-#include "httpInt.h"

#define DEFAULT_PRIMARY_TIMESTAMP_COL_NAME "_c0"
...
...
@@ -72,7 +71,6 @@ static int convertTimestampStrToInt64(tVariant *pVar, int32_t precision);
static bool serializeExprListToVariant(SArray* pList, tVariant** dst, int16_t colType, uint8_t precision);

static bool has(SArray* pFieldList, int32_t startIdx, const char* name);
-static char* cloneCurrentDBName(SSqlObj* pSql);
static int32_t getDelimiterIndex(SStrToken* pTableName);
static bool validateTableColumnInfo(SArray* pFieldList, SSqlCmd* pCmd);
static bool validateTagParams(SArray* pTagsList, SArray* pFieldList, SSqlCmd* pCmd);
...
...
@@ -117,7 +115,7 @@ static int32_t validateColumnName(char* name);
static int32_t setKillInfo(SSqlObj* pSql, struct SSqlInfo* pInfo, int32_t killType);
static int32_t setCompactVnodeInfo(SSqlObj* pSql, struct SSqlInfo* pInfo);

-static bool validateOneTags(SSqlCmd* pCmd, TAOS_FIELD* pTagField);
+static int32_t validateOneTag(SSqlCmd* pCmd, TAOS_FIELD* pTagField);
static bool hasTimestampForPointInterpQuery(SQueryInfo* pQueryInfo);
static bool hasNormalColumnFilter(SQueryInfo* pQueryInfo);
...
...
@@ -427,13 +425,12 @@ int32_t readFromFile(char *name, uint32_t *len, void **buf) {
    return TSDB_CODE_TSC_APP_ERROR;
  }
  close(fd);
  tfree(*buf);

  return TSDB_CODE_SUCCESS;
}

int32_t handleUserDefinedFunc(SSqlObj *pSql, struct SSqlInfo* pInfo) {
-  const char *msg1 = "function name is too long";
+  const char *msg1 = "invalidate function name";
  const char *msg2 = "path is too long";
  const char *msg3 = "invalid outputtype";
  const char *msg4 = "invalid script";
...
...
@@ -450,7 +447,10 @@ int32_t handleUserDefinedFunc(SSqlObj* pSql, struct SSqlInfo* pInfo) {
    }
    createInfo->name.z[createInfo->name.n] = 0;
+    // funcname's naming rule is same to column
+    if (validateColumnName(createInfo->name.z) != TSDB_CODE_SUCCESS) {
+      return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1);
+    }
    strdequote(createInfo->name.z);

    if (strlen(createInfo->name.z) >= TSDB_FUNC_NAME_LEN) {
...
...
@@ -931,7 +931,6 @@ int32_t tscValidateSqlInfo(SSqlObj* pSql, struct SSqlInfo* pInfo) {
      pQueryInfo = pCmd->active;
      pQueryInfo->pUdfInfo = pUdfInfo;
      pQueryInfo->udfCopy = true;
    }
  }
...
...
@@ -1085,6 +1084,7 @@ int32_t validateIntervalNode(SSqlObj* pSql, SQueryInfo* pQueryInfo, SSqlNode* pS
  const char* msg1 = "sliding cannot be used without interval";
  const char* msg2 = "interval cannot be less than 1 us";
  const char* msg3 = "interval value is too small";
+  const char* msg4 = "only point interpolation query requires keyword EVERY";

  SSqlCmd* pCmd = &pSql->cmd;
...
...
@@ -1116,7 +1116,6 @@ int32_t validateIntervalNode(SSqlObj* pSql, SQueryInfo* pQueryInfo, SSqlNode* pS
  }

  if (pQueryInfo->interval.intervalUnit != 'n' && pQueryInfo->interval.intervalUnit != 'y') {
    // interval cannot be less than 10 milliseconds
    if (convertTimePrecision(pQueryInfo->interval.interval, tinfo.precision, TSDB_TIME_PRECISION_MICRO) < tsMinIntervalTime) {
      return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg2);
...
@@ -1131,9 +1130,15 @@ int32_t validateIntervalNode(SSqlObj* pSql, SQueryInfo* pQueryInfo, SSqlNode* pS
    return TSDB_CODE_TSC_INVALID_OPERATION;
  }

+  bool interpQuery = tscIsPointInterpQuery(pQueryInfo);
+  if ((pSqlNode->interval.token == TK_EVERY && (!interpQuery)) ||
+      (pSqlNode->interval.token == TK_INTERVAL && interpQuery)) {
+    return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg4);
+  }

  // The following part is used to check for the invalid query expression.
  return checkInvalidExprForTimeWindow(pCmd, pQueryInfo);
}

static int32_t validateStateWindowNode(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, SSqlNode* pSqlNode, bool isStable) {
  const char* msg1 = "invalid column name";
...
...
@@ -1540,9 +1545,7 @@ static bool validateTagParams(SArray* pTagsList, SArray* pFieldList, SSqlCmd* pC
/*
 * tags name /column name is truncated in sql.y
 */
-bool validateOneTags(SSqlCmd* pCmd, TAOS_FIELD* pTagField) {
-  //const char* msg1 = "timestamp not allowed in tags";
-  const char* msg2 = "duplicated column names";
+int32_t validateOneTag(SSqlCmd* pCmd, TAOS_FIELD* pTagField) {
  const char* msg3 = "tag length too long";
  const char* msg4 = "invalid tag name";
  const char* msg5 = "invalid binary/nchar tag length";
...
...
@@ -1557,8 +1560,7 @@ bool validateOneTags(SSqlCmd* pCmd, TAOS_FIELD* pTagField) {
  // no more max columns
  if (numOfTags + numOfCols >= TSDB_MAX_COLUMNS) {
-    invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg7);
-    return false;
+    return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg7);
  }

  // no more than 6 tags
...
...
@@ -1566,8 +1568,7 @@ bool validateOneTags(SSqlCmd* pCmd, TAOS_FIELD* pTagField) {
    char msg[128] = {0};
    sprintf(msg, "tags no more than %d", TSDB_MAX_TAGS);
-    invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg);
-    return false;
+    return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg);
  }

  // no timestamp allowable
...
...
@@ -1577,8 +1578,7 @@ bool validateOneTags(SSqlCmd* pCmd, TAOS_FIELD* pTagField) {
  //}

  if ((pTagField->type < TSDB_DATA_TYPE_BOOL) || (pTagField->type > TSDB_DATA_TYPE_UBIGINT)) {
-    invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg6);
-    return false;
+    return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg6);
  }

  SSchema* pTagSchema = tscGetTableTagSchema(pTableMetaInfo->pTableMeta);
...
...
@@ -1590,20 +1590,17 @@ bool validateOneTags(SSqlCmd* pCmd, TAOS_FIELD* pTagField) {
  // length less than TSDB_MAX_TASG_LEN
  if (nLen + pTagField->bytes > TSDB_MAX_TAGS_LEN) {
-    invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg3);
-    return false;
+    return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg3);
  }

  // tags name can not be a keyword
  if (validateColumnName(pTagField->name) != TSDB_CODE_SUCCESS) {
-    invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg4);
-    return false;
+    return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg4);
  }

  // binary(val), val can not be equalled to or less than 0
  if ((pTagField->type == TSDB_DATA_TYPE_BINARY || pTagField->type == TSDB_DATA_TYPE_NCHAR) && pTagField->bytes <= 0) {
-    invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg5);
-    return false;
+    return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg5);
  }

  // field name must be unique
...
...
@@ -1611,17 +1608,16 @@ bool validateOneTags(SSqlCmd* pCmd, TAOS_FIELD* pTagField) {
  for (int32_t i = 0; i < numOfTags + numOfCols; ++i) {
    if (strncasecmp(pTagField->name, pSchema[i].name, sizeof(pTagField->name) - 1) == 0) {
-      invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg2);
-      return false;
+      //return tscErrorMsgWithCode(TSDB_CODE_TSC_DUP_COL_NAMES, tscGetErrorMsgPayload(pCmd), pTagField->name, NULL);
+      return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), "duplicated column names");
    }
  }

-  return true;
+  return TSDB_CODE_SUCCESS;
}

-bool validateOneColumn(SSqlCmd* pCmd, TAOS_FIELD* pColField) {
+int32_t validateOneColumn(SSqlCmd* pCmd, TAOS_FIELD* pColField) {
  const char* msg1 = "too many columns";
  const char* msg2 = "duplicated column names";
  const char* msg3 = "column length too long";
  const char* msg4 = "invalid data type";
  const char* msg5 = "invalid column name";
...
...
@@ -1636,18 +1632,15 @@ bool validateOneColumn(SSqlCmd* pCmd, TAOS_FIELD* pColField) {
  // no more max columns
  if (numOfCols >= TSDB_MAX_COLUMNS || numOfTags + numOfCols >= TSDB_MAX_COLUMNS) {
-    invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1);
-    return false;
+    return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1);
  }

  if (pColField->type < TSDB_DATA_TYPE_BOOL || pColField->type > TSDB_DATA_TYPE_UBIGINT) {
-    invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg4);
-    return false;
+    return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg4);
  }

  if (validateColumnName(pColField->name) != TSDB_CODE_SUCCESS) {
-    invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg5);
-    return false;
+    return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg5);
  }

  SSchema* pSchema = tscGetTableSchema(pTableMeta);
...
...
@@ -1658,25 +1651,23 @@ bool validateOneColumn(SSqlCmd* pCmd, TAOS_FIELD* pColField) {
  }

  if (pColField->bytes <= 0) {
-    invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg6);
-    return false;
+    return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg6);
  }

  // length less than TSDB_MAX_BYTES_PER_ROW
  if (nLen + pColField->bytes > TSDB_MAX_BYTES_PER_ROW) {
-    invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg3);
-    return false;
+    return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg3);
  }

  // field name must be unique
  for (int32_t i = 0; i < numOfTags + numOfCols; ++i) {
    if (strncasecmp(pColField->name, pSchema[i].name, sizeof(pColField->name) - 1) == 0) {
-      invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg2);
-      return false;
+      //return tscErrorMsgWithCode(TSDB_CODE_TSC_DUP_COL_NAMES, tscGetErrorMsgPayload(pCmd), pColField->name, NULL);
+      return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), "duplicated column names");
    }
  }

-  return true;
+  return TSDB_CODE_SUCCESS;
}
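The change running through these hunks converts the validators from `bool` functions that log an error and return `false` into `int32_t` functions that return the error code produced by the message helper, so callers can propagate it directly. A minimal sketch of the pattern, using illustrative names rather than the real TDengine helpers:

```c
#include <assert.h>

// Illustrative error codes; the real client uses TSDB_CODE_* constants.
enum { ILLUSTRATIVE_OK = 0, ILLUSTRATIVE_INVALID_OP = 0x0216 };

// Stand-in for invalidOperationMsg(): records the message and returns the
// error code, which makes every failure path a single return statement.
static int illustrative_invalid_op_msg(const char *msg) {
  (void)msg;                       // a real client copies msg into the error payload
  return ILLUSTRATIVE_INVALID_OP;
}

// After the refactor a validator returns a code instead of a bool, so the
// call site can simply do: if ((ret = validate_bytes(b)) != OK) return ret;
static int validate_bytes(int bytes) {
  if (bytes <= 0) {
    return illustrative_invalid_op_msg("invalid binary/nchar tag length");
  }
  return ILLUSTRATIVE_OK;
}
```

The benefit visible in `setAlterTableInfo` is that the specific code (e.g. a duplicate-name or length error) reaches the caller instead of being flattened to a generic `TSDB_CODE_TSC_INVALID_OPERATION`.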
/* is contained in pFieldList or not */
...
...
@@ -1692,34 +1683,6 @@ static bool has(SArray* pFieldList, int32_t startIdx, const char* name) {
static char* getAccountId(SSqlObj* pSql) { return pSql->pTscObj->acctId; }

-static char* cloneCurrentDBName(SSqlObj* pSql) {
-  char *p = NULL;
-  HttpContext *pCtx = NULL;
-
-  pthread_mutex_lock(&pSql->pTscObj->mutex);
-  STscObj *pTscObj = pSql->pTscObj;
-  switch (pTscObj->from) {
-  case TAOS_REQ_FROM_HTTP:
-    pCtx = pSql->param;
-    if (pCtx && pCtx->db[0] != '\0') {
-      char db[TSDB_ACCT_ID_LEN + TSDB_DB_NAME_LEN] = {0};
-      int32_t len = sprintf(db, "%s%s%s", pTscObj->acctId, TS_PATH_DELIMITER, pCtx->db);
-      assert(len <= sizeof(db));
-
-      p = strdup(db);
-    }
-    break;
-  default:
-    break;
-  }
-
-  if (p == NULL) {
-    p = strdup(pSql->pTscObj->db);
-  }
-  pthread_mutex_unlock(&pSql->pTscObj->mutex);
-
-  return p;
-}
-
/* length limitation, strstr cannot be applied */
static int32_t getDelimiterIndex(SStrToken* pTableName) {
  for (uint32_t i = 0; i < pTableName->n; ++i) {
...
...
@@ -6070,7 +6033,6 @@ int32_t setAlterTableInfo(SSqlObj* pSql, struct SSqlInfo* pInfo) {
  const char* msg19 = "invalid new tag name";
  const char* msg20 = "table is not super table";
  const char* msg21 = "only binary/nchar column length could be modified";
-  const char* msg22 = "new column length should be bigger than old one";
  const char* msg23 = "only column length coulbe be modified";
  const char* msg24 = "invalid binary/nchar column length";
;
...
...
@@ -6122,8 +6084,9 @@ int32_t setAlterTableInfo(SSqlObj* pSql, struct SSqlInfo* pInfo) {
    }

    TAOS_FIELD* p = taosArrayGet(pFieldList, 0);
-    if (!validateOneTags(pCmd, p)) {
-      return TSDB_CODE_TSC_INVALID_OPERATION;
+    int32_t ret = validateOneTag(pCmd, p);
+    if (ret != TSDB_CODE_SUCCESS) {
+      return ret;
    }

    tscFieldInfoAppend(&pQueryInfo->fieldsInfo, p);
...
...
@@ -6300,8 +6263,9 @@ int32_t setAlterTableInfo(SSqlObj* pSql, struct SSqlInfo* pInfo) {
    }

    TAOS_FIELD* p = taosArrayGet(pFieldList, 0);
-    if (!validateOneColumn(pCmd, p)) {
-      return TSDB_CODE_TSC_INVALID_OPERATION;
+    int32_t ret = validateOneColumn(pCmd, p);
+    if (ret != TSDB_CODE_SUCCESS) {
+      return ret;
    }

    tscFieldInfoAppend(&pQueryInfo->fieldsInfo, p);
...
...
@@ -6364,7 +6328,7 @@ int32_t setAlterTableInfo(SSqlObj* pSql, struct SSqlInfo* pInfo) {
    }

    if (pItem->bytes <= pColSchema->bytes) {
-      return invalidOperationMsg(pMsg, msg22);
+      return tscErrorMsgWithCode(TSDB_CODE_TSC_INVALID_COLUMN_LENGTH, pMsg, pItem->name, NULL);
    }

    SSchema* pSchema = (SSchema*) pTableMetaInfo->pTableMeta->schema;
...
...
@@ -6415,7 +6379,7 @@ int32_t setAlterTableInfo(SSqlObj* pSql, struct SSqlInfo* pInfo) {
    }

    if (pItem->bytes <= pColSchema->bytes) {
-      return invalidOperationMsg(pMsg, msg22);
+      return tscErrorMsgWithCode(TSDB_CODE_TSC_INVALID_TAG_LENGTH, pMsg, pItem->name, NULL);
    }

    SSchema* pSchema = tscGetTableTagSchema(pTableMetaInfo->pTableMeta);
...
...
@@ -7205,7 +7169,6 @@ static int32_t doAddGroupbyColumnsOnDemand(SSqlCmd* pCmd, SQueryInfo* pQueryInfo
  const char* msg1 = "interval not allowed in group by normal column";

  STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
  SSchema* pSchema = tscGetTableSchema(pTableMetaInfo->pTableMeta);
  SSchema* tagSchema = NULL;
...
...
@@ -8735,6 +8698,7 @@ static int32_t doValidateSubquery(SSqlNode* pSqlNode, int32_t index, SSqlObj* pS
  if (taosArrayGetSize(subInfo->pSubquery) >= 2) {
    return invalidOperationMsg(msgBuf, "not support union in subquery");
  }

  SQueryInfo* pSub = calloc(1, sizeof(SQueryInfo));
  tscInitQueryInfo(pSub);
...
...
@@ -8757,6 +8721,7 @@ static int32_t doValidateSubquery(SSqlNode* pSqlNode, int32_t index, SSqlObj* pS
  if (pTableMetaInfo1 == NULL) {
    return TSDB_CODE_TSC_OUT_OF_MEMORY;
  }

  pTableMetaInfo1->pTableMeta = extractTempTableMetaFromSubquery(pSub);
+  pTableMetaInfo1->tableMetaCapacity = tscGetTableMetaSize(pTableMetaInfo1->pTableMeta);
...
...
@@ -8822,7 +8787,7 @@ int32_t validateSqlNode(SSqlObj* pSql, SSqlNode* pSqlNode, SQueryInfo* pQueryInf
   * select server_status();
   * select server_version();
   * select client_version();
-   * select current_database();
+   * select database();
   */
  if (pSqlNode->from == NULL) {
    assert(pSqlNode->fillType == NULL && pSqlNode->pGroupby == NULL && pSqlNode->pWhere == NULL &&
...
...
@@ -8840,7 +8805,7 @@ int32_t validateSqlNode(SSqlObj* pSql, SSqlNode* pSqlNode, SQueryInfo* pQueryInf
    // check if there is 3 level select
    SRelElementPair* subInfo = taosArrayGet(pSqlNode->from->list, i);
    SSqlNode* p = taosArrayGetP(subInfo->pSubquery, 0);
-    if (p->from->type == SQL_NODE_FROM_SUBQUERY){
+    if (p->from->type == SQL_NODE_FROM_SUBQUERY) {
      return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg9);
    }
...
...
@@ -8933,6 +8898,15 @@ int32_t validateSqlNode(SSqlObj* pSql, SSqlNode* pSqlNode, SQueryInfo* pQueryInf
    }
  }

+  // disable group result mixed up if interval/session window query exists.
+  if (isTimeWindowQuery(pQueryInfo)) {
+    size_t num = taosArrayGetSize(pQueryInfo->pUpstream);
+    for (int32_t i = 0; i < num; ++i) {
+      SQueryInfo* pUp = taosArrayGetP(pQueryInfo->pUpstream, i);
+      pUp->multigroupResult = false;
+    }
+  }

  // parse the having clause in the first place
  int32_t joinQuery = (pSqlNode->from != NULL && taosArrayGetSize(pSqlNode->from->list) > 1);
  if (validateHavingClause(pQueryInfo, pSqlNode->pHaving, pCmd, pSqlNode->pSelNodeList, joinQuery, timeWindowQuery) !=
...
...
src/client/src/tscServer.c
...
...
@@ -332,188 +332,35 @@ int tscSendMsgToServer(SSqlObj *pSql) {
                      .code = 0};
  rpcSendRequest(pObj->pRpcObj->pDnodeConn, &pSql->epSet, &rpcMsg, &pSql->rpcRid);
  return TSDB_CODE_SUCCESS;
}
//static void doProcessMsgFromServer(SSchedMsg* pSchedMsg) {
// SRpcMsg* rpcMsg = pSchedMsg->ahandle;
// SRpcEpSet* pEpSet = pSchedMsg->thandle;
//
// TSDB_CACHE_PTR_TYPE handle = (TSDB_CACHE_PTR_TYPE) rpcMsg->ahandle;
// SSqlObj* pSql = (SSqlObj*)taosAcquireRef(tscObjRef, handle);
// if (pSql == NULL) {
// rpcFreeCont(rpcMsg->pCont);
// free(rpcMsg);
// free(pEpSet);
// return;
// }
//
// assert(pSql->self == handle);
//
// STscObj *pObj = pSql->pTscObj;
// SSqlRes *pRes = &pSql->res;
// SSqlCmd *pCmd = &pSql->cmd;
//
// pSql->rpcRid = -1;
//
// if (pObj->signature != pObj) {
// tscDebug("0x%"PRIx64" DB connection is closed, cmd:%d pObj:%p signature:%p", pSql->self, pCmd->command, pObj, pObj->signature);
//
// taosRemoveRef(tscObjRef, handle);
// taosReleaseRef(tscObjRef, handle);
// rpcFreeCont(rpcMsg->pCont);
// free(rpcMsg);
// free(pEpSet);
// return;
// }
//
// SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd);
// if (pQueryInfo != NULL && pQueryInfo->type == TSDB_QUERY_TYPE_FREE_RESOURCE) {
// tscDebug("0x%"PRIx64" sqlObj needs to be released or DB connection is closed, cmd:%d type:%d, pObj:%p signature:%p",
// pSql->self, pCmd->command, pQueryInfo->type, pObj, pObj->signature);
//
// taosRemoveRef(tscObjRef, handle);
// taosReleaseRef(tscObjRef, handle);
// rpcFreeCont(rpcMsg->pCont);
// free(rpcMsg);
// free(pEpSet);
// return;
// }
//
// if (pEpSet) {
// if (!tscEpSetIsEqual(&pSql->epSet, pEpSet)) {
// if (pCmd->command < TSDB_SQL_MGMT) {
// tscUpdateVgroupInfo(pSql, pEpSet);
// } else {
// tscUpdateMgmtEpSet(pSql, pEpSet);
// }
// }
// }
//
// int32_t cmd = pCmd->command;
//
// // set the flag to denote that sql string needs to be re-parsed and build submit block with table schema
// if (cmd == TSDB_SQL_INSERT && rpcMsg->code == TSDB_CODE_TDB_TABLE_RECONFIGURE) {
// pSql->cmd.insertParam.schemaAttached = 1;
// }
//
// // single table query error need to be handled here.
// if ((cmd == TSDB_SQL_SELECT || cmd == TSDB_SQL_UPDATE_TAGS_VAL) &&
// (((rpcMsg->code == TSDB_CODE_TDB_INVALID_TABLE_ID || rpcMsg->code == TSDB_CODE_VND_INVALID_VGROUP_ID)) ||
// rpcMsg->code == TSDB_CODE_RPC_NETWORK_UNAVAIL || rpcMsg->code == TSDB_CODE_APP_NOT_READY)) {
//
// // 1. super table subquery
// // 2. nest queries are all not updated the tablemeta and retry parse the sql after cleanup local tablemeta/vgroup id buffer
// if ((TSDB_QUERY_HAS_TYPE(pQueryInfo->type, (TSDB_QUERY_TYPE_STABLE_SUBQUERY | TSDB_QUERY_TYPE_SUBQUERY |
// TSDB_QUERY_TYPE_TAG_FILTER_QUERY)) &&
// !TSDB_QUERY_HAS_TYPE(pQueryInfo->type, TSDB_QUERY_TYPE_PROJECTION_QUERY)) ||
// (TSDB_QUERY_HAS_TYPE(pQueryInfo->type, TSDB_QUERY_TYPE_NEST_SUBQUERY)) || (TSDB_QUERY_HAS_TYPE(pQueryInfo->type, TSDB_QUERY_TYPE_STABLE_SUBQUERY) && pQueryInfo->distinct)) {
// // do nothing in case of super table subquery
// } else {
// pSql->retry += 1;
// tscWarn("0x%" PRIx64 " it shall renew table meta, code:%s, retry:%d", pSql->self, tstrerror(rpcMsg->code), pSql->retry);
//
// pSql->res.code = rpcMsg->code; // keep the previous error code
// if (pSql->retry > pSql->maxRetry) {
// tscError("0x%" PRIx64 " max retry %d reached, give up", pSql->self, pSql->maxRetry);
// } else {
// // wait for a little bit moment and then retry
// // todo do not sleep in rpc callback thread, add this process into queue to process
// if (rpcMsg->code == TSDB_CODE_APP_NOT_READY || rpcMsg->code == TSDB_CODE_VND_INVALID_VGROUP_ID) {
// int32_t duration = getWaitingTimeInterval(pSql->retry);
// taosMsleep(duration);
// }
//
// pSql->retryReason = rpcMsg->code;
// rpcMsg->code = tscRenewTableMeta(pSql, 0);
// // if there is an error occurring, proceed to the following error handling procedure.
// if (rpcMsg->code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
// taosReleaseRef(tscObjRef, handle);
// rpcFreeCont(rpcMsg->pCont);
// free(rpcMsg);
// free(pEpSet);
// return;
// }
// }
// }
// }
//
// pRes->rspLen = 0;
//
// if (pRes->code == TSDB_CODE_TSC_QUERY_CANCELLED) {
// tscDebug("0x%"PRIx64" query is cancelled, code:%s", pSql->self, tstrerror(pRes->code));
// } else {
// pRes->code = rpcMsg->code;
// }
//
// if (pRes->code == TSDB_CODE_SUCCESS) {
// tscDebug("0x%"PRIx64" reset retry counter to be 0 due to success rsp, old:%d", pSql->self, pSql->retry);
// pSql->retry = 0;
// }
//
// if (pRes->code != TSDB_CODE_TSC_QUERY_CANCELLED) {
// assert(rpcMsg->msgType == pCmd->msgType + 1);
// pRes->code = rpcMsg->code;
// pRes->rspType = rpcMsg->msgType;
// pRes->rspLen = rpcMsg->contLen;
//
// if (pRes->rspLen > 0 && rpcMsg->pCont) {
// char *tmp = (char *)realloc(pRes->pRsp, pRes->rspLen);
// if (tmp == NULL) {
// pRes->code = TSDB_CODE_TSC_OUT_OF_MEMORY;
// } else {
// pRes->pRsp = tmp;
// memcpy(pRes->pRsp, rpcMsg->pCont, pRes->rspLen);
// }
// } else {
// tfree(pRes->pRsp);
// }
//
// /*
// * There is not response callback function for submit response.
// * The actual inserted number of points is the first number.
// */
// if (rpcMsg->msgType == TSDB_MSG_TYPE_SUBMIT_RSP && pRes->pRsp != NULL) {
// SShellSubmitRspMsg *pMsg = (SShellSubmitRspMsg*)pRes->pRsp;
// pMsg->code = htonl(pMsg->code);
// pMsg->numOfRows = htonl(pMsg->numOfRows);
// pMsg->affectedRows = htonl(pMsg->affectedRows);
// pMsg->failedRows = htonl(pMsg->failedRows);
// pMsg->numOfFailedBlocks = htonl(pMsg->numOfFailedBlocks);
//
// pRes->numOfRows += pMsg->affectedRows;
// tscDebug("0x%"PRIx64" SQL cmd:%s, code:%s inserted rows:%d rspLen:%d", pSql->self, sqlCmd[pCmd->command],
// tstrerror(pRes->code), pMsg->affectedRows, pRes->rspLen);
// } else {
// tscDebug("0x%"PRIx64" SQL cmd:%s, code:%s rspLen:%d", pSql->self, sqlCmd[pCmd->command], tstrerror(pRes->code), pRes->rspLen);
// }
// }
//
// if (pRes->code == TSDB_CODE_SUCCESS && tscProcessMsgRsp[pCmd->command]) {
// rpcMsg->code = (*tscProcessMsgRsp[pCmd->command])(pSql);
// }
//
// bool shouldFree = tscShouldBeFreed(pSql);
// if (rpcMsg->code != TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
// if (rpcMsg->code != TSDB_CODE_SUCCESS) {
// pRes->code = rpcMsg->code;
// }
// rpcMsg->code = (pRes->code == TSDB_CODE_SUCCESS) ? (int32_t)pRes->numOfRows : pRes->code;
// (*pSql->fp)(pSql->param, pSql, rpcMsg->code);
// }
//
// if (shouldFree) { // in case of table-meta/vgrouplist query, automatically free it
// tscDebug("0x%"PRIx64" sqlObj is automatically freed", pSql->self);
// taosRemoveRef(tscObjRef, handle);
// }
//
// taosReleaseRef(tscObjRef, handle);
// rpcFreeCont(rpcMsg->pCont);
// free(rpcMsg);
// free(pEpSet);
//}
// handle three situation
// 1. epset retry, only return last failure ep
// 2. no epset retry, like 'taos -h invalidFqdn', return invalidFqdn
// 3. other situation, no expected
void tscSetFqdnErrorMsg(SSqlObj* pSql, SRpcEpSet* pEpSet) {
  SSqlCmd* pCmd = &pSql->cmd;
  SSqlRes* pRes = &pSql->res;

  char* msgBuf = tscGetErrorMsgPayload(pCmd);

  if (pEpSet) {
    sprintf(msgBuf, "%s \"%s\"", tstrerror(pRes->code), pEpSet->fqdn[(pEpSet->inUse) % (pEpSet->numOfEps)]);
  } else if (pCmd->command >= TSDB_SQL_MGMT) {
    SRpcEpSet tEpset;

    SRpcCorEpSet* pCorEpSet = pSql->pTscObj->tscCorMgmtEpSet;
    taosCorBeginRead(&pCorEpSet->version);
    tEpset = pCorEpSet->epSet;
    taosCorEndRead(&pCorEpSet->version);

    sprintf(msgBuf, "%s \"%s\"", tstrerror(pRes->code), tEpset.fqdn[(tEpset.inUse) % (tEpset.numOfEps)]);
  } else {
    sprintf(msgBuf, "%s", tstrerror(pRes->code));
  }
}
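The three-situation comment above boils down to one selection rule: when an epset is available, report the FQDN at index `inUse % numOfEps`, otherwise report only the error string. A minimal standalone sketch of that rule (the `MiniEpSet` struct and function name are illustrative stand-ins, not the real `SRpcEpSet` API):

```c
#include <stdio.h>
#include <string.h>

// Simplified stand-in for SRpcEpSet, holding up to four endpoint FQDNs.
typedef struct {
  int  inUse;
  int  numOfEps;
  char fqdn[4][64];
} MiniEpSet;

// Mirrors the branch taken by tscSetFqdnErrorMsg: append the last-tried
// endpoint's FQDN to the error string when an epset is known.
static void format_fqdn_error(char *buf, size_t cap, const char *err, const MiniEpSet *ep) {
  if (ep != NULL && ep->numOfEps > 0) {
    snprintf(buf, cap, "%s \"%s\"", err, ep->fqdn[ep->inUse % ep->numOfEps]);
  } else {
    snprintf(buf, cap, "%s", err);  // no epset: just the error text
  }
}
```

The modulo on `inUse` matters because the RPC layer may have advanced `inUse` past `numOfEps - 1` while retrying, so the index must be wrapped before reading the FQDN array.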
void tscProcessMsgFromServer(SRpcMsg *rpcMsg, SRpcEpSet *pEpSet) {
  TSDB_CACHE_PTR_TYPE handle = (TSDB_CACHE_PTR_TYPE) rpcMsg->ahandle;
...
...
@@ -666,28 +513,13 @@ void tscProcessMsgFromServer(SRpcMsg *rpcMsg, SRpcEpSet *pEpSet) {
   if (rpcMsg->code != TSDB_CODE_SUCCESS) {
     pRes->code = rpcMsg->code;
   }
   rpcMsg->code = (pRes->code == TSDB_CODE_SUCCESS) ? (int32_t)pRes->numOfRows : pRes->code;
-  if (pRes->code == TSDB_CODE_RPC_FQDN_ERROR) {
+  if (rpcMsg->code == TSDB_CODE_RPC_FQDN_ERROR) {
     tscAllocPayload(pCmd, TSDB_FQDN_LEN + 64);
-    // handle three situation
-    // 1. epset retry, only return last failure ep
-    // 2. no epset retry, like 'taos -h invalidFqdn', return invalidFqdn
-    // 3. other situation, no expected
-    if (pEpSet) {
-      sprintf(tscGetErrorMsgPayload(pCmd), "%s \"%s\"", tstrerror(pRes->code),
-              pEpSet->fqdn[(pEpSet->inUse) % (pEpSet->numOfEps)]);
-    } else if (pCmd->command >= TSDB_SQL_MGMT) {
-      SRpcEpSet tEpset;
-      SRpcCorEpSet *pCorEpSet = pSql->pTscObj->tscCorMgmtEpSet;
-      taosCorBeginRead(&pCorEpSet->version);
-      tEpset = pCorEpSet->epSet;
-      taosCorEndRead(&pCorEpSet->version);
-      sprintf(tscGetErrorMsgPayload(pCmd), "%s \"%s\"", tstrerror(pRes->code),
-              tEpset.fqdn[(tEpset.inUse) % (tEpset.numOfEps)]);
-    } else {
-      sprintf(tscGetErrorMsgPayload(pCmd), "%s", tstrerror(pRes->code));
-    }
+    tscSetFqdnErrorMsg(pSql, pEpSet);
   }
   (*pSql->fp)(pSql->param, pSql, rpcMsg->code);
 }
...
...
@@ -1571,7 +1403,6 @@ int32_t tscBuildSyncDbReplicaMsg(SSqlObj* pSql, SSqlInfo *pInfo) {
}
int32_t tscBuildShowMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
  STscObj *pObj = pSql->pTscObj;
  SSqlCmd *pCmd = &pSql->cmd;
  pCmd->msgType = TSDB_MSG_TYPE_CM_SHOW;
  pCmd->payloadLen = sizeof(SShowMsg) + 100;
...
...
@@ -1594,9 +1425,9 @@ int32_t tscBuildShowMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
}
   if (tNameIsEmpty(&pTableMetaInfo->name)) {
-    pthread_mutex_lock(&pObj->mutex);
-    tstrncpy(pShowMsg->db, pObj->db, sizeof(pShowMsg->db));
-    pthread_mutex_unlock(&pObj->mutex);
+    char *p = cloneCurrentDBName(pSql);
+    tstrncpy(pShowMsg->db, p, sizeof(pShowMsg->db));
+    tfree(p);
   } else {
     tNameGetFullDbName(&pTableMetaInfo->name, pShowMsg->db);
   }
...
...
@@ -3093,12 +2924,14 @@ int32_t tscGetTableMetaImpl(SSqlObj* pSql, STableMetaInfo *pTableMetaInfo, bool
   if (pTableMetaInfo->tableMetaCapacity != 0 && pTableMetaInfo->pTableMeta != NULL) {
     memset(pTableMetaInfo->pTableMeta, 0, pTableMetaInfo->tableMetaCapacity);
   }

   if (NULL == taosHashGetCloneExt(tscTableMetaMap, name, len, NULL, (void **)&(pTableMetaInfo->pTableMeta), &pTableMetaInfo->tableMetaCapacity)) {
     tfree(pTableMetaInfo->pTableMeta);
   }

   STableMeta* pMeta   = pTableMetaInfo->pTableMeta;
   STableMeta* pSTMeta = (STableMeta *)(pSql->pBuf);

   if (pMeta && pMeta->id.uid > 0) {
     // in case of child table, here only get the
     if (pMeta->tableType == TSDB_CHILD_TABLE) {
...
...
@@ -3108,6 +2941,8 @@ int32_t tscGetTableMetaImpl(SSqlObj* pSql, STableMetaInfo *pTableMetaInfo, bool
       return getTableMetaFromMnode(pSql, pTableMetaInfo, autocreate);
     }
   }

   tscDebug("0x%" PRIx64 " %s retrieve tableMeta from cache, numOfCols:%d, numOfTags:%d", pSql->self, name,
            pMeta->tableInfo.numOfColumns, pMeta->tableInfo.numOfTags);

   return TSDB_CODE_SUCCESS;
 }
...
...
src/client/src/tscSubquery.c
...
...
@@ -2444,7 +2444,11 @@ static void doSendQueryReqs(SSchedMsg* pSchedMsg) {
   SSqlObj* pSql = pSchedMsg->ahandle;
   SPair* p = pSchedMsg->msg;

   for (int32_t i = p->first; i < p->second; ++i) {
+    if (i >= pSql->subState.numOfSub) {
+      tfree(p);
+      return;
+    }
     SSqlObj* pSub = pSql->pSubs[i];
     SRetrieveSupport* pSupport = pSub->param;
...
@@ -2584,7 +2588,12 @@ int32_t tscHandleMasterSTableQuery(SSqlObj *pSql) {
   int32_t numOfTasks = (pState->numOfSub + MAX_REQUEST_PER_TASK - 1) / MAX_REQUEST_PER_TASK;
   assert(numOfTasks >= 1);

-  int32_t num = (pState->numOfSub / numOfTasks) + 1;
+  int32_t num;
+  if (pState->numOfSub / numOfTasks == MAX_REQUEST_PER_TASK) {
+    num = MAX_REQUEST_PER_TASK;
+  } else {
+    num = pState->numOfSub / numOfTasks + 1;
+  }
   tscDebug("0x%" PRIx64 " query will be sent by %d threads", pSql->self, numOfTasks);

   for (int32_t j = 0; j < numOfTasks; ++j) {
...
...
@@ -2740,7 +2749,7 @@ void tscHandleSubqueryError(SRetrieveSupport *trsupport, SSqlObj *pSql, int numO
     }
   } else {  // reach the maximum retry count, abort
     atomic_val_compare_exchange_32(&pParentSql->res.code, TSDB_CODE_SUCCESS, numOfRows);
-    tscError("0x%"PRIx64" sub:0x%"PRIx64" retrieve failed,code:%s,orderOfSub:%d failed.no more retry,set global code:%s",
-             pParentSql->self, pSql->self,
+    tscError("0x%"PRIx64" sub:0x%"PRIx64" retrieve failed, code:%s, orderOfSub:%d FAILED. no more retry, set global code:%s",
+             pParentSql->self, pSql->self,
              tstrerror(numOfRows), subqueryIndex, tstrerror(pParentSql->res.code));
   }
 }
...
...
@@ -2987,7 +2996,7 @@ static void tscRetrieveFromDnodeCallBack(void *param, TAOS_RES *tres, int numOfR
   tscDebug("0x%"PRIx64" sub:0x%"PRIx64" retrieve numOfRows:%d totalNumOfRows:%" PRIu64 " from ep:%s, orderOfSub:%d",
            pParentSql->self, pSql->self, pRes->numOfRows, pState->numOfRetrievedRows,
            pSql->epSet.fqdn[pSql->epSet.inUse], idx);

-  if (num > tsMaxNumOfOrderedResults && /*tscIsProjectionQueryOnSTable(pQueryInfo, 0) &&*/ !(tscGetQueryInfo(&pParentSql->cmd)->distinct)) {
+  if (num > tsMaxNumOfOrderedResults && tscIsProjectionQueryOnSTable(pQueryInfo, 0) && !(tscGetQueryInfo(&pParentSql->cmd)->distinct)) {
     tscError("0x%"PRIx64" sub:0x%"PRIx64" num of OrderedRes is too many, max allowed:%" PRId32 " , current:%" PRId64,
              pParentSql->self, pSql->self, tsMaxNumOfOrderedResults, num);
     tscAbortFurtherRetryRetrieval(trsupport, tres, TSDB_CODE_TSC_SORTED_RES_TOO_MANY);
...
...
src/client/src/tscUtil.c
...
...
@@ -29,6 +29,7 @@
 #include "tsclient.h"
 #include "ttimer.h"
 #include "ttokendef.h"
+#include "httpInt.h"

 static void freeQueryInfoImpl(SQueryInfo* pQueryInfo);
...
...
@@ -3181,6 +3182,7 @@ void tscInitQueryInfo(SQueryInfo* pQueryInfo) {
   pQueryInfo->slimit.offset = 0;
   pQueryInfo->pUpstream = taosArrayInit(4, POINTER_BYTES);
   pQueryInfo->window = TSWINDOW_INITIALIZER;
+  pQueryInfo->multigroupResult = true;
 }

 int32_t tscAddQueryInfo(SSqlCmd* pCmd) {
...
...
@@ -3192,7 +3194,6 @@ int32_t tscAddQueryInfo(SSqlCmd* pCmd) {
   }

   tscInitQueryInfo(pQueryInfo);
-  pQueryInfo->msg = pCmd->payload;  // pointer to the parent error message buffer

   if (pCmd->pQueryInfo == NULL) {
...
...
@@ -3241,6 +3242,7 @@ static void freeQueryInfoImpl(SQueryInfo* pQueryInfo) {
   taosArrayDestroy(pQueryInfo->pUpstream);
   pQueryInfo->pUpstream = NULL;
+  pQueryInfo->bufLen = 0;
 }

 void tscClearSubqueryInfo(SSqlCmd* pCmd) {
...
...
@@ -3275,6 +3277,7 @@ int32_t tscQueryInfoCopy(SQueryInfo* pQueryInfo, const SQueryInfo* pSrc) {
   pQueryInfo->window = pSrc->window;
   pQueryInfo->sessionWindow = pSrc->sessionWindow;
   pQueryInfo->pTableMetaInfo = NULL;
+  pQueryInfo->multigroupResult = pSrc->multigroupResult;

   pQueryInfo->bufLen = pSrc->bufLen;
   pQueryInfo->orderProjectQuery = pSrc->orderProjectQuery;
...
...
@@ -3665,19 +3668,20 @@ SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, __async_cb_func_t
   pNewQueryInfo->limit = pQueryInfo->limit;
   pNewQueryInfo->slimit = pQueryInfo->slimit;
   pNewQueryInfo->order = pQueryInfo->order;
-  pNewQueryInfo->vgroupLimit = pQueryInfo->vgroupLimit;
   pNewQueryInfo->tsBuf = NULL;
   pNewQueryInfo->fillType = pQueryInfo->fillType;
   pNewQueryInfo->fillVal = NULL;
-  pNewQueryInfo->numOfFillVal = 0;
   pNewQueryInfo->clauseLimit = pQueryInfo->clauseLimit;
   pNewQueryInfo->prjOffset = pQueryInfo->prjOffset;
+  pNewQueryInfo->numOfFillVal = 0;
   pNewQueryInfo->numOfTables = 0;
   pNewQueryInfo->pTableMetaInfo = NULL;
   pNewQueryInfo->bufLen = pQueryInfo->bufLen;
-  pNewQueryInfo->buf = malloc(pQueryInfo->bufLen);
+  pNewQueryInfo->vgroupLimit = pQueryInfo->vgroupLimit;
   pNewQueryInfo->distinct = pQueryInfo->distinct;
+  pNewQueryInfo->multigroupResult = pQueryInfo->multigroupResult;
+  pNewQueryInfo->buf = malloc(pQueryInfo->bufLen);

   if (pNewQueryInfo->buf == NULL) {
     terrno = TSDB_CODE_TSC_OUT_OF_MEMORY;
     goto _error;
...
...
@@ -4165,6 +4169,31 @@ int32_t tscInvalidOperationMsg(char* msg, const char* additionalInfo, const char
return
TSDB_CODE_TSC_INVALID_OPERATION
;
}
int32_t
tscErrorMsgWithCode
(
int32_t
code
,
char
*
dstBuffer
,
const
char
*
errMsg
,
const
char
*
sql
)
{
const
char
*
msgFormat1
=
"%s:%s"
;
const
char
*
msgFormat2
=
"%s:
\'
%s
\'
(%s)"
;
const
char
*
msgFormat3
=
"%s:
\'
%s
\'
"
;
const
int32_t
BACKWARD_CHAR_STEP
=
0
;
if
(
sql
==
NULL
)
{
assert
(
errMsg
!=
NULL
);
sprintf
(
dstBuffer
,
msgFormat1
,
tstrerror
(
code
),
errMsg
);
return
code
;
}
char
buf
[
64
]
=
{
0
};
// only extract part of sql string
strncpy
(
buf
,
(
sql
-
BACKWARD_CHAR_STEP
),
tListLen
(
buf
)
-
1
);
if
(
errMsg
!=
NULL
)
{
sprintf
(
dstBuffer
,
msgFormat2
,
tstrerror
(
code
),
buf
,
errMsg
);
}
else
{
sprintf
(
dstBuffer
,
msgFormat3
,
tstrerror
(
code
),
buf
);
// no additional information for invalid sql error
}
return
code
;
}
bool tscHasReachLimitation(SQueryInfo* pQueryInfo, SSqlRes* pRes) {
  assert(pQueryInfo != NULL && pQueryInfo->clauseLimit != 0);
  return (pQueryInfo->clauseLimit > 0 && pRes->numOfClauseTotal >= pQueryInfo->clauseLimit);
...
...
@@ -4523,7 +4552,7 @@ int32_t tscCreateTableMetaFromSTableMeta(STableMeta** ppChild, const char* name,
   STableMeta* pChild = *ppChild;

   size_t sz = (p != NULL) ? tscGetTableMetaSize(p) : 0;  // ppSTableBuf's actual capacity may be larger than sz; don't care
-  if (sz != 0) {
+  if (p != NULL && sz != 0) {
     memset((char*)p, 0, sz);
   }
}
...
...
@@ -4811,6 +4840,7 @@ int32_t tscCreateQueryFromQueryInfo(SQueryInfo* pQueryInfo, SQueryAttr* pQueryAt
   pQueryAttr->distinct = pQueryInfo->distinct;
   pQueryAttr->sw = pQueryInfo->sessionWindow;
   pQueryAttr->stateWindow = pQueryInfo->stateWindow;
+  pQueryAttr->multigroupResult = pQueryInfo->multigroupResult;

   pQueryAttr->numOfCols = numOfCols;
   pQueryAttr->numOfOutput = numOfOutput;
...
...
@@ -5083,3 +5113,31 @@ void tscRemoveCachedTableMeta(STableMetaInfo* pTableMetaInfo, uint64_t id) {
   taosHashRemove(tscTableMetaMap, fname, len);
   tscDebug("0x%" PRIx64 " remove table meta %s, numOfRemain:%d", id, fname, (int32_t) taosHashGetSize(tscTableMetaMap));
 }

 char* cloneCurrentDBName(SSqlObj* pSql) {
   char *p = NULL;
   HttpContext *pCtx = NULL;

   pthread_mutex_lock(&pSql->pTscObj->mutex);
   STscObj *pTscObj = pSql->pTscObj;
   switch (pTscObj->from) {
     case TAOS_REQ_FROM_HTTP:
       pCtx = pSql->param;
       if (pCtx && pCtx->db[0] != '\0') {
         char db[TSDB_ACCT_ID_LEN + TSDB_DB_NAME_LEN] = {0};
         int32_t len = sprintf(db, "%s%s%s", pTscObj->acctId, TS_PATH_DELIMITER, pCtx->db);
         assert(len <= sizeof(db));

         p = strdup(db);
       }
       break;
     default:
       break;
   }

   if (p == NULL) {
     p = strdup(pSql->pTscObj->db);
   }
   pthread_mutex_unlock(&pSql->pTscObj->mutex);

   return p;
 }
src/common/inc/tglobal.h
...
...
@@ -134,6 +134,7 @@ extern int32_t tsHttpMaxThreads;
 extern int8_t tsHttpEnableCompress;
 extern int8_t tsHttpEnableRecordSql;
 extern int8_t tsTelegrafUseFieldNum;
+extern int8_t tsHttpDbNameMandatory;

 // mqtt
 extern int8_t tsEnableMqttModule;
...
...
src/common/src/tglobal.c
...
...
@@ -179,6 +179,7 @@ int32_t tsHttpMaxThreads = 2;
 int8_t tsHttpEnableCompress = 1;
 int8_t tsHttpEnableRecordSql = 0;
 int8_t tsTelegrafUseFieldNum = 0;
+int8_t tsHttpDbNameMandatory = 0;

 // mqtt
 int8_t tsEnableMqttModule = 0;  // not finished yet, not started it by default
...
...
@@ -1291,6 +1292,16 @@ static void doInitGlobalConfig(void) {
   cfg.unitType = TAOS_CFG_UTYPE_NONE;
   taosInitConfigOption(cfg);

+  cfg.option = "httpDbNameMandatory";
+  cfg.ptr = &tsHttpDbNameMandatory;
+  cfg.valType = TAOS_CFG_VTYPE_INT8;
+  cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG;
+  cfg.minValue = 0;
+  cfg.maxValue = 1;
+  cfg.ptrLength = 0;
+  cfg.unitType = TAOS_CFG_UTYPE_NONE;
+  taosInitConfigOption(cfg);

   // debug flag
   cfg.option = "numOfLogLines";
   cfg.ptr = &tsNumOfLogLines;
...
...
go @ b8f76da4 (compare: 050667e5...b8f76da4)
-Subproject commit 050667e5b4d0eafa5387e4283e713559b421203f
+Subproject commit b8f76da4a708d158ec3cc4b844571dc4414e36b4
hivemq-tdengine-extension @ ce520101 (compare: b62a26ec...ce520101)
-Subproject commit b62a26ecc164a310104df57691691b237e091c89
+Subproject commit ce5201014136503d34fecbd56494b67b4961056c
src/connector/jdbc/pom.xml
...
...
@@ -117,7 +117,6 @@
         <exclude>**/DatetimeBefore1970Test.java</exclude>
         <exclude>**/FailOverTest.java</exclude>
         <exclude>**/InvalidResultSetPointerTest.java</exclude>
         <exclude>**/RestfulConnectionTest.java</exclude>
         <exclude>**/TSDBJNIConnectorTest.java</exclude>
         <exclude>**/TaosInfoMonitorTest.java</exclude>
         <exclude>**/UnsignedNumberJniTest.java</exclude>
...
...
src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBError.java
...
...
@@ -40,13 +40,13 @@ public class TSDBError {
        TSDBErrorMap.put(TSDBErrorNumbers.ERROR_SUBSCRIBE_FAILED, "failed to create subscription");
        TSDBErrorMap.put(TSDBErrorNumbers.ERROR_UNSUPPORTED_ENCODING, "Unsupported encoding");
-       TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_TDENGINE_ERROR, "internal error of database");
+       TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_TDENGINE_ERROR, "internal error of database, please see taoslog for more details");
        TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_CONNECTION_NULL, "JNI connection is NULL");
        TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_RESULT_SET_NULL, "JNI result set is NULL");
        TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_NUM_OF_FIELDS_0, "invalid num of fields");
        TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_SQL_NULL, "empty sql string");
        TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_FETCH_END, "fetch to the end of resultSet");
-       TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_OUT_OF_MEMORY, "JNI alloc memory failed");
+       TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_OUT_OF_MEMORY, "JNI alloc memory failed, please see taoslog for more details");
    }

    public static SQLException createSQLException(int errorCode) {
...
...
src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBJNIConnector.java
...
...
@@ -278,25 +278,20 @@ public class TSDBJNIConnector {
     private native int validateCreateTableSqlImp(long connection, byte[] sqlBytes);

     public long prepareStmt(String sql) throws SQLException {
-        long stmt;
-        try {
-            stmt = prepareStmtImp(sql.getBytes(), this.taos);
-        } catch (Exception e) {
-            e.printStackTrace();
-            throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_ENCODING);
-        }
+        long stmt = prepareStmtImp(sql.getBytes(), this.taos);
+
         if (stmt == TSDBConstants.JNI_CONNECTION_NULL) {
-            throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_JNI_CONNECTION_NULL);
+            throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_JNI_CONNECTION_NULL, "connection already closed");
         }
         if (stmt == TSDBConstants.JNI_SQL_NULL) {
             throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_JNI_SQL_NULL);
         }
         if (stmt == TSDBConstants.JNI_OUT_OF_MEMORY) {
             throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_JNI_OUT_OF_MEMORY);
         }
         if (stmt == TSDBConstants.JNI_TDENGINE_ERROR) {
             throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_JNI_TDENGINE_ERROR);
         }
         return stmt;
     }
...
...
@@ -313,8 +308,7 @@ public class TSDBJNIConnector {
     private native int setBindTableNameImp(long stmt, String name, long conn);

     public void setBindTableNameAndTags(long stmt, String tableName, int numOfTags, ByteBuffer tags, ByteBuffer typeList, ByteBuffer lengthList, ByteBuffer nullList) throws SQLException {
         int code = setTableNameTagsImp(stmt, tableName, numOfTags, tags.array(), typeList.array(), lengthList.array(), nullList.array(), this.taos);
         if (code != TSDBConstants.JNI_SUCCESS) {
             throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNKNOWN, "failed to bind table name and corresponding tags");
         }
...
...
src/connector/jdbc/src/test/java/com/taosdata/jdbc/cases/UseNowInsertTimestampTest.java
0 → 100644
package com.taosdata.jdbc.cases;

import org.junit.Before;
import org.junit.Test;

import java.sql.*;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

public class UseNowInsertTimestampTest {
    String url = "jdbc:TAOS://127.0.0.1:6030/?user=root&password=taosdata";

    @Test
    public void millisec() {
        try (Connection conn = DriverManager.getConnection(url)) {
            Statement stmt = conn.createStatement();
            stmt.execute("drop database if exists test");
            stmt.execute("create database if not exists test precision 'ms'");
            stmt.execute("use test");
            stmt.execute("create table weather(ts timestamp, f1 int)");
            stmt.execute("insert into weather values(now, 1)");

            ResultSet rs = stmt.executeQuery("select * from weather");
            rs.next();
            Timestamp ts = rs.getTimestamp("ts");
            assertEquals(13, Long.toString(ts.getTime()).length());

            int nanos = ts.getNanos();
            assertEquals(0, nanos % 1000_000);

            stmt.execute("drop database if exists test");
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void microsec() {
        try (Connection conn = DriverManager.getConnection(url)) {
            Statement stmt = conn.createStatement();
            stmt.execute("drop database if exists test");
            stmt.execute("create database if not exists test precision 'us'");
            stmt.execute("use test");
            stmt.execute("create table weather(ts timestamp, f1 int)");
            stmt.execute("insert into weather values(now, 1)");

            ResultSet rs = stmt.executeQuery("select * from weather");
            rs.next();
            Timestamp ts = rs.getTimestamp("ts");
            int nanos = ts.getNanos();
            assertEquals(0, nanos % 1000);

            stmt.execute("drop database if exists test");
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    @Test
    public void nanosec() {
        try (Connection conn = DriverManager.getConnection(url)) {
            Statement stmt = conn.createStatement();
            stmt.execute("drop database if exists test");
            stmt.execute("create database if not exists test precision 'ns'");
            stmt.execute("use test");
            stmt.execute("create table weather(ts timestamp, f1 int)");
            stmt.execute("insert into weather values(now, 1)");

            ResultSet rs = stmt.executeQuery("select * from weather");
            rs.next();
            Timestamp ts = rs.getTimestamp("ts");
            int nanos = ts.getNanos();
            assertTrue(nanos % 1000 != 0);

            stmt.execute("drop database if exists test");
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
src/connector/jdbc/src/test/java/com/taosdata/jdbc/rs/RestfulConnectionTest.java
...
...
@@ -9,6 +9,8 @@ import org.junit.Test;
 import java.sql.*;
 import java.util.Properties;

+import static org.junit.Assert.assertEquals;
+
 public class RestfulConnectionTest {
     private static final String host = "127.0.0.1";
...
...
@@ -26,7 +28,7 @@ public class RestfulConnectionTest {
             ResultSet rs = stmt.executeQuery("select server_status()");
             rs.next();
             int status = rs.getInt("server_status()");
-            Assert.assertEquals(1, status);
+            assertEquals(1, status);
         } catch (SQLException e) {
             e.printStackTrace();
}
...
...
@@ -38,7 +40,7 @@ public class RestfulConnectionTest {
         ResultSet rs = pstmt.executeQuery();
         rs.next();
         int status = rs.getInt("server_status()");
-        Assert.assertEquals(1, status);
+        assertEquals(1, status);
     }

     @Test(expected = SQLFeatureNotSupportedException.class)
...
...
@@ -49,7 +51,7 @@ public class RestfulConnectionTest {
     @Test
     public void nativeSQL() throws SQLException {
         String nativeSQL = conn.nativeSQL("select * from log.log");
-        Assert.assertEquals("select * from log.log", nativeSQL);
+        assertEquals("select * from log.log", nativeSQL);
     }

     @Test
...
...
@@ -87,7 +89,7 @@ public class RestfulConnectionTest {
     public void getMetaData() throws SQLException {
         DatabaseMetaData meta = conn.getMetaData();
         Assert.assertNotNull(meta);
-        Assert.assertEquals("com.taosdata.jdbc.rs.RestfulDriver", meta.getDriverName());
+        assertEquals("com.taosdata.jdbc.rs.RestfulDriver", meta.getDriverName());
     }

     @Test
...
...
@@ -103,25 +105,25 @@ public class RestfulConnectionTest {
     @Test
     public void setCatalog() throws SQLException {
         conn.setCatalog("test");
-        Assert.assertEquals("test", conn.getCatalog());
+        assertEquals("test", conn.getCatalog());
     }

     @Test
     public void getCatalog() throws SQLException {
         conn.setCatalog("log");
-        Assert.assertEquals("log", conn.getCatalog());
+        assertEquals("log", conn.getCatalog());
     }

     @Test(expected = SQLFeatureNotSupportedException.class)
     public void setTransactionIsolation() throws SQLException {
         conn.setTransactionIsolation(Connection.TRANSACTION_NONE);
-        Assert.assertEquals(Connection.TRANSACTION_NONE, conn.getTransactionIsolation());
+        assertEquals(Connection.TRANSACTION_NONE, conn.getTransactionIsolation());
         conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
     }

     @Test
     public void getTransactionIsolation() throws SQLException {
-        Assert.assertEquals(Connection.TRANSACTION_NONE, conn.getTransactionIsolation());
+        assertEquals(Connection.TRANSACTION_NONE, conn.getTransactionIsolation());
     }

     @Test
...
...
@@ -140,7 +142,7 @@ public class RestfulConnectionTest {
         ResultSet rs = stmt.executeQuery("select server_status()");
         rs.next();
         int status = rs.getInt("server_status()");
-        Assert.assertEquals(1, status);
+        assertEquals(1, status);

         conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
     }
...
...
@@ -152,7 +154,7 @@ public class RestfulConnectionTest {
         ResultSet rs = pstmt.executeQuery();
         rs.next();
         int status = rs.getInt("server_status()");
-        Assert.assertEquals(1, status);
+        assertEquals(1, status);

         conn.prepareStatement("select server_status", ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
     }
...
...
@@ -175,13 +177,13 @@ public class RestfulConnectionTest {
     @Test(expected = SQLFeatureNotSupportedException.class)
     public void setHoldability() throws SQLException {
         conn.setHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT);
-        Assert.assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, conn.getHoldability());
+        assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, conn.getHoldability());
         conn.setHoldability(ResultSet.CLOSE_CURSORS_AT_COMMIT);
     }

     @Test
     public void getHoldability() throws SQLException {
-        Assert.assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, conn.getHoldability());
+        assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, conn.getHoldability());
     }

     @Test(expected = SQLFeatureNotSupportedException.class)
...
...
@@ -210,7 +212,7 @@ public class RestfulConnectionTest {
         ResultSet rs = stmt.executeQuery("select server_status()");
         rs.next();
         int status = rs.getInt("server_status()");
-        Assert.assertEquals(1, status);
+        assertEquals(1, status);

         conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY, ResultSet.HOLD_CURSORS_OVER_COMMIT);
     }
...
...
@@ -222,7 +224,7 @@ public class RestfulConnectionTest {
         ResultSet rs = pstmt.executeQuery();
         rs.next();
         int status = rs.getInt("server_status()");
-        Assert.assertEquals(1, status);
+        assertEquals(1, status);

         conn.prepareStatement("select server_status", ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY, ResultSet.HOLD_CURSORS_OVER_COMMIT);
     }
...
...
@@ -299,11 +301,11 @@ public class RestfulConnectionTest {
         Properties info = conn.getClientInfo();
         String charset = info.getProperty(TSDBDriver.PROPERTY_KEY_CHARSET);
-        Assert.assertEquals("UTF-8", charset);
+        assertEquals("UTF-8", charset);

         String locale = info.getProperty(TSDBDriver.PROPERTY_KEY_LOCALE);
-        Assert.assertEquals("en_US.UTF-8", locale);
+        assertEquals("en_US.UTF-8", locale);

         String timezone = info.getProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE);
-        Assert.assertEquals("UTC-8", timezone);
+        assertEquals("UTC-8", timezone);
     }

     @Test
...
...
@@ -313,11 +315,11 @@ public class RestfulConnectionTest {
         conn.setClientInfo(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");

         String charset = conn.getClientInfo(TSDBDriver.PROPERTY_KEY_CHARSET);
-        Assert.assertEquals("UTF-8", charset);
+        assertEquals("UTF-8", charset);

         String locale = conn.getClientInfo(TSDBDriver.PROPERTY_KEY_LOCALE);
-        Assert.assertEquals("en_US.UTF-8", locale);
+        assertEquals("en_US.UTF-8", locale);

         String timezone = conn.getClientInfo(TSDBDriver.PROPERTY_KEY_TIME_ZONE);
-        Assert.assertEquals("UTC-8", timezone);
+        assertEquals("UTC-8", timezone);
     }

     @Test(expected = SQLFeatureNotSupportedException.class)
...
...
@@ -345,14 +347,15 @@ public class RestfulConnectionTest {
         conn.abort(null);
     }

-    @Test(expected = SQLFeatureNotSupportedException.class)
+    @Test
     public void setNetworkTimeout() throws SQLException {
         conn.setNetworkTimeout(null, 1000);
     }

-    @Test(expected = SQLFeatureNotSupportedException.class)
+    @Test
     public void getNetworkTimeout() throws SQLException {
-        conn.getNetworkTimeout();
+        int timeout = conn.getNetworkTimeout();
+        assertEquals(0, timeout);
     }

     @Test
...
...
src/inc/taoserror.h
...
...
@@ -103,6 +103,9 @@ int32_t* taosGetErrno();
#define TSDB_CODE_TSC_FILE_EMPTY TAOS_DEF_ERROR_CODE(0, 0x021A) //"File is empty")
#define TSDB_CODE_TSC_LINE_SYNTAX_ERROR TAOS_DEF_ERROR_CODE(0, 0x021B) //"Syntax error in Line")
#define TSDB_CODE_TSC_NO_META_CACHED TAOS_DEF_ERROR_CODE(0, 0x021C) //"No table meta cached")
#define TSDB_CODE_TSC_DUP_COL_NAMES TAOS_DEF_ERROR_CODE(0, 0x021D) //"duplicated column names")
#define TSDB_CODE_TSC_INVALID_TAG_LENGTH TAOS_DEF_ERROR_CODE(0, 0x021E) //"Invalid tag length")
#define TSDB_CODE_TSC_INVALID_COLUMN_LENGTH TAOS_DEF_ERROR_CODE(0, 0x021F) //"Invalid column length")
// mnode
#define TSDB_CODE_MND_MSG_NOT_PROCESSED TAOS_DEF_ERROR_CODE(0, 0x0300) //"Message not processed")
...
...
@@ -185,6 +188,9 @@ int32_t* taosGetErrno();
#define TSDB_CODE_MND_INVALID_FUNC TAOS_DEF_ERROR_CODE(0, 0x0374) //"Invalid func")
#define TSDB_CODE_MND_INVALID_FUNC_BUFSIZE TAOS_DEF_ERROR_CODE(0, 0x0375) //"Invalid func bufSize")
#define TSDB_CODE_MND_INVALID_TAG_LENGTH TAOS_DEF_ERROR_CODE(0, 0x0376) //"invalid tag length")
#define TSDB_CODE_MND_INVALID_COLUMN_LENGTH TAOS_DEF_ERROR_CODE(0, 0x0377) //"invalid column length")
#define TSDB_CODE_MND_DB_NOT_SELECTED TAOS_DEF_ERROR_CODE(0, 0x0380) //"Database not specified or available")
#define TSDB_CODE_MND_DB_ALREADY_EXIST TAOS_DEF_ERROR_CODE(0, 0x0381) //"Database already exists")
#define TSDB_CODE_MND_INVALID_DB_OPTION TAOS_DEF_ERROR_CODE(0, 0x0382) //"Invalid database options")
...
...
src/inc/ttokendef.h
...
...
@@ -91,125 +91,126 @@
#define TK_ACCOUNT 73
#define TK_USE 74
#define TK_DESCRIBE 75
#define TK_ALTER 76
#define TK_PASS 77
#define TK_PRIVILEGE 78
#define TK_LOCAL 79
#define TK_COMPACT 80
#define TK_LP 81
#define TK_RP 82
#define TK_IF 83
#define TK_EXISTS 84
#define TK_AS 85
#define TK_OUTPUTTYPE 86
#define TK_AGGREGATE 87
#define TK_BUFSIZE 88
#define TK_PPS 89
#define TK_TSERIES 90
#define TK_DBS 91
#define TK_STORAGE 92
#define TK_QTIME 93
#define TK_CONNS 94
#define TK_STATE 95
#define TK_COMMA 96
#define TK_KEEP 97
#define TK_CACHE 98
#define TK_REPLICA 99
#define TK_QUORUM 100
#define TK_DAYS 101
#define TK_MINROWS 102
#define TK_MAXROWS 103
#define TK_BLOCKS 104
#define TK_CTIME 105
#define TK_WAL 106
#define TK_FSYNC 107
#define TK_COMP 108
#define TK_PRECISION 109
#define TK_UPDATE 110
#define TK_CACHELAST 111
#define TK_PARTITIONS 112
#define TK_UNSIGNED 113
#define TK_TAGS 114
#define TK_USING 115
#define TK_NULL 116
#define TK_NOW 117
#define TK_SELECT 118
#define TK_UNION 119
#define TK_ALL 120
#define TK_DISTINCT 121
#define TK_FROM 122
#define TK_VARIABLE 123
#define TK_INTERVAL 124
#define TK_SESSION 125
#define TK_STATE_WINDOW 126
#define TK_FILL 127
#define TK_SLIDING 128
#define TK_ORDER 129
#define TK_BY 130
#define TK_ASC 131
#define TK_DESC 132
#define TK_GROUP 133
#define TK_HAVING 134
#define TK_LIMIT 135
#define TK_OFFSET 136
#define TK_SLIMIT 137
#define TK_SOFFSET 138
#define TK_WHERE 139
#define TK_RESET 140
#define TK_QUERY 141
#define TK_SYNCDB 142
#define TK_ADD 143
#define TK_COLUMN 144
#define TK_MODIFY 145
#define TK_TAG 146
#define TK_CHANGE 147
#define TK_SET 148
#define TK_KILL 149
#define TK_CONNECTION 150
#define TK_STREAM 151
#define TK_COLON 152
#define TK_ABORT 153
#define TK_AFTER 154
#define TK_ATTACH 155
#define TK_BEFORE 156
#define TK_BEGIN 157
#define TK_CASCADE 158
#define TK_CLUSTER 159
#define TK_CONFLICT 160
#define TK_COPY 161
#define TK_DEFERRED 162
#define TK_DELIMITERS 163
#define TK_DETACH 164
#define TK_EACH 165
#define TK_END 166
#define TK_EXPLAIN 167
#define TK_FAIL 168
#define TK_FOR 169
#define TK_IGNORE 170
#define TK_IMMEDIATE 171
#define TK_INITIALLY 172
#define TK_INSTEAD 173
#define TK_MATCH 174
#define TK_KEY 175
#define TK_OF 176
#define TK_RAISE 177
#define TK_REPLACE 178
#define TK_RESTRICT 179
#define TK_ROW 180
#define TK_STATEMENT 181
#define TK_TRIGGER 182
#define TK_VIEW 183
#define TK_IPTOKEN 184
#define TK_SEMI 185
#define TK_NONE 186
#define TK_PREV 187
#define TK_LINEAR 188
#define TK_IMPORT 189
#define TK_TBNAME 190
#define TK_JOIN 191
#define TK_INSERT 192
#define TK_INTO 193
#define TK_VALUES 194
#define TK_DESC 76
#define TK_ALTER 77
#define TK_PASS 78
#define TK_PRIVILEGE 79
#define TK_LOCAL 80
#define TK_COMPACT 81
#define TK_LP 82
#define TK_RP 83
#define TK_IF 84
#define TK_EXISTS 85
#define TK_AS 86
#define TK_OUTPUTTYPE 87
#define TK_AGGREGATE 88
#define TK_BUFSIZE 89
#define TK_PPS 90
#define TK_TSERIES 91
#define TK_DBS 92
#define TK_STORAGE 93
#define TK_QTIME 94
#define TK_CONNS 95
#define TK_STATE 96
#define TK_COMMA 97
#define TK_KEEP 98
#define TK_CACHE 99
#define TK_REPLICA 100
#define TK_QUORUM 101
#define TK_DAYS 102
#define TK_MINROWS 103
#define TK_MAXROWS 104
#define TK_BLOCKS 105
#define TK_CTIME 106
#define TK_WAL 107
#define TK_FSYNC 108
#define TK_COMP 109
#define TK_PRECISION 110
#define TK_UPDATE 111
#define TK_CACHELAST 112
#define TK_PARTITIONS 113
#define TK_UNSIGNED 114
#define TK_TAGS 115
#define TK_USING 116
#define TK_NULL 117
#define TK_NOW 118
#define TK_SELECT 119
#define TK_UNION 120
#define TK_ALL 121
#define TK_DISTINCT 122
#define TK_FROM 123
#define TK_VARIABLE 124
#define TK_INTERVAL 125
#define TK_EVERY 126
#define TK_SESSION 127
#define TK_STATE_WINDOW 128
#define TK_FILL 129
#define TK_SLIDING 130
#define TK_ORDER 131
#define TK_BY 132
#define TK_ASC 133
#define TK_GROUP 134
#define TK_HAVING 135
#define TK_LIMIT 136
#define TK_OFFSET 137
#define TK_SLIMIT 138
#define TK_SOFFSET 139
#define TK_WHERE 140
#define TK_RESET 141
#define TK_QUERY 142
#define TK_SYNCDB 143
#define TK_ADD 144
#define TK_COLUMN 145
#define TK_MODIFY 146
#define TK_TAG 147
#define TK_CHANGE 148
#define TK_SET 149
#define TK_KILL 150
#define TK_CONNECTION 151
#define TK_STREAM 152
#define TK_COLON 153
#define TK_ABORT 154
#define TK_AFTER 155
#define TK_ATTACH 156
#define TK_BEFORE 157
#define TK_BEGIN 158
#define TK_CASCADE 159
#define TK_CLUSTER 160
#define TK_CONFLICT 161
#define TK_COPY 162
#define TK_DEFERRED 163
#define TK_DELIMITERS 164
#define TK_DETACH 165
#define TK_EACH 166
#define TK_END 167
#define TK_EXPLAIN 168
#define TK_FAIL 169
#define TK_FOR 170
#define TK_IGNORE 171
#define TK_IMMEDIATE 172
#define TK_INITIALLY 173
#define TK_INSTEAD 174
#define TK_MATCH 175
#define TK_KEY 176
#define TK_OF 177
#define TK_RAISE 178
#define TK_REPLACE 179
#define TK_RESTRICT 180
#define TK_ROW 181
#define TK_STATEMENT 182
#define TK_TRIGGER 183
#define TK_VIEW 184
#define TK_IPTOKEN 185
#define TK_SEMI 186
#define TK_NONE 187
#define TK_PREV 188
#define TK_LINEAR 189
#define TK_IMPORT 190
#define TK_TBNAME 191
#define TK_JOIN 192
#define TK_INSERT 193
#define TK_INTO 194
#define TK_VALUES 195
#define TK_SPACE 300
...
...
src/kit/taosdemo/taosdemo.c (diff collapsed)
src/kit/taosdump/taosdump.c (diff collapsed)
src/mnode/src/mnodeTable.c
...
...
@@ -1518,6 +1518,13 @@ static int32_t mnodeChangeSuperTableColumn(SMnodeMsg *pMsg) {
  // update
  SSchema *schema = (SSchema *)(pStable->schema + col);
  ASSERT(schema->type == TSDB_DATA_TYPE_BINARY || schema->type == TSDB_DATA_TYPE_NCHAR);
  if (pAlter->schema[0].bytes <= schema->bytes) {
    mError("msg:%p, app:%p stable:%s, modify column len. column:%s, len from %d to %d", pMsg,
           pMsg->rpcMsg.ahandle, pStable->info.tableId, name, schema->bytes, pAlter->schema[0].bytes);
    return TSDB_CODE_MND_INVALID_COLUMN_LENGTH;
  }
  schema->bytes = pAlter->schema[0].bytes;
  pStable->sversion++;
  mInfo("msg:%p, app:%p stable %s, start to modify column %s len to %d", pMsg, pMsg->rpcMsg.ahandle, pStable->info.tableId,
...
...
@@ -1548,6 +1555,12 @@ static int32_t mnodeChangeSuperTableTag(SMnodeMsg *pMsg) {
  // update
  SSchema *schema = (SSchema *)(pStable->schema + col + pStable->numOfColumns);
  ASSERT(schema->type == TSDB_DATA_TYPE_BINARY || schema->type == TSDB_DATA_TYPE_NCHAR);
  if (pAlter->schema[0].bytes <= schema->bytes) {
    mError("msg:%p, app:%p stable:%s, modify tag len. tag:%s, len from %d to %d", pMsg,
           pMsg->rpcMsg.ahandle, pStable->info.tableId, name, schema->bytes, pAlter->schema[0].bytes);
    return TSDB_CODE_MND_INVALID_TAG_LENGTH;
  }
  schema->bytes = pAlter->schema[0].bytes;
  pStable->tversion++;
  mInfo("msg:%p, app:%p stable %s, start to modify tag len %s to %d", pMsg, pMsg->rpcMsg.ahandle, pStable->info.tableId,
...
...
src/plugins/http/CMakeLists.txt
...
...
@@ -6,6 +6,7 @@ INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/deps/cJson/inc)
INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/deps/lz4/inc)
INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/src/client/inc)
INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/src/query/inc)
INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/src/common/inc)
INCLUDE_DIRECTORIES(inc)
AUX_SOURCE_DIRECTORY(src SRC)
...
...
src/plugins/http/src/httpRestHandle.c
...
...
@@ -19,6 +19,7 @@
#include "httpLog.h"
#include "httpRestHandle.h"
#include "httpRestJson.h"
#include "tglobal.h"
static HttpDecodeMethod restDecodeMethod = {"rest", restProcessRequest};
static HttpDecodeMethod restDecodeMethod2 = {"restful", restProcessRequest};
...
...
@@ -111,6 +112,14 @@ bool restProcessSqlRequest(HttpContext* pContext, int32_t timestampFmt) {
  pContext->db[0] = '\0';

  HttpString* path = &pContext->parser->path[REST_USER_USEDB_URL_POS];
  if (tsHttpDbNameMandatory) {
    if (path->pos == 0) {
      httpError("context:%p, fd:%d, user:%s, database name is mandatory", pContext, pContext->fd, pContext->user);
      httpSendErrorResp(pContext, TSDB_CODE_HTTP_INVALID_URL);
      return false;
    }
  }

  if (path->pos > 0 && !(strlen(sql) > 4 && (sql[0] == 'u' || sql[0] == 'U') &&
      (sql[1] == 's' || sql[1] == 'S') && (sql[2] == 'e' || sql[2] == 'E') && sql[3] == ' ')) {
...
...
src/query/inc/qExecutor.h (diff collapsed)
src/query/inc/qSqlparser.h
...
...
@@ -80,6 +80,7 @@ typedef struct tVariantListItem {
} tVariantListItem;

typedef struct SIntervalVal {
  int32_t token;
  SStrToken interval;
  SStrToken offset;
} SIntervalVal;
...
...
src/query/inc/qTableMeta.h
...
...
@@ -165,6 +165,7 @@ typedef struct SQueryInfo {
  bool orderProjectQuery;
  bool stateWindow;
  bool globalMerge;
  bool multigroupResult;
} SQueryInfo;
/**
...
...
src/query/inc/sql.y (diff collapsed)
src/query/src/qAggMain.c (diff collapsed)
src/query/src/qExecutor.c (diff collapsed)
src/query/src/qFill.c (diff collapsed)
src/query/src/qSqlParser.c
src/query/src/sql.c (diff collapsed)
src/tsdb/src/tsdbRead.c (diff collapsed)
src/util/inc/tutil.h (diff collapsed)
src/util/src/terror.c (diff collapsed)
src/util/src/ttokenizer.c (diff collapsed)
src/util/src/tutil.c (diff collapsed)
tests/examples/c/schemaless.c (diff collapsed)
tests/pytest/fulltest.sh (diff collapsed)
tests/pytest/functions/function_interp.py (diff collapsed)
tests/pytest/query/last_row_cache.py (diff collapsed)
tests/pytest/query/query.py (diff collapsed)
tests/pytest/query/queryDiffColsOr.py (new file, diff collapsed)
tests/pytest/query/queryLike.py (diff collapsed)
tests/pytest/restful/restful_bind_db1.py (new file, diff collapsed)
tests/pytest/restful/restful_bind_db2.py (new file, diff collapsed)
tests/pytest/tools/schemalessInsertPerformance.py (new file, diff collapsed)
tests/pytest/util/common.py (diff collapsed)
tests/pytest/util/dnodes.py (diff collapsed)
tests/script/general/http/restful_dbname.sim (new file, diff collapsed)
tests/script/general/parser/columnValue_float.sim (diff collapsed)
tests/script/general/parser/interp_test.sim (diff collapsed)