Commit 8af62ec9 authored by S shenglian zhou

Merge branch '3.0' of github.com:taosdata/TDengine into szhou/feature/project-elimation

......@@ -7,7 +7,7 @@ description: "taosBenchmark (once called taosdemo ) is a tool for testing the pe
## Introduction
taosBenchmark (formerly taosdemo ) is a tool for testing the performance of TDengine products. taosBenchmark can test the performance of TDengine's insert, query, and subscription functions and simulate large amounts of data generated by many devices. taosBenchmark can flexibly control the number and type of databases, supertables, tag columns, number and type of data columns, and sub-tables, and types of databases, super tables, the number and types of data columns, the number of sub-tables, the amount of data per sub-table, the time interval for inserting data, the number of working threads, whether and how to insert disordered data, and so on. The installer provides taosdemo as a soft link to taosBenchmark for compatibility and for the convenience of past users.
taosBenchmark (formerly taosdemo) is a tool for testing the performance of TDengine products. taosBenchmark can test the performance of TDengine's insert, query, and subscription functions and simulate large amounts of data generated by many devices. taosBenchmark can be configured to generate user-defined databases, supertables, subtables, and the time-series data that populates them for performance benchmarking. taosBenchmark is highly configurable; the configuration options include the time interval for inserting data, the number of working threads, and the ability to insert disordered data. The installer provides taosdemo as a soft link to taosBenchmark for compatibility with past users.
## Installation
......@@ -21,9 +21,13 @@ There are two ways to install taosBenchmark:
### Configuration and running methods
TaosBenchmark needs to be executed on the terminal of the operating system, it supports two configuration methods: [Command-line arguments](#Command-line arguments in detailed) and [JSON configuration file](#Configuration file arguments in detailed). These two methods are mutually exclusive. Users can use `-f <json file>` to specify a configuration file. When running taosBenchmark with command-line arguments to control its behavior, users should use other parameters for configuration, but not the `-f` parameter. In addition, taosBenchmark offers a special way of running without parameters.
taosBenchmark must be executed from a terminal of the operating system. It supports two configuration methods: [Command-line arguments](#command-line-arguments-in-detail) and [JSON configuration file](#configuration-file-parameters-in-detail). These two methods are mutually exclusive. Users can use `-f <json file>` to specify a configuration file. When running taosBenchmark with command-line arguments to control its behavior, users should use other parameters for configuration rather than the `-f` parameter. In addition, taosBenchmark offers a special way of running without any parameters.
taosBenchmark supports complete performance testing of TDengine by providing write, query, and subscribe functionality. These three functions are mutually exclusive, and users can select only one of them each time taosBenchmark runs. The query and subscribe functionality can only be configured through a JSON configuration file, by specifying the parameter `filetype`, while writing can be performed through both the command line and a configuration file.
**Make sure that the TDengine cluster is running correctly before running taosBenchmark.**
......@@ -57,9 +61,8 @@ Use the following command-line to run taosBenchmark and control its behavior via
taosBenchmark -f <json file>
```
**Here are a few examples of configuration files:**
#### Example of inserting a scenario JSON configuration file
#### Configuration file examples
##### Example of inserting a scenario JSON configuration file
<details>
<summary>insert.json</summary>
......@@ -70,7 +73,7 @@ taosBenchmark -f <json file>
</details>
#### Query Scenario JSON Profile Example
##### Query Scenario JSON Profile Example
<details>
<summary>query.json</summary>
......@@ -81,7 +84,7 @@ taosBenchmark -f <json file>
</details>
#### Subscription JSON configuration example
##### Subscription JSON configuration example
<details>
<summary>subscribe.json</summary>
......@@ -172,7 +175,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
Switch parameter specifying whether to use escape characters in supertable and sub-table names. By default, escape characters are not used.
- **-C/--chinese** :
Switch specifying whether to use Unicode Chinese characters in nchar and binary. By default, they are not used.
specify whether to use Unicode Chinese characters in nchar and binary, the default is no.
- **-N/--normal-table** :
This parameter indicates that taosBenchmark will create only normal tables instead of super tables. The default value is false. It can be used when the insert mode is taosc, stmt, or rest.
......@@ -373,7 +376,7 @@ The configuration parameters for querying the sub-tables or the normal tables ar
- **sqls**.
- **sql**: the SQL command to be executed.
- **result**: the file to save the query result. If it is unspecified, taosBenchark will not save the result.
- **result**: the file to save the query result. If it is unspecified, taosBenchmark will not save the result.
#### Configuration parameters of query super table
......
......@@ -31,38 +31,41 @@ TDengine currently supports Grafana versions 7.5 and above. Users can go to the
### Install Grafana Plugin and Configure Data Source
<Tabs defaultValue="script">
<TabItem value="script" label="Using Script">
<TabItem value="gui" label="With GUI">
Set the URL and authorization environment variables via `export` or a [`.env` (dotenv) file](https://hexdocs.pm/dotenvy/dotenv-file-format.html):
Under Grafana 8, the plugin catalog allows you to [browse and manage plugins within Grafana](https://grafana.com/docs/grafana/next/administration/plugin-management/#plugin-catalog) (for Grafana 7.x, use **With Script** or **Install & Configure Manually** instead). Open the **Configurations > Plugins** page, search for **TDengine**, and click it to install.
```sh
export TDENGINE_API=http://tdengine.local:6041
# user + password
export TDENGINE_USER=user
export TDENGINE_PASSWORD=password
# Other useful variables
# - Whether to install the TDengine data source, default is true
export TDENGINE_DS_ENABLED=false
# - Data source name to be created, default is TDengine
export TDENGINE_DS_NAME=TDengine
# - Data source organization id, default is 1
export GF_ORG_ID=1
# - Whether the data source is editable in the admin UI, default is 0 (false)
export TDENGINE_EDITABLE=1
```
![Search tdengine in grafana plugins](./grafana/grafana-plugin-search-tdengine.png)
Installation may take a few minutes, and then you can **Create a TDengine data source**:
![Install and configure Grafana data source](./grafana/grafana-install-and-config.png)
Then you can add a TDengine data source by filling in the configuration options.
![TDengine Database Grafana plugin add data source](./grafana/grafana-data-source.png)
You can create dashboards with TDengine now.
</TabItem>
<TabItem value="script" label="With Script">
Run `install.sh`:
On a server with Grafana installed, running `install.sh` with the TDengine URL and username/password installs the TDengine data source plugin and adds a data source named TDengine. This is the recommended method for Grafana 7.x or [Grafana provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) users.
```sh
bash -c "$(curl -fsSL https://raw.githubusercontent.com/taosdata/grafanaplugin/master/install.sh)"
bash -c "$(curl -fsSL \
https://raw.githubusercontent.com/taosdata/grafanaplugin/master/install.sh)" -- \
-a http://localhost:6041 \
-u root \
-p taosdata
```
With this script, TDengine data source plugin and the Grafana data source will be installed and created automatically with Grafana provisioning configurations. Save the script and type `./install.sh --help` for the full usage of the script.
Restart Grafana service and open Grafana in web-browser, usually <http://localhost:3000>.
And then, restart the Grafana service and open Grafana in a web browser, usually at <http://localhost:3000>.
Save the script and type `./install.sh --help` for the full usage of the script.
</TabItem>
<TabItem value="manual" label="Install & Configure Manually">
Follow the installation steps in [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) with the [``grafana-cli`` command-line tool](https://grafana.com/docs/grafana/latest/administration/cli/) for plugin installation.
......@@ -115,6 +118,73 @@ Click `Save & Test` to test. You should see a success message if the test worked
![TDengine Database TDinsight plugin add database 4](./grafana/add_datasource4.webp)
</TabItem>
<TabItem value="container" label="Container">
Please refer to [Install plugins in the Docker container](https://grafana.com/docs/grafana/next/setup-grafana/installation/docker/#install-plugins-in-the-docker-container). This will install the `tdengine-datasource` plugin when the Grafana container starts:
```bash
docker run -d \
-p 3000:3000 \
--name=grafana \
-e "GF_INSTALL_PLUGINS=tdengine-datasource" \
grafana/grafana
```
You can set up a zero-configuration stack for TDengine + Grafana with [docker-compose](https://docs.docker.com/compose/) and a [Grafana provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/) file:
1. Save the provisioning configuration file to `tdengine.yml`.
```yml
apiVersion: 1
datasources:
  - name: TDengine
    type: tdengine-datasource
    orgId: 1
    url: "$TDENGINE_API"
    isDefault: true
    secureJsonData:
      url: "$TDENGINE_URL"
      basicAuth: "$TDENGINE_BASIC_AUTH"
      token: "$TDENGINE_CLOUD_TOKEN"
    version: 1
    editable: true
```
2. Write `docker-compose.yml` with the [TDengine](https://hub.docker.com/r/tdengine/tdengine) and [Grafana](https://hub.docker.com/r/grafana/grafana) images.
```yml
version: "3.7"
services:
  tdengine:
    image: tdengine/tdengine:2.6.0.2
    environment:
      TAOS_FQDN: tdengine
    volumes:
      - tdengine-data:/var/lib/taos/
  grafana:
    image: grafana/grafana:8.5.6
    volumes:
      - ./tdengine.yml:/etc/grafana/provisioning/tdengine.yml
      - grafana-data:/var/lib/grafana
    environment:
      # install tdengine plugin at start
      GF_INSTALL_PLUGINS: "tdengine-datasource"
      TDENGINE_URL: "http://tdengine:6041"
      # printf "$TDENGINE_USER:$TDENGINE_PASSWORD" | base64
      TDENGINE_BASIC_AUTH: "cm9vdDp0YmFzZTEyNQ=="
    ports:
      - 3000:3000
volumes:
  grafana-data:
  tdengine-data:
```
3. Start TDengine and Grafana services: `docker-compose up -d`.
Open Grafana at <http://localhost:3000>, and you can now add dashboards with TDengine.
</TabItem>
</Tabs>
......
......@@ -29,39 +29,41 @@ TDengine can be integrated with the open-source data visualization system [Grafana](https://www.grafana.com/
### Install Grafana Plugin and Configure Data Source
<Tabs defaultValue="script">
<TabItem value="script" label="Using Installation Script">
<TabItem value="gui" label="With GUI">
Set the cluster information as environment variables; you can also use a `.env` file, see [dotenv](https://hexdocs.pm/dotenvy/dotenv-file-format.html):
With the latest Grafana version (8.5+), you can [browse and manage plugins](https://grafana.com/docs/grafana/next/administration/plugin-management/#plugin-catalog) within Grafana (for 7.x, use the **Using Installation Script** or **Install & Configure Manually** method). On the **Configurations > Plugins** page of the Grafana admin UI, search for TDengine and install it as prompted.
```sh
export TDENGINE_API=http://tdengine.local:6041
# user + password
export TDENGINE_USER=user
export TDENGINE_PASSWORD=password
# Other environment variables:
# - Whether to install the data source, default is true (install)
export TDENGINE_DS_ENABLED=false
# - Data source name, default is TDengine
export TDENGINE_DS_NAME=TDengine
# - Organization ID the data source belongs to, default is 1
export GF_ORG_ID=1
# - Whether the data source is editable in the admin panel, default is 0 (not editable)
export TDENGINE_EDITABLE=1
```
![Search tdengine in grafana plugins](grafana-plugin-search-tdengine.png)
As shown in the figure, the installation is complete; follow the prompt **Create a TDengine data source** to add the data source.
![Install and configure Grafana data source](grafana-install-and-config.png)
Enter the TDengine configuration options to complete the data source setup.
![TDengine Database Grafana plugin add data source](./grafana-data-source.png)
Once configured, you can now create dashboards with TDengine.
</TabItem>
<TabItem value="script" label="Using Installation Script">
Run the installation script:
For users on Grafana 7.x or using [Grafana Provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/), the installation script can be run on the Grafana server to automatically install the plugin and add the data source provisioning configuration file.
```sh
bash -c "$(curl -fsSL https://raw.githubusercontent.com/taosdata/grafanaplugin/master/install.sh)"
bash -c "$(curl -fsSL \
https://raw.githubusercontent.com/taosdata/grafanaplugin/master/install.sh)" -- \
-a http://localhost:6041 \
-u root \
-p taosdata
```
The script automatically installs the Grafana plugin and configures the data source. After installation, the Grafana service needs to be restarted for the changes to take effect.
After installation, the Grafana service needs to be restarted for the changes to take effect.
Save the script and run `./install.sh --help` for the detailed help text.
</TabItem>
<TabItem value="manual" label="Install & Configure Manually">
<TabItem value="manual" label="Install Manually">
Use the [`grafana-cli` command-line tool](https://grafana.com/docs/grafana/latest/administration/cli/) to [install](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) the plugin.
......@@ -113,6 +115,73 @@ GF_INSTALL_PLUGINS=tdengine-datasource
![TDengine Database Grafana plugin add data source](./add_datasource4.webp)
</TabItem>
<TabItem value="container" label="K8s/Docker 容器">
参考 [Grafana 容器化安装说明](https://grafana.com/docs/grafana/next/setup-grafana/installation/docker/#install-plugins-in-the-docker-container)。使用如下命令启动一个容器,并自动安装 TDengine 插件:
```bash
docker run -d \
-p 3000:3000 \
--name=grafana \
-e "GF_INSTALL_PLUGINS=tdengine-datasource" \
grafana/grafana
```
Using docker-compose together with Grafana provisioning, you can bring up a zero-configuration TDengine + Grafana stack:
1. Save the following file as `tdengine.yml`.
```yml
apiVersion: 1
datasources:
  - name: TDengine
    type: tdengine-datasource
    orgId: 1
    url: "$TDENGINE_API"
    isDefault: true
    secureJsonData:
      url: "$TDENGINE_URL"
      basicAuth: "$TDENGINE_BASIC_AUTH"
      token: "$TDENGINE_CLOUD_TOKEN"
    version: 1
    editable: true
```
2. Save the following file as `docker-compose.yml`.
```yml
version: "3.7"
services:
  tdengine:
    image: tdengine/tdengine:2.6.0.2
    environment:
      TAOS_FQDN: tdengine
    volumes:
      - tdengine-data:/var/lib/taos/
  grafana:
    image: grafana/grafana:8.5.6
    volumes:
      - ./tdengine.yml:/etc/grafana/provisioning/tdengine.yml
      - grafana-data:/var/lib/grafana
    environment:
      # install tdengine plugin at start
      GF_INSTALL_PLUGINS: "tdengine-datasource"
      TDENGINE_URL: "http://tdengine:6041"
      # printf "$TDENGINE_USER:$TDENGINE_PASSWORD" | base64
      TDENGINE_BASIC_AUTH: "cm9vdDp0YmFzZTEyNQ=="
    ports:
      - 3000:3000
volumes:
  grafana-data:
  tdengine-data:
```
3. Start TDengine and Grafana with docker-compose: `docker-compose up -d`.
Open Grafana at <http://localhost:3000>; you can now add dashboards.
</TabItem>
</Tabs>
......
......@@ -429,8 +429,10 @@ STSchema* tdGetSTSChemaFromSSChema(SSchema** pSchema, int32_t nCols);
typedef struct {
char name[TSDB_TABLE_FNAME_LEN];
int8_t igExists;
float xFilesFactor;
int32_t delay;
int64_t delay1;
int64_t delay2;
int64_t watermark1;
int64_t watermark2;
int32_t ttl;
int32_t numOfColumns;
int32_t numOfTags;
......
......@@ -127,134 +127,133 @@
#define TK_BLOB 109
#define TK_VARBINARY 110
#define TK_DECIMAL 111
#define TK_FILE_FACTOR 112
#define TK_NK_FLOAT 113
#define TK_MAX_DELAY 112
#define TK_WATERMARK 113
#define TK_ROLLUP 114
#define TK_TTL 115
#define TK_SMA 116
#define TK_SHOW 117
#define TK_DATABASES 118
#define TK_TABLES 119
#define TK_STABLES 120
#define TK_MNODES 121
#define TK_MODULES 122
#define TK_QNODES 123
#define TK_FUNCTIONS 124
#define TK_INDEXES 125
#define TK_ACCOUNTS 126
#define TK_APPS 127
#define TK_CONNECTIONS 128
#define TK_LICENCE 129
#define TK_GRANTS 130
#define TK_QUERIES 131
#define TK_SCORES 132
#define TK_TOPICS 133
#define TK_VARIABLES 134
#define TK_BNODES 135
#define TK_SNODES 136
#define TK_CLUSTER 137
#define TK_TRANSACTIONS 138
#define TK_LIKE 139
#define TK_INDEX 140
#define TK_FULLTEXT 141
#define TK_FUNCTION 142
#define TK_INTERVAL 143
#define TK_TOPIC 144
#define TK_AS 145
#define TK_CONSUMER 146
#define TK_GROUP 147
#define TK_DESC 148
#define TK_DESCRIBE 149
#define TK_RESET 150
#define TK_QUERY 151
#define TK_CACHE 152
#define TK_EXPLAIN 153
#define TK_ANALYZE 154
#define TK_VERBOSE 155
#define TK_NK_BOOL 156
#define TK_RATIO 157
#define TK_COMPACT 158
#define TK_VNODES 159
#define TK_IN 160
#define TK_OUTPUTTYPE 161
#define TK_AGGREGATE 162
#define TK_BUFSIZE 163
#define TK_STREAM 164
#define TK_INTO 165
#define TK_TRIGGER 166
#define TK_AT_ONCE 167
#define TK_WINDOW_CLOSE 168
#define TK_MAX_DELAY 169
#define TK_WATERMARK 170
#define TK_KILL 171
#define TK_CONNECTION 172
#define TK_TRANSACTION 173
#define TK_BALANCE 174
#define TK_VGROUP 175
#define TK_MERGE 176
#define TK_REDISTRIBUTE 177
#define TK_SPLIT 178
#define TK_SYNCDB 179
#define TK_DELETE 180
#define TK_NULL 181
#define TK_NK_QUESTION 182
#define TK_NK_ARROW 183
#define TK_ROWTS 184
#define TK_TBNAME 185
#define TK_QSTARTTS 186
#define TK_QENDTS 187
#define TK_WSTARTTS 188
#define TK_WENDTS 189
#define TK_WDURATION 190
#define TK_CAST 191
#define TK_NOW 192
#define TK_TODAY 193
#define TK_TIMEZONE 194
#define TK_COUNT 195
#define TK_FIRST 196
#define TK_LAST 197
#define TK_LAST_ROW 198
#define TK_BETWEEN 199
#define TK_IS 200
#define TK_NK_LT 201
#define TK_NK_GT 202
#define TK_NK_LE 203
#define TK_NK_GE 204
#define TK_NK_NE 205
#define TK_MATCH 206
#define TK_NMATCH 207
#define TK_CONTAINS 208
#define TK_JOIN 209
#define TK_INNER 210
#define TK_SELECT 211
#define TK_DISTINCT 212
#define TK_WHERE 213
#define TK_PARTITION 214
#define TK_BY 215
#define TK_SESSION 216
#define TK_STATE_WINDOW 217
#define TK_SLIDING 218
#define TK_FILL 219
#define TK_VALUE 220
#define TK_NONE 221
#define TK_PREV 222
#define TK_LINEAR 223
#define TK_NEXT 224
#define TK_HAVING 225
#define TK_ORDER 226
#define TK_SLIMIT 227
#define TK_SOFFSET 228
#define TK_LIMIT 229
#define TK_OFFSET 230
#define TK_ASC 231
#define TK_NULLS 232
#define TK_ID 233
#define TK_NK_BITNOT 234
#define TK_INSERT 235
#define TK_VALUES 236
#define TK_IMPORT 237
#define TK_NK_SEMI 238
#define TK_FILE 239
#define TK_FIRST 117
#define TK_LAST 118
#define TK_SHOW 119
#define TK_DATABASES 120
#define TK_TABLES 121
#define TK_STABLES 122
#define TK_MNODES 123
#define TK_MODULES 124
#define TK_QNODES 125
#define TK_FUNCTIONS 126
#define TK_INDEXES 127
#define TK_ACCOUNTS 128
#define TK_APPS 129
#define TK_CONNECTIONS 130
#define TK_LICENCE 131
#define TK_GRANTS 132
#define TK_QUERIES 133
#define TK_SCORES 134
#define TK_TOPICS 135
#define TK_VARIABLES 136
#define TK_BNODES 137
#define TK_SNODES 138
#define TK_CLUSTER 139
#define TK_TRANSACTIONS 140
#define TK_LIKE 141
#define TK_INDEX 142
#define TK_FULLTEXT 143
#define TK_FUNCTION 144
#define TK_INTERVAL 145
#define TK_TOPIC 146
#define TK_AS 147
#define TK_CONSUMER 148
#define TK_GROUP 149
#define TK_DESC 150
#define TK_DESCRIBE 151
#define TK_RESET 152
#define TK_QUERY 153
#define TK_CACHE 154
#define TK_EXPLAIN 155
#define TK_ANALYZE 156
#define TK_VERBOSE 157
#define TK_NK_BOOL 158
#define TK_RATIO 159
#define TK_NK_FLOAT 160
#define TK_COMPACT 161
#define TK_VNODES 162
#define TK_IN 163
#define TK_OUTPUTTYPE 164
#define TK_AGGREGATE 165
#define TK_BUFSIZE 166
#define TK_STREAM 167
#define TK_INTO 168
#define TK_TRIGGER 169
#define TK_AT_ONCE 170
#define TK_WINDOW_CLOSE 171
#define TK_KILL 172
#define TK_CONNECTION 173
#define TK_TRANSACTION 174
#define TK_BALANCE 175
#define TK_VGROUP 176
#define TK_MERGE 177
#define TK_REDISTRIBUTE 178
#define TK_SPLIT 179
#define TK_SYNCDB 180
#define TK_DELETE 181
#define TK_NULL 182
#define TK_NK_QUESTION 183
#define TK_NK_ARROW 184
#define TK_ROWTS 185
#define TK_TBNAME 186
#define TK_QSTARTTS 187
#define TK_QENDTS 188
#define TK_WSTARTTS 189
#define TK_WENDTS 190
#define TK_WDURATION 191
#define TK_CAST 192
#define TK_NOW 193
#define TK_TODAY 194
#define TK_TIMEZONE 195
#define TK_COUNT 196
#define TK_LAST_ROW 197
#define TK_BETWEEN 198
#define TK_IS 199
#define TK_NK_LT 200
#define TK_NK_GT 201
#define TK_NK_LE 202
#define TK_NK_GE 203
#define TK_NK_NE 204
#define TK_MATCH 205
#define TK_NMATCH 206
#define TK_CONTAINS 207
#define TK_JOIN 208
#define TK_INNER 209
#define TK_SELECT 210
#define TK_DISTINCT 211
#define TK_WHERE 212
#define TK_PARTITION 213
#define TK_BY 214
#define TK_SESSION 215
#define TK_STATE_WINDOW 216
#define TK_SLIDING 217
#define TK_FILL 218
#define TK_VALUE 219
#define TK_NONE 220
#define TK_PREV 221
#define TK_LINEAR 222
#define TK_NEXT 223
#define TK_HAVING 224
#define TK_ORDER 225
#define TK_SLIMIT 226
#define TK_SOFFSET 227
#define TK_LIMIT 228
#define TK_OFFSET 229
#define TK_ASC 230
#define TK_NULLS 231
#define TK_ID 232
#define TK_NK_BITNOT 233
#define TK_INSERT 234
#define TK_VALUES 235
#define TK_IMPORT 236
#define TK_NK_SEMI 237
#define TK_FILE 238
#define TK_NK_SPACE 300
#define TK_NK_COMMENT 301
......
......@@ -158,10 +158,10 @@ typedef struct tExprNode {
int32_t nodeType;
union {
struct {// function node
char functionName[FUNCTIONS_NAME_MAX_LENGTH]; // todo refactor
int32_t functionId;
int32_t num;
struct SFunctionNode *pFunctNode;
char functionName[FUNCTIONS_NAME_MAX_LENGTH]; // todo refactor
int32_t functionId;
int32_t num;
struct SFunctionNode *pFunctNode;
} _function;
struct {
......
......@@ -88,8 +88,12 @@ typedef struct SAlterDatabaseStmt {
typedef struct STableOptions {
ENodeType type;
char comment[TSDB_TB_COMMENT_LEN];
double filesFactor;
int32_t delay;
SNodeList* pMaxDelay;
int64_t maxDelay1;
int64_t maxDelay2;
SNodeList* pWatermark;
int64_t watermark1;
int64_t watermark2;
SNodeList* pRollupFuncs;
int32_t ttl;
SNodeList* pSma;
......@@ -204,11 +208,18 @@ typedef struct SShowStmt {
SNode* pTbNamePattern; // SValueNode
} SShowStmt;
typedef struct SShowCreatStmt {
typedef struct SShowCreateDatabaseStmt {
ENodeType type;
char dbName[TSDB_DB_NAME_LEN];
char tableName[TSDB_TABLE_NAME_LEN];
} SShowCreatStmt;
void* pCfg; // SDbCfgInfo
} SShowCreateDatabaseStmt;
typedef struct SShowCreateTableStmt {
ENodeType type;
char dbName[TSDB_DB_NAME_LEN];
char tableName[TSDB_TABLE_NAME_LEN];
STableMeta* pMeta;
} SShowCreateTableStmt;
typedef enum EIndexType { INDEX_TYPE_SMA = 1, INDEX_TYPE_FULLTEXT } EIndexType;
......
......@@ -562,6 +562,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_PAR_WINDOW_NOT_ALLOWED_FUNC TAOS_DEF_ERROR_CODE(0, 0x2659)
#define TSDB_CODE_PAR_STREAM_NOT_ALLOWED_FUNC TAOS_DEF_ERROR_CODE(0, 0x265A)
#define TSDB_CODE_PAR_GROUP_BY_NOT_ALLOWED_FUNC TAOS_DEF_ERROR_CODE(0, 0x265B)
#define TSDB_CODE_PAR_INVALID_TABLE_OPTION TAOS_DEF_ERROR_CODE(0, 0x265C)
//planner
#define TSDB_CODE_PLAN_INTERNAL_ERROR TAOS_DEF_ERROR_CODE(0, 0x2700)
......
......@@ -344,14 +344,13 @@ typedef enum ELogicConditionType {
#define TSDB_DB_SCHEMALESS_OFF 0
#define TSDB_DEFAULT_DB_SCHEMALESS TSDB_DB_SCHEMALESS_OFF
// #define TSDB_MIN_ROLLUP_DELAY 1
// #define TSDB_MAX_ROLLUP_DELAY 10
// #define TSDB_DEFAULT_ROLLUP_DELAY 1
#define TSDB_MIN_ROLLUP_FILE_FACTOR 0
#define TSDB_MAX_ROLLUP_FILE_FACTOR 10
#define TSDB_DEFAULT_ROLLUP_FILE_FACTOR 0.1
#define TSDB_MIN_TABLE_TTL 0
#define TSDB_DEFAULT_TABLE_TTL 0
#define TSDB_MIN_ROLLUP_MAX_DELAY 1 // unit millisecond
#define TSDB_MAX_ROLLUP_MAX_DELAY (15 * 60 * 1000)
#define TSDB_MIN_ROLLUP_WATERMARK 0 // unit millisecond
#define TSDB_MAX_ROLLUP_WATERMARK (15 * 60 * 1000)
#define TSDB_DEFAULT_ROLLUP_WATERMARK 5000
#define TSDB_MIN_TABLE_TTL 0
#define TSDB_DEFAULT_TABLE_TTL 0
#define TSDB_MIN_EXPLAIN_RATIO 0
#define TSDB_MAX_EXPLAIN_RATIO 1
......
......@@ -128,7 +128,7 @@ typedef struct STscObj {
int8_t connType;
int32_t acctId;
uint32_t connId;
uint64_t id; // ref ID returned by taosAddRef
TAOS *id; // ref ID returned by taosAddRef
TdThreadMutex mutex; // used to protect the operation on db
int32_t numOfReqs; // number of sqlObj bound to this connection
SAppInstInfo* pAppInfo;
......
......@@ -38,7 +38,7 @@ static TdThreadOnce tscinit = PTHREAD_ONCE_INIT;
volatile int32_t tscInitRes = 0;
static void registerRequest(SRequestObj *pRequest) {
STscObj *pTscObj = acquireTscObj(pRequest->pTscObj->id);
STscObj *pTscObj = acquireTscObj(*(int64_t*)pRequest->pTscObj->id);
assert(pTscObj != NULL);
......@@ -54,7 +54,7 @@ static void registerRequest(SRequestObj *pRequest) {
int32_t currentInst = atomic_add_fetch_64((int64_t *)&pSummary->currentRequests, 1);
tscDebug("0x%" PRIx64 " new Request from connObj:0x%" PRIx64
", current:%d, app current:%d, total:%d, reqId:0x%" PRIx64,
pRequest->self, pRequest->pTscObj->id, num, currentInst, total, pRequest->requestId);
pRequest->self, *(int64_t*)pRequest->pTscObj->id, num, currentInst, total, pRequest->requestId);
}
}
......@@ -70,8 +70,8 @@ static void deregisterRequest(SRequestObj *pRequest) {
int64_t duration = taosGetTimestampUs() - pRequest->metric.start;
tscDebug("0x%" PRIx64 " free Request from connObj: 0x%" PRIx64 ", reqId:0x%" PRIx64 " elapsed:%" PRIu64
" ms, current:%d, app current:%d",
pRequest->self, pTscObj->id, pRequest->requestId, duration / 1000, num, currentInst);
releaseTscObj(pTscObj->id);
pRequest->self, *(int64_t*)pTscObj->id, pRequest->requestId, duration / 1000, num, currentInst);
releaseTscObj(*(int64_t*)pTscObj->id);
}
// todo close the transporter properly
......@@ -80,7 +80,7 @@ void closeTransporter(STscObj *pTscObj) {
return;
}
tscDebug("free transporter:%p in connObj: 0x%" PRIx64, pTscObj->pAppInfo->pTransporter, pTscObj->id);
tscDebug("free transporter:%p in connObj: 0x%" PRIx64, pTscObj->pAppInfo->pTransporter, *(int64_t*)pTscObj->id);
rpcClose(pTscObj->pAppInfo->pTransporter);
}
......@@ -128,7 +128,7 @@ void closeAllRequests(SHashObj *pRequests) {
void destroyTscObj(void *pObj) {
STscObj *pTscObj = pObj;
SClientHbKey connKey = {.tscRid = pTscObj->id, .connType = pTscObj->connType};
SClientHbKey connKey = {.tscRid = *(int64_t*)pTscObj->id, .connType = pTscObj->connType};
hbDeregisterConn(pTscObj->pAppInfo->pAppHbMgr, connKey);
int64_t connNum = atomic_sub_fetch_64(&pTscObj->pAppInfo->numOfConns, 1);
closeAllRequests(pTscObj->pRequests);
......@@ -137,7 +137,7 @@ void destroyTscObj(void *pObj) {
// TODO
//closeTransporter(pTscObj);
}
tscDebug("connObj 0x%" PRIx64 " destroyed, totalConn:%" PRId64, pTscObj->id, pTscObj->pAppInfo->numOfConns);
tscDebug("connObj 0x%" PRIx64 " destroyed, totalConn:%" PRId64, *(int64_t*)pTscObj->id, pTscObj->pAppInfo->numOfConns);
taosThreadMutexDestroy(&pTscObj->mutex);
taosMemoryFreeClear(pTscObj);
}
......@@ -166,10 +166,11 @@ void *createTscObj(const char *user, const char *auth, const char *db, int32_t c
}
taosThreadMutexInit(&pObj->mutex, NULL);
pObj->id = taosAddRef(clientConnRefPool, pObj);
pObj->id = taosMemoryMalloc(sizeof(int64_t));
*(int64_t*)pObj->id = taosAddRef(clientConnRefPool, pObj);
pObj->schemalessType = 1;
tscDebug("connObj created, 0x%" PRIx64, pObj->id);
tscDebug("connObj created, 0x%" PRIx64, *(int64_t*)pObj->id);
return pObj;
}
......
......@@ -263,6 +263,7 @@ void asyncExecLocalCmd(SRequestObj* pRequest, SQuery* pQuery) {
int32_t asyncExecDdlQuery(SRequestObj* pRequest, SQuery* pQuery) {
// drop table if exists not_exists_table
if (NULL == pQuery->pCmdMsg) {
pRequest->body.queryFp(pRequest->body.param, pRequest, 0);
return TSDB_CODE_SUCCESS;
}
......@@ -609,6 +610,16 @@ void schedulerExecCb(SQueryResult* pResult, void* param, int32_t code) {
SRequestObj* pRequest = (SRequestObj*)param;
pRequest->code = code;
if (TDMT_VND_SUBMIT == pRequest->type || TDMT_VND_DELETE == pRequest->type ||
TDMT_VND_CREATE_TABLE == pRequest->type) {
pRequest->body.resInfo.numOfRows = pResult->numOfRows;
if (pRequest->body.queryJob != 0) {
schedulerFreeJob(pRequest->body.queryJob, 0);
pRequest->body.queryJob = 0;
}
}
tscDebug("0x%" PRIx64 " enter scheduler exec cb, code:%d - %s, reqId:0x%" PRIx64,
pRequest->self, code, tstrerror(code), pRequest->requestId);
......@@ -712,7 +723,7 @@ void launchAsyncQuery(SRequestObj* pRequest, SQuery* pQuery) {
code = asyncExecDdlQuery(pRequest, pQuery);
break;
case QUERY_EXEC_MODE_SCHEDULE: {
SArray* pNodeList = taosArrayInit(4, sizeof(struct SQueryNodeAddr));
SArray* pNodeList = taosArrayInit(4, sizeof(SQueryNodeLoad));
pRequest->type = pQuery->msgType;
......@@ -725,12 +736,10 @@ void launchAsyncQuery(SRequestObj* pRequest, SQuery* pQuery) {
.msgLen = ERROR_MSG_BUF_DEFAULT_SIZE};
SAppInstInfo* pAppInfo = getAppInfo(pRequest);
if (TSDB_CODE_SUCCESS == code) {
code = qCreateQueryPlan(&cxt, &pRequest->body.pDag, pNodeList);
if (code) {
tscError("0x%" PRIx64 " failed to create query plan, code:%s 0x%" PRIx64, pRequest->self, tstrerror(code),
pRequest->requestId);
}
code = qCreateQueryPlan(&cxt, &pRequest->body.pDag, pNodeList);
if (code) {
tscError("0x%" PRIx64 " failed to create query plan, code:%s 0x%" PRIx64, pRequest->self, tstrerror(code),
pRequest->requestId);
}
if (TSDB_CODE_SUCCESS == code) {
......@@ -927,7 +936,7 @@ STscObj* taosConnectImpl(const char* user, const char* auth, const char* db, __t
taos_close_internal(pTscObj);
pTscObj = NULL;
} else {
tscDebug("0x%" PRIx64 " connection is opening, connId:%u, dnodeConn:%p, reqId:0x%" PRIx64, pTscObj->id,
tscDebug("0x%" PRIx64 " connection is opening, connId:%u, dnodeConn:%p, reqId:0x%" PRIx64, *(int64_t*)pTscObj->id,
pTscObj->connId, pTscObj->pAppInfo->pTransporter, pRequest->requestId);
destroyRequest(pRequest);
}
......@@ -1090,10 +1099,10 @@ TAOS* taos_connect_auth(const char* ip, const char* user, const char* auth, cons
STscObj* pObj = taos_connect_internal(ip, user, NULL, auth, db, port, CONN_TYPE__QUERY);
if (pObj) {
return (TAOS*)pObj->id;
return pObj->id;
}
return (TAOS*)0;
return NULL;
}
TAOS* taos_connect_l(const char* ip, int ipLen, const char* user, int userLen, const char* pass, int passLen,
......
......@@ -99,10 +99,10 @@ TAOS *taos_connect(const char *ip, const char *user, const char *pass, const cha
STscObj* pObj = taos_connect_internal(ip, user, pass, NULL, db, port, CONN_TYPE__QUERY);
if (pObj) {
return (TAOS*)pObj->id;
return pObj->id;
}
return (TAOS*)0;
return NULL;
}
void taos_close_internal(void *taos) {
......@@ -111,19 +111,24 @@ void taos_close_internal(void *taos) {
}
STscObj *pTscObj = (STscObj *)taos;
tscDebug("0x%" PRIx64 " try to close connection, numOfReq:%d", pTscObj->id, pTscObj->numOfReqs);
tscDebug("0x%" PRIx64 " try to close connection, numOfReq:%d", *(int64_t*)pTscObj->id, pTscObj->numOfReqs);
taosRemoveRef(clientConnRefPool, pTscObj->id);
taosRemoveRef(clientConnRefPool, *(int64_t*)pTscObj->id);
}
void taos_close(TAOS *taos) {
STscObj* pObj = acquireTscObj((int64_t)taos);
if (taos == NULL) {
return;
}
STscObj* pObj = acquireTscObj(*(int64_t*)taos);
if (NULL == pObj) {
return;
}
taos_close_internal(pObj);
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
taosMemoryFree(taos);
}
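The client hunks above all apply one pattern: `pTscObj->id` becomes a heap-allocated `int64_t` holding the connection's ref ID, the `TAOS*` handle handed to applications points at that allocation, call sites dereference it before `acquireTscObj`/`releaseTscObj`, and `taos_close` frees it. Below is a minimal, self-contained sketch of that opaque-handle idiom with generic names (a toy single-slot ref pool, not TDengine's real API):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy connection object and "ref pool" standing in for taosAddRef/acquireTscObj;
 * only the handle idiom matters here. */
typedef struct Conn { int64_t rid; const char *user; } Conn;
static Conn gConn;

static int64_t refAdd(const char *user) { gConn.rid = 42; gConn.user = user; return gConn.rid; }
static Conn   *refAcquire(int64_t rid)  { return rid == gConn.rid ? &gConn : NULL; }

typedef void CONN_HANDLE;                 /* opaque handle, like TAOS */

static CONN_HANDLE *connOpen(const char *user) {
  int64_t *pRid = malloc(sizeof(int64_t));
  if (pRid == NULL) return NULL;
  *pRid = refAdd(user);                   /* like *(int64_t*)pObj->id = taosAddRef(...) */
  return pRid;
}

static void connClose(CONN_HANDLE *h) {
  if (h == NULL) return;
  /* the real code drops the ref first using *(int64_t*)h (taosRemoveRef) */
  free(h);                                /* like taosMemoryFree(taos) in taos_close */
}

int main(void) {
  CONN_HANDLE *h = connOpen("root");
  if (h == NULL) return 1;
  Conn *c = refAcquire(*(int64_t *)h);    /* call sites always dereference the handle */
  if (c != NULL) printf("rid=%lld user=%s\n", (long long)c->rid, c->user);
  connClose(h);
  return 0;
}
```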
......@@ -206,7 +211,12 @@ static void syncQueryFn(void *param, void *res, int32_t code) {
}
TAOS_RES *taos_query(TAOS *taos, const char *sql) {
STscObj* pTscObj = acquireTscObj((int64_t)taos);
if (NULL == taos) {
terrno = TSDB_CODE_TSC_DISCONNECTED;
return NULL;
}
STscObj* pTscObj = acquireTscObj(*(int64_t*)taos);
if (pTscObj == NULL || sql == NULL) {
terrno = TSDB_CODE_TSC_DISCONNECTED;
return NULL;
......@@ -219,13 +229,13 @@ TAOS_RES *taos_query(TAOS *taos, const char *sql) {
taos_query_a(taos, sql, syncQueryFn, param);
tsem_wait(&param->sem);
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
return param->pRequest;
#else
size_t sqlLen = strlen(sql);
if (sqlLen > (size_t)TSDB_MAX_ALLOWED_SQL_LEN) {
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
tscError("sql string exceeds max length:%d", TSDB_MAX_ALLOWED_SQL_LEN);
terrno = TSDB_CODE_TSC_EXCEED_SQL_LIMIT;
return NULL;
......@@ -233,7 +243,7 @@ TAOS_RES *taos_query(TAOS *taos, const char *sql) {
TAOS_RES* pRes = execQuery(pTscObj, sql, sqlLen);
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
return pRes;
#endif
......@@ -453,15 +463,15 @@ int taos_result_precision(TAOS_RES *res) {
}
int taos_select_db(TAOS *taos, const char *db) {
STscObj* pObj = acquireTscObj((int64_t)taos);
STscObj* pObj = acquireTscObj(*(int64_t*)taos);
if (pObj == NULL) {
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
terrno = TSDB_CODE_TSC_DISCONNECTED;
return TSDB_CODE_TSC_DISCONNECTED;
}
if (db == NULL || strlen(db) == 0) {
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
terrno = TSDB_CODE_TSC_INVALID_INPUT;
return terrno;
}
......@@ -473,7 +483,7 @@ int taos_select_db(TAOS *taos, const char *db) {
int32_t code = taos_errno(pRequest);
taos_free_result(pRequest);
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
return code;
}
......@@ -626,7 +636,7 @@ int *taos_get_column_data_offset(TAOS_RES *res, int columnIndex) {
int taos_validate_sql(TAOS *taos, const char *sql) { return true; }
void taos_reset_current_db(TAOS *taos) {
STscObj* pTscObj = acquireTscObj((int64_t)taos);
STscObj* pTscObj = acquireTscObj(*(int64_t*)taos);
if (pTscObj == NULL) {
terrno = TSDB_CODE_TSC_DISCONNECTED;
return;
......@@ -634,17 +644,17 @@ void taos_reset_current_db(TAOS *taos) {
resetConnectDB(pTscObj);
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
}
const char *taos_get_server_info(TAOS *taos) {
STscObj* pTscObj = acquireTscObj((int64_t)taos);
STscObj* pTscObj = acquireTscObj(*(int64_t*)taos);
if (pTscObj == NULL) {
terrno = TSDB_CODE_TSC_DISCONNECTED;
return NULL;
}
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
return pTscObj->ver;
}
......@@ -671,6 +681,8 @@ static void destorySqlParseWrapper(SqlParseWrapper *pWrapper) {
}
void retrieveMetaCallback(SMetaData *pResultMeta, void *param, int32_t code) {
tscDebug("enter meta callback, code %s", tstrerror(code));
SqlParseWrapper *pWrapper = (SqlParseWrapper *)param;
SQuery *pQuery = pWrapper->pQuery;
SRequestObj *pRequest = pWrapper->pRequest;
......@@ -711,11 +723,11 @@ void retrieveMetaCallback(SMetaData *pResultMeta, void *param, int32_t code) {
}
void taos_query_a(TAOS *taos, const char *sql, __taos_async_fn_t fp, void *param) {
STscObj* pTscObj = acquireTscObj((int64_t)taos);
STscObj* pTscObj = acquireTscObj(*(int64_t*)taos);
if (pTscObj == NULL || sql == NULL || NULL == fp) {
terrno = TSDB_CODE_INVALID_PARA;
if (pTscObj) {
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
} else {
terrno = TSDB_CODE_TSC_DISCONNECTED;
}
......@@ -936,7 +948,7 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) {
}
TAOS_STMT *taos_stmt_init(TAOS *taos) {
STscObj* pObj = acquireTscObj((int64_t)taos);
STscObj* pObj = acquireTscObj(*(int64_t*)taos);
if (NULL == pObj) {
tscError("invalid parameter for %s", __FUNCTION__);
terrno = TSDB_CODE_TSC_DISCONNECTED;
......@@ -945,7 +957,7 @@ TAOS_STMT *taos_stmt_init(TAOS *taos) {
TAOS_STMT* pStmt = stmtInit(pObj);
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
return pStmt;
}
......
......@@ -77,7 +77,7 @@ int32_t processConnectRsp(void* param, const SDataBuf* pMsg, int32_t code) {
for (int32_t i = 0; i < connectRsp.epSet.numOfEps; ++i) {
tscDebug("0x%" PRIx64 " epSet.fqdn[%d]:%s port:%d, connObj:0x%" PRIx64, pRequest->requestId, i,
connectRsp.epSet.eps[i].fqdn, connectRsp.epSet.eps[i].port, pTscObj->id);
connectRsp.epSet.eps[i].fqdn, connectRsp.epSet.eps[i].port, *(int64_t*)pTscObj->id);
}
pTscObj->connId = connectRsp.connId;
......@@ -90,7 +90,7 @@ int32_t processConnectRsp(void* param, const SDataBuf* pMsg, int32_t code) {
pTscObj->connType = connectRsp.connType;
hbRegisterConn(pTscObj->pAppInfo->pAppHbMgr, pTscObj->id, connectRsp.clusterId, connectRsp.connType);
hbRegisterConn(pTscObj->pAppInfo->pAppHbMgr, *(int64_t*)pTscObj->id, connectRsp.clusterId, connectRsp.connType);
// pRequest->body.resInfo.pRspMsg = pMsg->pData;
tscDebug("0x%" PRIx64 " clusterId:%" PRId64 ", totalConn:%" PRId64, pRequest->requestId, connectRsp.clusterId,
......
......@@ -309,7 +309,7 @@ static int32_t smlApplySchemaAction(SSmlHandle *info, SSchemaAction *action) {
case SCHEMA_ACTION_ADD_COLUMN: {
int n = sprintf(result, "alter stable `%s` add column ", action->alterSTable.sTableName);
smlBuildColumnDescription(action->alterSTable.field, result + n, capacity - n, &outBytes);
TAOS_RES *res = taos_query((TAOS*)info->taos->id, result); // TODO async doAsyncQuery
TAOS_RES *res = taos_query(info->taos->id, result); // TODO async doAsyncQuery
code = taos_errno(res);
const char *errStr = taos_errstr(res);
if (code != TSDB_CODE_SUCCESS) {
......@@ -323,7 +323,7 @@ static int32_t smlApplySchemaAction(SSmlHandle *info, SSchemaAction *action) {
case SCHEMA_ACTION_ADD_TAG: {
int n = sprintf(result, "alter stable `%s` add tag ", action->alterSTable.sTableName);
smlBuildColumnDescription(action->alterSTable.field, result + n, capacity - n, &outBytes);
TAOS_RES *res = taos_query((TAOS*)info->taos->id, result); // TODO async doAsyncQuery
TAOS_RES *res = taos_query(info->taos->id, result); // TODO async doAsyncQuery
code = taos_errno(res);
const char *errStr = taos_errstr(res);
if (code != TSDB_CODE_SUCCESS) {
......@@ -337,7 +337,7 @@ static int32_t smlApplySchemaAction(SSmlHandle *info, SSchemaAction *action) {
case SCHEMA_ACTION_CHANGE_COLUMN_SIZE: {
int n = sprintf(result, "alter stable `%s` modify column ", action->alterSTable.sTableName);
smlBuildColumnDescription(action->alterSTable.field, result + n, capacity - n, &outBytes);
TAOS_RES *res = taos_query((TAOS*)info->taos->id, result); // TODO async doAsyncQuery
TAOS_RES *res = taos_query(info->taos->id, result); // TODO async doAsyncQuery
code = taos_errno(res);
if (code != TSDB_CODE_SUCCESS) {
uError("SML:0x%" PRIx64 " apply schema action. error : %s", info->id, taos_errstr(res));
......@@ -350,7 +350,7 @@ static int32_t smlApplySchemaAction(SSmlHandle *info, SSchemaAction *action) {
case SCHEMA_ACTION_CHANGE_TAG_SIZE: {
int n = sprintf(result, "alter stable `%s` modify tag ", action->alterSTable.sTableName);
smlBuildColumnDescription(action->alterSTable.field, result + n, capacity - n, &outBytes);
TAOS_RES *res = taos_query((TAOS*)info->taos->id, result); // TODO async doAsyncQuery
TAOS_RES *res = taos_query(info->taos->id, result); // TODO async doAsyncQuery
code = taos_errno(res);
if (code != TSDB_CODE_SUCCESS) {
uError("SML:0x%" PRIx64 " apply schema action. error : %s", info->id, taos_errstr(res));
......@@ -405,7 +405,7 @@ static int32_t smlApplySchemaAction(SSmlHandle *info, SSchemaAction *action) {
pos--;
++freeBytes;
outBytes = snprintf(pos, freeBytes, ")");
TAOS_RES *res = taos_query((TAOS*)info->taos->id, result);
TAOS_RES *res = taos_query(info->taos->id, result);
code = taos_errno(res);
if (code != TSDB_CODE_SUCCESS) {
uError("SML:0x%" PRIx64 " apply schema action. error : %s", info->id, taos_errstr(res));
......@@ -2434,7 +2434,7 @@ static void smlInsertCallback(void *param, void *res, int32_t code) {
*/
TAOS_RES* taos_schemaless_insert(TAOS* taos, char* lines[], int numLines, int protocol, int precision) {
STscObj* pTscObj = acquireTscObj((int64_t)taos);
STscObj* pTscObj = acquireTscObj(*(int64_t*)taos);
if (NULL == pTscObj) {
terrno = TSDB_CODE_TSC_DISCONNECTED;
uError("SML:taos_schemaless_insert invalid taos");
......@@ -2443,7 +2443,7 @@ TAOS_RES* taos_schemaless_insert(TAOS* taos, char* lines[], int numLines, int pr
SRequestObj* request = (SRequestObj*)createRequest(pTscObj, TSDB_SQL_INSERT);
if(!request){
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
uError("SML:taos_schemaless_insert error request is null");
return NULL;
}
......@@ -2531,6 +2531,6 @@ end:
// ((STscObj *)taos)->schemalessType = 0;
pTscObj->schemalessType = 1;
uDebug("resultend:%s", request->msgBuf);
releaseTscObj((int64_t)taos);
releaseTscObj(*(int64_t*)taos);
return (TAOS_RES*)request;
}
......@@ -204,7 +204,7 @@ static int32_t tSerializeSClientHbReq(SEncoder *pEncoder, const SClientHbReq *pR
if (tEncodeU64(pEncoder, pReq->app.summary.numOfSlowQueries) < 0) return -1;
if (tEncodeU64(pEncoder, pReq->app.summary.totalRequests) < 0) return -1;
if (tEncodeU64(pEncoder, pReq->app.summary.currentRequests) < 0) return -1;
int32_t queryNum = 0;
if (pReq->query) {
queryNum = 1;
......@@ -288,7 +288,7 @@ static int32_t tDeserializeSClientHbReq(SDecoder *pDecoder, SClientHbReq *pReq)
if (tDecodeI64(pDecoder, &desc.useconds) < 0) return -1;
if (tDecodeI64(pDecoder, &desc.stime) < 0) return -1;
if (tDecodeI64(pDecoder, &desc.reqRid) < 0) return -1;
if (tDecodeI8(pDecoder, (int8_t*)&desc.stableQuery) < 0) return -1;
if (tDecodeI8(pDecoder, (int8_t *)&desc.stableQuery) < 0) return -1;
if (tDecodeCStrTo(pDecoder, desc.fqdn) < 0) return -1;
if (tDecodeI32(pDecoder, &desc.subPlanNum) < 0) return -1;
......@@ -496,8 +496,10 @@ int32_t tSerializeSMCreateStbReq(void *buf, int32_t bufLen, SMCreateStbReq *pReq
if (tStartEncode(&encoder) < 0) return -1;
if (tEncodeCStr(&encoder, pReq->name) < 0) return -1;
if (tEncodeI8(&encoder, pReq->igExists) < 0) return -1;
if (tEncodeFloat(&encoder, pReq->xFilesFactor) < 0) return -1;
if (tEncodeI32(&encoder, pReq->delay) < 0) return -1;
if (tEncodeI64(&encoder, pReq->delay1) < 0) return -1;
if (tEncodeI64(&encoder, pReq->delay2) < 0) return -1;
if (tEncodeI64(&encoder, pReq->watermark1) < 0) return -1;
if (tEncodeI64(&encoder, pReq->watermark2) < 0) return -1;
if (tEncodeI32(&encoder, pReq->ttl) < 0) return -1;
if (tEncodeI32(&encoder, pReq->numOfColumns) < 0) return -1;
if (tEncodeI32(&encoder, pReq->numOfTags) < 0) return -1;
......@@ -544,8 +546,10 @@ int32_t tDeserializeSMCreateStbReq(void *buf, int32_t bufLen, SMCreateStbReq *pR
if (tStartDecode(&decoder) < 0) return -1;
if (tDecodeCStrTo(&decoder, pReq->name) < 0) return -1;
if (tDecodeI8(&decoder, &pReq->igExists) < 0) return -1;
if (tDecodeFloat(&decoder, &pReq->xFilesFactor) < 0) return -1;
if (tDecodeI32(&decoder, &pReq->delay) < 0) return -1;
if (tDecodeI64(&decoder, &pReq->delay1) < 0) return -1;
if (tDecodeI64(&decoder, &pReq->delay2) < 0) return -1;
if (tDecodeI64(&decoder, &pReq->watermark1) < 0) return -1;
if (tDecodeI64(&decoder, &pReq->watermark2) < 0) return -1;
if (tDecodeI32(&decoder, &pReq->ttl) < 0) return -1;
if (tDecodeI32(&decoder, &pReq->numOfColumns) < 0) return -1;
if (tDecodeI32(&decoder, &pReq->numOfTags) < 0) return -1;
......
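The serialization hunks above drop the float `xFilesFactor` and 32-bit `delay` in favor of four 64-bit fields, with the decoder mirroring the encoder field for field. The following self-contained sketch (a toy fixed-size coder, not TDengine's `tEncoder`/`tDecoder` API) illustrates that round-trip discipline, assuming both sides agree on field order:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct { uint8_t buf[64]; size_t pos; } Coder;

static int encodeI64(Coder *c, int64_t v) {
  if (c->pos + sizeof(v) > sizeof(c->buf)) return -1;
  memcpy(c->buf + c->pos, &v, sizeof(v));
  c->pos += sizeof(v);
  return 0;
}
static int decodeI64(Coder *c, int64_t *v) {
  if (c->pos + sizeof(*v) > sizeof(c->buf)) return -1;
  memcpy(v, c->buf + c->pos, sizeof(*v));
  c->pos += sizeof(*v);
  return 0;
}

/* Stand-in for the delay/watermark part of the create-stable request. */
typedef struct { int64_t delay1, delay2, watermark1, watermark2; } CreateStbLike;

static int encodeReq(Coder *c, const CreateStbLike *r) {
  if (encodeI64(c, r->delay1) < 0) return -1;
  if (encodeI64(c, r->delay2) < 0) return -1;
  if (encodeI64(c, r->watermark1) < 0) return -1;
  if (encodeI64(c, r->watermark2) < 0) return -1;
  return 0;
}
static int decodeReq(Coder *c, CreateStbLike *r) {
  if (decodeI64(c, &r->delay1) < 0) return -1;
  if (decodeI64(c, &r->delay2) < 0) return -1;
  if (decodeI64(c, &r->watermark1) < 0) return -1;
  if (decodeI64(c, &r->watermark2) < 0) return -1;
  return 0;
}

int main(void) {
  CreateStbLike in = {.delay1 = 120000, .delay2 = 600000, .watermark1 = 5000, .watermark2 = 5000};
  CreateStbLike out = {0};
  Coder enc = {.pos = 0}, dec = {.pos = 0};
  if (encodeReq(&enc, &in) < 0) return 1;
  memcpy(dec.buf, enc.buf, enc.pos);         /* "send" the message */
  if (decodeReq(&dec, &out) < 0) return 1;
  printf("round trip ok: %d\n", memcmp(&in, &out, sizeof(in)) == 0);
  return 0;
}
```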
......@@ -204,8 +204,8 @@ SArray *mmGetMsgHandles() {
if (dmSetMgmtHandle(pArray, TDMT_VND_QUERY, mmPutMsgToQueryQueue, 1) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_QUERY_CONTINUE, mmPutMsgToQueryQueue, 1) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_QUERY_HEARTBEAT, mmPutMsgToQueryQueue, 1) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_FETCH, mmPutMsgToQueryQueue, 1) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_QUERY_HEARTBEAT, mmPutMsgToFetchQueue, 1) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_FETCH, mmPutMsgToFetchQueue, 1) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_CREATE_STB_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_ALTER_STB_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_DROP_STB_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
......@@ -213,7 +213,7 @@ SArray *mmGetMsgHandles() {
if (dmSetMgmtHandle(pArray, TDMT_VND_DROP_SMA_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_MQ_VG_CHANGE_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_MQ_VG_DELETE_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_DROP_TASK, mmPutMsgToQueryQueue, 1) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_DROP_TASK, mmPutMsgToFetchQueue, 1) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_STREAM_TASK_DEPLOY_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_STREAM_TASK_DROP_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_ALTER_CONFIG_RSP, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
......
......@@ -530,7 +530,10 @@ static int32_t mndCreateSma(SMnode *pMnode, SRpcMsg *pReq, SMCreateSmaReq *pCrea
smaObj.dbUid = pStb->dbUid;
smaObj.intervalUnit = pCreate->intervalUnit;
smaObj.slidingUnit = pCreate->slidingUnit;
#if 0
smaObj.timezone = pCreate->timezone;
#endif
smaObj.timezone = tsTimezone; // use timezone of server
smaObj.interval = pCreate->interval;
smaObj.offset = pCreate->offset;
smaObj.sliding = pCreate->sliding;
......
......@@ -669,8 +669,8 @@ int32_t mndBuildStbFromReq(SMnode *pMnode, SStbObj *pDst, SMCreateStbReq *pCreat
pDst->tagVer = 1;
pDst->colVer = 1;
pDst->nextColId = 1;
pDst->xFilesFactor = pCreate->xFilesFactor;
pDst->delay = pCreate->delay;
// pDst->xFilesFactor = pCreate->xFilesFactor;
// pDst->delay = pCreate->delay;
pDst->ttl = pCreate->ttl;
pDst->numOfColumns = pCreate->numOfColumns;
pDst->numOfTags = pCreate->numOfTags;
......
......@@ -774,9 +774,6 @@ static int32_t vnodeProcessCreateTSmaReq(SVnode *pVnode, int64_t version, void *
goto _err;
}
// record current timezone of server side
req.timezoneInt = tsTimezone;
if (tdProcessTSmaCreate(pVnode->pSma, version, (const char *)&req) < 0) {
if (pRsp) pRsp->code = terrno;
goto _err;
......
......@@ -47,7 +47,8 @@ static SSDataBlock* buildDescResultDataBlock() {
taosArrayPush(pBlock->pDataBlock, &infoData);
infoData.info.type = TSDB_DATA_TYPE_INT;
infoData.info.bytes = tDataTypes[TSDB_DATA_TYPE_INT].bytes;;
infoData.info.bytes = tDataTypes[TSDB_DATA_TYPE_INT].bytes;
taosArrayPush(pBlock->pDataBlock, &infoData);
infoData.info.type = TSDB_DATA_TYPE_VARCHAR;
......@@ -63,7 +64,7 @@ static void setDescResultIntoDataBlock(SSDataBlock* pBlock, int32_t numOfRows, S
// field
SColumnInfoData* pCol1 = taosArrayGet(pBlock->pDataBlock, 0);
char buf[DESCRIBE_RESULT_FIELD_LEN] = {0};
char buf[DESCRIBE_RESULT_FIELD_LEN] = {0};
for (int32_t i = 0; i < numOfRows; ++i) {
STR_TO_VARSTR(buf, pMeta->schema[i].name);
colDataAppend(pCol1, i, buf, false);
......@@ -92,8 +93,8 @@ static void setDescResultIntoDataBlock(SSDataBlock* pBlock, int32_t numOfRows, S
}
static int32_t execDescribe(SNode* pStmt, SRetrieveTableRsp** pRsp) {
SDescribeStmt* pDesc = (SDescribeStmt*) pStmt;
int32_t numOfRows = TABLE_TOTAL_COL_NUM(pDesc->pMeta);
SDescribeStmt* pDesc = (SDescribeStmt*)pStmt;
int32_t numOfRows = TABLE_TOTAL_COL_NUM(pDesc->pMeta);
SSDataBlock* pBlock = buildDescResultDataBlock();
setDescResultIntoDataBlock(pBlock, numOfRows, pDesc->pMeta);
......@@ -120,9 +121,15 @@ static int32_t execDescribe(SNode* pStmt, SRetrieveTableRsp** pRsp) {
return TSDB_CODE_SUCCESS;
}
static int32_t execResetQueryCache() {
return catalogClearCache();
}
static int32_t execResetQueryCache() { return catalogClearCache(); }
static int32_t execShowCreateDatabase(SShowCreateDatabaseStmt* pStmt) { return TSDB_CODE_FAILED; }
static int32_t execShowCreateTable(SShowCreateTableStmt* pStmt) { return TSDB_CODE_FAILED; }
static int32_t execShowCreateSTable(SShowCreateTableStmt* pStmt) { return TSDB_CODE_FAILED; }
static int32_t execAlterLocal(SAlterLocalStmt* pStmt) { return TSDB_CODE_FAILED; }
int32_t qExecCommand(SNode* pStmt, SRetrieveTableRsp** pRsp) {
switch (nodeType(pStmt)) {
......@@ -130,6 +137,14 @@ int32_t qExecCommand(SNode* pStmt, SRetrieveTableRsp** pRsp) {
return execDescribe(pStmt, pRsp);
case QUERY_NODE_RESET_QUERY_CACHE_STMT:
return execResetQueryCache();
case QUERY_NODE_SHOW_CREATE_DATABASE_STMT:
return execShowCreateDatabase((SShowCreateDatabaseStmt*)pStmt);
case QUERY_NODE_SHOW_CREATE_TABLE_STMT:
return execShowCreateTable((SShowCreateTableStmt*)pStmt);
case QUERY_NODE_SHOW_CREATE_STABLE_STMT:
return execShowCreateSTable((SShowCreateTableStmt*)pStmt);
case QUERY_NODE_ALTER_LOCAL_STMT:
return execAlterLocal((SAlterLocalStmt*)pStmt);
default:
break;
}
......
......@@ -113,7 +113,7 @@ SArray* extractColMatchInfo(SNodeList* pNodeList, SDataBlockDescNode* pOutputNod
SExprInfo* createExprInfo(SNodeList* pNodeList, SNodeList* pGroupKeys, int32_t* numOfExprs);
SqlFunctionCtx* createSqlFunctionCtx(SExprInfo* pExprInfo, int32_t numOfOutput, int32_t** rowCellInfoOffset);
SqlFunctionCtx* createSqlFunctionCtx(SExprInfo* pExprInfo, int32_t numOfOutput, int32_t** rowEntryInfoOffset);
void relocateColumnData(SSDataBlock* pBlock, const SArray* pColMatchInfo, SArray* pCols);
void initExecTimeWindowInfo(SColumnInfoData* pColData, STimeWindow* pQueryWindow);
......@@ -121,6 +121,6 @@ SInterval extractIntervalInfo(const STableScanPhysiNode* pTableScanNode);
SColumn extractColumnFromColumnNode(SColumnNode* pColNode);
int32_t initQueryTableDataCond(SQueryTableDataCond* pCond, const STableScanPhysiNode* pTableScanNode);
void cleanupQueryTableDataCond(SQueryTableDataCond* pCond);
void cleanupQueryTableDataCond(SQueryTableDataCond* pCond);
#endif // TDENGINE_QUERYUTIL_H
......@@ -49,8 +49,6 @@ typedef int32_t (*__block_search_fn_t)(char* data, int32_t num, int64_t key, int
#define Q_STATUS_EQUAL(p, s) (((p) & (s)) != 0u)
#define QUERY_IS_ASC_QUERY(q) (GET_FORWARD_DIRECTION_FACTOR((q)->order.order) == QUERY_ASC_FORWARD_STEP)
//#define GET_TABLEGROUP(q, _index) ((SArray*)taosArrayGetP((q)->tableqinfoGroupInfo.pGroupList, (_index)))
#define NEEDTO_COMPRESS_QUERY(size) ((size) > tsCompressColData ? 1 : 0)
enum {
......@@ -64,11 +62,6 @@ enum {
TASK_COMPLETED = 0x2u,
};
typedef struct SResultRowCell {
uint64_t groupId;
SResultRowPosition pos;
} SResultRowCell;
/**
* If the number of generated results is greater than this value,
* query query will be halt and return results to client immediate.
......@@ -83,7 +76,6 @@ typedef struct SResultInfo { // TODO refactor
typedef struct STableQueryInfo {
TSKEY lastKey; // last check ts, todo remove it later
SResultRowPosition pos; // current active time window
// SVariant tag;
} STableQueryInfo;
typedef struct SLimit {
......@@ -123,41 +115,7 @@ typedef struct SOperatorCostInfo {
double totalCost;
} SOperatorCostInfo;
// The basic query information extracted from the SQueryInfo tree to support the
// execution of query in a data node.
typedef struct STaskAttr {
SLimit limit;
SLimit slimit;
bool stableQuery; // super table query or not
bool topBotQuery; // TODO used bitwise flag
bool groupbyColumn; // denote if this is a groupby normal column query
bool timeWindowInterpo; // if the time window start/end required interpolation
bool tsCompQuery; // is tscomp query
bool diffQuery; // is diff query
bool pointInterpQuery; // point interpolation query
int32_t havingNum; // having expr number
int16_t numOfCols;
int16_t numOfTags;
STimeWindow window;
SInterval interval;
int16_t precision;
int16_t numOfOutput;
int16_t fillType;
int32_t resultRowSize;
int32_t tagLen; // tag value length of current query
SExprInfo* pExpr1;
SColumnInfo* tagColList;
int32_t numOfFilterCols;
int64_t* fillVal;
void* tsdb;
// STableListInfo tableGroupInfo; // table list
int32_t vgId;
} STaskAttr;
struct SOperatorInfo;
//struct SAggSupporter;
//struct SOptrBasicInfo;
typedef int32_t (*__optr_encode_fn_t)(struct SOperatorInfo* pOperator, char** result, int32_t* length);
typedef int32_t (*__optr_decode_fn_t)(struct SOperatorInfo* pOperator, char* result);
......@@ -195,31 +153,6 @@ typedef struct SExecTaskInfo {
struct SOperatorInfo* pRoot;
} SExecTaskInfo;
typedef struct STaskRuntimeEnv {
STaskAttr* pQueryAttr;
uint32_t status; // query status
uint8_t scanFlag; // denotes reversed scan of data or not
SDiskbasedBuf* pResultBuf; // query result buffer based on blocked-wised disk file
SHashObj* pResultRowHashTable; // quick locate the window object for each result
SHashObj* pResultRowListSet; // used to check if current ResultRowInfo has ResultRow object or not
SArray* pResultRowArrayList; // The array list that contains the Result rows
char* keyBuf; // window key buffer
// The window result objects pool, all the resultRow Objects are allocated and managed by this object.
char** prevRow;
STSBuf* pTsBuf; // timestamp filter list
STSCursor cur;
char* tagVal; // tag value of current data block
// STableGroupInfo tableqinfoGroupInfo; // this is a table list
struct SOperatorInfo* proot;
SGroupResInfo groupResInfo;
int64_t currentOffset; // dynamic offset value
STableQueryInfo* current;
SResultInfo resultInfo;
struct SUdfInfo* pUdfInfo;
} STaskRuntimeEnv;
enum {
OP_NOT_OPENED = 0x0,
OP_OPENED = 0x1,
......@@ -238,14 +171,20 @@ typedef struct SOperatorFpSet {
__optr_explain_fn_t getExplainFn;
} SOperatorFpSet;
typedef struct SExprSupp {
SExprInfo* pExprInfo;
int32_t numOfExprs; // the number of scalar expression in group operator
SqlFunctionCtx* pCtx;
int32_t* rowEntryInfoOffset; // offset value for each row result cell info
} SExprSupp;
typedef struct SOperatorInfo {
uint8_t operatorType;
bool blocking; // block operator or not
uint8_t status; // denote if current operator is completed
int32_t numOfExprs; // number of columns of the current operator results
char* name; // name, used to show the query execution plan
char* name; // name, for debug purpose
void* info; // extension attribution
SExprInfo* pExpr;
SExprSupp exprSupp;
SExecTaskInfo* pTaskInfo;
SOperatorCostInfo cost;
SResultInfo resultInfo;
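The new `SExprSupp` above bundles the expression list, its count, the function contexts, and the row-entry offsets, which is why later hunks drop the scattered `pScalarExprInfo`/`rowCellInfoOffset`-style fields and the old `SScalarSupp` from individual operator structs. A hedged, self-contained sketch of that consolidation, using generic stand-in types rather than the executor's real ones, might look like this:

```c
#include <stdio.h>
#include <stdlib.h>

/* Generic stand-ins for SExprInfo / SqlFunctionCtx. */
typedef struct { const char *name; } ExprInfo;
typedef struct { int initialized; } FuncCtx;

/* One bundle instead of four parallel fields repeated in every operator. */
typedef struct {
  ExprInfo *pExprInfo;
  int       numOfExprs;
  FuncCtx  *pCtx;
  int      *rowEntryInfoOffset;
} ExprSupp;

static int exprSuppInit(ExprSupp *s, ExprInfo *exprs, int num) {
  s->pExprInfo = exprs;
  s->numOfExprs = num;
  s->pCtx = calloc((size_t)num, sizeof(FuncCtx));
  s->rowEntryInfoOffset = calloc((size_t)num, sizeof(int));
  if (s->pCtx == NULL || s->rowEntryInfoOffset == NULL) return -1;
  for (int i = 0; i < num; ++i) s->pCtx[i].initialized = 1;
  return 0;
}

static void exprSuppCleanup(ExprSupp *s) {
  free(s->pCtx);
  free(s->rowEntryInfoOffset);
  s->pCtx = NULL;
  s->rowEntryInfoOffset = NULL;
  s->numOfExprs = 0;
}

int main(void) {
  ExprInfo exprs[2] = {{"count"}, {"avg"}};
  ExprSupp supp = {0};
  if (exprSuppInit(&supp, exprs, 2) != 0) { exprSuppCleanup(&supp); return 1; }
  printf("%d exprs, first=%s\n", supp.numOfExprs, supp.pExprInfo[0].name);
  exprSuppCleanup(&supp);
  return 0;
}
```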
......@@ -260,6 +199,9 @@ typedef enum {
EX_SOURCE_DATA_EXHAUSTED = 0x3,
} EX_SOURCE_STATUS;
#define COL_MATCH_FROM_COL_ID 0x1
#define COL_MATCH_FROM_SLOT_ID 0x2
typedef struct SSourceDataInfo {
int32_t index;
SRetrieveTableRsp* pRsp;
......@@ -287,9 +229,6 @@ typedef struct SExchangeInfo {
uint64_t self;
} SExchangeInfo;
#define COL_MATCH_FROM_COL_ID 0x1
#define COL_MATCH_FROM_SLOT_ID 0x2
typedef struct SColMatchInfo {
int32_t srcSlotId; // source slot id
int32_t colId;
......@@ -299,8 +238,8 @@ typedef struct SColMatchInfo {
} SColMatchInfo;
typedef struct SScanInfo {
int32_t numOfAsc;
int32_t numOfDesc;
int32_t numOfAsc;
int32_t numOfDesc;
} SScanInfo;
typedef struct SSampleExecInfo {
......@@ -320,17 +259,13 @@ typedef struct STableScanInfo {
SNode* pFilterNode; // filter info, which is push down by optimizer
SqlFunctionCtx* pCtx; // which belongs to the direct upstream operator operator query context
SResultRowInfo* pResultRowInfo;
int32_t* rowCellInfoOffset;
int32_t* rowEntryInfoOffset;
SExprInfo* pExpr;
SSDataBlock* pResBlock;
SArray* pColMatchInfo;
int32_t numOfOutput;
SExprInfo* pPseudoExpr;
int32_t numOfPseudoExpr;
SqlFunctionCtx* pPseudoCtx;
// int32_t* rowCellInfoOffset;
SExprSupp pseudoSup;
SQueryTableDataCond cond;
int32_t scanFlag; // table scan flag to denote if it is a repeat/reverse/main scan
int32_t dataBlockLoadFlag;
......@@ -436,14 +371,12 @@ typedef struct SBlockDistInfo {
void* pHandle;
} SBlockDistInfo;
// todo remove this
typedef struct SOptrBasicInfo {
SResultRowInfo resultRowInfo;
int32_t* rowCellInfoOffset; // offset value for each row result cell info
SqlFunctionCtx* pCtx;
SSDataBlock* pRes;
} SOptrBasicInfo;
// TODO move the resultrowsiz together with SOptrBasicInfo:rowCellInfoOffset
typedef struct SAggSupporter {
SHashObj* pResultRowHashTable; // quick locate the window object for each result
char* keyBuf; // window key buffer
......@@ -500,10 +433,7 @@ typedef struct SAggOperatorInfo {
STableQueryInfo *current;
uint64_t groupId;
SGroupResInfo groupResInfo;
SExprInfo *pScalarExprInfo;
int32_t numOfScalarExpr; // the number of scalar expression before the aggregate function can be applied
SqlFunctionCtx *pScalarCtx; // scalar function requried sql function struct.
int32_t *rowCellInfoOffset; // offset value for each row result cell info
SExprSupp scalarExprSup;
} SAggOperatorInfo;
typedef struct SProjectOperatorInfo {
......@@ -528,11 +458,7 @@ typedef struct SIndefOperatorInfo {
SOptrBasicInfo binfo;
SAggSupporter aggSup;
SArray* pPseudoColInfo;
SExprInfo* pScalarExpr;
int32_t numOfScalarExpr;
SqlFunctionCtx* pScalarCtx;
int32_t* rowCellInfoOffset;
SExprSupp scalarSup;
} SIndefOperatorInfo;
typedef struct SFillOperatorInfo {
......@@ -544,13 +470,6 @@ typedef struct SFillOperatorInfo {
bool multigroupResult;
} SFillOperatorInfo;
typedef struct SScalarSupp {
SExprInfo* pScalarExprInfo;
int32_t numOfScalarExpr; // the number of scalar expression in group operator
SqlFunctionCtx* pScalarFuncCtx;
int32_t* rowCellInfoOffset; // offset value for each row result cell info
} SScalarSupp;
typedef struct SGroupbyOperatorInfo {
// SOptrBasicInfo should be first, SAggSupporter should be second for stream encode
SOptrBasicInfo binfo;
......@@ -563,7 +482,7 @@ typedef struct SGroupbyOperatorInfo {
char* keyBuf; // group by keys for hash
int32_t groupKeyLen; // total group by column width
SGroupResInfo groupResInfo;
SScalarSupp scalarSup;
SExprSupp scalarSup;
} SGroupbyOperatorInfo;
typedef struct SDataGroupInfo {
......@@ -587,7 +506,7 @@ typedef struct SPartitionOperatorInfo {
void* pGroupIter; // group iterator
int32_t pageIndex; // page index of current group
SSDataBlock* pUpdateRes;
SScalarSupp scalarSupp;
SExprSupp scalarSup;
} SPartitionOperatorInfo;
typedef struct SWindowRowsSup {
......@@ -740,15 +659,19 @@ SOperatorFpSet createOperatorFpSet(__optr_open_fn_t openFn, __optr_fn_t nextFn,
int32_t operatorDummyOpenFn(SOperatorInfo* pOperator);
void operatorDummyCloseFn(void* param, int32_t numOfCols);
int32_t appendDownstream(SOperatorInfo* p, SOperatorInfo** pDownstream, int32_t num);
int32_t initAggInfo(SOptrBasicInfo* pBasicInfo, SAggSupporter* pAggSup, SExprInfo* pExprInfo, int32_t numOfCols,
SSDataBlock* pResultBlock, size_t keyBufSize, const char* pkey);
void initBasicInfo(SOptrBasicInfo* pInfo, SSDataBlock* pBlock);
void cleanupBasicInfo(SOptrBasicInfo* pInfo);
void initExprSupp(SExprSupp* pSup, SExprInfo* pExprInfo, int32_t numOfExpr);
void cleanupExprSup(SExprSupp* pSup);
int32_t initAggInfo(SExprSupp *pSup, SAggSupporter* pAggSup, SExprInfo* pExprInfo, int32_t numOfCols, size_t keyBufSize,
const char* pkey);
void initResultSizeInfo(SOperatorInfo* pOperator, int32_t numOfRows);
void doBuildResultDatablock(SOperatorInfo* pOperator, SOptrBasicInfo* pbInfo, SGroupResInfo* pGroupResInfo, SDiskbasedBuf* pBuf);
void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, STimeWindow* pWin, SColumnInfoData* pTimeWindowData, int32_t offset,
int32_t forwardStep, TSKEY* tsCol, int32_t numOfTotal, int32_t numOfOutput, int32_t order);
void doDestroyBasicInfo(SOptrBasicInfo* pInfo, int32_t numOfOutput);
int32_t extractDataBlockFromFetchRsp(SSDataBlock* pRes, SLoadRemoteDataInfo* pLoadInfo, int32_t numOfRows, char* pData,
int32_t compLen, int32_t numOfOutput, int64_t startTs, uint64_t* total,
SArray* pColList);
......@@ -764,9 +687,11 @@ void destroyBasicOperatorInfo(void* param, int32_t numOfOutput);
void appendOneRowToDataBlock(SSDataBlock* pBlock, STupleHandle* pTupleHandle);
void setTbNameColData(void* pMeta, const SSDataBlock* pBlock, SColumnInfoData* pColInfoData, int32_t functionId);
void cleanupExecSupp(SExprSupp* pSupp);
SSDataBlock* loadNextDataBlock(void* param);
void setResultRowInitCtx(SResultRow* pResult, SqlFunctionCtx* pCtx, int32_t numOfOutput, int32_t* rowCellInfoOffset);
void setResultRowInitCtx(SResultRow* pResult, SqlFunctionCtx* pCtx, int32_t numOfOutput, int32_t* rowEntryInfoOffset);
SResultRow* doSetResultOutBufByKey(SDiskbasedBuf* pResultBuf, SResultRowInfo* pResultRowInfo,
char* pData, int16_t bytes, bool masterscan, uint64_t groupId,
......@@ -885,12 +810,13 @@ STimeWindow getActiveTimeWindow(SDiskbasedBuf* pBuf, SResultRowInfo* pResultRowI
int32_t getNumOfRowsInTimeWindow(SDataBlockInfo* pDataBlockInfo, TSKEY* pPrimaryColumn, int32_t startPos, TSKEY ekey,
__block_search_fn_t searchFn, STableQueryInfo* item, int32_t order);
int32_t binarySearchForKey(char* pValue, int num, TSKEY key, int order);
int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, const char* pKey,
SqlFunctionCtx* pCtx, int32_t numOfOutput, int32_t size);
int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, const char* pKey, SqlFunctionCtx* pCtx, int32_t numOfOutput,
int32_t size);
SResultRow* getNewResultRow(SDiskbasedBuf* pResultBuf, int64_t tableGroupId, int32_t interBufSize);
SResultWindowInfo* getSessionTimeWindow(SStreamAggSupporter* pAggSup, TSKEY ts, uint64_t groupId, int64_t gap, int32_t* pIndex);
int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pTs, int32_t rows,
int32_t start, int64_t gap, SHashObj* pStDeleted);
int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pTs, int32_t rows, int32_t start, int64_t gap,
SHashObj* pStDeleted);
bool functionNeedToExecute(SqlFunctionCtx* pCtx);
int32_t compareTimeWindow(const void* p1, const void* p2, const void* param);
......
......@@ -42,21 +42,6 @@ void cleanupResultRowInfo(SResultRowInfo *pResultRowInfo) {
}
}
void resetResultRowInfo(STaskRuntimeEnv *pRuntimeEnv, SResultRowInfo *pResultRowInfo) {
for (int32_t i = 0; i < pResultRowInfo->size; ++i) {
// SResultRow *pWindowRes = pResultRowInfo->pResult[i];
// clearResultRow(pRuntimeEnv, pWindowRes);
int32_t groupIndex = 0;
int64_t uid = 0;
SET_RES_WINDOW_KEY(pRuntimeEnv->keyBuf, &groupIndex, sizeof(groupIndex), uid);
taosHashRemove(pRuntimeEnv->pResultRowHashTable, (const char *)pRuntimeEnv->keyBuf, GET_RES_WINDOW_KEY_LEN(sizeof(groupIndex)));
}
pResultRowInfo->size = 0;
}
void closeAllResultRows(SResultRowInfo *pResultRowInfo) {
// do nothing
}
......@@ -518,14 +503,14 @@ static int32_t setSelectValueColumnInfo(SqlFunctionCtx* pCtx, int32_t numOfOutpu
return TSDB_CODE_SUCCESS;
}
SqlFunctionCtx* createSqlFunctionCtx(SExprInfo* pExprInfo, int32_t numOfOutput, int32_t** rowCellInfoOffset) {
SqlFunctionCtx* createSqlFunctionCtx(SExprInfo* pExprInfo, int32_t numOfOutput, int32_t** rowEntryInfoOffset) {
SqlFunctionCtx* pFuncCtx = (SqlFunctionCtx*)taosMemoryCalloc(numOfOutput, sizeof(SqlFunctionCtx));
if (pFuncCtx == NULL) {
return NULL;
}
*rowCellInfoOffset = taosMemoryCalloc(numOfOutput, sizeof(int32_t));
if (*rowCellInfoOffset == 0) {
*rowEntryInfoOffset = taosMemoryCalloc(numOfOutput, sizeof(int32_t));
if (*rowEntryInfoOffset == 0) {
taosMemoryFreeClear(pFuncCtx);
return NULL;
}
......@@ -584,8 +569,8 @@ SqlFunctionCtx* createSqlFunctionCtx(SExprInfo* pExprInfo, int32_t numOfOutput,
}
for (int32_t i = 1; i < numOfOutput; ++i) {
(*rowCellInfoOffset)[i] =
(int32_t)((*rowCellInfoOffset)[i - 1] + sizeof(SResultRowEntryInfo) + pFuncCtx[i - 1].resDataInfo.interBufSize);
(*rowEntryInfoOffset)[i] =
(int32_t)((*rowEntryInfoOffset)[i - 1] + sizeof(SResultRowEntryInfo) + pFuncCtx[i - 1].resDataInfo.interBufSize);
}
setSelectValueColumnInfo(pFuncCtx, numOfOutput);
......
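For reference, the offset rule applied in the loop above — entry i starts after entry i-1's fixed header plus entry i-1's intermediate buffer — can be reproduced with a minimal standalone sketch. The sizes below are made-up placeholders for sizeof(SResultRowEntryInfo) and resDataInfo.interBufSize, not actual TDengine values.

```c
/* Minimal standalone sketch of the prefix-sum offset rule used above. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
  enum { NUM_OUTPUT = 4 };
  const int entryHeaderSize = 16;                      /* placeholder header size          */
  const int interBufSize[NUM_OUTPUT] = {8, 32, 8, 16}; /* placeholder intermediate buffers */

  int* offset = calloc(NUM_OUTPUT, sizeof(int));
  if (offset == NULL) return 1;

  for (int i = 1; i < NUM_OUTPUT; ++i) {
    /* same rule as the loop in the diff: previous offset + header + previous buffer */
    offset[i] = offset[i - 1] + entryHeaderSize + interBufSize[i - 1];
  }

  for (int i = 0; i < NUM_OUTPUT; ++i) {
    printf("expr %d -> result entry offset %d\n", i, offset[i]);
  }

  free(offset);
  return 0;
}
```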
......@@ -28,15 +28,16 @@
static void* getCurrentDataGroupInfo(const SPartitionOperatorInfo* pInfo, SDataGroupInfo** pGroupInfo, int32_t len);
static int32_t* setupColumnOffset(const SSDataBlock* pBlock, int32_t rowCapacity);
static int32_t setGroupResultOutputBuf(SOptrBasicInfo* binfo, int32_t numOfCols, char* pData, int16_t type, int16_t bytes,
int32_t groupId, SDiskbasedBuf* pBuf, SExecTaskInfo* pTaskInfo, SAggSupporter* pAggSup);
static int32_t setGroupResultOutputBuf(SOperatorInfo* pOperator, SOptrBasicInfo* binfo, int32_t numOfCols, char* pData, int16_t bytes,
int32_t groupId, SDiskbasedBuf* pBuf, SAggSupporter* pAggSup);
static void destroyGroupOperatorInfo(void* param, int32_t numOfOutput) {
SGroupbyOperatorInfo* pInfo = (SGroupbyOperatorInfo*)param;
doDestroyBasicInfo(&pInfo->binfo, numOfOutput);
cleanupBasicInfo(&pInfo->binfo);
taosMemoryFreeClear(pInfo->keyBuf);
taosArrayDestroy(pInfo->pGroupCols);
taosArrayDestroy(pInfo->pGroupColVals);
cleanupExecSupp(&pInfo->scalarSup);
}
static int32_t initGroupOptrInfo(SArray** pGroupColVals, int32_t* keyLen, char** keyBuf, const SArray* pGroupColList) {
......@@ -216,7 +217,7 @@ static void doHashGroupbyAgg(SOperatorInfo* pOperator, SSDataBlock* pBlock) {
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SGroupbyOperatorInfo* pInfo = pOperator->info;
SqlFunctionCtx* pCtx = pInfo->binfo.pCtx;
SqlFunctionCtx* pCtx = pOperator->exprSupp.pCtx;
int32_t numOfGroupCols = taosArrayGetSize(pInfo->pGroupCols);
// if (type == TSDB_DATA_TYPE_FLOAT || type == TSDB_DATA_TYPE_DOUBLE) {
// qError("QInfo:0x%"PRIx64" group by not supported on double/float columns, abort", GET_TASKID(pRuntimeEnv));
......@@ -250,16 +251,16 @@ static void doHashGroupbyAgg(SOperatorInfo* pOperator, SSDataBlock* pBlock) {
}
len = buildGroupKeys(pInfo->keyBuf, pInfo->pGroupColVals);
int32_t ret = setGroupResultOutputBuf(&(pInfo->binfo), pOperator->numOfExprs, pInfo->keyBuf, TSDB_DATA_TYPE_VARCHAR, len, 0, pInfo->aggSup.pResultBuf, pTaskInfo, &pInfo->aggSup);
int32_t ret = setGroupResultOutputBuf(pOperator, &(pInfo->binfo), pOperator->exprSupp.numOfExprs, pInfo->keyBuf, len, 0, pInfo->aggSup.pResultBuf, &pInfo->aggSup);
if (ret != TSDB_CODE_SUCCESS) { // null data, too many state code
longjmp(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR);
}
int32_t rowIndex = j - num;
doApplyFunctions(pTaskInfo, pCtx, &w, NULL, rowIndex, num, NULL, pBlock->info.rows, pOperator->numOfExprs, TSDB_ORDER_ASC);
doApplyFunctions(pTaskInfo, pCtx, &w, NULL, rowIndex, num, NULL, pBlock->info.rows, pOperator->exprSupp.numOfExprs, TSDB_ORDER_ASC);
// assign the group keys or user input constant values if required
doAssignGroupKeys(pCtx, pOperator->numOfExprs, pBlock->info.rows, rowIndex);
doAssignGroupKeys(pCtx, pOperator->exprSupp.numOfExprs, pBlock->info.rows, rowIndex);
recordNewGroupKeys(pInfo->pGroupCols, pInfo->pGroupColVals, pBlock, j);
num = 1;
}
......@@ -267,15 +268,15 @@ static void doHashGroupbyAgg(SOperatorInfo* pOperator, SSDataBlock* pBlock) {
if (num > 0) {
len = buildGroupKeys(pInfo->keyBuf, pInfo->pGroupColVals);
int32_t ret =
setGroupResultOutputBuf(&(pInfo->binfo), pOperator->numOfExprs, pInfo->keyBuf, TSDB_DATA_TYPE_VARCHAR, len,
0, pInfo->aggSup.pResultBuf, pTaskInfo, &pInfo->aggSup);
setGroupResultOutputBuf(pOperator, &(pInfo->binfo), pOperator->exprSupp.numOfExprs, pInfo->keyBuf, len,
0, pInfo->aggSup.pResultBuf, &pInfo->aggSup);
if (ret != TSDB_CODE_SUCCESS) {
longjmp(pTaskInfo->env, TSDB_CODE_QRY_APP_ERROR);
}
int32_t rowIndex = pBlock->info.rows - num;
doApplyFunctions(pTaskInfo, pCtx, &w, NULL, rowIndex, num, NULL, pBlock->info.rows, pOperator->numOfExprs, TSDB_ORDER_ASC);
doAssignGroupKeys(pCtx, pOperator->numOfExprs, pBlock->info.rows, rowIndex);
doApplyFunctions(pTaskInfo, pCtx, &w, NULL, rowIndex, num, NULL, pBlock->info.rows, pOperator->exprSupp.numOfExprs, TSDB_ORDER_ASC);
doAssignGroupKeys(pCtx, pOperator->exprSupp.numOfExprs, pBlock->info.rows, rowIndex);
}
}
......@@ -319,11 +320,11 @@ static SSDataBlock* hashGroupbyAggregate(SOperatorInfo* pOperator) {
}
// the pDataBlock are always the same one, no need to call this again
setInputDataBlock(pOperator, pInfo->binfo.pCtx, pBlock, order, scanFlag, true);
setInputDataBlock(pOperator, pOperator->exprSupp.pCtx, pBlock, order, scanFlag, true);
// there is a scalar expression that needs to be calculated right before applying the group aggregation.
if (pInfo->scalarSup.pScalarExprInfo != NULL) {
pTaskInfo->code = projectApplyFunctions(pInfo->scalarSup.pScalarExprInfo, pBlock, pBlock, pInfo->scalarSup.pScalarFuncCtx, pInfo->scalarSup.numOfScalarExpr, NULL);
if (pInfo->scalarSup.pExprInfo != NULL) {
pTaskInfo->code = projectApplyFunctions(pInfo->scalarSup.pExprInfo, pBlock, pBlock, pInfo->scalarSup.pCtx, pInfo->scalarSup.numOfExprs, NULL);
if (pTaskInfo->code != TSDB_CODE_SUCCESS) {
longjmp(pTaskInfo->env, pTaskInfo->code);
}
......@@ -386,9 +387,9 @@ SOperatorInfo* createGroupOperatorInfo(SOperatorInfo* downstream, SExprInfo* pEx
pInfo->pGroupCols = pGroupColList;
pInfo->pCondition = pCondition;
pInfo->scalarSup.pScalarExprInfo = pScalarExprInfo;
pInfo->scalarSup.numOfScalarExpr = numOfScalarExpr;
pInfo->scalarSup.pScalarFuncCtx = createSqlFunctionCtx(pScalarExprInfo, numOfScalarExpr, &pInfo->scalarSup.rowCellInfoOffset);
pInfo->scalarSup.pExprInfo = pScalarExprInfo;
pInfo->scalarSup.numOfExprs = numOfScalarExpr;
pInfo->scalarSup.pCtx = createSqlFunctionCtx(pScalarExprInfo, numOfScalarExpr, &pInfo->scalarSup.rowEntryInfoOffset);
int32_t code = initGroupOptrInfo(&pInfo->pGroupColVals, &pInfo->groupKeyLen, &pInfo->keyBuf, pGroupColList);
if (code != TSDB_CODE_SUCCESS) {
......@@ -396,15 +397,16 @@ SOperatorInfo* createGroupOperatorInfo(SOperatorInfo* downstream, SExprInfo* pEx
}
initResultSizeInfo(pOperator, 4096);
initAggInfo(&pInfo->binfo, &pInfo->aggSup, pExprInfo, numOfCols, pResultBlock, pInfo->groupKeyLen, pTaskInfo->id.str);
initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, pInfo->groupKeyLen, pTaskInfo->id.str);
initBasicInfo(&pInfo->binfo, pResultBlock);
initResultRowInfo(&pInfo->binfo.resultRowInfo);
pOperator->name = "GroupbyAggOperator";
pOperator->blocking = true;
pOperator->status = OP_NOT_OPENED;
// pOperator->operatorType = OP_Groupby;
pOperator->pExpr = pExprInfo;
pOperator->numOfExprs = numOfCols;
pOperator->exprSupp.pExprInfo = pExprInfo;
pOperator->exprSupp.numOfExprs = numOfCols;
pOperator->info = pInfo;
pOperator->pTaskInfo = pTaskInfo;
......@@ -441,9 +443,9 @@ static void doHashPartition(SOperatorInfo* pOperator, SSDataBlock* pBlock) {
// group id
size_t numOfCols = pOperator->numOfExprs;
size_t numOfCols = pOperator->exprSupp.numOfExprs;
for(int32_t i = 0; i < numOfCols; ++i) {
SExprInfo* pExpr = &pOperator->pExpr[i];
SExprInfo* pExpr = &pOperator->exprSupp.pExprInfo[i];
int32_t slotId = pExpr->base.pParam[0].pCol->slotId;
SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, slotId);
......@@ -645,8 +647,8 @@ static SSDataBlock* hashPartition(SOperatorInfo* pOperator) {
}
// there is a scalar expression that needs to be calculated right before applying the group aggregation.
if (pInfo->scalarSupp.pScalarExprInfo != NULL) {
pTaskInfo->code = projectApplyFunctions(pInfo->scalarSupp.pScalarExprInfo, pBlock, pBlock, pInfo->scalarSupp.pScalarFuncCtx, pInfo->scalarSupp.numOfScalarExpr, NULL);
if (pInfo->scalarSup.pExprInfo != NULL) {
pTaskInfo->code = projectApplyFunctions(pInfo->scalarSup.pExprInfo, pBlock, pBlock, pInfo->scalarSup.pCtx, pInfo->scalarSup.numOfExprs, NULL);
if (pTaskInfo->code != TSDB_CODE_SUCCESS) {
longjmp(pTaskInfo->env, pTaskInfo->code);
}
......@@ -664,16 +666,20 @@ static SSDataBlock* hashPartition(SOperatorInfo* pOperator) {
static void destroyPartitionOperatorInfo(void* param, int32_t numOfOutput) {
SPartitionOperatorInfo* pInfo = (SPartitionOperatorInfo*)param;
doDestroyBasicInfo(&pInfo->binfo, numOfOutput);
cleanupBasicInfo(&pInfo->binfo);
taosArrayDestroy(pInfo->pGroupCols);
for(int i = 0; i < taosArrayGetSize(pInfo->pGroupColVals); i++){
SGroupKeys key = *(SGroupKeys*)taosArrayGet(pInfo->pGroupColVals, i);
taosMemoryFree(key.pData);
}
taosArrayDestroy(pInfo->pGroupColVals);
taosMemoryFree(pInfo->keyBuf);
taosHashCleanup(pInfo->pGroupSet);
taosMemoryFree(pInfo->columnOffset);
cleanupExecSupp(&pInfo->scalarSup);
}
SOperatorInfo* createPartitionOperatorInfo(SOperatorInfo* downstream, SPartitionPhysiNode* pPartNode, SExecTaskInfo* pTaskInfo) {
......@@ -691,10 +697,10 @@ SOperatorInfo* createPartitionOperatorInfo(SOperatorInfo* downstream, SPartition
pInfo->pGroupCols = extractPartitionColInfo(pPartNode->pPartitionKeys);
if (pPartNode->pExprs != NULL) {
pInfo->scalarSupp.numOfScalarExpr = 0;
pInfo->scalarSupp.pScalarExprInfo = createExprInfo(pPartNode->pExprs, NULL, &pInfo->scalarSupp.numOfScalarExpr);
pInfo->scalarSupp.pScalarFuncCtx = createSqlFunctionCtx(
pInfo->scalarSupp.pScalarExprInfo, pInfo->scalarSupp.numOfScalarExpr, &pInfo->scalarSupp.rowCellInfoOffset);
pInfo->scalarSup.numOfExprs = 0;
pInfo->scalarSup.pExprInfo = createExprInfo(pPartNode->pExprs, NULL, &pInfo->scalarSup.numOfExprs);
pInfo->scalarSup.pCtx = createSqlFunctionCtx(
pInfo->scalarSup.pExprInfo, pInfo->scalarSup.numOfExprs, &pInfo->scalarSup.rowEntryInfoOffset);
}
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
......@@ -724,8 +730,8 @@ SOperatorInfo* createPartitionOperatorInfo(SOperatorInfo* downstream, SPartition
pOperator->status = OP_NOT_OPENED;
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_PARTITION;
pInfo->binfo.pRes = pResBlock;
pOperator->numOfExprs = numOfCols;
pOperator->pExpr = pExprInfo;
pOperator->exprSupp.numOfExprs = numOfCols;
pOperator->exprSupp.pExprInfo = pExprInfo;
pOperator->info = pInfo;
pOperator->pTaskInfo = pTaskInfo;
......@@ -742,16 +748,16 @@ SOperatorInfo* createPartitionOperatorInfo(SOperatorInfo* downstream, SPartition
return NULL;
}
int32_t setGroupResultOutputBuf(SOptrBasicInfo* binfo, int32_t numOfCols, char* pData, int16_t type, int16_t bytes,
int32_t groupId, SDiskbasedBuf* pBuf, SExecTaskInfo* pTaskInfo,
SAggSupporter* pAggSup) {
int32_t setGroupResultOutputBuf(SOperatorInfo* pOperator, SOptrBasicInfo* binfo, int32_t numOfCols, char* pData, int16_t bytes,
int32_t groupId, SDiskbasedBuf* pBuf, SAggSupporter* pAggSup) {
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SResultRowInfo* pResultRowInfo = &binfo->resultRowInfo;
SqlFunctionCtx* pCtx = binfo->pCtx;
SqlFunctionCtx* pCtx = pOperator->exprSupp.pCtx;
SResultRow* pResultRow =
doSetResultOutBufByKey(pBuf, pResultRowInfo, (char*)pData, bytes, true, groupId, pTaskInfo, false, pAggSup);
assert(pResultRow != NULL);
setResultRowInitCtx(pResultRow, pCtx, numOfCols, binfo->rowCellInfoOffset);
setResultRowInitCtx(pResultRow, pCtx, numOfCols, pOperator->exprSupp.rowEntryInfoOffset);
return TSDB_CODE_SUCCESS;
}
\ No newline at end of file
......@@ -48,8 +48,8 @@ SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_MERGE_JOIN;
pOperator->blocking = false;
pOperator->status = OP_NOT_OPENED;
pOperator->pExpr = pExprInfo;
pOperator->numOfExprs = numOfCols;
pOperator->exprSupp.pExprInfo = pExprInfo;
pOperator->exprSupp.numOfExprs = numOfCols;
pOperator->info = pInfo;
pOperator->pTaskInfo = pTaskInfo;
......@@ -132,10 +132,10 @@ SSDataBlock* doMergeJoin(struct SOperatorInfo* pOperator) {
// only the timestamp match support for ordinary table
ASSERT(pLeftCol->info.type == TSDB_DATA_TYPE_TIMESTAMP);
if (*(int64_t*)pLeftVal == *(int64_t*)pRightVal) {
for (int32_t i = 0; i < pOperator->numOfExprs; ++i) {
for (int32_t i = 0; i < pOperator->exprSupp.numOfExprs; ++i) {
SColumnInfoData* pDst = taosArrayGet(pRes->pDataBlock, i);
SExprInfo* pExprInfo = &pOperator->pExpr[i];
SExprInfo* pExprInfo = &pOperator->exprSupp.pExprInfo[i];
int32_t blockId = pExprInfo->base.pParam[0].pCol->dataBlockId;
int32_t slotId = pExprInfo->base.pParam[0].pCol->slotId;
......
......@@ -261,9 +261,9 @@ static int32_t loadDataBlock(SOperatorInfo* pOperator, STableScanInfo* pTableSca
relocateColumnData(pBlock, pTableScanInfo->pColMatchInfo, pCols);
// currently only the tbname pseudo column
if (pTableScanInfo->numOfPseudoExpr > 0) {
addTagPseudoColumnData(&pTableScanInfo->readHandle, pTableScanInfo->pPseudoExpr, pTableScanInfo->numOfPseudoExpr,
pBlock);
if (pTableScanInfo->pseudoSup.numOfExprs > 0) {
SExprSupp* pSup = &pTableScanInfo->pseudoSup;
addTagPseudoColumnData(&pTableScanInfo->readHandle, pSup->pExprInfo, pSup->numOfExprs, pBlock);
}
int64_t st = taosGetTimestampMs();
......@@ -538,8 +538,9 @@ SOperatorInfo* createTableScanOperatorInfo(STableScanPhysiNode* pTableScanNode,
}
if (pTableScanNode->scan.pScanPseudoCols != NULL) {
pInfo->pPseudoExpr = createExprInfo(pTableScanNode->scan.pScanPseudoCols, NULL, &pInfo->numOfPseudoExpr);
pInfo->pPseudoCtx = createSqlFunctionCtx(pInfo->pPseudoExpr, pInfo->numOfPseudoExpr, &pInfo->rowCellInfoOffset);
SExprSupp* pSup = &pInfo->pseudoSup;
pSup->pExprInfo = createExprInfo(pTableScanNode->scan.pScanPseudoCols, NULL, &pSup->numOfExprs);
pSup->pCtx = createSqlFunctionCtx(pSup->pExprInfo, pSup->numOfExprs, &pSup->rowEntryInfoOffset);
}
pInfo->scanInfo = (SScanInfo){.numOfAsc = pTableScanNode->scanSeq[0], .numOfDesc = pTableScanNode->scanSeq[1]};
......@@ -563,7 +564,7 @@ SOperatorInfo* createTableScanOperatorInfo(STableScanPhysiNode* pTableScanNode,
pOperator->blocking = false;
pOperator->status = OP_NOT_OPENED;
pOperator->info = pInfo;
pOperator->numOfExprs = numOfCols;
pOperator->exprSupp.numOfExprs = numOfCols;
pOperator->pTaskInfo = pTaskInfo;
pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doTableScan, NULL, NULL, destroyTableScanOperatorInfo,
......@@ -1117,7 +1118,7 @@ SOperatorInfo* createStreamScanOperatorInfo(void* pDataReader, SReadHandle* pHan
pOperator->blocking = false;
pOperator->status = OP_NOT_OPENED;
pOperator->info = pInfo;
pOperator->numOfExprs = pInfo->pRes->info.numOfCols;
pOperator->exprSupp.numOfExprs = pInfo->pRes->info.numOfCols;
pOperator->pTaskInfo = pTaskInfo;
pOperator->fpSet =
......@@ -1512,7 +1513,7 @@ static SSDataBlock* doSysTableScan(SOperatorInfo* pOperator) {
}
extractDataBlockFromFetchRsp(pInfo->pRes, &pInfo->loadInfo, pRsp->numOfRows, pRsp->data, pRsp->compLen,
pOperator->numOfExprs, startTs, NULL, pInfo->scanCols);
pOperator->exprSupp.numOfExprs, startTs, NULL, pInfo->scanCols);
// todo log the filter info
doFilterResult(pInfo);
......@@ -1627,7 +1628,7 @@ SOperatorInfo* createSysTableScanOperatorInfo(void* readHandle, SSystemTableScan
pOperator->blocking = false;
pOperator->status = OP_NOT_OPENED;
pOperator->info = pInfo;
pOperator->numOfExprs = pResBlock->info.numOfCols;
pOperator->exprSupp.numOfExprs = pResBlock->info.numOfCols;
pOperator->pTaskInfo = pTaskInfo;
pOperator->fpSet =
......@@ -1658,11 +1659,11 @@ static SSDataBlock* doTagScan(SOperatorInfo* pOperator) {
int32_t count = 0;
SArray* pa = GET_TABLEGROUP(pRuntimeEnv, 0);
int32_t functionId = getExprFunctionId(&pOperator->pExpr[0]);
int32_t functionId = getExprFunctionId(&pOperator->exprSupp.pExprInfo[0]);
if (functionId == FUNCTION_TID_TAG) { // return the tags & table Id
assert(pQueryAttr->numOfOutput == 1);
SExprInfo* pExprInfo = &pOperator->pExpr[0];
SExprInfo* pExprInfo = &pOperator->exprSupp.pExprInfo[0];
int32_t rsize = pExprInfo->base.resSchema.bytes;
count = 0;
......@@ -1725,7 +1726,7 @@ static SSDataBlock* doTagScan(SOperatorInfo* pOperator) {
#endif
STagScanInfo* pInfo = pOperator->info;
SExprInfo* pExprInfo = &pOperator->pExpr[0];
SExprInfo* pExprInfo = &pOperator->exprSupp.pExprInfo[0];
SSDataBlock* pRes = pInfo->pRes;
int32_t size = taosArrayGetSize(pInfo->pTableList->pTableList);
......@@ -1743,7 +1744,7 @@ static SSDataBlock* doTagScan(SOperatorInfo* pOperator) {
STableKeyInfo* item = taosArrayGet(pInfo->pTableList->pTableList, pInfo->curPos);
metaGetTableEntryByUid(&mr, item->uid);
for (int32_t j = 0; j < pOperator->numOfExprs; ++j) {
for (int32_t j = 0; j < pOperator->exprSupp.numOfExprs; ++j) {
SColumnInfoData* pDst = taosArrayGet(pRes->pDataBlock, pExprInfo[j].base.resSchema.slotId);
// refactor later
......@@ -1823,8 +1824,8 @@ SOperatorInfo* createTagScanOperatorInfo(SReadHandle* pReadHandle, STagScanPhysi
pOperator->blocking = false;
pOperator->status = OP_NOT_OPENED;
pOperator->info = pInfo;
pOperator->pExpr = pExprInfo;
pOperator->numOfExprs = numOfExprs;
pOperator->exprSupp.pExprInfo = pExprInfo;
pOperator->exprSupp.numOfExprs = numOfExprs;
pOperator->pTaskInfo = pTaskInfo;
initResultSizeInfo(pOperator, 4096);
......@@ -1869,7 +1870,7 @@ typedef struct STableMergeScanInfo {
SNode* pFilterNode; // filter info, which is push down by optimizer
SqlFunctionCtx* pCtx; // which belongs to the direct upstream operator's query context
SResultRowInfo* pResultRowInfo;
int32_t* rowCellInfoOffset;
int32_t* rowEntryInfoOffset;
SExprInfo* pExpr;
SSDataBlock* pResBlock;
SArray* pColMatchInfo;
......@@ -1878,7 +1879,7 @@ typedef struct STableMergeScanInfo {
SExprInfo* pPseudoExpr;
int32_t numOfPseudoExpr;
SqlFunctionCtx* pPseudoCtx;
// int32_t* rowCellInfoOffset;
// int32_t* rowEntryInfoOffset;
SQueryTableDataCond cond;
int32_t scanFlag; // table scan flag to denote if it is a repeat/reverse/main scan
......@@ -2257,7 +2258,7 @@ SOperatorInfo* createTableMergeScanOperatorInfo(STableScanPhysiNode* pTableScanN
if (pTableScanNode->scan.pScanPseudoCols != NULL) {
pInfo->pPseudoExpr = createExprInfo(pTableScanNode->scan.pScanPseudoCols, NULL, &pInfo->numOfPseudoExpr);
pInfo->pPseudoCtx = createSqlFunctionCtx(pInfo->pPseudoExpr, pInfo->numOfPseudoExpr, &pInfo->rowCellInfoOffset);
pInfo->pPseudoCtx = createSqlFunctionCtx(pInfo->pPseudoExpr, pInfo->numOfPseudoExpr, &pInfo->rowEntryInfoOffset);
}
pInfo->scanInfo = (SScanInfo){.numOfAsc = pTableScanNode->scanSeq[0], .numOfDesc = pTableScanNode->scanSeq[1]};
......@@ -2303,7 +2304,7 @@ SOperatorInfo* createTableMergeScanOperatorInfo(STableScanPhysiNode* pTableScanN
pOperator->blocking = false;
pOperator->status = OP_NOT_OPENED;
pOperator->info = pInfo;
pOperator->numOfExprs = numOfCols;
pOperator->exprSupp.numOfExprs = numOfCols;
pOperator->pTaskInfo = pTaskInfo;
initResultSizeInfo(pOperator, 1024);
......
......@@ -39,7 +39,7 @@ SOperatorInfo* createSortOperatorInfo(SOperatorInfo* downstream, SSortPhysiNode*
SArray* pColMatchColInfo =
extractColMatchInfo(pSortPhyNode->pTargets, pDescNode, &numOfOutputCols, COL_MATCH_FROM_SLOT_ID);
pInfo->binfo.pCtx = createSqlFunctionCtx(pExprInfo, numOfCols, &pInfo->binfo.rowCellInfoOffset);
pOperator->exprSupp.pCtx = createSqlFunctionCtx(pExprInfo, numOfCols, &pOperator->exprSupp.rowEntryInfoOffset);
pInfo->binfo.pRes = pResBlock;
initResultSizeInfo(pOperator, 1024);
......@@ -51,8 +51,8 @@ SOperatorInfo* createSortOperatorInfo(SOperatorInfo* downstream, SSortPhysiNode*
pOperator->blocking = true;
pOperator->status = OP_NOT_OPENED;
pOperator->info = pInfo;
pOperator->pExpr = pExprInfo;
pOperator->numOfExprs = numOfCols;
pOperator->exprSupp.pExprInfo = pExprInfo;
pOperator->exprSupp.numOfExprs = numOfCols;
pOperator->pTaskInfo = pTaskInfo;
// lazy evaluation for the following parameter since the input datablock is not known till now.
......@@ -145,9 +145,9 @@ SSDataBlock* loadNextDataBlock(void* param) {
void applyScalarFunction(SSDataBlock* pBlock, void* param) {
SOperatorInfo* pOperator = param;
SSortOperatorInfo* pSort = pOperator->info;
if (pOperator->pExpr != NULL) {
if (pOperator->exprSupp.pExprInfo != NULL) {
int32_t code =
projectApplyFunctions(pOperator->pExpr, pBlock, pBlock, pSort->binfo.pCtx, pOperator->numOfExprs, NULL);
projectApplyFunctions(pOperator->exprSupp.pExprInfo, pBlock, pBlock, pOperator->exprSupp.pCtx, pOperator->exprSupp.numOfExprs, NULL);
if (code != TSDB_CODE_SUCCESS) {
longjmp(pOperator->pTaskInfo->env, code);
}
......
......@@ -150,6 +150,8 @@ SNode* nodesMakeNode(ENodeType type) {
return makeNode(type, sizeof(SDropTopicStmt));
case QUERY_NODE_DROP_CGROUP_STMT:
return makeNode(type, sizeof(SDropCGroupStmt));
case QUERY_NODE_ALTER_LOCAL_STMT:
return makeNode(type, sizeof(SAlterLocalStmt));
case QUERY_NODE_EXPLAIN_STMT:
return makeNode(type, sizeof(SExplainStmt));
case QUERY_NODE_DESCRIBE_STMT:
......@@ -206,11 +208,13 @@ SNode* nodesMakeNode(ENodeType type) {
case QUERY_NODE_SHOW_APPS_STMT:
case QUERY_NODE_SHOW_SCORES_STMT:
case QUERY_NODE_SHOW_VARIABLE_STMT:
case QUERY_NODE_SHOW_TRANSACTIONS_STMT:
return makeNode(type, sizeof(SShowStmt));
case QUERY_NODE_SHOW_CREATE_DATABASE_STMT:
return makeNode(type, sizeof(SShowCreateDatabaseStmt));
case QUERY_NODE_SHOW_CREATE_TABLE_STMT:
case QUERY_NODE_SHOW_CREATE_STABLE_STMT:
case QUERY_NODE_SHOW_TRANSACTIONS_STMT:
return makeNode(type, sizeof(SShowStmt));
return makeNode(type, sizeof(SShowCreateTableStmt));
case QUERY_NODE_KILL_QUERY_STMT:
return makeNode(type, sizeof(SKillQueryStmt));
case QUERY_NODE_KILL_TRANSACTION_STMT:
......
......@@ -59,8 +59,8 @@ typedef enum EDatabaseOptionType {
typedef enum ETableOptionType {
TABLE_OPTION_COMMENT = 1,
TABLE_OPTION_FILE_FACTOR,
TABLE_OPTION_DELAY,
TABLE_OPTION_MAXDELAY,
TABLE_OPTION_WATERMARK,
TABLE_OPTION_ROLLUP,
TABLE_OPTION_TTL,
TABLE_OPTION_SMA
......@@ -152,7 +152,7 @@ SNode* createAlterTableRenameCol(SAstCreateContext* pCxt, SNode* pRealTable, int
SNode* createAlterTableSetTag(SAstCreateContext* pCxt, SNode* pRealTable, SToken* pTagName, SNode* pVal);
SNode* createUseDatabaseStmt(SAstCreateContext* pCxt, SToken* pDbName);
SNode* createShowStmt(SAstCreateContext* pCxt, ENodeType type, SNode* pDbName, SNode* pTbNamePattern);
SNode* createShowCreateDatabaseStmt(SAstCreateContext* pCxt, const SToken* pDbName);
SNode* createShowCreateDatabaseStmt(SAstCreateContext* pCxt, SToken* pDbName);
SNode* createShowCreateTableStmt(SAstCreateContext* pCxt, ENodeType type, SNode* pRealTable);
SNode* createCreateUserStmt(SAstCreateContext* pCxt, SToken* pUserName, const SToken* pPassword);
SNode* createAlterUserStmt(SAstCreateContext* pCxt, SToken* pUserName, int8_t alterType, const SToken* pVal);
......
......@@ -111,7 +111,7 @@ priv_level(A) ::= db_name(B) NK_DOT NK_STAR.
/************************************************ create/drop/alter dnode *********************************************/
cmd ::= CREATE DNODE dnode_endpoint(A). { pCxt->pRootNode = createCreateDnodeStmt(pCxt, &A, NULL); }
cmd ::= CREATE DNODE dnode_host_name(A) PORT NK_INTEGER(B). { pCxt->pRootNode = createCreateDnodeStmt(pCxt, &A, &B); }
cmd ::= CREATE DNODE dnode_endpoint(A) PORT NK_INTEGER(B). { pCxt->pRootNode = createCreateDnodeStmt(pCxt, &A, &B); }
cmd ::= DROP DNODE NK_INTEGER(A). { pCxt->pRootNode = createDropDnodeStmt(pCxt, &A); }
cmd ::= DROP DNODE dnode_endpoint(A). { pCxt->pRootNode = createDropDnodeStmt(pCxt, &A); }
cmd ::= ALTER DNODE NK_INTEGER(A) NK_STRING(B). { pCxt->pRootNode = createAlterDnodeStmt(pCxt, &A, &B, NULL); }
......@@ -122,11 +122,8 @@ cmd ::= ALTER ALL DNODES NK_STRING(A) NK_STRING(B).
%type dnode_endpoint { SToken }
%destructor dnode_endpoint { }
dnode_endpoint(A) ::= NK_STRING(B). { A = B; }
%type dnode_host_name { SToken }
%destructor dnode_host_name { }
dnode_host_name(A) ::= NK_ID(B). { A = B; }
dnode_host_name(A) ::= NK_IPTOKEN(B). { A = B; }
dnode_endpoint(A) ::= NK_ID(B). { A = B; }
dnode_endpoint(A) ::= NK_IPTOKEN(B). { A = B; }
/************************************************ alter local *********************************************************/
cmd ::= ALTER LOCAL NK_STRING(A). { pCxt->pRootNode = createAlterLocalStmt(pCxt, &A, NULL); }
......@@ -317,8 +314,8 @@ tags_def(A) ::= TAGS NK_LP column_def_list(B) NK_RP.
table_options(A) ::= . { A = createDefaultTableOptions(pCxt); }
table_options(A) ::= table_options(B) COMMENT NK_STRING(C). { A = setTableOption(pCxt, B, TABLE_OPTION_COMMENT, &C); }
//table_options(A) ::= table_options(B) DELAY NK_INTEGER(C). { A = setTableOption(pCxt, B, TABLE_OPTION_DELAY, &C); }
table_options(A) ::= table_options(B) FILE_FACTOR NK_FLOAT(C). { A = setTableOption(pCxt, B, TABLE_OPTION_FILE_FACTOR, &C); }
table_options(A) ::= table_options(B) MAX_DELAY duration_list(C). { A = setTableOption(pCxt, B, TABLE_OPTION_MAXDELAY, C); }
table_options(A) ::= table_options(B) WATERMARK duration_list(C). { A = setTableOption(pCxt, B, TABLE_OPTION_WATERMARK, C); }
table_options(A) ::= table_options(B) ROLLUP NK_LP rollup_func_list(C) NK_RP. { A = setTableOption(pCxt, B, TABLE_OPTION_ROLLUP, C); }
table_options(A) ::= table_options(B) TTL NK_INTEGER(C). { A = setTableOption(pCxt, B, TABLE_OPTION_TTL, &C); }
table_options(A) ::= table_options(B) SMA NK_LP col_name_list(C) NK_RP. { A = setTableOption(pCxt, B, TABLE_OPTION_SMA, C); }
......@@ -331,6 +328,11 @@ alter_table_options(A) ::= alter_table_options(B) alter_table_option(C).
alter_table_option(A) ::= COMMENT NK_STRING(B). { A.type = TABLE_OPTION_COMMENT; A.val = B; }
alter_table_option(A) ::= TTL NK_INTEGER(B). { A.type = TABLE_OPTION_TTL; A.val = B; }
%type duration_list { SNodeList* }
%destructor duration_list { nodesDestroyList($$); }
duration_list(A) ::= duration_literal(B). { A = createNodeList(pCxt, releaseRawExprNode(pCxt, B)); }
duration_list(A) ::= duration_list(B) NK_COMMA duration_literal(C). { A = addNodeToList(pCxt, B, releaseRawExprNode(pCxt, C)); }
%type rollup_func_list { SNodeList* }
%destructor rollup_func_list { nodesDestroyList($$); }
rollup_func_list(A) ::= rollup_func_name(B). { A = createNodeList(pCxt, B); }
......
......@@ -397,6 +397,14 @@ static int32_t collectMetaKeyFromShowVariables(SCollectMetaKeyCxt* pCxt, SShowSt
pCxt->pMetaCache);
}
static int32_t collectMetaKeyFromShowCreateDatabase(SCollectMetaKeyCxt* pCxt, SShowCreateDatabaseStmt* pStmt) {
return reserveDbCfgInCache(pCxt->pParseCxt->acctId, pStmt->dbName, pCxt->pMetaCache);
}
static int32_t collectMetaKeyFromShowCreateTable(SCollectMetaKeyCxt* pCxt, SShowCreateTableStmt* pStmt) {
return reserveTableMetaInCache(pCxt->pParseCxt->acctId, pStmt->dbName, pStmt->tableName, pCxt->pMetaCache);
}
static int32_t collectMetaKeyFromShowApps(SCollectMetaKeyCxt* pCxt, SShowStmt* pStmt) {
return reserveTableMetaInCache(pCxt->pParseCxt->acctId, TSDB_PERFORMANCE_SCHEMA_DB, TSDB_PERFS_TABLE_APPS,
pCxt->pMetaCache);
......@@ -478,6 +486,11 @@ static int32_t collectMetaKeyFromQuery(SCollectMetaKeyCxt* pCxt, SNode* pStmt) {
return collectMetaKeyFromShowQueries(pCxt, (SShowStmt*)pStmt);
case QUERY_NODE_SHOW_VARIABLE_STMT:
return collectMetaKeyFromShowVariables(pCxt, (SShowStmt*)pStmt);
case QUERY_NODE_SHOW_CREATE_DATABASE_STMT:
return collectMetaKeyFromShowCreateDatabase(pCxt, (SShowCreateDatabaseStmt*)pStmt);
case QUERY_NODE_SHOW_CREATE_TABLE_STMT:
case QUERY_NODE_SHOW_CREATE_STABLE_STMT:
return collectMetaKeyFromShowCreateTable(pCxt, (SShowCreateTableStmt*)pStmt);
case QUERY_NODE_SHOW_APPS_STMT:
return collectMetaKeyFromShowApps(pCxt, (SShowStmt*)pStmt);
case QUERY_NODE_SHOW_TRANSACTIONS_STMT:
......
......@@ -81,7 +81,7 @@ static SKeyword keywordTable[] = {
{"DURATION", TK_DURATION},
{"EXISTS", TK_EXISTS},
{"EXPLAIN", TK_EXPLAIN},
{"FILE_FACTOR", TK_FILE_FACTOR},
// {"FILE_FACTOR", TK_FILE_FACTOR},
{"FILL", TK_FILL},
{"FIRST", TK_FIRST},
{"FLOAT", TK_FLOAT},
......
......@@ -1450,7 +1450,8 @@ static int32_t addMnodeToVgroupList(const SEpSet* pEpSet, SArray** pVgroupList)
}
static int32_t setSysTableVgroupList(STranslateContext* pCxt, SName* pName, SRealTableNode* pRealTable) {
if (0 != strcmp(pRealTable->table.tableName, TSDB_INS_TABLE_USER_TABLES)) {
if (0 != strcmp(pRealTable->table.tableName, TSDB_INS_TABLE_USER_TABLES) &&
0 != strcmp(pRealTable->table.tableName, TSDB_INS_TABLE_USER_TABLE_DISTRIBUTED)) {
return TSDB_CODE_SUCCESS;
}
......@@ -1531,7 +1532,8 @@ static bool joinTableIsSingleTable(SJoinTableNode* pJoinTable) {
static bool isSingleTable(SRealTableNode* pRealTable) {
int8_t tableType = pRealTable->pMeta->tableType;
if (TSDB_SYSTEM_TABLE == tableType) {
return 0 != strcmp(pRealTable->table.tableName, TSDB_INS_TABLE_USER_TABLES);
return 0 != strcmp(pRealTable->table.tableName, TSDB_INS_TABLE_USER_TABLES) &&
0 != strcmp(pRealTable->table.tableName, TSDB_INS_TABLE_USER_TABLE_DISTRIBUTED);
}
return (TSDB_CHILD_TABLE == tableType || TSDB_NORMAL_TABLE == tableType);
}
......@@ -2623,7 +2625,7 @@ static int32_t checkDbDaysOption(STranslateContext* pCxt, SDatabaseOptions* pOpt
if (TIME_UNIT_MINUTE != pOptions->pDaysPerFile->unit && TIME_UNIT_HOUR != pOptions->pDaysPerFile->unit &&
TIME_UNIT_DAY != pOptions->pDaysPerFile->unit) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_OPTION_UNIT, "daysPerFile",
pOptions->pDaysPerFile->unit);
pOptions->pDaysPerFile->unit, TIME_UNIT_MINUTE, TIME_UNIT_HOUR, TIME_UNIT_DAY);
}
pOptions->daysPerFile = getBigintFromValueNode(pOptions->pDaysPerFile);
}
......@@ -2925,14 +2927,6 @@ static SColumnDefNode* findColDef(SNodeList* pCols, const SColumnNode* pCol) {
return NULL;
}
static int32_t checTableFactorOption(STranslateContext* pCxt, float val) {
if (val < TSDB_MIN_ROLLUP_FILE_FACTOR || val > TSDB_MAX_ROLLUP_FILE_FACTOR) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_F_RANGE_OPTION, "file_factor", val,
TSDB_MIN_ROLLUP_FILE_FACTOR, TSDB_MAX_ROLLUP_FILE_FACTOR);
}
return TSDB_CODE_SUCCESS;
}
static int32_t checkTableSmaOption(STranslateContext* pCxt, SCreateTableStmt* pStmt) {
if (NULL != pStmt->pOptions->pSma) {
SNode* pNode = NULL;
......@@ -3082,27 +3076,74 @@ static int32_t checkTableSchema(STranslateContext* pCxt, SCreateTableStmt* pStmt
return code;
}
static int32_t checkSchemalessDb(STranslateContext* pCxt, const char* pDbName) {
// if (0 != pCxt->pParseCxt->schemalessType) {
// return TSDB_CODE_SUCCESS;
// }
// SDbCfgInfo info = {0};
// int32_t code = getDBCfg(pCxt, pDbName, &info);
// if (TSDB_CODE_SUCCESS == code) {
// code = info.schemaless ? TSDB_CODE_SML_INVALID_DB_CONF : TSDB_CODE_SUCCESS;
// }
// return code;
return TSDB_CODE_SUCCESS;
static int32_t getTableDelayOrWatermarkOption(STranslateContext* pCxt, const char* pName, int64_t minVal,
int64_t maxVal, SValueNode* pVal, int64_t* pMaxDelay) {
int32_t code = (DEAL_RES_ERROR == translateValue(pCxt, pVal) ? pCxt->errCode : TSDB_CODE_SUCCESS);
if (TSDB_CODE_SUCCESS == code && TIME_UNIT_MILLISECOND != pVal->unit && TIME_UNIT_SECOND != pVal->unit &&
TIME_UNIT_MINUTE != pVal->unit) {
code = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_OPTION_UNIT, pName, pVal->unit,
TIME_UNIT_MILLISECOND, TIME_UNIT_SECOND, TIME_UNIT_MINUTE);
}
if (TSDB_CODE_SUCCESS == code) {
code = checkRangeOption(pCxt, pName, pVal->datum.i, minVal, maxVal);
}
if (TSDB_CODE_SUCCESS == code) {
*pMaxDelay = pVal->datum.i;
}
return code;
}
static int32_t getTableMaxDelayOption(STranslateContext* pCxt, SValueNode* pVal, int64_t* pMaxDelay) {
return getTableDelayOrWatermarkOption(pCxt, "maxDelay", TSDB_MIN_ROLLUP_MAX_DELAY, TSDB_MAX_ROLLUP_MAX_DELAY, pVal,
pMaxDelay);
}
static int32_t checkTableMaxDelayOption(STranslateContext* pCxt, STableOptions* pOptions) {
if (NULL == pOptions->pMaxDelay) {
return TSDB_CODE_SUCCESS;
}
if (LIST_LENGTH(pOptions->pMaxDelay) > 2) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_TABLE_OPTION, "maxdelay");
}
int32_t code =
getTableMaxDelayOption(pCxt, (SValueNode*)nodesListGetNode(pOptions->pMaxDelay, 0), &pOptions->maxDelay1);
if (TSDB_CODE_SUCCESS == code && 2 == LIST_LENGTH(pOptions->pMaxDelay)) {
code = getTableMaxDelayOption(pCxt, (SValueNode*)nodesListGetNode(pOptions->pMaxDelay, 1), &pOptions->maxDelay2);
}
return code;
}
static int32_t getTableWatermarkOption(STranslateContext* pCxt, SValueNode* pVal, int64_t* pMaxDelay) {
return getTableDelayOrWatermarkOption(pCxt, "watermark", TSDB_MIN_ROLLUP_WATERMARK, TSDB_MAX_ROLLUP_WATERMARK, pVal,
pMaxDelay);
}
static int32_t checkTableWatermarkOption(STranslateContext* pCxt, STableOptions* pOptions) {
if (NULL == pOptions->pWatermark) {
return TSDB_CODE_SUCCESS;
}
if (LIST_LENGTH(pOptions->pWatermark) > 2) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_TABLE_OPTION, "watermark");
}
int32_t code =
getTableWatermarkOption(pCxt, (SValueNode*)nodesListGetNode(pOptions->pWatermark, 0), &pOptions->watermark1);
if (TSDB_CODE_SUCCESS == code && 2 == LIST_LENGTH(pOptions->pWatermark)) {
code = getTableWatermarkOption(pCxt, (SValueNode*)nodesListGetNode(pOptions->pWatermark, 1), &pOptions->watermark2);
}
return code;
}
static int32_t checkCreateTable(STranslateContext* pCxt, SCreateTableStmt* pStmt) {
int32_t code = checkSchemalessDb(pCxt, pStmt->dbName);
int32_t code = checkTableMaxDelayOption(pCxt, pStmt->pOptions);
if (TSDB_CODE_SUCCESS == code) {
code = checTableFactorOption(pCxt, pStmt->pOptions->filesFactor);
code = checkTableWatermarkOption(pCxt, pStmt->pOptions);
}
// if (TSDB_CODE_SUCCESS == code) {
// code = checkRangeOption(pCxt, "delay", pStmt->pOptions->delay, TSDB_MIN_ROLLUP_DELAY, TSDB_MAX_ROLLUP_DELAY);
// }
if (TSDB_CODE_SUCCESS == code) {
code = checkTableRollupOption(pCxt, pStmt->pOptions->pRollupFuncs);
}
......@@ -3345,8 +3386,10 @@ static int32_t buildRollupAst(STranslateContext* pCxt, SCreateTableStmt* pStmt,
static int32_t buildCreateStbReq(STranslateContext* pCxt, SCreateTableStmt* pStmt, SMCreateStbReq* pReq) {
pReq->igExists = pStmt->ignoreExists;
// pReq->delay = pStmt->pOptions->delay;
pReq->xFilesFactor = pStmt->pOptions->filesFactor;
pReq->delay1 = pStmt->pOptions->maxDelay1;
pReq->delay2 = pStmt->pOptions->maxDelay2;
pReq->watermark1 = pStmt->pOptions->watermark1;
pReq->watermark2 = pStmt->pOptions->watermark2;
pReq->ttl = pStmt->pOptions->ttl;
columnDefNodeToField(pStmt->pCols, &pReq->pColumns);
columnDefNodeToField(pStmt->pTags, &pReq->pTags);
......@@ -3571,27 +3614,6 @@ static int32_t translateAlterDnode(STranslateContext* pCxt, SAlterDnodeStmt* pSt
return buildCmdMsg(pCxt, TDMT_MND_CONFIG_DNODE, (FSerializeFunc)tSerializeSMCfgDnodeReq, &cfgReq);
}
static int32_t nodeTypeToShowType(ENodeType nt) {
switch (nt) {
case QUERY_NODE_SHOW_CONNECTIONS_STMT:
return TSDB_MGMT_TABLE_CONNS;
case QUERY_NODE_SHOW_LICENCE_STMT:
return TSDB_MGMT_TABLE_GRANTS;
case QUERY_NODE_SHOW_QUERIES_STMT:
return TSDB_MGMT_TABLE_QUERIES;
case QUERY_NODE_SHOW_VARIABLE_STMT:
return TSDB_MGMT_TABLE_CONFIGS;
default:
break;
}
return 0;
}
static int32_t translateShow(STranslateContext* pCxt, SShowStmt* pStmt) {
SShowReq showReq = {.type = nodeTypeToShowType(nodeType(pStmt))};
return buildCmdMsg(pCxt, TDMT_MND_SHOW, (FSerializeFunc)tSerializeSShowReq, &showReq);
}
static int32_t getSmaIndexDstVgId(STranslateContext* pCxt, char* pTableName, int32_t* pVgId) {
SVgroupInfo vg = {0};
int32_t code = getTableHashVgroup(pCxt, pCxt->pParseCxt->db, pTableName, &vg);
......@@ -4137,6 +4159,18 @@ static int32_t translateSplitVgroup(STranslateContext* pCxt, SSplitVgroupStmt* p
return buildCmdMsg(pCxt, TDMT_MND_SPLIT_VGROUP, (FSerializeFunc)tSerializeSSplitVgroupReq, &req);
}
static int32_t translateShowCreateDatabase(STranslateContext* pCxt, SShowCreateDatabaseStmt* pStmt) {
pStmt->pCfg = taosMemoryCalloc(1, sizeof(SDbCfgInfo));
if (NULL == pStmt->pCfg) {
return TSDB_CODE_OUT_OF_MEMORY;
}
return getDBCfg(pCxt, pStmt->dbName, (SDbCfgInfo*)pStmt->pCfg);
}
static int32_t translateShowCreateTable(STranslateContext* pCxt, SShowCreateTableStmt* pStmt) {
return getTableMeta(pCxt, pStmt->dbName, pStmt->tableName, &pStmt->pMeta);
}
static int32_t translateQuery(STranslateContext* pCxt, SNode* pNode) {
int32_t code = TSDB_CODE_SUCCESS;
switch (nodeType(pNode)) {
......@@ -4191,12 +4225,6 @@ static int32_t translateQuery(STranslateContext* pCxt, SNode* pNode) {
case QUERY_NODE_ALTER_DNODE_STMT:
code = translateAlterDnode(pCxt, (SAlterDnodeStmt*)pNode);
break;
case QUERY_NODE_SHOW_CONNECTIONS_STMT:
case QUERY_NODE_SHOW_QUERIES_STMT:
case QUERY_NODE_SHOW_TOPICS_STMT:
case QUERY_NODE_SHOW_VARIABLE_STMT:
code = translateShow(pCxt, (SShowStmt*)pNode);
break;
case QUERY_NODE_CREATE_INDEX_STMT:
code = translateCreateIndex(pCxt, (SCreateIndexStmt*)pNode);
break;
......@@ -4272,6 +4300,13 @@ static int32_t translateQuery(STranslateContext* pCxt, SNode* pNode) {
case QUERY_NODE_SPLIT_VGROUP_STMT:
code = translateSplitVgroup(pCxt, (SSplitVgroupStmt*)pNode);
break;
case QUERY_NODE_SHOW_CREATE_DATABASE_STMT:
code = translateShowCreateDatabase(pCxt, (SShowCreateDatabaseStmt*)pNode);
break;
case QUERY_NODE_SHOW_CREATE_TABLE_STMT:
case QUERY_NODE_SHOW_CREATE_STABLE_STMT:
code = translateShowCreateTable(pCxt, (SShowCreateTableStmt*)pNode);
break;
default:
break;
}
......@@ -4354,6 +4389,42 @@ static int32_t extractDescribeResultSchema(int32_t* numOfCols, SSchema** pSchema
return TSDB_CODE_SUCCESS;
}
static int32_t extractShowCreateDatabaseResultSchema(int32_t* numOfCols, SSchema** pSchema) {
*numOfCols = 2;
*pSchema = taosMemoryCalloc((*numOfCols), sizeof(SSchema));
if (NULL == (*pSchema)) {
return TSDB_CODE_OUT_OF_MEMORY;
}
(*pSchema)[0].type = TSDB_DATA_TYPE_BINARY;
(*pSchema)[0].bytes = TSDB_DB_NAME_LEN;
strcpy((*pSchema)[0].name, "Database");
(*pSchema)[1].type = TSDB_DATA_TYPE_BINARY;
(*pSchema)[1].bytes = TSDB_MAX_BINARY_LEN;
strcpy((*pSchema)[1].name, "Create Database");
return TSDB_CODE_SUCCESS;
}
static int32_t extractShowCreateTableResultSchema(int32_t* numOfCols, SSchema** pSchema) {
*numOfCols = 2;
*pSchema = taosMemoryCalloc((*numOfCols), sizeof(SSchema));
if (NULL == (*pSchema)) {
return TSDB_CODE_OUT_OF_MEMORY;
}
(*pSchema)[0].type = TSDB_DATA_TYPE_BINARY;
(*pSchema)[0].bytes = TSDB_TABLE_NAME_LEN;
strcpy((*pSchema)[0].name, "Table");
(*pSchema)[1].type = TSDB_DATA_TYPE_BINARY;
(*pSchema)[1].bytes = TSDB_MAX_BINARY_LEN;
strcpy((*pSchema)[1].name, "Create Table");
return TSDB_CODE_SUCCESS;
}
int32_t extractResultSchema(const SNode* pRoot, int32_t* numOfCols, SSchema** pSchema) {
if (NULL == pRoot) {
return TSDB_CODE_SUCCESS;
......@@ -4367,6 +4438,11 @@ int32_t extractResultSchema(const SNode* pRoot, int32_t* numOfCols, SSchema** pS
return extractExplainResultSchema(numOfCols, pSchema);
case QUERY_NODE_DESCRIBE_STMT:
return extractDescribeResultSchema(numOfCols, pSchema);
case QUERY_NODE_SHOW_CREATE_DATABASE_STMT:
return extractShowCreateDatabaseResultSchema(numOfCols, pSchema);
case QUERY_NODE_SHOW_CREATE_TABLE_STMT:
case QUERY_NODE_SHOW_CREATE_STABLE_STMT:
return extractShowCreateTableResultSchema(numOfCols, pSchema);
default:
break;
}
......@@ -5014,10 +5090,7 @@ static int32_t rewriteCreateMultiTable(STranslateContext* pCxt, SQuery* pQuery)
SNode* pNode;
FOREACH(pNode, pStmt->pSubTables) {
SCreateSubTableClause* pClause = (SCreateSubTableClause*)pNode;
code = checkSchemalessDb(pCxt, pClause->dbName);
if (TSDB_CODE_SUCCESS == code) {
code = rewriteCreateSubTable(pCxt, pClause, pVgroupHashmap);
}
code = rewriteCreateSubTable(pCxt, pClause, pVgroupHashmap);
if (TSDB_CODE_SUCCESS != code) {
taosHashCleanup(pVgroupHashmap);
return code;
......@@ -5590,10 +5663,14 @@ static int32_t setQuery(STranslateContext* pCxt, SQuery* pQuery) {
pQuery->msgType = toMsgType(((SVnodeModifOpStmt*)pQuery->pRoot)->sqlNodeType);
break;
case QUERY_NODE_DESCRIBE_STMT:
case QUERY_NODE_SHOW_CREATE_DATABASE_STMT:
case QUERY_NODE_SHOW_CREATE_TABLE_STMT:
case QUERY_NODE_SHOW_CREATE_STABLE_STMT:
pQuery->execMode = QUERY_EXEC_MODE_LOCAL;
pQuery->haveResultSet = true;
break;
case QUERY_NODE_RESET_QUERY_CACHE_STMT:
case QUERY_NODE_ALTER_LOCAL_STMT:
pQuery->execMode = QUERY_EXEC_MODE_LOCAL;
break;
default:
......