Commit 03d81cdd authored by Ping Xiao

Merge branch 'develop' into xiaoping/add_test_case

......@@ -38,7 +38,8 @@ def pre_test(){
sudo rmtaos || echo "taosd has not installed"
'''
sh '''
killall -9 taosd ||echo "no taosd running"
kill -9 $(pidof taosd) ||echo "no taosd running"
kill -9 $(pidof taosadapter) ||echo "no taosadapter running"
killall -9 gdb || echo "no gdb running"
killall -9 python3.8 || echo "no python program running"
cd ${WKC}
......@@ -104,7 +105,7 @@ def pre_test(){
git clean -dfx
mkdir debug
cd debug
cmake .. > /dev/null
cmake .. -DBUILD_HTTP=false > /dev/null
make > /dev/null
make install > /dev/null
cd ${WKC}/tests
......@@ -178,7 +179,7 @@ def pre_test_noinstall(){
git clean -dfx
mkdir debug
cd debug
cmake .. > /dev/null
cmake .. -DBUILD_HTTP=false > /dev/null
make
'''
return 1
......@@ -455,23 +456,42 @@ pipeline {
nohup taosd >/dev/null &
sleep 10
'''
sh '''
cd ${WKC}/tests/examples/nodejs
npm install td2.0-connector > /dev/null 2>&1
node nodejsChecker.js host=localhost
node test1970.js
cd ${WKC}/tests/connectorTest/nodejsTest/nanosupport
npm install td2.0-connector > /dev/null 2>&1
node nanosecondTest.js
cd ${WKC}/src/connector/python
export PYTHONPATH=$PWD/
export LD_LIBRARY_PATH=${WKC}/debug/build/lib
pip3 install pytest
pytest tests/
python3 examples/bind-multi.py
python3 examples/bind-row.py
python3 examples/demo.py
python3 examples/insert-lines.py
python3 examples/pep-249.py
python3 examples/query-async.py
python3 examples/query-objectively.py
python3 examples/subscribe-sync.py
python3 examples/subscribe-async.py
'''
sh '''
cd ${WKC}/tests/examples/nodejs
npm install td2.0-connector > /dev/null 2>&1
node nodejsChecker.js host=localhost
node test1970.js
cd ${WKC}/tests/connectorTest/nodejsTest/nanosupport
npm install td2.0-connector > /dev/null 2>&1
node nanosecondTest.js
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
sh '''
cd ${WKC}/tests/examples/C#/taosdemo
mcs -out:taosdemo *.cs > /dev/null 2>&1
echo '' |./taosdemo -c /etc/taos
dotnet build -c Release
tree | true
./bin/Release/net5.0/taosdemo -c /etc/taos -y
'''
}
}
sh '''
cd ${WKC}/tests/gotest
bash batchtest.sh
......
......@@ -100,7 +100,7 @@ sudo dnf install -y maven
#### Install build dependencies for taos-tools
To build the [taos-tools](https://github.com/taosdata/taos-tools) on CentOS, the following packages need to be installed.
```bash
sudo yum install libz-devel xz-devel snappy-devel jansson-devel pkgconfig libatomic
sudo yum install zlib-devel xz-devel snappy-devel jansson-devel pkgconfig libatomic
```
Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), cmake prompts that libsnappy is not found, but snappy still works well.
......@@ -142,15 +142,15 @@ mkdir debug && cd debug
cmake .. && cmake --build .
```
Note TDengine 2.3.x.0 and later use a component named 'taosadapter' to play http daemon role by default instead of the http daemon embedded in the early version of TDengine. The taosadapter is programmed by go language. If you pull TDengine source code to the latest from an existing codebase, please execute 'git submodule update --init --recursive' to pull taosadapter source code. Please install go language 1.14 or above for compiling taosadapter. If you meet difficulties regarding 'go mod', especially you are from China, you can use a proxy to solve the problem.
Note that since TDengine 2.3.x.0, a component named 'taosAdapter' plays the http daemon role by default, replacing the http daemon embedded in earlier versions of TDengine. taosAdapter is written in Go. If you update an existing codebase to the latest TDengine source code, please execute 'git submodule update --init --recursive' to pull the taosAdapter source code. Go 1.14 or above is required to compile taosAdapter. If you meet difficulties regarding 'go mod', especially if you are in China, you can use a proxy to solve the problem.
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
Or you can use the following command to choose to embed old httpd too.
The embedded http daemon is still built from the TDengine source code by default. Alternatively, you can use the following command to choose to build taosAdapter instead.
```
cmake .. -DBUILD_HTTP=true
cmake .. -DBUILD_HTTP=false
```
You can use Jemalloc as memory allocator instead of glibc:
......
......@@ -113,7 +113,7 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
- [System Connection and Task Query Management](/administrator#status): show the system connections, queries, streaming calculation and others
- [System Monitor](/administrator#monitoring): monitor TDengine cluster with log database and TDinsight.
- [File Directory Structure](/administrator#directories): directories where TDengine data files and configuration files are located
- [Parameter Limitss and Reserved Keywords](/administrator#keywords): TDengine’s list of parameter limits and reserved keywords
- [Parameter Limits and Reserved Keywords](/administrator#keywords): TDengine’s list of parameter limits and reserved keywords
## Performance: TDengine vs Others
......
Since TDengine was open sourced in July 2019, it has gained a lot of popularity among time-series database developers with its innovative data modeling design, simple installation mehtod, easy programming interface, and powerful data insertion and query performance. The insertion and querying performance is often astonishing to users who are new to TDengine. In order to help users to experience the high performance and functions of TDengine in the shortest time, we developed an application called taosdemo for insertion and querying performance testing of TDengine. Then user can easily simulate the scenario of a large number of devices generating a very large amount of data. User can easily maniplate the number of columns, data types, disorder ratio, and number of concurrent threads with taosdemo customized parameters.
Since TDengine was open sourced in July 2019, it has gained a lot of popularity among time-series database developers with its innovative data modeling design, simple installation method, easy programming interface, and powerful data insertion and query performance. The insertion and querying performance is often astonishing to users who are new to TDengine. To help users experience TDengine's high performance and functions in the shortest time, we developed an application called taosdemo for insertion and querying performance testing of TDengine. With it, users can easily simulate the scenario of a large number of devices generating a very large amount of data, and can manipulate the number of columns, data types, disorder ratio, and number of concurrent threads through taosdemo's customizable parameters.
Running taosdemo is very simple. Just download the TDengine installation package (https://www.taosdata.com/cn/all-downloads/) or compile the TDengine code yourself (https://github.com/taosdata/TDengine). taosdemo can be found and run from the installation directory or from the build output directory.
......@@ -221,7 +221,7 @@ To reach TDengine performance limits, data insertion can be executed by using mu
```
-t, --tables=NUMBER The number of tables. Default is 10000.
-n, --records=NUMBER The number of records per table. Default is 10000.
-M, --random The value of records generated are totally random. The default is to simulate power equipment senario.
-M, --random The value of records generated are totally random. The default is to simulate power equipment scenario.
```
As mentioned earlier, taosdemo creates 10,000 tables by default, and each table writes 10,000 records. The number of tables and the number of records per table can be set with -t and -n. Without parameters, the generated data simulates real scenarios: current and voltage phase values with a certain jitter, which more realistically shows TDengine's efficient data compression ability. To generate completely random data instead, pass the -M parameter.
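As a hypothetical illustration of combining these options (all values below are placeholders), the following invocation would create 1,000 tables, write 100,000 records into each, and generate completely random data:
```
taosdemo -t 1000 -n 100000 -M
```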
```
......
......@@ -193,7 +193,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc
**Redirection**: Whether it is a dnode or TAOSC, a connection to the mnode must be initiated first; however, the mnode is automatically created and maintained by the system, so the user does not know which dnode is running it. TDengine only requires a connection to any working dnode in the system. Because every running dnode maintains the currently running mnode EP List, when it receives a connection request from a newly started dnode or TAOSC and is not an mnode itself, it replies with the mnode EP List. After receiving this list, TAOSC or the newly started dnode tries to establish the connection again. When the mnode EP List changes, each data node quickly obtains the latest list and notifies TAOSC through messaging interaction among nodes.
### A Typical Data Writinfg Process
### A Typical Data Writing Process
To explain the relationship between vnode, mnode, TAOSC, and application, and their respective roles, the following analyzes a typical data writing process.
......@@ -244,7 +244,7 @@ The meta data of each table (including schema, tags, etc.) is also stored in vno
### Data Partitioning
In addition to vnode sharding, TDengine partitions the time-series data by time range. Each data file contains only one time range of time-series data, and the length of the time range is determined by DB's configuration parameter `“days”`. This method of partitioning by time rang is also convenient to efficiently implement the data retention policy. As long as the data file exceeds the specified number of days (system configuration parameter `“keep”`), it will be automatically deleted. Moreover, different time ranges can be stored in different paths and storage media, so as to facilitate the tiered-storage. Cold/hot data can be stored in different storage meida to reduce the storage cost.
In addition to vnode sharding, TDengine partitions the time-series data by time range. Each data file contains only one time range of time-series data, and the length of the time range is determined by the DB's configuration parameter `“days”`. This method of partitioning by time range also makes it convenient to efficiently implement the data retention policy: as long as a data file exceeds the specified number of days (system configuration parameter `“keep”`), it will be automatically deleted. Moreover, different time ranges can be stored in different paths and storage media to facilitate tiered storage. Cold/hot data can be stored in different storage media to reduce the storage cost.
In general, **TDengine splits big data by vnode and time range in two dimensions** to manage the data efficiently with horizontal scalability.
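For example, a minimal sketch of these two parameters at database creation time (the database name and values are illustrative) makes each data file cover 10 days and automatically deletes files older than 365 days:
```mysql
CREATE DATABASE power DAYS 10 KEEP 365;
```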
......
......@@ -182,7 +182,7 @@ In the output plugins section, add the [[outputs.http]] configuration:
In agent section:
- hostname: The machine name that distinguishes different collection devices; it must be unique
- metric_batch_size: 100, which is the max number of records per batch wriiten by Telegraf allowed. Increasing the number can reduce the request sending frequency of Telegraf.
- metric_batch_size: 100, the maximum number of records Telegraf is allowed to write per batch. Increasing the number can reduce the request sending frequency of Telegraf.
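A minimal sketch of the corresponding agent section in the Telegraf configuration (the hostname value is an assumption) might look like:
```
[agent]
  hostname = "devops-node-01"
  metric_batch_size = 100
```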
For more information on how to use Telegraf to collect data, please refer to Telegraf's official [documentation](https://docs.influxdata.com/telegraf/v1.11/).
......
......@@ -11,7 +11,7 @@ TDengine uses SQL as the query language. Applications can send SQL statements th
- Time stamp aligned join query (implicit join) operations
- Multiple aggregation/calculation functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff, etc
For example, in TAOS shell, the records with vlotage > 215 are queried from table d1001, sorted in descending order by timestamps, and only two records are outputted.
For example, in TAOS shell, the records with voltage > 215 are queried from table d1001, sorted in descending order by timestamps, and only two records are outputted.
```mysql
taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
......
......@@ -110,10 +110,10 @@ First, use `taos_subscribe` to create a subscription:
```c
TAOS_SUB* tsub = NULL;
if (async) {
  // create an asynchronized subscription, the callback function will be called every 1s
  // create an asynchronous subscription, the callback function will be called every 1s
  tsub = taos_subscribe(taos, restart, topic, sql, subscribe_callback, &blockFetch, 1000);
} else {
  // create an synchronized subscription, need to call 'taos_consume' manually
  // create a synchronous subscription, need to call 'taos_consume' manually
  tsub = taos_subscribe(taos, restart, topic, sql, NULL, NULL, 0);
}
```
......@@ -201,7 +201,7 @@ taos_unsubscribe(tsub, keep);
Its second parameter decides whether to keep the subscription's progress information on the client. If this parameter is **false** (zero), the subscription can only be restarted from scratch the next time `taos_subscribe` is called, no matter what its `restart` parameter is. In addition, progress information is saved in the directory {DataDir}/subscribe/. Each subscription has a file with the same name as its `topic`; deleting that file will also lead to a fresh start when the corresponding subscription is created next time.
After introducing the code, let's take a look at the actual running effect. For exmaple:
After introducing the code, let's take a look at the actual running effect. For example:
- Sample code has been downloaded locally
- TDengine has been installed on the same machine
......
......@@ -54,33 +54,33 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(
## JDBC driver version and supported TDengine and JDK versions
| taos-jdbcdriver | TDengine | JDK |
| -------------------- | ----------------- | -------- |
| 2.0.33 - 2.0.34 | 2.0.3.0 and above | 1.8.x |
| 2.0.31 - 2.0.32 | 2.1.3.0 and above | 1.8.x |
| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.x | 1.8.x |
| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.x | 1.8.x |
| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.x | 1.8.x |
| 1.0.3 | 1.6.1.x and above | 1.8.x |
| 1.0.2 | 1.6.1.x and above | 1.8.x |
| 1.0.1 | 1.6.1.x and above | 1.8.x |
| taos-jdbcdriver | TDengine | JDK |
| --------------- | ------------------ | ----- |
| 2.0.33 - 2.0.34 | 2.0.3.0 and above | 1.8.x |
| 2.0.31 - 2.0.32 | 2.1.3.0 and above | 1.8.x |
| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.x | 1.8.x |
| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.x | 1.8.x |
| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.x | 1.8.x |
| 1.0.3 | 1.6.1.x and above | 1.8.x |
| 1.0.2 | 1.6.1.x and above | 1.8.x |
| 1.0.1 | 1.6.1.x and above | 1.8.x |
## DataType in TDengine and Java connector
TDengine supports the following data types and their corresponding Java data types:
| TDengine DataType | JDBCType (driver version < 2.0.24) | JDBCType (driver version >= 2.0.24) |
| ----------------- | ------------------ | ------------------ |
| TIMESTAMP | java.lang.Long | java.sql.Timestamp |
| INT | java.lang.Integer | java.lang.Integer |
| BIGINT | java.lang.Long | java.lang.Long |
| FLOAT | java.lang.Float | java.lang.Float |
| DOUBLE | java.lang.Double | java.lang.Double |
| SMALLINT | java.lang.Short | java.lang.Short |
| TINYINT | java.lang.Byte | java.lang.Byte |
| BOOL | java.lang.Boolean | java.lang.Boolean |
| BINARY | java.lang.String | byte array |
| NCHAR | java.lang.String | java.lang.String |
| ----------------- | ---------------------------------- | ----------------------------------- |
| TIMESTAMP | java.lang.Long | java.sql.Timestamp |
| INT | java.lang.Integer | java.lang.Integer |
| BIGINT | java.lang.Long | java.lang.Long |
| FLOAT | java.lang.Float | java.lang.Float |
| DOUBLE | java.lang.Double | java.lang.Double |
| SMALLINT | java.lang.Short | java.lang.Short |
| TINYINT | java.lang.Byte | java.lang.Byte |
| BOOL | java.lang.Boolean | java.lang.Boolean |
| BINARY | java.lang.String | byte array |
| NCHAR | java.lang.String | java.lang.String |
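As a hypothetical sketch of this mapping in practice (the connection URL is an assumption; the test.weather table reuses the earlier example), with driver version >= 2.0.24 a TIMESTAMP column is read back as java.sql.Timestamp:
```java
import java.sql.*;

public class TypeMappingDemo {
    public static void main(String[] args) throws SQLException {
        // Assumes taos-jdbcdriver >= 2.0.24 on the classpath and a local TDengine server;
        // the database and table reuse the test.weather example above.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:TAOS://localhost:6030/test?user=root&password=taosdata");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT ts, temperature FROM weather LIMIT 1")) {
            while (rs.next()) {
                Timestamp ts = rs.getTimestamp("ts");           // TIMESTAMP -> java.sql.Timestamp
                float temperature = rs.getFloat("temperature"); // FLOAT -> java.lang.Float
                System.out.println(ts + " " + temperature);
            }
        }
    }
}
```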
## Install Java connector
......@@ -448,7 +448,7 @@ public static void main(String[] args) throws SQLException {
config.setMinimumIdle(10); //minimum number of idle connections
config.setMaximumPoolSize(10); //maximum number of connections in the pool
config.setConnectionTimeout(30000); //maximum wait in milliseconds to get a connection from the pool
config.setMaxLifetime(0); // maximum life time for each connection
config.setMaxLifetime(0); // maximum lifetime for each connection
config.setIdleTimeout(0); // max idle time before an idle connection is recycled
config.setConnectionTestQuery("select server_status()"); //validation query
HikariDataSource ds = new HikariDataSource(config); //create datasource
......@@ -456,7 +456,7 @@ public static void main(String[] args) throws SQLException {
Statement statement = connection.createStatement(); // get statement
//query or insert
// ...
connection.close(); // put back to conneciton pool
connection.close(); // put back to connection pool
}
```
......@@ -480,7 +480,7 @@ public static void main(String[] args) throws Exception {
Statement statement = connection.createStatement(); // get statement
//query or insert
// ...
connection.close(); // put back to conneciton pool
connection.close(); // put back to connection pool
}
```
......
......@@ -15,7 +15,7 @@ if you use the default features, it'll depend on:
- [TDengine Client](https://www.taosdata.com/cn/getting-started/#%E9%80%9A%E8%BF%87%E5%AE%89%E8%A3%85%E5%8C%85%E5%AE%89%E8%A3%85) library and headers.
- clang, because bindgen requires the clang AST library.
## Fetures
## Features
In-design features:
......
......@@ -884,7 +884,7 @@ dotnet new console
``` cmd
dotnet add package TDengine.Connector
```
* inlucde the TDnengineDriver in you application's namespace
* include the TDengineDriver in your application's namespace
```C#
using TDengineDriver;
```
......
......@@ -228,7 +228,7 @@ Note: In 2.0.15.0 and later versions, STABLE reserved words are supported. That
```mysql
CREATE STABLE [IF NOT EXISTS] stb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]) TAGS (tag1_name tag_type1, tag2_name tag_type2 [, tag3_name tag_type3]);
```
Similiar to a standard table creation SQL, but you need to specify name and type of TAGS field.
Similar to a standard table creation SQL statement, but you need to specify the name and type of each TAGS field.
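For instance, a hypothetical super table for meter data (names and types are illustrative) could be created as:
```mysql
CREATE STABLE IF NOT EXISTS meters (ts TIMESTAMP, current FLOAT, voltage INT) TAGS (location BINARY(64), groupId INT);
```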
Note:
......@@ -673,7 +673,7 @@ Query OK, 1 row(s) in set (0.001091s)
SELECT * FROM tb1 WHERE ts >= NOW - 1h;
```
- Look up table tb1 from 2018-06-01 08:00:00. 000 to 2018-06-02 08:00:00. 000, and col3 string is a record ending in'nny ', and the result is in descending order of timestamp:
- Look up table tb1 from 2018-06-01 08:00:00.000 to 2018-06-02 08:00:00.000, where the col3 string ends in 'nny', with the result in descending order of timestamp:
```mysql
SELECT * FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND ts <= '2018-06-02 08:00:00.000' AND col3 LIKE '%nny' ORDER BY ts DESC;
......@@ -782,7 +782,7 @@ TDengine supports aggregations over data, they are listed below:
Function: return the sum of a column in a table/STable.
Return Data Type: long integer INMT64 and Double.
Return Data Type: INT64 and Double.
Applicable Fields: All types except timestamp, binary, nchar, bool.
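For example, a sketch reusing table d1001 from the query example above:
```mysql
SELECT SUM(voltage) FROM d1001;
```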
......@@ -1196,7 +1196,7 @@ SELECT function_list FROM stb_name
- FILL statement specifies a filling mode when data is missing in a certain interval. Applicable filling modes include the following:
1. Do not fill: NONE (default filingl mode).
1. Do not fill: NONE (default filling mode).
2. VALUE filling: Fixed value filling, where the filled value needs to be specified. For example: fill (VALUE, 1.23).
3. NULL filling: Fill the data with NULL. For example: fill (NULL).
4. PREV filling: Filling data with the previous non-NULL value. For example: fill (PREV).
......
......@@ -28,7 +28,7 @@ The overall system architecture of a typical DevOps application scenario is show
In this application scenario, there are Agent tools deployed in the application environment to collect machine metrics, network metrics, and application metrics; data collectors that aggregate the information collected by the agents; systems for data persistence and management; and tools for visualizing the monitoring data (e.g., Grafana).
Among them, Agents deployed in application nodes are responsible for providing operational metrics from different sources to collectd/Statsd, and collectd/StatsD is responsible for pushing the aggregated data to the OpenTSDB cluster system and then visualizing the data using the visualization kanban board Grafana.
Among them, Agents deployed in application nodes are responsible for providing operational metrics from different sources to collectd/Statsd, and collectd/StatsD is responsible for pushing the aggregated data to the OpenTSDB cluster system and then visualizing the data using the visualization board of Grafana.
### 2. Migration Service
......@@ -127,11 +127,11 @@ On the one hand, TDengine requires a strict schema definition for its incoming d
Now let's assume a DevOps scenario where we use collectd to collect base metrics of devices, including memory, swap, disk, etc. The schema in OpenTSDB is as follows:
| No. | metric | value | type | tag1 | tag2 | tag3 | tag4 | tag5 |
| ---- | -------------- | ------ | ------ | ---- | ----------- | -------------------- | --------- | ------ |
| 1 | memory | value | double | host | memory_type | memory_type_instance | source | |
| 2 | swap | value | double | host | swap_type | swap_type_instance | source | |
| 3 | disk | value | double | host | disk_point | disk_instance | disk_type | source |
| No. | metric | value | type | tag1 | tag2 | tag3 | tag4 | tag5 |
| --- | ------ | ----- | ------ | ---- | ----------- | -------------------- | --------- | ------ |
| 1 | memory | value | double | host | memory_type | memory_type_instance | source | |
| 2 | swap | value | double | host | swap_type | swap_type_instance | source | |
| 3 | disk | value | double | host | disk_point | disk_instance | disk_type | source |
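Under this schema, a hypothetical TDengine super table for metric No. 1 (the column widths are assumptions) might be sketched as:
```mysql
CREATE STABLE memory (ts TIMESTAMP, val DOUBLE) TAGS (host BINARY(64), memory_type BINARY(64), memory_type_instance BINARY(64), source BINARY(64));
```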
......
[Unit]
Description=TDengine server service
After=network-online.target
Wants=network-online.target
After=network-online.target taosadapter.service
Wants=network-online.target taosadapter.service
[Service]
Type=simple
......
......@@ -3,7 +3,7 @@
# Generate the deb package for ubuntu, or rpm package for centos, or tar.gz package for other linux os
set -e
#set -x
set -x
# release.sh -v [cluster | edge]
# -c [aarch32 | aarch64 | x64 | x86 | mips64 ...]
......@@ -414,10 +414,14 @@ fi
if [[ "$httpdBuild" == "true" ]]; then
BUILD_HTTP=true
BUILD_TOOLS=false
else
BUILD_HTTP=false
fi
if [[ "$pagMode" == "full" ]]; then
BUILD_TOOLS=true
else
BUILD_TOOLS=false
fi
# check support cpu type
......@@ -514,15 +518,17 @@ if [ "$osType" != "Darwin" ]; then
cd ${script_dir}/deb
${csudo} ./makedeb.sh ${compile_dir} ${output_dir} ${verNumber} ${cpuType} ${osType} ${verMode} ${verType}
if [ -d ${top_dir}/src/kit/taos-tools/packaging/deb ]; then
cd ${top_dir}/src/kit/taos-tools/packaging/deb
[ -z "$taos_tools_ver" ] && taos_tools_ver="0.1.0"
if [[ "$pagMode" == "full" ]]; then
if [ -d ${top_dir}/src/kit/taos-tools/packaging/deb ]; then
cd ${top_dir}/src/kit/taos-tools/packaging/deb
[ -z "$taos_tools_ver" ] && taos_tools_ver="0.1.0"
taos_tools_ver=$(git describe --tags|sed -e 's/ver-//g'|awk -F '-' '{print $1}')
${csudo} ./make-taos-tools-deb.sh ${top_dir} \
${compile_dir} ${output_dir} ${taos_tools_ver} ${cpuType} ${osType} ${verMode} ${verType}
taos_tools_ver=$(git describe --tags|sed -e 's/ver-//g'|awk -F '-' '{print $1}')
${csudo} ./make-taos-tools-deb.sh ${top_dir} \
${compile_dir} ${output_dir} ${taos_tools_ver} ${cpuType} ${osType} ${verMode} ${verType}
fi
fi
else
else
echo "==========dpkg command not exist, so not release deb package!!!"
fi
ret='0'
......@@ -537,15 +543,17 @@ if [ "$osType" != "Darwin" ]; then
cd ${script_dir}/rpm
${csudo} ./makerpm.sh ${compile_dir} ${output_dir} ${verNumber} ${cpuType} ${osType} ${verMode} ${verType}
if [ -d ${top_dir}/src/kit/taos-tools/packaging/rpm ]; then
cd ${top_dir}/src/kit/taos-tools/packaging/rpm
[ -z "$taos_tools_ver" ] && taos_tools_ver="0.1.0"
if [[ "$pagMode" == "full" ]]; then
if [ -d ${top_dir}/src/kit/taos-tools/packaging/rpm ]; then
cd ${top_dir}/src/kit/taos-tools/packaging/rpm
[ -z "$taos_tools_ver" ] && taos_tools_ver="0.1.0"
taos_tools_ver=$(git describe --tags|sed -e 's/ver-//g'|awk -F '-' '{print $1}'|sed -e 's/-/_/g')
${csudo} ./make-taos-tools-rpm.sh ${top_dir} \
${compile_dir} ${output_dir} ${taos_tools_ver} ${cpuType} ${osType} ${verMode} ${verType}
taos_tools_ver=$(git describe --tags|sed -e 's/ver-//g'|awk -F '-' '{print $1}'|sed -e 's/-/_/g')
${csudo} ./make-taos-tools-rpm.sh ${top_dir} \
${compile_dir} ${output_dir} ${taos_tools_ver} ${cpuType} ${osType} ${verMode} ${verType}
fi
fi
else
else
echo "==========rpmbuild command not exist, so not release rpm package!!!"
fi
fi
......
......@@ -188,9 +188,6 @@ function update() {
install_log
install_header
install_lib
if [ "$pagMode" != "lite" ]; then
install_connector
fi
install_examples
install_bin
install_config
......@@ -215,9 +212,6 @@ function install() {
install_log
install_header
install_lib
if [ "$pagMode" != "lite" ]; then
install_connector
fi
install_examples
install_bin
install_config
......
......@@ -189,9 +189,6 @@ function update() {
install_log
install_header
install_lib
if [ "$pagMode" != "lite" ]; then
install_connector
fi
install_examples
install_bin
install_config
......@@ -216,9 +213,6 @@ function install() {
install_log
install_header
install_lib
if [ "$pagMode" != "lite" ]; then
install_connector
fi
install_examples
install_bin
install_config
......
......@@ -247,9 +247,6 @@ function update_PowerDB() {
install_log
install_header
install_lib
if [ "$pagMode" != "lite" ]; then
install_connector
fi
install_examples
install_bin
install_config
......@@ -275,9 +272,6 @@ function install_PowerDB() {
install_header
install_lib
install_jemalloc
if [ "$pagMode" != "lite" ]; then
install_connector
fi
install_examples
install_bin
install_config
......
......@@ -189,9 +189,6 @@ function update_prodb() {
install_log
install_header
install_lib
if [ "$pagMode" != "lite" ]; then
install_connector
fi
install_examples
install_bin
install_config
......@@ -216,9 +213,6 @@ function install_prodb() {
install_log
install_header
install_lib
if [ "$pagMode" != "lite" ]; then
install_connector
fi
install_examples
install_bin
install_config
......
......@@ -193,9 +193,6 @@ function update_tq() {
install_log
install_header
install_lib
if [ "$pagMode" != "lite" ]; then
install_connector
fi
install_examples
install_bin
install_config
......@@ -220,9 +217,6 @@ function install_tq() {
install_log
install_header
install_lib
if [ "$pagMode" != "lite" ]; then
install_connector
fi
install_examples
install_bin
install_config
......
......@@ -69,10 +69,10 @@ if [ "$osType" != "Darwin" ]; then
if [ "$pagMode" == "lite" ]; then
strip ${build_dir}/bin/taos
cp ${build_dir}/bin/taos ${install_dir}/bin/jh_taos
cp ${script_dir}/remove_jh.sh ${install_dir}/bin
cp ${script_dir}/remove_client_jh.sh ${install_dir}/bin
else
cp ${build_dir}/bin/taos ${install_dir}/bin/jh_taos
cp ${script_dir}/remove_jh.sh ${install_dir}/bin
cp ${script_dir}/remove_client_jh.sh ${install_dir}/bin
cp ${build_dir}/bin/taosdemo ${install_dir}/bin/jhdemo
cp ${build_dir}/bin/taosdump ${install_dir}/bin/jh_taosdump
cp ${script_dir}/set_core.sh ${install_dir}/bin
......
......@@ -69,10 +69,10 @@ if [ "$osType" != "Darwin" ]; then
if [ "$pagMode" == "lite" ]; then
strip ${build_dir}/bin/taos
cp ${build_dir}/bin/taos ${install_dir}/bin/khclient
cp ${script_dir}/remove_kh.sh ${install_dir}/bin
cp ${script_dir}/remove_client_kh.sh ${install_dir}/bin
else
cp ${build_dir}/bin/taos ${install_dir}/bin/khclient
cp ${script_dir}/remove_kh.sh ${install_dir}/bin
cp ${script_dir}/remove_client_kh.sh ${install_dir}/bin
cp ${build_dir}/bin/taosdemo ${install_dir}/bin/khdemo
cp ${build_dir}/bin/taosdump ${install_dir}/bin/khdump
cp ${script_dir}/set_core.sh ${install_dir}/bin
......
......@@ -109,10 +109,10 @@ if [ "$osType" != "Darwin" ]; then
if [ "$pagMode" == "lite" ]; then
strip ${build_dir}/bin/taos
cp ${build_dir}/bin/taos ${install_dir}/bin/power
cp ${script_dir}/remove_power.sh ${install_dir}/bin
cp ${script_dir}/remove_client_power.sh ${install_dir}/bin
else
cp ${build_dir}/bin/taos ${install_dir}/bin/power
cp ${script_dir}/remove_power.sh ${install_dir}/bin
cp ${script_dir}/remove_client_power.sh ${install_dir}/bin
cp ${build_dir}/bin/taosdemo ${install_dir}/bin/powerdemo
cp ${build_dir}/bin/taosdump ${install_dir}/bin/powerdump
cp ${script_dir}/set_core.sh ${install_dir}/bin
......
......@@ -69,10 +69,10 @@ if [ "$osType" != "Darwin" ]; then
if [ "$pagMode" == "lite" ]; then
strip ${build_dir}/bin/taos
cp ${build_dir}/bin/taos ${install_dir}/bin/prodbc
cp ${script_dir}/remove_pro.sh ${install_dir}/bin
cp ${script_dir}/remove_client_pro.sh ${install_dir}/bin
else
cp ${build_dir}/bin/taos ${install_dir}/bin/prodbc
cp ${script_dir}/remove_pro.sh ${install_dir}/bin
cp ${script_dir}/remove_client_pro.sh ${install_dir}/bin
cp ${build_dir}/bin/taosdemo ${install_dir}/bin/prodemo
cp ${build_dir}/bin/taosdump ${install_dir}/bin/prodump
cp ${script_dir}/set_core.sh ${install_dir}/bin
......
......@@ -69,10 +69,10 @@ if [ "$osType" != "Darwin" ]; then
if [ "$pagMode" == "lite" ]; then
strip ${build_dir}/bin/taos
cp ${build_dir}/bin/taos ${install_dir}/bin/tq
cp ${script_dir}/remove_tq.sh ${install_dir}/bin
cp ${script_dir}/remove_client_tq.sh ${install_dir}/bin
else
cp ${build_dir}/bin/taos ${install_dir}/bin/tq
cp ${script_dir}/remove_tq.sh ${install_dir}/bin
cp ${script_dir}/remove_client_tq.sh ${install_dir}/bin
cp ${build_dir}/bin/taosdemo ${install_dir}/bin/tqdemo
cp ${build_dir}/bin/taosdump ${install_dir}/bin/tqdump
cp ${script_dir}/set_core.sh ${install_dir}/bin
......
......@@ -256,7 +256,7 @@ void tscColumnListDestroy(SArray* pColList);
void tscColumnListCopy(SArray* dst, const SArray* src, uint64_t tableUid);
void tscColumnListCopyAll(SArray* dst, const SArray* src);
void convertQueryResult(SSqlRes* pRes, SQueryInfo* pQueryInfo, uint64_t objId, bool convertNchar);
void convertQueryResult(SSqlRes* pRes, SQueryInfo* pQueryInfo, uint64_t objId, bool convertNchar, bool convertJson);
void tscDequoteAndTrimToken(SStrToken* pToken);
void tscRmEscapeAndTrimToken(SStrToken* pToken);
......@@ -264,7 +264,7 @@ int32_t tscValidateName(SStrToken* pToken, bool escapeEnabled, bool *dbIncluded)
void tscIncStreamExecutionCount(void* pStream);
bool tscValidateColumnId(STableMetaInfo* pTableMetaInfo, int32_t colId, int32_t numOfParams);
bool tscValidateColumnId(STableMetaInfo* pTableMetaInfo, int32_t colId);
// get starter position of metric query condition (query on tags) in SSqlCmd.payload
SCond* tsGetSTableQueryCond(STagCond* pCond, uint64_t uid);
......@@ -394,6 +394,9 @@ void tscRemoveCachedTableMeta(STableMetaInfo* pTableMetaInfo, uint64_t id);
char* cloneCurrentDBName(SSqlObj* pSql);
int parseJsontoTagData(char* json, SKVRowBuilder* kvRowBuilder, char* errMsg, int16_t startColId);
int8_t jsonType2DbType(double data, int jsonType);
void getJsonKey(SStrToken *t0);
char* cloneCurrentDBName(SSqlObj* pSql);
......
......@@ -453,7 +453,7 @@ void tscRestoreFuncForSTableQuery(SQueryInfo *pQueryInfo);
int32_t tscCreateResPointerInfo(SSqlRes *pRes, SQueryInfo *pQueryInfo);
void tscSetResRawPtr(SSqlRes* pRes, SQueryInfo* pQueryInfo, bool converted);
void tscSetResRawPtrRv(SSqlRes* pRes, SQueryInfo* pQueryInfo, SSDataBlock* pBlock, bool convertNchar);
void tscSetResRawPtrRv(SSqlRes* pRes, SQueryInfo* pQueryInfo, SSDataBlock* pBlock, bool convertNchar, bool convertJson);
void handleDownstreamOperator(SSqlObj** pSqlList, int32_t numOfUpstream, SQueryInfo* px, SSqlObj* pParent);
void destroyTableNameList(SInsertStatementParam* pInsertParam);
......
......@@ -547,6 +547,11 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_TSDBJNIConnector_fetchRowImp(JNIEn
jniFromNCharToByteArray(env, (char *)row[i], length[i]));
break;
}
case TSDB_DATA_TYPE_JSON: {
(*env)->CallVoidMethod(env, rowobj, g_rowdataSetByteArrayFp, i,
jniFromNCharToByteArray(env, (char *)row[i], length[i]));
break;
}
case TSDB_DATA_TYPE_TIMESTAMP: {
int precision = taos_result_precision(result);
(*env)->CallVoidMethod(env, rowobj, g_rowdataSetTimestampFp, i, (jlong) * ((int64_t *)row[i]), precision);
......
......@@ -85,16 +85,15 @@ static int32_t tscSetValueToResObj(SSqlObj *pSql, int32_t rowLen) {
pField = tscFieldInfoGetField(&pQueryInfo->fieldsInfo, 1);
dst = pRes->data + tscFieldInfoGetOffset(pQueryInfo, 1) * totalNumOfRows + pField->bytes * i;
STR_WITH_MAXSIZE_TO_VARSTR(dst, type, pField->bytes);
int32_t bytes = pSchema[i].bytes;
if (pSchema[i].type == TSDB_DATA_TYPE_BINARY || pSchema[i].type == TSDB_DATA_TYPE_NCHAR) {
if (pSchema[i].type == TSDB_DATA_TYPE_BINARY){
bytes -= VARSTR_HEADER_SIZE;
if (pSchema[i].type == TSDB_DATA_TYPE_NCHAR) {
bytes = bytes / TSDB_NCHAR_SIZE;
}
}
else if(pSchema[i].type == TSDB_DATA_TYPE_NCHAR || pSchema[i].type == TSDB_DATA_TYPE_JSON) {
bytes -= VARSTR_HEADER_SIZE;
bytes = bytes / TSDB_NCHAR_SIZE;
}
pField = tscFieldInfoGetField(&pQueryInfo->fieldsInfo, 2);
......@@ -222,7 +221,7 @@ static int32_t tscGetNthFieldResult(TAOS_ROW row, TAOS_FIELD* fields, int *lengt
return -1;
}
uint8_t type = fields[idx].type;
int32_t length = lengths[idx];
int32_t length = lengths[idx];
switch (type) {
case TSDB_DATA_TYPE_BOOL:
......@@ -248,6 +247,7 @@ static int32_t tscGetNthFieldResult(TAOS_ROW row, TAOS_FIELD* fields, int *lengt
break;
case TSDB_DATA_TYPE_NCHAR:
case TSDB_DATA_TYPE_BINARY:
case TSDB_DATA_TYPE_JSON:
memcpy(result, val, length);
break;
case TSDB_DATA_TYPE_TIMESTAMP:
......@@ -636,9 +636,9 @@ static int32_t tscRebuildDDLForNormalTable(SSqlObj *pSql, const char *tableName,
if (type == TSDB_DATA_TYPE_NCHAR) {
bytes = bytes/TSDB_NCHAR_SIZE;
}
snprintf(result + strlen(result), TSDB_MAX_BINARY_LEN - strlen(result), "%s %s(%d),", pSchema[i].name, tDataTypes[pSchema[i].type].name, bytes);
snprintf(result + strlen(result), TSDB_MAX_BINARY_LEN - strlen(result), "`%s` %s(%d),", pSchema[i].name, tDataTypes[pSchema[i].type].name, bytes);
} else {
snprintf(result + strlen(result), TSDB_MAX_BINARY_LEN - strlen(result), "%s %s,", pSchema[i].name, tDataTypes[pSchema[i].type].name);
snprintf(result + strlen(result), TSDB_MAX_BINARY_LEN - strlen(result), "`%s` %s,", pSchema[i].name, tDataTypes[pSchema[i].type].name);
}
}
sprintf(result + strlen(result) - 1, "%s", ")");
......@@ -663,9 +663,9 @@ static int32_t tscRebuildDDLForSuperTable(SSqlObj *pSql, const char *tableName,
if (type == TSDB_DATA_TYPE_NCHAR) {
bytes = bytes/TSDB_NCHAR_SIZE;
}
snprintf(result + strlen(result), TSDB_MAX_BINARY_LEN - strlen(result),"%s %s(%d),", pSchema[i].name,tDataTypes[pSchema[i].type].name, bytes);
snprintf(result + strlen(result), TSDB_MAX_BINARY_LEN - strlen(result),"`%s` %s(%d),", pSchema[i].name,tDataTypes[pSchema[i].type].name, bytes);
} else {
snprintf(result + strlen(result), TSDB_MAX_BINARY_LEN - strlen(result), "%s %s,", pSchema[i].name, tDataTypes[type].name);
snprintf(result + strlen(result), TSDB_MAX_BINARY_LEN - strlen(result), "`%s` %s,", pSchema[i].name, tDataTypes[type].name);
}
}
snprintf(result + strlen(result) - 1, TSDB_MAX_BINARY_LEN - strlen(result), "%s %s", ")", "TAGS (");
......@@ -677,9 +677,9 @@ static int32_t tscRebuildDDLForSuperTable(SSqlObj *pSql, const char *tableName,
if (type == TSDB_DATA_TYPE_NCHAR) {
bytes = bytes/TSDB_NCHAR_SIZE;
}
snprintf(result + strlen(result), TSDB_MAX_BINARY_LEN - strlen(result), "%s %s(%d),", pSchema[i].name,tDataTypes[pSchema[i].type].name, bytes);
snprintf(result + strlen(result), TSDB_MAX_BINARY_LEN - strlen(result), "`%s` %s(%d),", pSchema[i].name,tDataTypes[pSchema[i].type].name, bytes);
} else {
snprintf(result + strlen(result), TSDB_MAX_BINARY_LEN - strlen(result), "%s %s,", pSchema[i].name, tDataTypes[type].name);
snprintf(result + strlen(result), TSDB_MAX_BINARY_LEN - strlen(result), "`%s` %s,", pSchema[i].name, tDataTypes[type].name);
}
}
sprintf(result + strlen(result) - 1, "%s", ")");
......
......@@ -385,6 +385,19 @@ int32_t tsParseOneColumn(SSchema *pSchema, SStrToken *pToken, char *payload, cha
}
break;
case TSDB_DATA_TYPE_JSON:
if (pToken->n >= pSchema->bytes) { // reserve 1 byte for select
return tscInvalidOperationMsg(msg, "json tag length too long", pToken->z);
}
if (pToken->type == TK_NULL) {
*(int8_t *)payload = TSDB_DATA_TINYINT_NULL;
} else if (pToken->type != TK_STRING){
tscInvalidOperationMsg(msg, "invalid json data", pToken->z);
} else{
*((int8_t *)payload) = TSDB_DATA_JSON_PLACEHOLDER;
}
break;
case TSDB_DATA_TYPE_TIMESTAMP: {
if (pToken->type == TK_NULL) {
if (primaryKey) {
......@@ -1094,9 +1107,26 @@ static int32_t tscCheckIfCreateTable(char **sqlstr, SSqlObj *pSql, char** boundC
return code;
}
tdAddColToKVRow(&kvRowBuilder, pSchema->colId, pSchema->type, tagVal);
}
tdAddColToKVRow(&kvRowBuilder, pSchema->colId, pSchema->type, tagVal, false);
if(pSchema->type == TSDB_DATA_TYPE_JSON){
assert(spd.numOfBound == 1);
if(sToken.n > TSDB_MAX_JSON_TAGS_LEN/TSDB_NCHAR_SIZE){
tdDestroyKVRowBuilder(&kvRowBuilder);
tscDestroyBoundColumnInfo(&spd);
return tscSQLSyntaxErrMsg(pInsertParam->msg, "json tag too long", NULL);
}
char* json = strndup(sToken.z, sToken.n);
code = parseJsontoTagData(json, &kvRowBuilder, pInsertParam->msg, pTagSchema[spd.boundedColumns[0]].colId);
if (code != TSDB_CODE_SUCCESS) {
tdDestroyKVRowBuilder(&kvRowBuilder);
tscDestroyBoundColumnInfo(&spd);
tfree(json);
return code;
}
tfree(json);
}
}
tscDestroyBoundColumnInfo(&spd);
SKVRow row = tdGetKVRowFromBuilder(&kvRowBuilder);
......@@ -1110,7 +1140,7 @@ static int32_t tscCheckIfCreateTable(char **sqlstr, SSqlObj *pSql, char** boundC
if (pInsertParam->tagData.dataLen <= 0){
return tscSQLSyntaxErrMsg(pInsertParam->msg, "tag value expected", NULL);
}
char* pTag = realloc(pInsertParam->tagData.data, pInsertParam->tagData.dataLen);
if (pTag == NULL) {
return TSDB_CODE_TSC_OUT_OF_MEMORY;
......
......@@ -125,8 +125,9 @@ static int32_t parseTelnetTimeStamp(TAOS_SML_KV **pTS, int *num_kvs, const char
}
tfree(value);
(*pTS)->key = tcalloc(sizeof(key), 1);
(*pTS)->key = tcalloc(sizeof(key) + TS_ESCAPE_CHAR_SIZE, 1);
memcpy((*pTS)->key, key, sizeof(key));
addEscapeCharToString((*pTS)->key, (int32_t)strlen(key));
*num_kvs += 1;
*index = cur + 1;
......
......@@ -1533,6 +1533,41 @@ int stmtGenInsertStatement(SSqlObj* pSql, STscStmt* pStmt, const char* name, TAO
return TSDB_CODE_SUCCESS;
}
int32_t stmtValidateValuesFields(SSqlCmd *pCmd, char * sql) {
int32_t loopCont = 1, index0 = 0, values = 0;
SStrToken sToken;
while (loopCont) {
sToken = tStrGetToken(sql, &index0, false);
if (sToken.n <= 0) {
return TSDB_CODE_SUCCESS;
}
switch (sToken.type) {
case TK_RP:
if (values) {
return TSDB_CODE_SUCCESS;
}
break;
case TK_VALUES:
values = 1;
break;
case TK_QUESTION:
case TK_LP:
break;
default:
if (values) {
tscError("only ? allowed in values");
return tscInvalidOperationMsg(pCmd->payload, "only ? allowed in values", NULL);
}
break;
}
}
return TSDB_CODE_SUCCESS;
}
////////////////////////////////////////////////////////////////////////////////
// interface functions
......@@ -1637,6 +1672,11 @@ int taos_stmt_prepare(TAOS_STMT* stmt, const char* sql, unsigned long length) {
STMT_RET(ret);
}
ret = stmtValidateValuesFields(&pSql->cmd, pSql->sqlstr);
if (ret != TSDB_CODE_SUCCESS) {
STMT_RET(ret);
}
if (pStmt->multiTbInsert) {
STMT_RET(TSDB_CODE_SUCCESS);
}
......
This diff is collapsed.
......@@ -506,7 +506,7 @@ void tscProcessMsgFromServer(SRpcMsg *rpcMsg, SRpcEpSet *pEpSet) {
}
}
if (pRes->code == TSDB_CODE_SUCCESS && tscProcessMsgRsp[pCmd->command]) {
if (pRes->code == TSDB_CODE_SUCCESS && pCmd->command < TSDB_SQL_MAX && tscProcessMsgRsp[pCmd->command]) {
rpcMsg->code = (*tscProcessMsgRsp[pCmd->command])(pSql);
}
......@@ -832,7 +832,7 @@ static int32_t serializeSqlExpr(SSqlExpr* pExpr, STableMetaInfo* pTableMetaInfo,
return TSDB_CODE_TSC_INVALID_TABLE_NAME;
}
if (validateColumn && !tscValidateColumnId(pTableMetaInfo, pExpr->colInfo.colId, pExpr->numOfParams)) {
if (validateColumn && !tscValidateColumnId(pTableMetaInfo, pExpr->colInfo.colId)) {
tscError("0x%"PRIx64" table schema is not matched with parsed sql", id);
return TSDB_CODE_TSC_INVALID_OPERATION;
}
......@@ -906,6 +906,7 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
SArray* queryOperator = createExecOperatorPlan(&query);
SQueryTableMsg *pQueryMsg = (SQueryTableMsg *)pCmd->payload;
tstrncpy(pQueryMsg->version, version, tListLen(pQueryMsg->version));
int32_t numOfTags = query.numOfTags;
......@@ -1146,6 +1147,23 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
memcpy(pMsg, pSql->sqlstr, sqlLen);
pMsg += sqlLen;
/*
//MSG EXTEND DEMO
pQueryMsg->extend = 1;
STLV *tlv = (STLV *)pMsg;
tlv->type = htons(TLV_TYPE_DUMMY);
tlv->len = htonl(sizeof(int16_t));
*(int16_t *)tlv->value = htons(12345);
pMsg += sizeof(*tlv) + ntohl(tlv->len);
tlv = (STLV *)pMsg;
tlv->len = 0;
pMsg += sizeof(*tlv);
*/
int32_t msgLen = (int32_t)(pMsg - pCmd->payload);
tscDebug("0x%"PRIx64" msg built success, len:%d bytes", pSql->self, msgLen);
......@@ -1492,7 +1510,8 @@ int tscEstimateCreateTableMsgLength(SSqlObj *pSql, SSqlInfo *pInfo) {
SCreateTableSql *pCreateTableInfo = pInfo->pCreateTableInfo;
if (pCreateTableInfo->type == TSQL_CREATE_TABLE_FROM_STABLE) {
int32_t numOfTables = (int32_t)taosArrayGetSize(pInfo->pCreateTableInfo->childTableInfo);
size += numOfTables * (sizeof(SCreateTableMsg) + TSDB_MAX_TAGS_LEN);
size += numOfTables * (sizeof(SCreateTableMsg) +
((TSDB_MAX_TAGS_LEN > TSDB_MAX_JSON_TAGS_LEN)?TSDB_MAX_TAGS_LEN:TSDB_MAX_JSON_TAGS_LEN));
} else {
size += sizeof(SSchema) * (pCmd->numOfCols + pCmd->count);
}
......@@ -1614,9 +1633,6 @@ int tscEstimateAlterTableMsgLength(SSqlCmd *pCmd) {
}
int tscBuildAlterTableMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
char *pMsg;
int msgLen = 0;
SSqlCmd *pCmd = &pSql->cmd;
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd);
......@@ -1645,14 +1661,7 @@ int tscBuildAlterTableMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
pSchema++;
}
pMsg = (char *)pSchema;
pAlterTableMsg->tagValLen = htonl(pAlterInfo->tagData.dataLen);
if (pAlterInfo->tagData.dataLen > 0) {
memcpy(pMsg, pAlterInfo->tagData.data, pAlterInfo->tagData.dataLen);
}
pMsg += pAlterInfo->tagData.dataLen;
msgLen = (int32_t)(pMsg - (char*)pAlterTableMsg);
int msgLen = sizeof(SAlterTableMsg) + sizeof(SSchema) * tscNumOfFields(pQueryInfo);
pCmd->payloadLen = msgLen;
pCmd->msgType = TSDB_MSG_TYPE_CM_ALTER_TABLE;
......@@ -1854,7 +1863,9 @@ int tscProcessRetrieveGlobalMergeRsp(SSqlObj *pSql) {
uint64_t localQueryId = pSql->self;
qTableQuery(pQueryInfo->pQInfo, &localQueryId);
convertQueryResult(pRes, pQueryInfo, pSql->self, true);
bool convertJson = true;
if (pQueryInfo->isStddev == true) convertJson = false;
convertQueryResult(pRes, pQueryInfo, pSql->self, true, convertJson);
code = pRes->code;
if (pRes->code == TSDB_CODE_SUCCESS) {
......@@ -2813,7 +2824,11 @@ static int32_t getTableMetaFromMnode(SSqlObj *pSql, STableMetaInfo *pTableMetaIn
tscAddQueryInfo(&pNew->cmd);
SQueryInfo *pNewQueryInfo = tscGetQueryInfoS(&pNew->cmd);
if (TSDB_CODE_SUCCESS != tscAllocPayload(&pNew->cmd, TSDB_DEFAULT_PAYLOAD_SIZE + pSql->cmd.payloadLen)) {
int payLoadLen = TSDB_DEFAULT_PAYLOAD_SIZE + pSql->cmd.payloadLen;
if (autocreate && pSql->cmd.insertParam.tagData.dataLen != 0) {
payLoadLen += pSql->cmd.insertParam.tagData.dataLen;
}
if (TSDB_CODE_SUCCESS != tscAllocPayload(&pNew->cmd, payLoadLen)) {
tscError("0x%"PRIx64" malloc failed for payload to get table meta", pSql->self);
tscFreeSqlObj(pNew);
......
......@@ -445,7 +445,7 @@ TAOS_FIELD *taos_fetch_fields(TAOS_RES *res) {
// revise the length for binary and nchar fields
if (f[j].type == TSDB_DATA_TYPE_BINARY) {
f[j].bytes -= VARSTR_HEADER_SIZE;
} else if (f[j].type == TSDB_DATA_TYPE_NCHAR) {
} else if (f[j].type == TSDB_DATA_TYPE_NCHAR || f[j].type == TSDB_DATA_TYPE_JSON) {
f[j].bytes = (f[j].bytes - VARSTR_HEADER_SIZE)/TSDB_NCHAR_SIZE;
}
......
......@@ -227,6 +227,7 @@ static int64_t doTSBlockIntersect(SSqlObj* pSql, STimeWindow * win) {
if (skipped) {
slot = 0;
stackidx = 0;
tVariantDestroy(&tag);
continue;
}
......@@ -334,6 +335,7 @@ static int64_t doTSBlockIntersect(SSqlObj* pSql, STimeWindow * win) {
}
if (mergeDone) {
tVariantDestroy(&tag);
break;
}
......@@ -341,6 +343,7 @@ static int64_t doTSBlockIntersect(SSqlObj* pSql, STimeWindow * win) {
stackidx = 0;
skipRemainValue(mainCtx->p->pTSBuf, &tag);
tVariantDestroy(&tag);
}
stackidx = 0;
......@@ -633,16 +636,21 @@ static int32_t tscLaunchRealSubqueries(SSqlObj* pSql) {
// set the join condition tag column info, todo extract method
if (UTIL_TABLE_IS_SUPER_TABLE(pTableMetaInfo)) {
assert(pQueryInfo->tagCond.joinInfo.hasJoin);
pExpr->base.numOfParams = 0; // the value is 0 by default. just make sure.
// add json tag key, if there is no json tag key, just hold place.
tVariantCreateFromBinary(&(pExpr->base.param[pExpr->base.numOfParams]), pQueryInfo->tagCond.joinInfo.joinTables[0]->tagJsonKeyName,
strlen(pQueryInfo->tagCond.joinInfo.joinTables[0]->tagJsonKeyName), TSDB_DATA_TYPE_BINARY);
pExpr->base.numOfParams++;
int16_t colId = tscGetJoinTagColIdByUid(&pQueryInfo->tagCond, pTableMetaInfo->pTableMeta->id.uid);
// set the tag column id for executor to extract correct tag value
tVariant* pVariant = &pExpr->base.param[0];
tVariant* pVariant = &pExpr->base.param[pExpr->base.numOfParams];
pVariant->i64 = colId;
pVariant->nType = TSDB_DATA_TYPE_BIGINT;
pVariant->nLen = sizeof(int64_t);
pExpr->base.numOfParams = 1;
pExpr->base.numOfParams++;
}
if (UTIL_TABLE_IS_SUPER_TABLE(pTableMetaInfo)) {
......@@ -729,8 +737,15 @@ int32_t tagValCompar(const void* p1, const void* p2) {
const STidTags* t1 = (const STidTags*) varDataVal(p1);
const STidTags* t2 = (const STidTags*) varDataVal(p2);
__compar_fn_t func = getComparFunc(t1->padding, 0);
if (t1->padding == TSDB_DATA_TYPE_JSON){
bool canReturn = true;
int32_t result = jsonCompareUnit(t1->tag, t2->tag, &canReturn);
if(canReturn) return result;
__compar_fn_t func = getComparFunc(t1->tag[0], 0);
return func(t1->tag + CHAR_BYTES, t2->tag + CHAR_BYTES);
}
__compar_fn_t func = getComparFunc(t1->padding, 0);
return func(t1->tag, t2->tag);
}
......@@ -821,16 +836,21 @@ static void issueTsCompQuery(SSqlObj* pSql, SJoinSupporter* pSupporter, SSqlObj*
SSchema colSchema = {.type = TSDB_DATA_TYPE_BINARY, .bytes = 1};
SColumnIndex index = {0, PRIMARYKEY_TIMESTAMP_COL_INDEX};
tscAddFuncInSelectClause(pQueryInfo, 0, TSDB_FUNC_TS_COMP, &index, &colSchema, TSDB_COL_NORMAL, getNewResColId(pCmd));
SExprInfo *pExpr = tscAddFuncInSelectClause(pQueryInfo, 0, TSDB_FUNC_TS_COMP, &index, &colSchema, TSDB_COL_NORMAL, getNewResColId(pCmd));
// set the tags value for ts_comp function
if (UTIL_TABLE_IS_SUPER_TABLE(pTableMetaInfo)) {
SExprInfo *pExpr = tscExprGet(pQueryInfo, 0);
int16_t tagColId = tscGetJoinTagColIdByUid(&pSupporter->tagCond, pTableMetaInfo->pTableMeta->id.uid);
pExpr->base.param[0].i64 = tagColId;
pExpr->base.param[0].nLen = sizeof(int64_t);
pExpr->base.param[0].nType = TSDB_DATA_TYPE_BIGINT;
pExpr->base.numOfParams = 1;
pExpr->base.numOfParams = 0; // the value is 0 by default. just make sure.
// add json tag key, if there is no json tag key, just hold place.
tVariantCreateFromBinary(&(pExpr->base.param[pExpr->base.numOfParams]), pSupporter->tagCond.joinInfo.joinTables[0]->tagJsonKeyName,
strlen(pSupporter->tagCond.joinInfo.joinTables[0]->tagJsonKeyName), TSDB_DATA_TYPE_BINARY);
pExpr->base.numOfParams++;
int16_t tagColId = tscGetJoinTagColIdByUid(&pSupporter->tagCond, pTableMetaInfo->pTableMeta->id.uid);
pExpr->base.param[pExpr->base.numOfParams].i64 = tagColId;
pExpr->base.param[pExpr->base.numOfParams].nLen = sizeof(int64_t);
pExpr->base.param[pExpr->base.numOfParams].nType = TSDB_DATA_TYPE_BIGINT;
pExpr->base.numOfParams++;
}
// add the filter tag column
......@@ -2012,7 +2032,12 @@ int32_t tscCreateJoinSubquery(SSqlObj *pSql, int16_t tableIndex, SJoinSupporter
// set get tags query type
TSDB_QUERY_SET_TYPE(pNewQueryInfo->type, TSDB_QUERY_TYPE_TAG_FILTER_QUERY);
tscAddFuncInSelectClause(pNewQueryInfo, 0, TSDB_FUNC_TID_TAG, &colIndex, &s1, TSDB_COL_TAG, getNewResColId(pCmd));
SExprInfo* pExpr = tscAddFuncInSelectClause(pNewQueryInfo, 0, TSDB_FUNC_TID_TAG, &colIndex, &s1, TSDB_COL_TAG, getNewResColId(pCmd));
if(strlen(pTagCond->joinInfo.joinTables[0]->tagJsonKeyName) > 0){
tVariantCreateFromBinary(&(pExpr->base.param[pExpr->base.numOfParams]), pTagCond->joinInfo.joinTables[0]->tagJsonKeyName,
strlen(pTagCond->joinInfo.joinTables[0]->tagJsonKeyName), TSDB_DATA_TYPE_BINARY);
pExpr->base.numOfParams++;
}
size_t numOfCols = taosArrayGetSize(pNewQueryInfo->colList);
tscDebug(
......@@ -2025,15 +2050,6 @@ int32_t tscCreateJoinSubquery(SSqlObj *pSql, int16_t tableIndex, SJoinSupporter
SColumnIndex colIndex = {0, PRIMARYKEY_TIMESTAMP_COL_INDEX};
tscAddFuncInSelectClause(pNewQueryInfo, 0, TSDB_FUNC_TS_COMP, &colIndex, &colSchema, TSDB_COL_NORMAL, getNewResColId(pCmd));
// set the tags value for ts_comp function
SExprInfo *pExpr = tscExprGet(pNewQueryInfo, 0);
if (UTIL_TABLE_IS_SUPER_TABLE(pTableMetaInfo)) {
int16_t tagColId = tscGetJoinTagColIdByUid(&pSupporter->tagCond, pTableMetaInfo->pTableMeta->id.uid);
pExpr->base.param->i64 = tagColId;
pExpr->base.numOfParams = 1;
}
// add the filter tag column
if (pSupporter->colList != NULL) {
size_t s = taosArrayGetSize(pSupporter->colList);
......@@ -2427,6 +2443,7 @@ int32_t tscHandleFirstRoundStableQuery(SSqlObj *pSql) {
STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pNewQueryInfo, 0);
tscInitQueryInfo(pNewQueryInfo);
pNewQueryInfo->isStddev = true; // for json tag
// add the group cond
pNewQueryInfo->groupbyExpr = pQueryInfo->groupbyExpr;
......@@ -2490,6 +2507,10 @@ int32_t tscHandleFirstRoundStableQuery(SSqlObj *pSql) {
}
SExprInfo* p = tscAddFuncInSelectClause(pNewQueryInfo, index++, TSDB_FUNC_TAG, &colIndex, schema, TSDB_COL_TAG, getNewResColId(pCmd));
if (schema->type == TSDB_DATA_TYPE_JSON){
p->base.numOfParams = pExpr->base.numOfParams;
tVariantAssign(&p->base.param[0], &pExpr->base.param[0]);
}
p->base.resColId = pExpr->base.resColId;
} else if (pExpr->base.functionId == TSDB_FUNC_PRJ) {
int32_t num = (int32_t) taosArrayGetSize(pNewQueryInfo->groupbyExpr.columnInfo);
......@@ -3681,7 +3702,9 @@ TAOS_ROW doSetResultRowData(SSqlObj *pSql) {
int32_t type = pInfo->field.type;
int32_t bytes = pInfo->field.bytes;
if (type != TSDB_DATA_TYPE_BINARY && type != TSDB_DATA_TYPE_NCHAR) {
if (pQueryInfo->isStddev && type == TSDB_DATA_TYPE_JSON){ // for json tag compare in the second round of stddev
pRes->tsrow[j] = pRes->urow[i];
}else if (!IS_VAR_DATA_TYPE(type) && type != TSDB_DATA_TYPE_JSON) {
pRes->tsrow[j] = isNull(pRes->urow[i], type) ? NULL : pRes->urow[i];
} else {
pRes->tsrow[j] = isNull(pRes->urow[i], type) ? NULL : varDataVal(pRes->urow[i]);
......
......@@ -29,6 +29,7 @@
#include "tsclient.h"
#include "ttimer.h"
#include "ttokendef.h"
#include "cJSON.h"
#ifdef HTTP_EMBEDDED
#include "httpInt.h"
......@@ -81,23 +82,43 @@ int32_t converToStr(char *str, int type, void *buf, int32_t bufSize, int32_t *le
break;
case TSDB_DATA_TYPE_FLOAT:
n = sprintf(str, "%e", GET_FLOAT_VAL(buf));
n = sprintf(str, "%.*e", DECIMAL_DIG, GET_FLOAT_VAL(buf));
break;
case TSDB_DATA_TYPE_DOUBLE:
n = sprintf(str, "%e", GET_DOUBLE_VAL(buf));
n = sprintf(str, "%.*e", DECIMAL_DIG, GET_DOUBLE_VAL(buf));
break;
case TSDB_DATA_TYPE_BINARY:
if (bufSize < 0) {
tscError("invalid buf size");
return TSDB_CODE_TSC_INVALID_VALUE;
}
int32_t escapeSize = 0;
*str++ = '\'';
++escapeSize;
char* data = buf;
for (int32_t i = 0; i < bufSize; ++i) {
if (data[i] == '\'' || data[i] == '"') {
*str++ = '\\';
++escapeSize;
}
*str++ = data[i];
}
*str = '\'';
++escapeSize;
n = bufSize + escapeSize;
break;
case TSDB_DATA_TYPE_NCHAR:
case TSDB_DATA_TYPE_JSON:
if (bufSize < 0) {
tscError("invalid buf size");
return TSDB_CODE_TSC_INVALID_VALUE;
}
*str = '"';
*str = '\'';
memcpy(str + 1, buf, bufSize);
*(str + bufSize + 1) = '"';
*(str + bufSize + 1) = '\'';
n = bufSize + 2;
break;
......@@ -713,34 +734,33 @@ int32_t tscCreateResPointerInfo(SSqlRes* pRes, SQueryInfo* pQueryInfo) {
return TSDB_CODE_SUCCESS;
}
static void setResRawPtrImpl(SSqlRes* pRes, SInternalField* pInfo, int32_t i, bool convertNchar) {
static void setResRawPtrImpl(SSqlRes* pRes, SInternalField* pInfo, int32_t i, bool convertNchar, bool convertJson) {
// generated the user-defined column result
if (pInfo->pExpr->pExpr == NULL && TSDB_COL_IS_UD_COL(pInfo->pExpr->base.colInfo.flag)) {
if (pInfo->pExpr->base.param[1].nType == TSDB_DATA_TYPE_NULL) {
if (pInfo->pExpr->base.param[0].nType == TSDB_DATA_TYPE_NULL) {
setNullN(pRes->urow[i], pInfo->field.type, pInfo->field.bytes, (int32_t) pRes->numOfRows);
} else {
if (pInfo->field.type == TSDB_DATA_TYPE_NCHAR || pInfo->field.type == TSDB_DATA_TYPE_BINARY) {
assert(pInfo->pExpr->base.param[1].nLen <= pInfo->field.bytes);
assert(pInfo->pExpr->base.param[0].nLen <= pInfo->field.bytes);
for (int32_t k = 0; k < pRes->numOfRows; ++k) {
char* p = ((char**)pRes->urow)[i] + k * pInfo->field.bytes;
memcpy(varDataVal(p), pInfo->pExpr->base.param[1].pz, pInfo->pExpr->base.param[1].nLen);
varDataSetLen(p, pInfo->pExpr->base.param[1].nLen);
memcpy(varDataVal(p), pInfo->pExpr->base.param[0].pz, pInfo->pExpr->base.param[0].nLen);
varDataSetLen(p, pInfo->pExpr->base.param[0].nLen);
}
} else {
for (int32_t k = 0; k < pRes->numOfRows; ++k) {
char* p = ((char**)pRes->urow)[i] + k * pInfo->field.bytes;
memcpy(p, &pInfo->pExpr->base.param[1].i64, pInfo->field.bytes);
memcpy(p, &pInfo->pExpr->base.param[0].i64, pInfo->field.bytes);
}
}
}
} else if (convertNchar && pInfo->field.type == TSDB_DATA_TYPE_NCHAR) {
} else if (convertNchar && (pInfo->field.type == TSDB_DATA_TYPE_NCHAR)) {
// convert unicode to native code in a temporary buffer extra one byte for terminated symbol
char* buffer = realloc(pRes->buffer[i], pInfo->field.bytes * pRes->numOfRows);
if(buffer == NULL)
return ;
if (buffer == NULL) return;
pRes->buffer[i] = buffer;
// string terminated char for binary data
memset(pRes->buffer[i], 0, pInfo->field.bytes * pRes->numOfRows);
......@@ -764,8 +784,63 @@ static void setResRawPtrImpl(SSqlRes* pRes, SInternalField* pInfo, int32_t i, bo
p += pInfo->field.bytes;
}
memcpy(pRes->urow[i], pRes->buffer[i], pInfo->field.bytes * pRes->numOfRows);
}else if (pInfo->field.type == TSDB_DATA_TYPE_JSON) {
if (convertJson){
// convert unicode to native code in a temporary buffer extra one byte for terminated symbol
char* buffer = realloc(pRes->buffer[i], pInfo->field.bytes * pRes->numOfRows);
if (buffer == NULL) return;
pRes->buffer[i] = buffer;
// string terminated char for binary data
memset(pRes->buffer[i], 0, pInfo->field.bytes * pRes->numOfRows);
char* p = pRes->urow[i];
for (int32_t k = 0; k < pRes->numOfRows; ++k) {
char* dst = pRes->buffer[i] + k * pInfo->field.bytes;
char type = *p;
char* realData = p + CHAR_BYTES;
if (type == TSDB_DATA_TYPE_JSON && isNull(realData, TSDB_DATA_TYPE_JSON)) {
memcpy(dst, realData, varDataTLen(realData));
} else if (type == TSDB_DATA_TYPE_BINARY) {
assert(*(uint32_t*)varDataVal(realData) == TSDB_DATA_JSON_null); // json null value
assert(varDataLen(realData) == INT_BYTES);
sprintf(varDataVal(dst), "%s", "null");
varDataSetLen(dst, strlen(varDataVal(dst)));
}else if (type == TSDB_DATA_TYPE_JSON) {
int32_t length = taosUcs4ToMbs(varDataVal(realData), varDataLen(realData), varDataVal(dst));
varDataSetLen(dst, length);
if (length == 0) {
tscError("charset:%s to %s. val:%s convert failed.", DEFAULT_UNICODE_ENCODEC, tsCharset, (char*)p);
}
}else if (type == TSDB_DATA_TYPE_NCHAR) { // value -> "value"
*(char*)varDataVal(dst) = '\"';
int32_t length = taosUcs4ToMbs(varDataVal(realData), varDataLen(realData), POINTER_SHIFT(varDataVal(dst), CHAR_BYTES));
*(char*)(POINTER_SHIFT(varDataVal(dst), length + CHAR_BYTES)) = '\"';
varDataSetLen(dst, length + CHAR_BYTES*2);
if (length == 0) {
tscError("charset:%s to %s. val:%s convert failed.", DEFAULT_UNICODE_ENCODEC, tsCharset, (char*)p);
}
}else if (type == TSDB_DATA_TYPE_DOUBLE) {
double jsonVd = *(double*)(realData);
sprintf(varDataVal(dst), "%.9lf", jsonVd);
varDataSetLen(dst, strlen(varDataVal(dst)));
}else if (type == TSDB_DATA_TYPE_BIGINT) {
int64_t jsonVd = *(int64_t*)(realData);
sprintf(varDataVal(dst), "%" PRId64, jsonVd);
varDataSetLen(dst, strlen(varDataVal(dst)));
}else if (type == TSDB_DATA_TYPE_BOOL) {
sprintf(varDataVal(dst), "%s", (*((char *)realData) == 1) ? "true" : "false");
varDataSetLen(dst, strlen(varDataVal(dst)));
}else {
assert(0);
}
p += pInfo->field.bytes;
}
memcpy(pRes->urow[i], pRes->buffer[i], pInfo->field.bytes * pRes->numOfRows);
}else{
// if convertJson is false, keep the json data raw; it is used by stddev in the second round
}
}
if (convertNchar) {
......@@ -773,6 +848,10 @@ static void setResRawPtrImpl(SSqlRes* pRes, SInternalField* pInfo, int32_t i, bo
}
}
void tscJson2String(char *src, char* dst){
}
void tscSetResRawPtr(SSqlRes* pRes, SQueryInfo* pQueryInfo, bool converted) {
assert(pRes->numOfCols > 0);
if (pRes->numOfRows == 0) {
......@@ -786,11 +865,11 @@ void tscSetResRawPtr(SSqlRes* pRes, SQueryInfo* pQueryInfo, bool converted) {
pRes->length[i] = pInfo->field.bytes;
offset += pInfo->field.bytes;
setResRawPtrImpl(pRes, pInfo, i, converted ? false : true);
setResRawPtrImpl(pRes, pInfo, i, converted ? false : true, true);
}
}
void tscSetResRawPtrRv(SSqlRes* pRes, SQueryInfo* pQueryInfo, SSDataBlock* pBlock, bool convertNchar) {
void tscSetResRawPtrRv(SSqlRes* pRes, SQueryInfo* pQueryInfo, SSDataBlock* pBlock, bool convertNchar, bool convertJson) {
assert(pRes->numOfCols > 0);
for (int32_t i = 0; i < pQueryInfo->fieldsInfo.numOfOutput; ++i) {
......@@ -801,7 +880,7 @@ void tscSetResRawPtrRv(SSqlRes* pRes, SQueryInfo* pQueryInfo, SSDataBlock* pBloc
pRes->urow[i] = pColData->pData;
pRes->length[i] = pInfo->field.bytes;
setResRawPtrImpl(pRes, pInfo, i, convertNchar);
setResRawPtrImpl(pRes, pInfo, i, convertNchar, convertJson);
/*
// generated the user-defined column result
if (pInfo->pExpr->pExpr == NULL && TSDB_COL_IS_UD_COL(pInfo->pExpr->base.ColName.flag)) {
......@@ -857,17 +936,6 @@ void tscSetResRawPtrRv(SSqlRes* pRes, SQueryInfo* pQueryInfo, SSDataBlock* pBloc
}
}
static SColumnInfo* extractColumnInfoFromResult(SArray* pTableCols) {
int32_t numOfCols = (int32_t) taosArrayGetSize(pTableCols);
SColumnInfo* pColInfo = calloc(numOfCols, sizeof(SColumnInfo));
for(int32_t i = 0; i < numOfCols; ++i) {
SColumn* pCol = taosArrayGetP(pTableCols, i);
pColInfo[i] = pCol->info;//[index].type;
}
return pColInfo;
}
typedef struct SDummyInputInfo {
SSDataBlock *block;
STableQueryInfo *pTableQueryInfo;
......@@ -1257,14 +1325,14 @@ SOperatorInfo* createJoinOperatorInfo(SOperatorInfo** pUpstream, int32_t numOfUp
return pOperator;
}
void convertQueryResult(SSqlRes* pRes, SQueryInfo* pQueryInfo, uint64_t objId, bool convertNchar) {
void convertQueryResult(SSqlRes* pRes, SQueryInfo* pQueryInfo, uint64_t objId, bool convertNchar, bool convertJson) {
// set the correct result
SSDataBlock* p = pQueryInfo->pQInfo->runtimeEnv.outputBuf;
pRes->numOfRows = (p != NULL)? p->info.rows: 0;
if (pRes->code == TSDB_CODE_SUCCESS && pRes->numOfRows > 0) {
tscCreateResPointerInfo(pRes, pQueryInfo);
tscSetResRawPtrRv(pRes, pQueryInfo, p, convertNchar);
tscSetResRawPtrRv(pRes, pQueryInfo, p, convertNchar, convertJson);
}
tscDebug("0x%"PRIx64" retrieve result in pRes, numOfRows:%d", objId, pRes->numOfRows);
......@@ -1298,8 +1366,6 @@ void handleDownstreamOperator(SSqlObj** pSqlObjList, int32_t numOfUpstream, SQue
// handle the following query process
if (px->pQInfo == NULL) {
SColumnInfo* pColumnInfo = extractColumnInfoFromResult(px->colList);
STableMeta* pTableMeta = tscGetMetaInfo(px, 0)->pTableMeta;
SSchema* pSchema = tscGetTableSchema(pTableMeta);
......@@ -1414,7 +1480,6 @@ void handleDownstreamOperator(SSqlObj** pSqlObjList, int32_t numOfUpstream, SQue
px->pQInfo->runtimeEnv.udfIsCopy = true;
px->pQInfo->runtimeEnv.pUdfInfo = pUdfInfo;
tfree(pColumnInfo);
tfree(schema);
// set the pRuntimeEnv for pSourceOperator
......@@ -1423,7 +1488,7 @@ void handleDownstreamOperator(SSqlObj** pSqlObjList, int32_t numOfUpstream, SQue
uint64_t qId = pSql->self;
qTableQuery(px->pQInfo, &qId);
convertQueryResult(pOutput, px, pSql->self, false);
convertQueryResult(pOutput, px, pSql->self, false, false);
}
static void tscDestroyResPointerInfo(SSqlRes* pRes) {
......@@ -3074,12 +3139,12 @@ void tscIncStreamExecutionCount(void* pStream) {
ps->num += 1;
}
bool tscValidateColumnId(STableMetaInfo* pTableMetaInfo, int32_t colId, int32_t numOfParams) {
bool tscValidateColumnId(STableMetaInfo* pTableMetaInfo, int32_t colId) {
if (pTableMetaInfo->pTableMeta == NULL) {
return false;
}
if (colId == TSDB_TBNAME_COLUMN_INDEX || (colId <= TSDB_UD_COLUMN_INDEX && numOfParams == 2)) {
if (colId == TSDB_TBNAME_COLUMN_INDEX || colId <= TSDB_UD_COLUMN_INDEX) {
return true;
}
......@@ -5372,3 +5437,152 @@ char* cloneCurrentDBName(SSqlObj* pSql) {
return p;
}
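/*
 * Summary of the flattening performed below: parseJsontoTagData turns a JSON
 * object into KV-row tag columns. Column startColId+1 holds a fixed key that
 * marks the json tag, column startColId+2 holds TSDB_DATA_JSON_NULL or
 * TSDB_DATA_JSON_NOT_NULL, and every key/value pair of the object then takes
 * two further columns: the key, followed by the value as type|data, where
 * strings map to NCHAR, numbers to DOUBLE or BIGINT, booleans to BOOL and
 * JSON null to a BINARY placeholder.
 */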
int parseJsontoTagData(char* json, SKVRowBuilder* kvRowBuilder, char* errMsg, int16_t startColId){
// set json NULL data
uint8_t nullTypeVal[CHAR_BYTES + VARSTR_HEADER_SIZE + INT_BYTES] = {0};
uint32_t jsonNULL = TSDB_DATA_JSON_NULL;
int jsonIndex = startColId + 1;
char nullTypeKey[VARSTR_HEADER_SIZE + INT_BYTES] = {0};
varDataSetLen(nullTypeKey, INT_BYTES);
nullTypeVal[0] = TSDB_DATA_TYPE_JSON;
varDataSetLen(nullTypeVal + CHAR_BYTES, INT_BYTES);
*(uint32_t*)(varDataVal(nullTypeKey)) = jsonNULL;
tdAddColToKVRow(kvRowBuilder, jsonIndex++, TSDB_DATA_TYPE_NCHAR, nullTypeKey, false); // add json null type
if (strtrim(json) == 0 || strcasecmp(json, "null") == 0){
*(uint32_t*)(varDataVal(nullTypeVal + CHAR_BYTES)) = jsonNULL;
tdAddColToKVRow(kvRowBuilder, jsonIndex++, TSDB_DATA_TYPE_NCHAR, nullTypeVal, true); // add json null value
return TSDB_CODE_SUCCESS;
}
int32_t jsonNotNull = TSDB_DATA_JSON_NOT_NULL;
*(uint32_t*)(varDataVal(nullTypeVal + CHAR_BYTES)) = jsonNotNull;
tdAddColToKVRow(kvRowBuilder, jsonIndex++, TSDB_DATA_TYPE_NCHAR, nullTypeVal, true); // add json type
// set json real data
cJSON *root = cJSON_Parse(json);
if (root == NULL){
tscError("json parse error");
return tscSQLSyntaxErrMsg(errMsg, "json parse error", NULL);
}
int size = cJSON_GetArraySize(root);
if(!cJSON_IsObject(root)){
tscError("json error invalide value");
return tscSQLSyntaxErrMsg(errMsg, "json error invalide value", NULL);
}
int retCode = 0;
SHashObj* keyHash = taosHashInit(8, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), false, false);
for(int i = 0; i < size; i++) {
cJSON* item = cJSON_GetArrayItem(root, i);
if (!item) {
tscError("json inner error:%d", i);
retCode = tscSQLSyntaxErrMsg(errMsg, "json inner error", NULL);
goto end;
}
char *jsonKey = item->string;
if(!isValidateTag(jsonKey)){
tscError("json key not validate");
retCode = tscSQLSyntaxErrMsg(errMsg, "json key not validate", NULL);
goto end;
}
if(strlen(jsonKey) > TSDB_MAX_JSON_KEY_LEN){
tscError("json key too long error");
retCode = tscSQLSyntaxErrMsg(errMsg, "json key too long, more than 256", NULL);
goto end;
}
if(strlen(jsonKey) == 0 || taosHashGet(keyHash, jsonKey, strlen(jsonKey)) != NULL){
continue;
}
// json key is encoded as binary
char tagKey[TSDB_MAX_JSON_KEY_LEN + VARSTR_HEADER_SIZE] = {0};
strncpy(varDataVal(tagKey), jsonKey, strlen(jsonKey));
int32_t outLen = (int32_t)strlen(jsonKey);
taosHashPut(keyHash, jsonKey, outLen, &outLen, CHAR_BYTES); // add key to hash to remove duplicates; the value is unused
varDataSetLen(tagKey, outLen);
tdAddColToKVRow(kvRowBuilder, jsonIndex++, TSDB_DATA_TYPE_NCHAR, tagKey, false); // add json key
if(item->type == cJSON_String){ // add json value format: type|data
char *jsonValue = item->valuestring;
outLen = 0;
char tagVal[TSDB_MAX_JSON_TAGS_LEN] = {0};
*tagVal = jsonType2DbType(0, item->type); // type
char* tagData = POINTER_SHIFT(tagVal,CHAR_BYTES);
if (!taosMbsToUcs4(jsonValue, strlen(jsonValue), varDataVal(tagData),
TSDB_MAX_JSON_TAGS_LEN - CHAR_BYTES - VARSTR_HEADER_SIZE, &outLen)) {
tscError("json string error:%s|%s", strerror(errno), jsonValue);
retCode = tscSQLSyntaxErrMsg(errMsg, "serizelize json error", NULL);
goto end;
}
varDataSetLen(tagData, outLen);
tdAddColToKVRow(kvRowBuilder, jsonIndex++, TSDB_DATA_TYPE_NCHAR, tagVal, true);
}else if(item->type == cJSON_Number){
char tagVal[LONG_BYTES + CHAR_BYTES] = {0};
*tagVal = jsonType2DbType(item->valuedouble, item->type); // type
char* tagData = POINTER_SHIFT(tagVal,CHAR_BYTES);
if(*tagVal == TSDB_DATA_TYPE_DOUBLE) *((double *)tagData) = item->valuedouble;
else if(*tagVal == TSDB_DATA_TYPE_BIGINT) *((int64_t *)tagData) = item->valueint;
tdAddColToKVRow(kvRowBuilder, jsonIndex++, TSDB_DATA_TYPE_BIGINT, tagVal, true);
}else if(item->type == cJSON_True || item->type == cJSON_False){
char tagVal[CHAR_BYTES + CHAR_BYTES] = {0};
*tagVal = jsonType2DbType((double)(item->valueint), item->type); // type
char* tagData = POINTER_SHIFT(tagVal,CHAR_BYTES);
*tagData = (char)(item->valueint);
tdAddColToKVRow(kvRowBuilder, jsonIndex++, TSDB_DATA_TYPE_BOOL, tagVal, true);
}else if(item->type == cJSON_NULL){
char tagVal[CHAR_BYTES + VARSTR_HEADER_SIZE + INT_BYTES] = {0};
*tagVal = jsonType2DbType(0, item->type); // type
int32_t* tagData = POINTER_SHIFT(tagVal,CHAR_BYTES);
varDataSetLen(tagData, INT_BYTES);
*(uint32_t*)(varDataVal(tagData)) = TSDB_DATA_JSON_null;
tdAddColToKVRow(kvRowBuilder, jsonIndex++, TSDB_DATA_TYPE_BINARY, tagVal, true);
}
else{
retCode = tscSQLSyntaxErrMsg(errMsg, "invalidate json value", NULL);
goto end;
}
}
if(taosHashGetSize(keyHash) == 0){ // set json NULL true
*(uint32_t*)(varDataVal(nullTypeVal + CHAR_BYTES)) = jsonNULL;
memcpy(POINTER_SHIFT(kvRowBuilder->buf, kvRowBuilder->pColIdx[2].offset), nullTypeVal, CHAR_BYTES + VARSTR_HEADER_SIZE + INT_BYTES);
}
end:
taosHashCleanup(keyHash);
cJSON_Delete(root);
return retCode;
}
int8_t jsonType2DbType(double data, int jsonType){
switch(jsonType){
case cJSON_Number:
if (data - (int64_t)data > 0) return TSDB_DATA_TYPE_DOUBLE; else return TSDB_DATA_TYPE_BIGINT;
case cJSON_String:
return TSDB_DATA_TYPE_NCHAR;
case cJSON_NULL:
return TSDB_DATA_TYPE_BINARY;
case cJSON_True:
case cJSON_False:
return TSDB_DATA_TYPE_BOOL;
}
return TSDB_DATA_TYPE_NULL;
}
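/* Illustration of the mapping above: 3 -> BIGINT, 3.5 -> DOUBLE,
 * "abc" -> NCHAR, true/false -> BOOL, and JSON null -> BINARY
 * (the placeholder type used for json null values). */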
// get key from json->'key'
void getJsonKey(SStrToken *t0){
while(true){
t0->n = tGetToken(t0->z, &t0->type);
if (t0->type == TK_STRING){
t0->z++;
t0->n -= 2;
break;
}else if (t0->type == TK_ILLEGAL){
assert(0);
}
t0->z += t0->n;
}
}
......@@ -547,7 +547,7 @@ void tdDestroyKVRowBuilder(SKVRowBuilder *pBuilder);
void tdResetKVRowBuilder(SKVRowBuilder *pBuilder);
SKVRow tdGetKVRowFromBuilder(SKVRowBuilder *pBuilder);
static FORCE_INLINE int tdAddColToKVRow(SKVRowBuilder *pBuilder, int16_t colId, int8_t type, void *value) {
static FORCE_INLINE int tdAddColToKVRow(SKVRowBuilder *pBuilder, int16_t colId, int8_t type, void *value, bool isJumpJsonVType) {
if (pBuilder->nCols >= pBuilder->tCols) {
pBuilder->tCols *= 2;
SColIdx* pColIdx = (SColIdx *)realloc((void *)(pBuilder->pColIdx), sizeof(SColIdx) * pBuilder->tCols);
......@@ -560,9 +560,14 @@ static FORCE_INLINE int tdAddColToKVRow(SKVRowBuilder *pBuilder, int16_t colId,
pBuilder->nCols++;
int tlen = IS_VAR_DATA_TYPE(type) ? varDataTLen(value) : TYPE_BYTES[type];
char* jumpType = (char*)value;
if(isJumpJsonVType) jumpType += CHAR_BYTES;
int tlen = IS_VAR_DATA_TYPE(type) ? varDataTLen(jumpType) : TYPE_BYTES[type];
if(isJumpJsonVType) tlen += CHAR_BYTES; // add type size
if (tlen > pBuilder->alloc - pBuilder->size) {
while (tlen > pBuilder->alloc - pBuilder->size) {
assert(pBuilder->alloc > 0);
pBuilder->alloc *= 2;
}
void* buf = realloc(pBuilder->buf, pBuilder->alloc);
......
......@@ -25,7 +25,7 @@ extern "C" {
// variant, each number/string/field_id has a corresponding struct during parsing sql
typedef struct tVariant {
uint32_t nType;
int32_t nType; // changed from uint32_t to int32_t: tVariantCreate() sets pVar->nType = -1 to mark an error type
int32_t nLen; // only used for string, for number, it is useless
union {
int64_t i64;
......
......@@ -117,11 +117,12 @@ void tExprTreeDestroy(tExprNode *pNode, void (*fp)(void *)) {
doExprTreeDestroy(&pNode, fp);
} else if (pNode->nodeType == TSQL_NODE_VALUE) {
tVariantDestroy(pNode->pVal);
tfree(pNode->pVal);
} else if (pNode->nodeType == TSQL_NODE_COL) {
tfree(pNode->pSchema);
}
free(pNode);
tfree(pNode);
}
static void doExprTreeDestroy(tExprNode **pExpr, void (*fp)(void *)) {
......@@ -138,12 +139,12 @@ static void doExprTreeDestroy(tExprNode **pExpr, void (*fp)(void *)) {
}
} else if ((*pExpr)->nodeType == TSQL_NODE_VALUE) {
tVariantDestroy((*pExpr)->pVal);
free((*pExpr)->pVal);
tfree((*pExpr)->pVal);
} else if ((*pExpr)->nodeType == TSQL_NODE_COL) {
free((*pExpr)->pSchema);
tfree((*pExpr)->pSchema);
}
free(*pExpr);
tfree(*pExpr);
*pExpr = NULL;
}
......@@ -288,7 +289,7 @@ static void exprTreeToBinaryImpl(SBufferWriter* bw, tExprNode* expr) {
tVariant* pVal = expr->pVal;
tbufWriteUint32(bw, pVal->nType);
if (pVal->nType == TSDB_DATA_TYPE_BINARY) {
if (pVal->nType == TSDB_DATA_TYPE_BINARY || pVal->nType == TSDB_DATA_TYPE_NCHAR) {
tbufWriteInt32(bw, pVal->nLen);
tbufWrite(bw, pVal->pz, pVal->nLen);
} else {
......@@ -349,7 +350,7 @@ static tExprNode* exprTreeFromBinaryImpl(SBufferReader* br) {
pExpr->pVal = pVal;
pVal->nType = tbufReadUint32(br);
if (pVal->nType == TSDB_DATA_TYPE_BINARY) {
if (pVal->nType == TSDB_DATA_TYPE_BINARY || pVal->nType == TSDB_DATA_TYPE_NCHAR) {
tbufReadToBuffer(br, &pVal->nLen, sizeof(pVal->nLen));
pVal->pz = calloc(1, pVal->nLen + 1);
tbufReadToBuffer(br, pVal->pz, pVal->nLen);
......
......@@ -1332,7 +1332,7 @@ static void doInitGlobalConfig(void) {
cfg.option = "httpDbNameMandatory";
cfg.ptr = &tsHttpDbNameMandatory;
cfg.valType = TAOS_CFG_VTYPE_INT8;
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG;
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_SHOW;
cfg.minValue = 0;
cfg.maxValue = 1;
cfg.ptrLength = 0;
......
......@@ -4,6 +4,7 @@
#include "tname.h"
#include "ttoken.h"
#include "tvariant.h"
#include "tglobal.h"
#define VALIDNUMOFCOLS(x) ((x) >= TSDB_MIN_COLUMNS && (x) <= TSDB_MAX_COLUMNS)
#define VALIDNUMOFTAGS(x) ((x) >= 0 && (x) <= TSDB_MAX_TAGS)
......@@ -251,6 +252,9 @@ static bool doValidateSchema(SSchema* pSchema, int32_t numOfCols, int32_t maxLen
int32_t rowLen = 0;
for (int32_t i = 0; i < numOfCols; ++i) {
if (pSchema[i].type == TSDB_DATA_TYPE_JSON && numOfCols != 1){
return false;
}
// 1. valid types
if (!isValidDataType(pSchema[i].type)) {
return false;
......@@ -301,8 +305,12 @@ bool tIsValidSchema(struct SSchema* pSchema, int32_t numOfCols, int32_t numOfTag
if (!doValidateSchema(pSchema, numOfCols, TSDB_MAX_BYTES_PER_ROW)) {
return false;
}
int32_t maxTagLen = TSDB_MAX_TAGS_LEN;
if (numOfTags == 1 && pSchema[numOfCols].type == TSDB_DATA_TYPE_JSON){
maxTagLen = TSDB_MAX_JSON_TAGS_LEN;
}
if (!doValidateSchema(&pSchema[numOfCols], numOfTags, TSDB_MAX_TAGS_LEN)) {
if (!doValidateSchema(&pSchema[numOfCols], numOfTags, maxTagLen)) {
return false;
}
......
......@@ -18,7 +18,7 @@
#include "ttokendef.h"
#include "tscompression.h"
const int32_t TYPE_BYTES[15] = {
const int32_t TYPE_BYTES[16] = {
-1, // TSDB_DATA_TYPE_NULL
sizeof(int8_t), // TSDB_DATA_TYPE_BOOL
sizeof(int8_t), // TSDB_DATA_TYPE_TINYINT
......@@ -34,6 +34,7 @@ const int32_t TYPE_BYTES[15] = {
sizeof(uint16_t), // TSDB_DATA_TYPE_USMALLINT
sizeof(uint32_t), // TSDB_DATA_TYPE_UINT
sizeof(uint64_t), // TSDB_DATA_TYPE_UBIGINT
sizeof(int8_t), // TSDB_DATA_TYPE_JSON
};
#define DO_STATICS(__sum, __min, __max, __minIndex, __maxIndex, _list, _index) \
......@@ -367,8 +368,8 @@ static void getStatics_nchr(const void *pData, int32_t numOfRow, int64_t *min, i
*maxIndex = 0;
}
tDataTypeDescriptor tDataTypes[15] = {
{TSDB_DATA_TYPE_NULL, 6, 1, "NOTYPE", 0, 0, NULL, NULL, NULL},
tDataTypeDescriptor tDataTypes[16] = {
{TSDB_DATA_TYPE_NULL, 6, 1, "NOTYPE", 0, 0, NULL, NULL, NULL},
{TSDB_DATA_TYPE_BOOL, 4, CHAR_BYTES, "BOOL", false, true, tsCompressBool, tsDecompressBool, getStatics_bool},
{TSDB_DATA_TYPE_TINYINT, 7, CHAR_BYTES, "TINYINT", INT8_MIN, INT8_MAX, tsCompressTinyint, tsDecompressTinyint, getStatics_i8},
{TSDB_DATA_TYPE_SMALLINT, 8, SHORT_BYTES, "SMALLINT", INT16_MIN, INT16_MAX, tsCompressSmallint, tsDecompressSmallint, getStatics_i16},
......@@ -376,13 +377,14 @@ tDataTypeDescriptor tDataTypes[15] = {
{TSDB_DATA_TYPE_BIGINT, 6, LONG_BYTES, "BIGINT", INT64_MIN, INT64_MAX, tsCompressBigint, tsDecompressBigint, getStatics_i64},
{TSDB_DATA_TYPE_FLOAT, 5, FLOAT_BYTES, "FLOAT", 0, 0, tsCompressFloat, tsDecompressFloat, getStatics_f},
{TSDB_DATA_TYPE_DOUBLE, 6, DOUBLE_BYTES, "DOUBLE", 0, 0, tsCompressDouble, tsDecompressDouble, getStatics_d},
{TSDB_DATA_TYPE_BINARY, 6, 0, "BINARY", 0, 0, tsCompressString, tsDecompressString, getStatics_bin},
{TSDB_DATA_TYPE_BINARY, 6, 0, "BINARY", 0, 0, tsCompressString, tsDecompressString, getStatics_bin},
{TSDB_DATA_TYPE_TIMESTAMP, 9, LONG_BYTES, "TIMESTAMP", INT64_MIN, INT64_MAX, tsCompressTimestamp, tsDecompressTimestamp, getStatics_i64},
{TSDB_DATA_TYPE_NCHAR, 5, 8, "NCHAR", 0, 0, tsCompressString, tsDecompressString, getStatics_nchr},
{TSDB_DATA_TYPE_NCHAR, 5, 8, "NCHAR", 0, 0, tsCompressString, tsDecompressString, getStatics_nchr},
{TSDB_DATA_TYPE_UTINYINT, 16, CHAR_BYTES, "TINYINT UNSIGNED", 0, UINT8_MAX, tsCompressTinyint, tsDecompressTinyint, getStatics_u8},
{TSDB_DATA_TYPE_USMALLINT, 17, SHORT_BYTES, "SMALLINT UNSIGNED", 0, UINT16_MAX, tsCompressSmallint, tsDecompressSmallint, getStatics_u16},
{TSDB_DATA_TYPE_UINT, 12, INT_BYTES, "INT UNSIGNED", 0, UINT32_MAX, tsCompressInt, tsDecompressInt, getStatics_u32},
{TSDB_DATA_TYPE_UBIGINT, 15, LONG_BYTES, "BIGINT UNSIGNED", 0, UINT64_MAX, tsCompressBigint, tsDecompressBigint, getStatics_u64},
{TSDB_DATA_TYPE_JSON,4, TSDB_MAX_JSON_TAGS_LEN, "JSON", 0, 0, tsCompressString, tsDecompressString, getStatics_nchr},
};
char tTokenTypeSwitcher[13] = {
......@@ -428,7 +430,7 @@ FORCE_INLINE void* getDataMax(int32_t type) {
bool isValidDataType(int32_t type) {
return type >= TSDB_DATA_TYPE_NULL && type <= TSDB_DATA_TYPE_UBIGINT;
return type >= TSDB_DATA_TYPE_NULL && type <= TSDB_DATA_TYPE_JSON;
}
void setVardataNull(void* val, int32_t type) {
......@@ -438,6 +440,9 @@ void setVardataNull(void* val, int32_t type) {
} else if (type == TSDB_DATA_TYPE_NCHAR) {
varDataSetLen(val, sizeof(int32_t));
*(uint32_t*) varDataVal(val) = TSDB_DATA_NCHAR_NULL;
} else if (type == TSDB_DATA_TYPE_JSON) {
varDataSetLen(val, sizeof(int32_t));
*(uint32_t*) varDataVal(val) = TSDB_DATA_JSON_NULL;
} else {
assert(0);
}
......@@ -505,6 +510,7 @@ void setNullN(void *val, int32_t type, int32_t bytes, int32_t numOfElems) {
break;
case TSDB_DATA_TYPE_NCHAR:
case TSDB_DATA_TYPE_BINARY:
case TSDB_DATA_TYPE_JSON:
for (int32_t i = 0; i < numOfElems; ++i) {
setVardataNull(POINTER_SHIFT(val, i * bytes), type);
}
......
......@@ -158,7 +158,7 @@ void tVariantCreateFromBinary(tVariant *pVar, const char *pz, size_t len, uint32
pVar->dKey = GET_FLOAT_VAL(pz);
break;
}
case TSDB_DATA_TYPE_NCHAR: { // here we get the nchar length from raw binary bits length
case TSDB_DATA_TYPE_NCHAR:{ // here we get the nchar length from raw binary bits length
size_t lenInwchar = len / TSDB_NCHAR_SIZE;
pVar->wpz = calloc(1, (lenInwchar + 1) * TSDB_NCHAR_SIZE);
......@@ -167,7 +167,13 @@ void tVariantCreateFromBinary(tVariant *pVar, const char *pz, size_t len, uint32
break;
}
case TSDB_DATA_TYPE_BINARY: { // todo refactor, extract a method
case TSDB_DATA_TYPE_JSON:{
pVar->pz = calloc(len + 2, sizeof(char));
memcpy(pVar->pz, pz, len);
pVar->nLen = (int32_t)len;
break;
}
case TSDB_DATA_TYPE_BINARY:{
pVar->pz = calloc(len + 1, sizeof(char));
memcpy(pVar->pz, pz, len);
pVar->nLen = (int32_t)len;
......@@ -185,7 +191,7 @@ void tVariantCreateFromBinary(tVariant *pVar, const char *pz, size_t len, uint32
void tVariantDestroy(tVariant *pVar) {
if (pVar == NULL) return;
if (pVar->nType == TSDB_DATA_TYPE_BINARY || pVar->nType == TSDB_DATA_TYPE_NCHAR) {
if (pVar->nType == TSDB_DATA_TYPE_BINARY || pVar->nType == TSDB_DATA_TYPE_NCHAR || pVar->nType == TSDB_DATA_TYPE_JSON) {
tfree(pVar->pz);
pVar->nLen = 0;
}
......@@ -210,11 +216,41 @@ bool tVariantIsValid(tVariant *pVar) {
return isValidDataType(pVar->nType);
}
bool tVariantTypeMatch(tVariant *pVar, int8_t dbType){
switch (dbType) {
case TSDB_DATA_TYPE_BINARY:
case TSDB_DATA_TYPE_NCHAR: {
if(pVar->nType != TSDB_DATA_TYPE_BINARY && pVar->nType != TSDB_DATA_TYPE_NCHAR){
return false;
}
break;
}
case TSDB_DATA_TYPE_BOOL:
case TSDB_DATA_TYPE_TINYINT:
case TSDB_DATA_TYPE_SMALLINT:
case TSDB_DATA_TYPE_INT:
case TSDB_DATA_TYPE_UTINYINT:
case TSDB_DATA_TYPE_USMALLINT:
case TSDB_DATA_TYPE_UINT:
case TSDB_DATA_TYPE_BIGINT:
case TSDB_DATA_TYPE_UBIGINT:
case TSDB_DATA_TYPE_FLOAT:
case TSDB_DATA_TYPE_DOUBLE:{
if(pVar->nType == TSDB_DATA_TYPE_BINARY || pVar->nType == TSDB_DATA_TYPE_NCHAR){
return false;
}
break;
}
}
return true;
}
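/* Note on the check above: e.g. binding the string value '1' to an INT
 * column fails the match, while any numeric variant may be paired with any
 * numeric column and is converted later. */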
void tVariantAssign(tVariant *pDst, const tVariant *pSrc) {
if (pSrc == NULL || pDst == NULL) return;
pDst->nType = pSrc->nType;
if (pSrc->nType == TSDB_DATA_TYPE_BINARY || pSrc->nType == TSDB_DATA_TYPE_NCHAR) {
if (pSrc->nType == TSDB_DATA_TYPE_BINARY || pSrc->nType == TSDB_DATA_TYPE_NCHAR || pSrc->nType == TSDB_DATA_TYPE_JSON) {
int32_t len = pSrc->nLen + TSDB_NCHAR_SIZE;
char* p = realloc(pDst->pz, len);
assert(p);
......@@ -249,7 +285,7 @@ void tVariantAssign(tVariant *pDst, const tVariant *pSrc) {
}
}
if (pDst->nType != TSDB_DATA_TYPE_POINTER_ARRAY && pDst->nType != TSDB_DATA_TYPE_VALUE_ARRAY) {
if (pDst->nType != TSDB_DATA_TYPE_POINTER_ARRAY && pDst->nType != TSDB_DATA_TYPE_VALUE_ARRAY && isValidDataType(pDst->nType)) { // if pDst->nType=-1, core dump. eg: where intcolumn=999999999999999999999999999
pDst->nLen = tDataTypes[pDst->nType].bytes;
}
}
......@@ -267,7 +303,7 @@ int32_t tVariantCompare(const tVariant* p1, const tVariant* p2) {
return 1;
}
if (p1->nType == TSDB_DATA_TYPE_BINARY || p1->nType == TSDB_DATA_TYPE_NCHAR) {
if (p1->nType == TSDB_DATA_TYPE_BINARY || p1->nType == TSDB_DATA_TYPE_NCHAR || p1->nType == TSDB_DATA_TYPE_JSON) {
if (p1->nLen == p2->nLen) {
return memcmp(p1->pz, p2->pz, p1->nLen);
} else {
......@@ -815,7 +851,7 @@ int32_t tVariantDumpEx(tVariant *pVariant, char *payload, int16_t type, bool inc
break;
}
case TSDB_DATA_TYPE_BINARY: {
case TSDB_DATA_TYPE_BINARY:{
if (!includeLengthPrefix) {
if (pVariant->nType == TSDB_DATA_TYPE_NULL) {
*(uint8_t*) payload = TSDB_DATA_BINARY_NULL;
......@@ -852,7 +888,7 @@ int32_t tVariantDumpEx(tVariant *pVariant, char *payload, int16_t type, bool inc
}
break;
}
case TSDB_DATA_TYPE_NCHAR: {
case TSDB_DATA_TYPE_NCHAR:{
int32_t newlen = 0;
if (!includeLengthPrefix) {
if (pVariant->nType == TSDB_DATA_TYPE_NULL) {
......@@ -888,6 +924,16 @@ int32_t tVariantDumpEx(tVariant *pVariant, char *payload, int16_t type, bool inc
break;
}
case TSDB_DATA_TYPE_JSON:{
if (pVariant->nType == TSDB_DATA_TYPE_BINARY){
*((int8_t *)payload) = TSDB_DATA_JSON_PLACEHOLDER;
} else if (pVariant->nType == TSDB_DATA_TYPE_JSON){ // select * from stable, set tag type to json,from setTagValue/tag_project_function
memcpy(payload, pVariant->pz, pVariant->nLen);
}else {
return -1;
}
break;
}
}
return 0;
......

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 16
VisualStudioVersion = 16.0.30114.105
MinimumVisualStudioVersion = 10.0.40219.1
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "src", "src", "{A1FB5B66-E32F-4789-9BE9-042E5BD21087}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "TDengineDriver", "src\TDengineDriver\TDengineDriver.csproj", "{5BED7402-0A65-4ED9-A491-C56BFB518045}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "test", "test", "{CB8E6458-31E1-4351-B704-1B918E998654}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "XUnitTest", "src\test\XUnitTest\XUnitTest.csproj", "{64C0A478-2591-4459-9F8F-A70F37976A41}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "Cases", "src\test\Cases\Cases.csproj", "{19A69D26-66BF-4227-97BE-9B087BC76B2F}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Debug|x64 = Debug|x64
Debug|x86 = Debug|x86
Release|Any CPU = Release|Any CPU
Release|x64 = Release|x64
Release|x86 = Release|x86
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{5BED7402-0A65-4ED9-A491-C56BFB518045}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{5BED7402-0A65-4ED9-A491-C56BFB518045}.Debug|Any CPU.Build.0 = Debug|Any CPU
{5BED7402-0A65-4ED9-A491-C56BFB518045}.Debug|x64.ActiveCfg = Debug|Any CPU
{5BED7402-0A65-4ED9-A491-C56BFB518045}.Debug|x64.Build.0 = Debug|Any CPU
{5BED7402-0A65-4ED9-A491-C56BFB518045}.Debug|x86.ActiveCfg = Debug|Any CPU
{5BED7402-0A65-4ED9-A491-C56BFB518045}.Debug|x86.Build.0 = Debug|Any CPU
{5BED7402-0A65-4ED9-A491-C56BFB518045}.Release|Any CPU.ActiveCfg = Release|Any CPU
{5BED7402-0A65-4ED9-A491-C56BFB518045}.Release|Any CPU.Build.0 = Release|Any CPU
{5BED7402-0A65-4ED9-A491-C56BFB518045}.Release|x64.ActiveCfg = Release|Any CPU
{5BED7402-0A65-4ED9-A491-C56BFB518045}.Release|x64.Build.0 = Release|Any CPU
{5BED7402-0A65-4ED9-A491-C56BFB518045}.Release|x86.ActiveCfg = Release|Any CPU
{5BED7402-0A65-4ED9-A491-C56BFB518045}.Release|x86.Build.0 = Release|Any CPU
{64C0A478-2591-4459-9F8F-A70F37976A41}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{64C0A478-2591-4459-9F8F-A70F37976A41}.Debug|Any CPU.Build.0 = Debug|Any CPU
{64C0A478-2591-4459-9F8F-A70F37976A41}.Debug|x64.ActiveCfg = Debug|Any CPU
{64C0A478-2591-4459-9F8F-A70F37976A41}.Debug|x64.Build.0 = Debug|Any CPU
{64C0A478-2591-4459-9F8F-A70F37976A41}.Debug|x86.ActiveCfg = Debug|Any CPU
{64C0A478-2591-4459-9F8F-A70F37976A41}.Debug|x86.Build.0 = Debug|Any CPU
{64C0A478-2591-4459-9F8F-A70F37976A41}.Release|Any CPU.ActiveCfg = Release|Any CPU
{64C0A478-2591-4459-9F8F-A70F37976A41}.Release|Any CPU.Build.0 = Release|Any CPU
{64C0A478-2591-4459-9F8F-A70F37976A41}.Release|x64.ActiveCfg = Release|Any CPU
{64C0A478-2591-4459-9F8F-A70F37976A41}.Release|x64.Build.0 = Release|Any CPU
{64C0A478-2591-4459-9F8F-A70F37976A41}.Release|x86.ActiveCfg = Release|Any CPU
{64C0A478-2591-4459-9F8F-A70F37976A41}.Release|x86.Build.0 = Release|Any CPU
{19A69D26-66BF-4227-97BE-9B087BC76B2F}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{19A69D26-66BF-4227-97BE-9B087BC76B2F}.Debug|Any CPU.Build.0 = Debug|Any CPU
{19A69D26-66BF-4227-97BE-9B087BC76B2F}.Debug|x64.ActiveCfg = Debug|Any CPU
{19A69D26-66BF-4227-97BE-9B087BC76B2F}.Debug|x64.Build.0 = Debug|Any CPU
{19A69D26-66BF-4227-97BE-9B087BC76B2F}.Debug|x86.ActiveCfg = Debug|Any CPU
{19A69D26-66BF-4227-97BE-9B087BC76B2F}.Debug|x86.Build.0 = Debug|Any CPU
{19A69D26-66BF-4227-97BE-9B087BC76B2F}.Release|Any CPU.ActiveCfg = Release|Any CPU
{19A69D26-66BF-4227-97BE-9B087BC76B2F}.Release|Any CPU.Build.0 = Release|Any CPU
{19A69D26-66BF-4227-97BE-9B087BC76B2F}.Release|x64.ActiveCfg = Release|Any CPU
{19A69D26-66BF-4227-97BE-9B087BC76B2F}.Release|x64.Build.0 = Release|Any CPU
{19A69D26-66BF-4227-97BE-9B087BC76B2F}.Release|x86.ActiveCfg = Release|Any CPU
{19A69D26-66BF-4227-97BE-9B087BC76B2F}.Release|x86.Build.0 = Release|Any CPU
EndGlobalSection
GlobalSection(NestedProjects) = preSolution
{5BED7402-0A65-4ED9-A491-C56BFB518045} = {A1FB5B66-E32F-4789-9BE9-042E5BD21087}
{CB8E6458-31E1-4351-B704-1B918E998654} = {A1FB5B66-E32F-4789-9BE9-042E5BD21087}
{64C0A478-2591-4459-9F8F-A70F37976A41} = {CB8E6458-31E1-4351-B704-1B918E998654}
{19A69D26-66BF-4227-97BE-9B087BC76B2F} = {CB8E6458-31E1-4351-B704-1B918E998654}
EndGlobalSection
EndGlobal
......@@ -47,6 +47,14 @@ namespace TDengineDriver
TDDB_OPTION_SHELL_ACTIVITY_TIMER = 4
}
enum TaosField
{
STRUCT_SIZE = 68,
NAME_LENGTH = 65,
TYPE_OFFSET = 65,
BYTES_OFFSET = 66,
}
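// The offsets above mirror the native TAOS_FIELD layout that FetchFields
// below relies on (a sketch of the assumed C layout, not part of this diff):
//   char name[65]; uint8_t type; int16_t bytes;  => sizeof == 68,
// so `type` sits at byte 65 and `bytes` at byte 66 of each entry.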
public class TDengineMeta
{
public string name;
......@@ -90,6 +98,51 @@ namespace TDengineDriver
}
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
public struct TAOS_BIND
{
// column type
public int buffer_type;
// one column value
public IntPtr buffer;
// unused
public Int32 buffer_length;
// actual value length in buffer
public IntPtr length;
// indicates the column value is null or not
public IntPtr is_null;
// unused
public int is_unsigned;
// unused
public IntPtr error;
public Int64 u;
public uint allocated;
}
[StructLayout(LayoutKind.Sequential)]
public struct TAOS_MULTI_BIND
{
// column type
public int buffer_type;
// array, one or more lines column value
public IntPtr buffer;
//length of element in TAOS_MULTI_BIND.buffer (for binary and nchar it is the longest element's length)
public ulong buffer_length;
//array, actual data length for each value
public IntPtr length;
//array, indicates each column value is null or not
public IntPtr is_null;
// line number, or the values number in buffer
public int num;
}
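// A minimal sketch (an assumption, not part of this diff) of how a helper
// such as TaosMultiBind.MultiBindInt would fill this struct for { 1, 2, 3 }:
//   TAOS_MULTI_BIND mb = new TAOS_MULTI_BIND();
//   mb.buffer_type   = (int)TDengineDataType.TSDB_DATA_TYPE_INT;
//   mb.buffer_length = sizeof(int);   // width of one element
//   mb.num           = 3;             // number of rows
//   mb.buffer        = Marshal.AllocHGlobal(sizeof(int) * 3);
//   Marshal.Copy(new int[] { 1, 2, 3 }, 0, mb.buffer, 3);
//   // mb.length and mb.is_null point to per-row int32 lengths and byte flags.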
public class TDengine
{
public const int TSDB_CODE_SUCCESS = 0;
......@@ -130,7 +183,7 @@ namespace TDengineDriver
static extern private IntPtr taos_fetch_fields(IntPtr res);
static public List<TDengineMeta> FetchFields(IntPtr res)
{
const int fieldSize = 68;
// const int fieldSize = 68;
List<TDengineMeta> metas = new List<TDengineMeta>();
if (res == IntPtr.Zero)
......@@ -143,12 +196,11 @@ namespace TDengineDriver
for (int i = 0; i < fieldCount; ++i)
{
int offset = i * fieldSize;
int offset = i * (int)TaosField.STRUCT_SIZE;
TDengineMeta meta = new TDengineMeta();
meta.name = Marshal.PtrToStringAnsi(fieldsPtr + offset);
meta.type = Marshal.ReadByte(fieldsPtr + offset + 65);
meta.size = Marshal.ReadInt16(fieldsPtr + offset + 66);
meta.type = Marshal.ReadByte(fieldsPtr + offset + (int)TaosField.TYPE_OFFSET);
meta.size = Marshal.ReadInt16(fieldsPtr + offset + (int)TaosField.BYTES_OFFSET);
metas.Add(meta);
}
......@@ -163,12 +215,185 @@ namespace TDengineDriver
[DllImport("taos", EntryPoint = "taos_close", CallingConvention = CallingConvention.Cdecl)]
static extern public int Close(IntPtr taos);
//get precision in restultset
//get precision of the result set
[DllImport("taos", EntryPoint = "taos_result_precision", CallingConvention = CallingConvention.Cdecl)]
static extern public int ResultPrecision(IntPtr taos);
//schemaless API
[DllImport("taos",SetLastError = true, EntryPoint = "taos_schemaless_insert", CallingConvention = CallingConvention.Cdecl)]
static extern public IntPtr SchemalessInsert(IntPtr taos, string[] lines, int numLines, int protocol, int precision);
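// A usage sketch (assumption: the numeric protocol/precision arguments follow
// the native TSDB_SML_* enums, where 1 selects InfluxDB line protocol and 0
// leaves the timestamp precision to be inferred from the data):
//   string[] lines = { "st,t1=3i64 c1=2i64 1626006833639000000" };
//   IntPtr smlRes = TDengine.SchemalessInsert(conn, lines, lines.Length, 1, 0);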
//stmt APIs:
/// <summary>
/// init a TAOS_STMT object for later use.
/// </summary>
/// <param name="taos">a valid taos connection</param>
/// <returns>
/// Non-NULL returned for success, NULL for failure; the object should be freed with taos_stmt_close.
/// </returns>
[DllImport("taos", EntryPoint = "taos_stmt_init", CallingConvention = CallingConvention.Cdecl)]
static extern public IntPtr StmtInit(IntPtr taos);
/// <summary>
/// prepare a sql statement; 'sql' should be a valid INSERT/SELECT statement.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <param name="sql">sql string,used to bind parameters with</param>
/// <param name="length">no used</param>
/// <returns>0 for success, non-zero for failure.</returns>
[DllImport("taos", EntryPoint = "taos_stmt_prepare", CallingConvention = CallingConvention.Cdecl)]
static extern public int StmtPrepare(IntPtr stmt, string sql);
/// <summary>
/// For INSERT only. Used to bind the table name as a parameter for the input stmt object.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <param name="name">table name you want to bind</param>
/// <returns>0 for success, non-zero for failure.</returns>
[DllImport("taos", EntryPoint = "taos_stmt_set_tbname", CallingConvention = CallingConvention.Cdecl)]
static extern public int StmtSetTbname(IntPtr stmt, string name);
/// <summary>
/// For INSERT only.
/// Set a table name for binding it as a parameter. Only used when binding all tables
/// in one stable; the user application must call the 'LoadTableInfo' API to load all table
/// meta before calling this API. If the table meta is not cached locally, an error is returned.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <param name="name">table name which is belong to an stable</param>
/// <returns>0 for success, non-zero for failure.</returns>
[DllImport("taos", EntryPoint = "taos_stmt_set_sub_tbname", CallingConvention = CallingConvention.Cdecl)]
static extern public int StmtSetSubTbname(IntPtr stmt, string name);
/// <summary>
/// For INSERT only.
/// set a table name for binding it as a parameter, together with tag values for all tag parameters.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <param name="name">use to set table name</param>
/// <param name="tags">
/// an array containing all tag values; each item in the array represents a tag column's value.
/// the item count and order must be consistent with the stable's tag definition.
/// </param>
/// <returns>0 for success, non-zero for failure.</returns>
[DllImport("taos", EntryPoint = "taos_stmt_set_tbname_tags", CallingConvention = CallingConvention.Cdecl)]
static extern public int StmtSetTbnameTags(IntPtr stmt, string name, TAOS_BIND[] tags);
/// <summary>
/// For both INSERT and SELECT.
/// bind a whole row of data.
/// The TAOS_BIND structure is used in the same way as MYSQL_BIND in MySQL.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <param name="bind">
/// points to an array containing a whole row of data.
/// the item count and order must be consistent with the columns in the sql statement.
/// </param>
/// <returns>0 for success, non-zero for failure.</returns>
[DllImport("taos", EntryPoint = "taos_stmt_bind_param", CallingConvention = CallingConvention.Cdecl, SetLastError = true)]
static extern public int StmtBindParam(IntPtr stmt, TAOS_BIND[] bind);
/// <summary>
/// bind a single column's data; for internal use and INSERT only.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <param name="bind">points to a column's data which could be the one or more lines. </param>
/// <param name="colIdx">the column's index in prepared sql statement, it starts from 0.</param>
/// <returns>0 for success, non-zero for failure.</returns>
[DllImport("taos", EntryPoint = "taos_stmt_bind_single_param_batch", CallingConvention = CallingConvention.Cdecl)]
static extern public int StmtBindSingleParamBatch(IntPtr stmt, ref TAOS_MULTI_BIND bind, int colIdx);
/// <summary>
/// for INSERT only.
/// bind one or more rows of data.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <param name="bind">
/// points to an array containing one or more rows of data. Each item in the array represents a column's value(s);
/// the item count and order must be consistent with the columns in the sql statement.
/// </param>
/// <returns>0 for success, non-zero for failure.</returns>
[DllImport("taos", EntryPoint = "taos_stmt_bind_param_batch", CallingConvention = CallingConvention.Cdecl)]
static extern public int StmtBindParamBatch(IntPtr stmt, [In, Out] TAOS_MULTI_BIND[] bind);
/// <summary>
/// For INSERT only.
/// add all currently bound parameters to the batch. Must be called after each call to
/// StmtBindParam, or after all columns of one or more rows have been bound
/// with StmtBindSingleParamBatch. The user application can call any bind-parameter
/// API again to bind more rows after calling this API.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <returns>0 for success, non-zero for failure.</returns>
[DllImport("taos", EntryPoint = "taos_stmt_add_batch", CallingConvention = CallingConvention.Cdecl)]
static extern public int StmtAddBatch(IntPtr stmt);
/// <summary>
/// actually execute the INSERT/SELECT sql statement.
/// The user application can continue to bind new data after calling this API.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <returns></returns>
[DllImport("taos", EntryPoint = "taos_stmt_execute", CallingConvention = CallingConvention.Cdecl)]
static extern public int StmtExecute(IntPtr stmt);
/// <summary>
/// For SELECT only: get the query result. The user application should free it with the 'FreeResult' API when done.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <returns>Not NULL for success, NULL for failure.</returns>
[DllImport("taos", EntryPoint = "taos_stmt_use_result", CallingConvention = CallingConvention.Cdecl)]
static extern public IntPtr StmtUseResult(IntPtr stmt);
/// <summary>
/// close STMT object and free resources.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <returns>0 for success, non-zero for failure.</returns>
[DllImport("taos", EntryPoint = "taos_stmt_close", CallingConvention = CallingConvention.Cdecl)]
static extern public int StmtClose(IntPtr stmt);
[DllImport("taos", EntryPoint = "taos_load_table_info", CallingConvention = CallingConvention.Cdecl)]
/// <summary>
/// the user application must call this API to load the meta of all tables.
/// </summary>
/// <param name="taos">taos connection</param>
/// <param name="tableList">tablelist</param>
/// <returns></returns>
static extern private int LoadTableInfoDll(IntPtr taos, string tableList);
/// <summary>
/// the user application calls this API to load the meta of all tables; this method calls the native
/// method LoadTableInfoDll.
/// this method must be called before StmtSetSubTbname(IntPtr stmt, string name);
/// </summary>
/// <param name="taos">taos connection</param>
/// <param name="tableList">tables need to load meta info are form in an array</param>
/// <returns></returns>
static public int LoadTableInfo(IntPtr taos, string[] tableList)
{
string listStr = string.Join(",", tableList);
return LoadTableInfoDll(taos, listStr);
}
/// <summary>
/// get the detailed error message after any stmt API call fails. If there was no failure,
/// the result returned by this API is undefined.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <returns>piont the error message</returns>
[DllImport("taos", EntryPoint = "taos_stmt_errstr", CallingConvention = CallingConvention.Cdecl)]
static extern private IntPtr StmtErrPtr(IntPtr stmt);
/// <summary>
/// get the detailed error message after any stmt API call fails. If there was no failure,
/// the result returned by this API is undefined.
/// </summary>
/// <param name="stmt">could be the value returned by 'StmtInit', that may be a valid object or NULL.</param>
/// <returns>error string</returns>
static public string StmtErrorStr(IntPtr stmt)
{
IntPtr stmtErrPrt = StmtErrPtr(stmt);
return Marshal.PtrToStringAnsi(stmtErrPrt);
}
}
}
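A minimal usage sketch of the stmt workflow above (not part of this changeset; it assumes a valid connection handle `conn` and an existing table `weather (ts timestamp, temperature int)`):

```csharp
IntPtr stmt = TDengine.StmtInit(conn);
if (stmt == IntPtr.Zero) throw new Exception("StmtInit failed");
if (TDengine.StmtPrepare(stmt, "insert into weather values(?, ?)") != 0)
    throw new Exception(TDengine.StmtErrorStr(stmt));
// bind one row: a timestamp and an int value
TAOS_BIND[] row = new TAOS_BIND[2];
row[0] = TaosBind.BindTimestamp(1637064040000);
row[1] = TaosBind.BindInt(23);
if (TDengine.StmtBindParam(stmt, row) != 0 ||
    TDengine.StmtAddBatch(stmt) != 0 ||
    TDengine.StmtExecute(stmt) != 0)
    throw new Exception(TDengine.StmtErrorStr(stmt));
TaosBind.FreeTaosBind(row);  // release the unmanaged buffers
TDengine.StmtClose(stmt);
```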
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>net5.0</TargetFramework>
</PropertyGroup>
</Project>
using System;
using System.Runtime.InteropServices;
namespace TDengineDriver
{
/// <summary>
/// this class is used to get an instance of the TAOS_BIND or TAOS_MULTI_BIND struct
/// corresponding to a TDengine data type. For example, calling
/// "BindBinary" returns a TAOS_BIND object corresponding to TDengine's
/// binary type.
/// </summary>
public class TaosBind
{
public static TAOS_BIND BindBool(bool val)
{
TAOS_BIND bind = new TAOS_BIND();
byte[] boolByteArr = BitConverter.GetBytes(val);
int boolByteArrSize = Marshal.SizeOf(boolByteArr[0]) * boolByteArr.Length;
IntPtr bo = Marshal.AllocHGlobal(1);
Marshal.Copy(boolByteArr, 0, bo, boolByteArr.Length);
int length = sizeof(Boolean);
IntPtr lengPtr = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(lengPtr, length);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_BOOL;
bind.buffer = bo;
bind.buffer_length = length;
bind.length = lengPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindTinyInt(sbyte val)
{
TAOS_BIND bind = new TAOS_BIND();
byte[] tinyIntByteArr = BitConverter.GetBytes(val);
int tinyIntByteArrSize = Marshal.SizeOf(tinyIntByteArr[0]) * tinyIntByteArr.Length;
IntPtr uManageTinyInt = Marshal.AllocHGlobal(tinyIntByteArrSize);
Marshal.Copy(tinyIntByteArr, 0, uManageTinyInt, tinyIntByteArr.Length);
int length = sizeof(sbyte);
IntPtr lengPtr = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(lengPtr, length);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_TINYINT;
bind.buffer = uManageTinyInt;
bind.buffer_length = length;
bind.length = lengPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindSmallInt(short val)
{
TAOS_BIND bind = new TAOS_BIND();
IntPtr uManageSmallInt = Marshal.AllocHGlobal(sizeof(short));
Marshal.WriteInt16(uManageSmallInt, val);
int length = sizeof(short);
IntPtr lengPtr = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(lengPtr, length);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_SMALLINT;
bind.buffer = uManageSmallInt;
bind.buffer_length = length;
bind.length = lengPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindInt(int val)
{
TAOS_BIND bind = new TAOS_BIND();
IntPtr uManageInt = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(uManageInt, val);
int length = sizeof(int);
IntPtr lengPtr = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(lengPtr, length);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_INT;
bind.buffer = uManageInt;
bind.buffer_length = length;
bind.length = lengPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindBigInt(long val)
{
TAOS_BIND bind = new TAOS_BIND();
IntPtr uManageBigInt = Marshal.AllocHGlobal(sizeof(long));
Marshal.WriteInt64(uManageBigInt, val);
int length = sizeof(long);
IntPtr lengPtr = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(lengPtr, length);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_BIGINT;
bind.buffer = uManageBigInt;
bind.buffer_length = length;
bind.length = lengPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindUTinyInt(byte val)
{
TAOS_BIND bind = new TAOS_BIND();
IntPtr uManageTinyInt = Marshal.AllocHGlobal(sizeof(byte));
Marshal.WriteByte(uManageTinyInt, val);
int length = sizeof(byte);
IntPtr lengPtr = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(lengPtr, length);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_UTINYINT;
bind.buffer = uManageTinyInt;
bind.buffer_length = length;
bind.length = lengPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindUSmallInt(UInt16 val)
{
TAOS_BIND bind = new TAOS_BIND();
byte[] uSmallIntByteArr = BitConverter.GetBytes(val);
int usmallSize = Marshal.SizeOf(uSmallIntByteArr[0]) * uSmallIntByteArr.Length;
IntPtr uManageUnsignSmallInt = Marshal.AllocHGlobal(usmallSize);
Marshal.Copy(uSmallIntByteArr, 0, uManageUnsignSmallInt, uSmallIntByteArr.Length);
int length = sizeof(UInt16);
IntPtr lengPtr = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(lengPtr, length);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_USMALLINT;
bind.buffer = uManageUnsignSmallInt;
bind.buffer_length = length;
bind.length = lengPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindUInt(uint val)
{
TAOS_BIND bind = new TAOS_BIND();
byte[] uManageIntByteArr = BitConverter.GetBytes(val);
int usmallSize = Marshal.SizeOf(uManageIntByteArr[0]) * uManageIntByteArr.Length;
IntPtr uManageInt = Marshal.AllocHGlobal(usmallSize);
Marshal.Copy(uManageIntByteArr, 0, uManageInt, uManageIntByteArr.Length);
int length = sizeof(uint);
IntPtr lengPtr = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(lengPtr, length);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_UINT;
bind.buffer = uManageInt;
bind.buffer_length = length;
bind.length = lengPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindUBigInt(ulong val)
{
TAOS_BIND bind = new TAOS_BIND();
byte[] uManageBigIntByteArr = BitConverter.GetBytes(val);
int usmallSize = Marshal.SizeOf(uManageBigIntByteArr[0]) * uManageBigIntByteArr.Length;
IntPtr uManageBigInt = Marshal.AllocHGlobal(usmallSize);
Marshal.Copy(uManageBigIntByteArr, 0, uManageBigInt, uManageBigIntByteArr.Length);
int length = sizeof(ulong);
IntPtr lengPtr = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(lengPtr, length);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_UBIGINT;
bind.buffer = uManageBigInt;
bind.buffer_length = length;
bind.length = lengPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindFloat(float val)
{
TAOS_BIND bind = new TAOS_BIND();
byte[] floatByteArr = BitConverter.GetBytes(val);
int floatByteArrSize = Marshal.SizeOf(floatByteArr[0]) * floatByteArr.Length;
IntPtr uManageFloat = Marshal.AllocHGlobal(floatByteArrSize);
Marshal.Copy(floatByteArr, 0, uManageFloat, floatByteArr.Length);
int length = sizeof(float);
IntPtr lengPtr = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(lengPtr, length);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_FLOAT;
bind.buffer = uManageFloat;
bind.buffer_length = length;
bind.length = lengPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindDouble(Double val)
{
TAOS_BIND bind = new TAOS_BIND();
byte[] doubleByteArr = BitConverter.GetBytes(val);
int doubleByteArrSize = Marshal.SizeOf(doubleByteArr[0]) * doubleByteArr.Length;
IntPtr uManageDouble = Marshal.AllocHGlobal(doubleByteArrSize);
Marshal.Copy(doubleByteArr, 0, uManageDouble, doubleByteArr.Length);
int length = sizeof(Double);
IntPtr lengPtr = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(lengPtr, length);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_DOUBLE;
bind.buffer = uManageDouble;
bind.buffer_length = length;
bind.length = lengPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindBinary(String val)
{
TAOS_BIND bind = new TAOS_BIND();
IntPtr umanageBinary = Marshal.StringToHGlobalAnsi(val);
int leng = val.Length;
IntPtr lenPtr = Marshal.AllocHGlobal(sizeof(ulong));
Marshal.WriteInt64(lenPtr, leng);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_BINARY;
bind.buffer = umanageBinary;
bind.buffer_length = leng;
bind.length = lenPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindNchar(String val)
{
TAOS_BIND bind = new TAOS_BIND();
IntPtr umanageNchar = (IntPtr)Marshal.StringToHGlobalAnsi(val);
int leng = val.Length;
IntPtr lenPtr = Marshal.AllocHGlobal(sizeof(ulong));
Marshal.WriteInt64(lenPtr, leng);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_NCHAR;
bind.buffer = umanageNchar;
bind.buffer_length = leng;
bind.length = lenPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static TAOS_BIND BindNil()
{
TAOS_BIND bind = new TAOS_BIND();
int isNull = 1; // non-zero marks the value as null
IntPtr lenPtr = Marshal.AllocHGlobal(sizeof(int));
Marshal.WriteInt32(lenPtr, isNull);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_NULL;
bind.is_null = lenPtr;
return bind;
}
public static TAOS_BIND BindTimestamp(long ts)
{
TAOS_BIND bind = new TAOS_BIND();
IntPtr uManageTs = Marshal.AllocHGlobal(sizeof(long));
Marshal.WriteInt64(uManageTs, ts);
int length = sizeof(long);
IntPtr lengPtr = Marshal.AllocHGlobal(4);
Marshal.WriteInt32(lengPtr, length);
bind.buffer_type = (int)TDengineDataType.TSDB_DATA_TYPE_TIMESTAMP;
bind.buffer = uManageTs;
bind.buffer_length = length;
bind.length = lengPtr;
bind.is_null = IntPtr.Zero;
return bind;
}
public static void FreeTaosBind(TAOS_BIND[] binds)
{
foreach (TAOS_BIND bind in binds)
{
Marshal.FreeHGlobal(bind.buffer);
Marshal.FreeHGlobal(bind.length);
if (bind.is_null != IntPtr.Zero)
{
// Console.WriteLine(bind.is_null);
Marshal.FreeHGlobal(bind.is_null);
}
}
}
}
}
\ No newline at end of file
<Project Sdk="Microsoft.NET.Sdk">
<ItemGroup>
<ProjectReference Include="..\..\TDengineDriver\TDengineDriver.csproj" />
</ItemGroup>
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0</TargetFramework>
</PropertyGroup>
</Project>
using System;
using Test.UtilsTools;
using TDengineDriver;
namespace Test.UtilsTools.DataSource
{
public class DataSource
{
public static long[] tsArr = new long[5] { 1637064040000, 1637064041000, 1637064042000, 1637064043000, 1637064044000 };
public static bool?[] boolArr = new bool?[5] { true, false, null, true, true };
public static sbyte?[] tinyIntArr = new sbyte?[5] { -127, 0, null, 8, 127 };
public static short?[] shortArr = new short?[5] { short.MinValue + 1, -200, null, 100, short.MaxValue };
public static int?[] intArr = new int?[5] { -200, -100, null, 0, 300 };
public static long?[] longArr = new long?[5] { long.MinValue + 1, -2000, null, 1000, long.MaxValue };
public static float?[] floatArr = new float?[5] { float.MinValue + 1, -12.1F, null, 0F, float.MaxValue };
public static double?[] doubleArr = new double?[5] { double.MinValue + 1, -19.112D, null, 0D, double.MaxValue };
public static byte?[] uTinyIntArr = new byte?[5] { byte.MinValue, 12, null, 89, byte.MaxValue - 1 };
public static ushort?[] uShortArr = new ushort?[5] { ushort.MinValue, 200, null, 400, ushort.MaxValue - 1 };
public static uint?[] uIntArr = new uint?[5] { uint.MinValue, 100, null, 2, uint.MaxValue - 1 };
public static ulong?[] uLongArr = new ulong?[5] { ulong.MinValue, 2000, null, 1000, long.MaxValue - 1 };
public static string[] binaryArr = new string[5] { "1234567890~!@#$%^&*()_+=-`[]{}:,./<>?", String.Empty, null, "qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM", "qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM1234567890~!@#$%^&*()_+=-`[]{}:,./<>?" };
public static string[] ncharArr = new string[5] { "1234567890~!@#$%^&*()_+=-`[]{}:,./<>?", null, "qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM", "qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKLZXCVBNM1234567890~!@#$%^&*()_+=-`[]{}:,./<>?", string.Empty };
public static TAOS_BIND[] getTags()
{
TAOS_BIND[] binds = new TAOS_BIND[13];
binds[0] = TaosBind.BindBool(true);
binds[1] = TaosBind.BindTinyInt(-2);
binds[2] = TaosBind.BindSmallInt(short.MaxValue);
binds[3] = TaosBind.BindInt(int.MaxValue);
binds[4] = TaosBind.BindBigInt(Int64.MaxValue);
binds[5] = TaosBind.BindUTinyInt(byte.MaxValue - 1);
binds[6] = TaosBind.BindUSmallInt(UInt16.MaxValue - 1);
binds[7] = TaosBind.BindUInt(uint.MinValue + 1);
binds[8] = TaosBind.BindUBigInt(UInt64.MinValue + 1);
binds[9] = TaosBind.BindFloat(11.11F);
binds[10] = TaosBind.BindDouble(22.22D);
binds[11] = TaosBind.BindBinary("qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKZXCVBNM`1234567890-=+_)(*&^%$#@!~[];,./<>?:{}");
binds[12] = TaosBind.BindNchar("qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKZXCVBNM`1234567890-=+_)(*&^%$#@!~[];,./<>?:{}");
return binds;
}
public static TAOS_BIND[] getNtableRow()
{
TAOS_BIND[] binds = new TAOS_BIND[15];
binds[0] = TaosBind.BindTimestamp(1637064040000);
binds[1] = TaosBind.BindTinyInt(-2);
binds[2] = TaosBind.BindSmallInt(short.MaxValue);
binds[3] = TaosBind.BindInt(int.MaxValue);
binds[4] = TaosBind.BindBigInt(Int64.MaxValue);
binds[5] = TaosBind.BindUTinyInt(byte.MaxValue - 1);
binds[6] = TaosBind.BindUSmallInt(UInt16.MaxValue - 1);
binds[7] = TaosBind.BindUInt(uint.MinValue + 1);
binds[8] = TaosBind.BindUBigInt(UInt64.MinValue + 1);
binds[9] = TaosBind.BindFloat(11.11F);
binds[10] = TaosBind.BindDouble(22.22D);
binds[11] = TaosBind.BindBinary("qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKZXCVBNM`1234567890-=+_)(*&^%$#@!~[];,./<>?:{}");
binds[12] = TaosBind.BindNchar("qwertyuiopasdfghjklzxcvbnmQWERTYUIOPASDFGHJKZXCVBNM`1234567890-=+_)(*&^%$#@!~[];,./<>?:{}");
binds[13] = TaosBind.BindBool(true);
binds[14] = TaosBind.BindNil();
return binds;
}
public static TAOS_MULTI_BIND[] GetMultiBindArr()
{
TAOS_MULTI_BIND[] mBinds = new TAOS_MULTI_BIND[14];
mBinds[0] = TaosMultiBind.MultiBindTimestamp(tsArr);
mBinds[1] = TaosMultiBind.MultiBindBool(boolArr);
mBinds[2] = TaosMultiBind.MultiBindTinyInt(tinyIntArr);
mBinds[3] = TaosMultiBind.MultiBindSmallInt(shortArr);
mBinds[4] = TaosMultiBind.MultiBindInt(intArr);
mBinds[5] = TaosMultiBind.MultiBindBigint(longArr);
mBinds[6] = TaosMultiBind.MultiBindFloat(floatArr);
mBinds[7] = TaosMultiBind.MultiBindDouble(doubleArr);
mBinds[8] = TaosMultiBind.MultiBindUTinyInt(uTinyIntArr);
mBinds[9] = TaosMultiBind.MultiBindUSmallInt(uShortArr);
mBinds[10] = TaosMultiBind.MultiBindUInt(uIntArr);
mBinds[11] = TaosMultiBind.MultiBindUBigInt(uLongArr);
mBinds[12] = TaosMultiBind.MultiBindBinary(binaryArr);
mBinds[13] = TaosMultiBind.MultiBindNchar(ncharArr);
return mBinds;
}
public static TAOS_BIND[] GetQueryCondition()
{
TAOS_BIND[] queryCondition = new TAOS_BIND[2];
queryCondition[0] = TaosBind.BindTinyInt(0);
queryCondition[1] = TaosBind.BindInt(1000);
return queryCondition;
}
public static void FreeTaosBind(TAOS_BIND[] binds)
{
TaosBind.FreeTaosBind(binds);
}
public static void FreeTaosMBind(TAOS_MULTI_BIND[] mbinds)
{
TaosMultiBind.FreeTaosBind(mbinds);
}
}
}
\ No newline at end of file
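A minimal sketch of feeding these fixtures to the batch-binding API (not part of this changeset; it assumes `stmt` was prepared for a table whose fourteen columns match the arrays above):

```csharp
TAOS_MULTI_BIND[] mBinds = DataSource.GetMultiBindArr();
if (TDengine.StmtBindParamBatch(stmt, mBinds) != 0 ||
    TDengine.StmtAddBatch(stmt) != 0 ||
    TDengine.StmtExecute(stmt) != 0)
    throw new Exception(TDengine.StmtErrorStr(stmt));
DataSource.FreeTaosMBind(mBinds);  // release the unmanaged buffers
```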
......@@ -25,6 +25,7 @@ class FieldType(object):
C_SMALLINT_UNSIGNED = 12
C_INT_UNSIGNED = 13
C_BIGINT_UNSIGNED = 14
C_JSON = 15
# NULL value definition
# NOTE: These values must be kept in sync with the C definitions in tsdb.h
C_BOOL_NULL = 0x02
......