Commit 1ae55bda authored by: A Alex Duan

feat(rpc): sync from 2.6

......@@ -7,9 +7,6 @@
[submodule "deps/jemalloc"]
path = deps/jemalloc
url = https://github.com/jemalloc/jemalloc.git
[submodule "src/kit/taos-tools"]
path = src/kit/taos-tools
url = https://github.com/taosdata/taos-tools.git
[submodule "src/plugins/taosadapter"]
path = src/plugins/taosadapter
url = https://github.com/taosdata/taosadapter.git
......
......@@ -134,7 +134,7 @@ def sync_source() {
git submodule update --init --recursive
'''
}
def pre_test() {
def pre_test_arm64() {
sync_source()
sh '''
cd ${WK}
......@@ -144,7 +144,14 @@ def pre_test() {
go env -w GO111MODULE=on
cmake .. -DBUILD_HTTP=false -DBUILD_TOOLS=true > /dev/null
make -j8 >/dev/null
make install
'''
return 1
}
def pre_test() {
sync_source()
sh '''
cd ${WKC}/tests/parallel_test
./container_build.sh -w ${WKDIR} -t 8 >/dev/null
'''
return 1
}
......@@ -312,6 +319,7 @@ pipeline {
agent none
options { skipDefaultCheckout() }
environment{
WKDIR = '/var/data/jenkins/workspace'
WK = '/var/data/jenkins/workspace/TDinternal'
WKC = '/var/data/jenkins/workspace/TDinternal/community'
LOGDIR = '/var/data/jenkins/workspace/log'
......@@ -330,7 +338,7 @@ pipeline {
agent {label " worker07_arm64 || worker09_arm64 "}
steps {
timeout(time: 20, unit: 'MINUTES') {
pre_test()
pre_test_arm64()
script {
sh '''
echo "arm64 build done"
......
......@@ -51,7 +51,7 @@ TDengine is a distributed and high performance time series database, there are a
1. Set proper number of `vgroups` according to available CPU cores. Normally, we recommend 2 \* number_of_cores as a starting point. If the verification result shows this is not enough to utilize CPU resources, you can use a higher value.
2. Set proper `minTablesPerVnode`, `tableIncStepPerVnode`, and `maxVgroupsPerDb` according to the number of tables so that tables are distributed evenly across vgroups. The purpose is to balance the workload among all vnodes so that system resources can be utilized better and higher performance can be achieved.
For more performance tuning tips, please refer to [Performance Optimization](../../operation/optimize) and [Configuration Parameters](../../reference/config).
For more performance tuning tips, please refer to [Performance Optimization](../../../operation/optimize) and [Configuration Parameters](../../../reference/config).
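To make the sizing rule above concrete, here is a minimal sketch (not part of the original docs; it assumes the `taospy` connector, a locally reachable server, and the `power` database used elsewhere in these examples) that computes the suggested starting value for `vgroups` and lists the current vgroup layout:

```python
# Minimal sketch, assuming taospy and a local TDengine server with a `power` database.
# Derives the "2 * number_of_cores" starting point and inspects existing vgroups.
import os

import taos

suggested_vgroups = 2 * (os.cpu_count() or 1)  # starting point recommended above
print("suggested vgroups:", suggested_vgroups)

conn = taos.connect(host="localhost", user="root", password="taosdata", database="power")
result = conn.query("SHOW VGROUPS")  # lists vgroups and how tables are distributed across them
for row in result.fetch_all():
    print(row)
conn.close()
```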
## Sample Programs
......
......@@ -218,9 +218,6 @@ Query OK, 5 row(s) in set (0.004896s)
{/* <TabItem label="Go" value="go">
<Go/>
</TabItem> */}
<TabItem label="Rust" value="rust">
<Rust />
</TabItem>
{/* <TabItem label="Node.js" value="nodejs">
<Node/>
</TabItem>
......
......@@ -5,7 +5,7 @@ description: "The syntax supported by TDengine SQL "
This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, select data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL.
TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, the syntax is similar to that of standard SQL. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide a delete function for time series data and so corresponding statements are not provided in TDengine SQL.
TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, the syntax is similar to that of standard SQL. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide a delete function for time series data and so corresponding statements are not provided in TDengine SQL. That said, TDengine Enterprise Edition has provided the DELETE function since version 2.6.
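As a hedged illustration only (the exact DELETE grammar is an assumption based on the Enterprise Edition 2.6 feature mentioned above, and `power.meters` is simply the sample table used elsewhere in these docs), deleting a time range through the Python connector might look like this:

```python
# Hedged sketch: DELETE is an Enterprise Edition 2.6+ feature; the time-range
# predicate on the timestamp column below is an assumed form of the syntax.
import taos

conn = taos.connect(database="power")
affected = conn.execute("DELETE FROM meters WHERE ts < '2022-01-01 00:00:00.000'")
print("rows deleted:", affected)
conn.close()
```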
Syntax Specifications used in this chapter:
......
......@@ -110,6 +110,7 @@ If you have multiple versions of Python on your system, you may have various `pi
C:\> pip3 install taospy
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: taospy in c:\users\username\appdata\local\programs\python\python310\lib\site-packages (2.3.0)
```
:::
......@@ -255,7 +256,7 @@ The TaosCursor class uses native connections for write and query operations. In
##### Use of TaosRestCursor class
The ``TaosRestCursor`` class is an implementation of the PEP249 Cursor interface.
The `TaosRestCursor` class is an implementation of the PEP249 Cursor interface.
```python title="Use of TaosRestCursor"
{{#include docs/examples/python/connect_rest_examples.py:basic}}
......@@ -293,6 +294,20 @@ For a more detailed description of the `sql()` method, please refer to [RestClie
{{#include docs/examples/python/conn_rest_pandas.py}}
```
</TabItem>
<TabItem value="native+sqlalchemy" label="Native + SQLAlchemy">
```python
{{#include docs/examples/python/conn_native_sqlalchemy.py}}
```
</TabItem>
<TabItem value="rest+sqlalchemy" label="REST + SQLAlchemy">
```python
{{#include docs/examples/python/conn_rest_sqlalchemy.py}}
```
</TabItem>
</Tabs>
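For quick reference, the two SQLAlchemy variants above differ only in the URL scheme and port; a condensed sketch combining the connection strings from the bundled example files:

```python
# Condensed from the example files above: native connections use the taos://
# scheme on port 6030, REST connections use taosrest:// on port 6041.
import pandas
from sqlalchemy import create_engine

native_engine = create_engine("taos://root:taosdata@localhost:6030/power")
rest_engine = create_engine("taosrest://root:taosdata@localhost:6041")

df = pandas.read_sql("SELECT * FROM power.meters", rest_engine)
print(df.head(3))
```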
......
......@@ -37,12 +37,12 @@ In the schemaless writing data line protocol, each data item in the field_set ne
| **Serial number** | **Postfix** | **Mapping type** | **Size (bytes)** |
| -------- | -------- | ------------ | -------------- |
| 1 | none or f64 | double | 8 |
| 2 | f32 | float | 4 |
| 3 | i8 | TinyInt | 1 |
| 4 | i16 | SmallInt | 2 |
| 5 | i32 | Int | 4 |
| 6 | i64 or i | Bigint | 8 |
| 1 | none or f64 | double | 8 |
| 2 | f32 | float | 4 |
| 3 | i8/u8 | TinyInt/UTinyInt | 1 |
| 4 | i16/u16 | SmallInt/USmallInt | 2 |
| 5 | i32/u32 | Int/UInt | 4 |
| 6 | i64/i/u64/u | BigInt/BigInt/UBigInt/UBigInt | 8 |
- `t`, `T`, `true`, `True`, `TRUE`, `f`, `F`, `false`, and `False` will be handled directly as BOOL types.
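As an illustration of the postfix mapping above, here is a hedged sketch of one line-protocol write through the Python connector (the super table and field names are hypothetical; it assumes a taospy version that exposes `SmlProtocol`/`SmlPrecision`, while older versions take raw integer protocol and precision values instead):

```python
# Hedged sketch: postfixes select the column type (i8 -> TinyInt, i32 -> Int,
# f32 -> float, no postfix -> double); table and field names are made up.
import taos
from taos import SmlPrecision, SmlProtocol

conn = taos.connect(database="test")
lines = [
    'meters_sml,location=california.losangeles,groupid=2 '
    'current=10.3,voltage=219i32,phase=0.31f32,status=1i8 1626006833639000000'
]
conn.schemaless_insert(lines, SmlProtocol.LINE_PROTOCOL, SmlPrecision.NANO_SECONDS)
conn.close()
```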
......@@ -67,13 +67,13 @@ Schemaless writes process row data according to the following principles.
Note that tag_key1, tag_key2 are not the original order of the tags entered by the user but the result of using the tag names in ascending order of the strings. Therefore, tag_key1 is not the first tag entered in the line protocol.
The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t*" is a fixed prefix that every table generated by this mapping relationship has.
2. If the super table obtained by parsing the line protocol does not exist, this super table is created.
2. If the super table obtained by parsing the line protocol does not exist, this super table is created (manually creating the super table is not recommended; otherwise the inserted data may be abnormal).
If the subtable obtained by the parse line protocol does not exist, Schemaless creates the sub-table according to the subtable name determined in steps 1 or 2.
4. If the specified tag or regular column in the data row does not exist, the corresponding tag or regular column is added to the super table (only incremental).
5. If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to NULL.
6. For BINARY or NCHAR columns, if the length of the value provided in a data row exceeds the column type limit, the maximum length of characters allowed to be stored in the column is automatically increased (only incremented and not decremented) to ensure complete preservation of the data.
7. If the specified data subtable already exists, and the specified tag column takes a value different from the saved value this time, the value in the latest data row overwrites the old tag column take value.
8. Errors encountered throughout the processing will interrupt the writing process and return an error code.
7. Errors encountered throughout the processing will interrupt the writing process and return an error code.
8. To improve write efficiency, the order of fields within the same super table should be kept the same. If the order differs, you need to set the parameter smlDataFormat to false; otherwise, the data in the database will be abnormal.
:::tip
All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed 48k bytes. See [TAOS SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area.
......
import taos
import pandas
from sqlalchemy import create_engine
engine = create_engine("taos://root:taosdata@localhost:6030/power")
df = pandas.read_sql("SELECT * FROM meters", engine)
conn = taos.connect()
df: pandas.DataFrame = pandas.read_sql("SELECT * FROM meters", conn)
# print index
print(df.index)
......
import pandas
from sqlalchemy import create_engine
engine = create_engine("taos://root:taosdata@localhost:6030/power")
df: pandas.DataFrame = pandas.read_sql("SELECT * FROM power.meters", engine)
# print index
print(df.index)
# print data type of element in ts column
print(type(df.ts[0]))
print(df.head(3))
# output:
# RangeIndex(start=0, stop=8, step=1)
# <class 'pandas._libs.tslibs.timestamps.Timestamp'>
# ts current ... location groupid
# 0 2018-10-03 14:38:05.500 11.8 ... california.losangeles 2
# 1 2018-10-03 14:38:16.600 13.4 ... california.losangeles 2
# 2 2018-10-03 14:38:05.000 10.8 ... california.losangeles 3
import taosrest
import pandas
from sqlalchemy import create_engine
engine = create_engine("taosrest://root:taosdata@localhost:6041")
df: pandas.DataFrame = pandas.read_sql("SELECT * FROM power.meters", engine)
conn = taosrest.connect()
df: pandas.DataFrame = pandas.read_sql("SELECT * FROM power.meters", conn)
# print index
print(df.index)
......
import pandas
from sqlalchemy import create_engine
engine = create_engine("taosrest://root:taosdata@localhost:6041")
df: pandas.DataFrame = pandas.read_sql("SELECT * FROM power.meters", engine)
# print index
print(df.index)
# print data type of element in ts column
print(type(df.ts[0]))
print(df.head(3))
# output:
# RangeIndex(start=0, stop=8, step=1)
# <class 'pandas._libs.tslibs.timestamps.Timestamp'>
# ts current ... location groupid
# 0 2018-10-03 06:38:05.500000+00:00 11.8 ... california.losangeles 2
# 1 2018-10-03 06:38:16.600000+00:00 13.4 ... california.losangeles 2
# 2 2018-10-03 06:38:05+00:00 10.8 ... california.losangeles 3
......@@ -44,7 +44,7 @@ import TabItem from "@theme/TabItem";
If the total number of tables is large (for example, more than 5 million), increasing maxVgroupsPerDb appropriately can also significantly speed up table creation. The default value of maxVgroupsPerDb is 0, which means it is automatically set to the number of CPU cores. If the number of tables is huge, it is also recommended to tune the maxTablesPerVnode parameter so that the table-creation limit of a single vnode is not exceeded.
For more tuning parameters, please refer to the [Performance Optimization](../../operation/optimize) and [Configuration Reference](../../reference/config) sections.
For more tuning parameters, please refer to the [Performance Optimization](../../../operation/optimize) and [Configuration Reference](../../../reference/config) sections.
## Efficient Writing Examples {#sample-code}
......
......@@ -213,9 +213,6 @@ Query OK, 5 row(s) in set (0.004896s)
{/* <TabItem label="Go" value="go">
<Go/>
</TabItem> */}
<TabItem label="Rust" value="rust">
<Rust />
</TabItem>
{/* <TabItem label="Node.js" value="nodejs">
<Node/>
</TabItem>
......
......@@ -27,4 +27,4 @@ title: Escape Character Description
2. Backtick `` identifiers: kept as-is, not escaped
2. Escape characters inside data
1. The escape characters defined above are escaped (% and \_ are explained below); if an escape sequence has no match, the escape symbol \ is ignored.
2. For % and \_, because these two characters are wildcards in like, `\%`% and `\_` are used in like pattern matching to represent the literal % and \_ in the string; if `\%` or `\_` is used outside a like pattern-matching context, they evaluate to the strings `\%` and `\_` rather than % and \_.
2. For % and \_, because these two characters are wildcards in like, `\%` and `\_` are used in like pattern matching to represent the literal % and \_ in the string; if `\%` or `\_` is used outside a like pattern-matching context, they evaluate to the strings `\%` and `\_` rather than % and \_.
......@@ -5,7 +5,7 @@ description: "The syntax rules supported by TAOS SQL, the main query features, the supported SQ
This document describes the syntax rules supported by TAOS SQL, the main query features, the supported SQL query functions, and common tips. Reading this document requires basic knowledge of SQL.
TAOS SQL is the main tool for users to write data into and query data from TDengine. To help users get started quickly, TAOS SQL provides, to a certain extent, a style and patterns similar to standard SQL. Strictly speaking, TAOS SQL neither is nor tries to provide standard SQL syntax. In addition, since TDengine does not provide a delete function for the time-series structured data it targets, TAOS SQL provides no data-deletion features either.
TAOS SQL is the main tool for users to write data into and query data from TDengine. To help users get started quickly, TAOS SQL provides, to a certain extent, a style and patterns similar to standard SQL. Strictly speaking, TAOS SQL neither is nor tries to provide standard SQL syntax. In addition, since TDengine does not provide a delete function for time-series data, TAOS SQL provides no data-deletion features either. However, TDengine Enterprise Edition has provided the DELETE statement since version 2.6.
The SQL syntax in this chapter follows these conventions:
......
......@@ -295,6 +295,20 @@ The TaosCursor class uses native connections for write and query operations. In a multi-threaded client
{{#include docs/examples/python/conn_rest_pandas.py}}
```
</TabItem>
<TabItem value="native+sqlalchemy" label="原生连接 + SQLAlchemy">
```python
{{#include docs/examples/python/conn_native_sqlalchemy.py}}
```
</TabItem>
<TabItem value="rest+sqlalchemy" label="REST 连接 + SQLAlchemy">
```python
{{#include docs/examples/python/conn_rest_sqlalchemy.py}}
```
</TabItem>
</Tabs>
......
......@@ -41,10 +41,10 @@ All data in tag_set is automatically converted to the nchar data type and does not need
| -------- | -------- | ------------ | -------------- |
| 1 | none or f64 | double | 8 |
| 2 | f32 | float | 4 |
| 3 | i8 | TinyInt | 1 |
| 4 | i16 | SmallInt | 2 |
| 5 | i32 | Int | 4 |
| 6 | i64 or i | Bigint | 8 |
| 3 | i8/u8 | TinyInt/UTinyInt | 1 |
| 4 | i16/u16 | SmallInt/USmallInt | 2 |
| 5 | i32/u32 | Int/UInt | 4 |
| 6 | i64/i/u64/u | BigInt/BigInt/UBigInt/UBigInt | 8 |
- t, T, true, True, TRUE, f, F, false, and False will be handled directly as BOOL types.
......@@ -69,16 +69,17 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```
Note that tag_key1 and tag_key2 here are not in the original order of the tags entered by the user, but the result of sorting the tag names in ascending string order. Therefore, tag_key1 is not the first tag entered in the line protocol.
After the sorting is completed, the MD5 hash value "md5_val" of the string is calculated. The result is then combined with the string to generate the table name: "t_md5_val". The "t\*" is a fixed prefix that every table automatically generated through this mapping relationship has.
After the sorting is completed, the MD5 hash value "md5_val" of the string is calculated. The result is then combined with the string to generate the table name: "t_md5_val". The "t_" is a fixed prefix that every table automatically generated through this mapping relationship has.
2. If the super table obtained by parsing the line protocol does not exist, this super table is created.
2. If the super table obtained by parsing the line protocol does not exist, this super table is created (manually creating the super table is not recommended; otherwise the inserted data may be abnormal).
3. If the subtable obtained by parsing the line protocol does not exist, Schemaless creates the subtable according to the subtable name determined in step 1 or 2.
4. If a tag column or regular column specified in the data row does not exist, the corresponding tag column or regular column is added to the super table (columns are only added, never removed).
5. If some tag columns or regular columns in the super table are not assigned values in a data row, the values of these columns are set to
NULL in that row.
6. For BINARY or NCHAR columns, if the length of the value provided in a data row exceeds the column type limit, the maximum character length allowed to be stored in the column is automatically increased (only increased, never decreased) to ensure complete preservation of the data.
7. If the specified data subtable already exists, and the tag column values specified this time differ from the saved values, the values in the latest data row overwrite the old tag column values.
8. Errors encountered during processing interrupt the write process and return an error code.
7. Errors encountered during processing interrupt the write process and return an error code.
8. To improve write efficiency, it is assumed by default that the field_set order within the same super table is the same (the first record contains all fields, and subsequent records follow that order). If the order differs, the parameter smlDataFormat needs to be set to false; otherwise,
the data is still written assuming the same order and the data in the database will be abnormal.
:::tip
All schemaless processing logic still follows TDengine's underlying restrictions on data structures, for example the total length of each row of data cannot exceed
......@@ -94,7 +95,7 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
| -------- | ------------------- | ------------------------------- |
| 1 | SML_LINE_PROTOCOL | InfluxDB Line Protocol |
| 2 | SML_TELNET_PROTOCOL | OpenTSDB text line protocol |
| 3 | SML_JSON_PROTOCOL | JSON protocol format |
| 3 | SML_JSON_PROTOCOL | OpenTSDB JSON protocol format |
In SML_LINE_PROTOCOL parsing mode, the user needs to specify the time resolution of the input timestamps. The available time resolutions are shown in the following table:
......
---
sidebar_label: IDEA
title: Connecting to TDengine with the IDEA Database Management Tool
---
IDEA, short for IntelliJ IDEA, is an integrated development environment for the Java language and is widely regarded as one of the friendliest and most widely used Java development tools.
The IDEA Ultimate edition ships with a built-in database management tool, similar to a lightweight Navicat. It lets us perform simple database operations inside IDEA without switching to other tools. For TDengine, users can connect through the JDBC driver and execute SQL statements directly in IDEA instead of going back to the command line.
Taking version 2.0.40 of the JDBC Connector as an example, this article shows how to compile and package it from source and how to connect to TDengine with the IDEA database tool.
## Prerequisites
For IDEA to connect to TDengine properly, the following preparations are required.
- The TDengine cluster has been deployed and is running normally.
- If you connect with the TSDBDriver driver class, install the TDengine client locally.
- If you connect to TDengine with the RestfulDriver driver class, make sure taosAdapter is running normally.
## Configuration Steps
### Building the JDBC-Connector from Source
Download the [dist jar package](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver) from the major repositories or build it from source; the source build is described here.
- First clone the source code of the JDBC connector from the GitHub repository: `git clone https://github.com/taosdata/taos-connector-jdbc.git -b 2.0.40` (using -b to check out a released tag is recommended); this can also be done in IDEA:
![image](https://user-images.githubusercontent.com/70138133/180187698-395762d1-fcac-4cea-b44f-cc8cd07ea0c8.png)
- After cloning the source, if you are building JDBC-Connector 2.0.40 or earlier, modify the pom.xml file in the taos-connector-jdbc directory and change the scope of the commons-logging dependency under dependencies from test to compile; otherwise, the IDEA database management tool may report a missing driver class after the built jar is imported.
![image](https://user-images.githubusercontent.com/70138133/180206650-561f9e24-ebb9-4cd2-8868-6f1cede54803.png)
- In the taos-connector-jdbc directory, run: `mvn clean package -Dmaven.test.skip=true`
![image](https://user-images.githubusercontent.com/70138133/180353366-f515a6ae-904d-42d6-9967-1c298112fe88.png)
![image](https://user-images.githubusercontent.com/70138133/180353831-cb0b2c5e-b9a3-4182-ba78-58abfa81e1b4.png)
- Driver packages such as taos-jdbcdriver-2.0.40-dist.jar are now generated in the target folder of the taos-connector-jdbc directory.
### Connecting to TDengine with the IDEA Database Tool
- Open the IDEA database tool, create a new driver, and select taos-jdbcdriver-2.0.40-dist.jar in the target folder as the driver file.
- Choose the RESTful connection method (note: the com.taosdata.jdbc.TSDBDriver driver class requires the TDengine client to be installed).
![image](https://user-images.githubusercontent.com/70138133/180208261-34e7ed91-217f-46b5-80f9-f65f67d67662.png)
- Then create a data source from the driver. The JDBC URL specification for TDengine is:
`jdbc:[TAOS|TAOS-RS]://[host_name]:[port]/[database_name]?[user={user}|&password={password}|&charset={charset}|&cfgdir={config_dir}|&locale={locale}|&timezone={timezone}]`
- A RESTful connection is used here; an example URL is jdbc:TAOS-RS://VM-24-8-centos:6041/log (the host name must be added to the Hosts file for resolution; the locale and timezone parameters in the URL have no effect for RESTful connections).
![image](https://user-images.githubusercontent.com/70138133/180354534-7d73fe33-c4d3-400d-922b-28b20aadfb1b.png)
- Click Test Connection; a yellow exclamation mark does not affect usage.
![image](https://user-images.githubusercontent.com/70138133/180197251-98764434-bb7b-4e3a-9674-0620ab6d8bad.png)
## Verification
- After configuration, verify the setup: click Refresh and then click to show all databases:
![image](https://user-images.githubusercontent.com/70138133/180202803-6e277132-44bd-4b22-8921-a54d16190d2b.png)
- Right-click the data source and open a new query console to test whether queries work. Note that RESTful requests are stateless, so queries and writes must prefix the table name with the database name.
- 2.X versions include a log database by default; we can use `SHOW log.stables` to see which super tables it contains and then query and debug specific tables:
![image](https://user-images.githubusercontent.com/70138133/180202329-6734c874-d4f5-40a3-be7d-c4fabbe73a19.png)
- You can see a super table called vgroups_info; run `DESCRIBE log.vgroups_info` to view its schema:
![image](https://user-images.githubusercontent.com/70138133/180204391-36fd0806-8cd6-43b8-97eb-1e7ff235846a.png)
- Then run `SELECT last_row(*) FROM log.vgroups_info GROUP BY vgroup_id` to group by vgroup_id and view the latest row for each VgroupId:
![image](https://user-images.githubusercontent.com/70138133/180205161-7f0314eb-cdaa-442c-acb5-d33931c32648.png)
---
sidebar_label: Google Data Studio Connector
title: How to Visualize TDengine Data with Google Data Studio
---
Google Data Studio is a powerful report visualization tool that provides a rich set of charts and data connections and makes it very easy to generate reports from predefined templates. Its ease of use and rich ecosystem have won it the favor of many data scientists in the data analysis field.
Data Studio supports a variety of data sources. Besides Google's own services such as Google Analytics, Google AdWords, Search Console, and BigQuery, users can also upload offline files directly to Google Cloud Storage or connect other data sources through connectors.
The TDengine connector has been released to the Google Data Studio marketplace; you can search for TDengine directly on the "Connect to Data" page and select it as the data source.
![image](./gds/GDS-2-2.png)
Next, click the AUTHORIZE button.
![image](./gds/GDS-3-2.png)
Allow your account to connect to the external service.
![image](./gds/GDS-4-1.png)
On the next page, enter the URL of the running TDengine REST service along with the user name, password, database name, table name, and query time range, and click the CONNECT button in the upper right corner.
Note: the query time range is optional. If no start and end time are set, the returned data covers the 30 days before the current time; if there is no data within those 30 days, the generated report will contain no data.
![image](./gds/GDS-5-1024x426.png)
Once connected, you can use GDS to conveniently process the data and create reports.
![image](./gds/GDS-6-1024x368.png)
The current dimension and metric rule is: fields of timestamp type and tag fields are defined by the connector as dimensions, while fields of other types are metrics. Users can also create different tables according to their own needs.
The following is an example of designing visual charts in GDS from data provided by TDengine.
![image](./gds/GDS-7-1024x528.png)
![image](./gds/GDS-8-1024x531.png)
![image](./gds/GDS-9-1024x531.png)
![image](./gds/GDS-10-1-1024x531.png)
![image](./gds/GDS-11-1024x531.png)
......@@ -387,6 +387,7 @@ typedef struct SSqlObj {
SSqlRes res;
SSubqueryState subState;
pthread_mutex_t mtxSubs; // avoid double access pSubs after failure
struct SSqlObj **pSubs;
struct SSqlObj *rootObj;
......
......@@ -46,6 +46,8 @@ void doAsyncQuery(STscObj* pObj, SSqlObj* pSql, __async_cb_func_t fp, void* para
pSql->fetchFp = fp;
pSql->rootObj = pSql;
pthread_mutex_init(&pSql->mtxSubs, NULL);
registerSqlObj(pSql);
pSql->sqlstr = calloc(1, sqlLen + 1);
......@@ -317,7 +319,7 @@ static void tscAsyncResultCallback(SSchedMsg *pMsg) {
return ;
}
if (tsShortcutFlag) {
if (tsShortcutFlag && (pSql->res.code == TSDB_CODE_RPC_SHORTCUT)) {
tscDebug("0x%" PRIx64 " async result callback, code:%s", pSql->self, tstrerror(pSql->res.code));
pSql->res.code = TSDB_CODE_SUCCESS;
} else {
......
......@@ -2058,6 +2058,7 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
const char *start, *cur;
int32_t ret = TSDB_CODE_SUCCESS;
char *value = NULL;
int32_t bufSize = TSDB_FUNC_BUF_SIZE;
int16_t len = 0;
bool kv_done = false;
......@@ -2077,6 +2078,11 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
val_rqoute
} val_state;
value = malloc(bufSize);
if (value == NULL) {
ret = TSDB_CODE_TSC_OUT_OF_MEMORY;
goto error;
}
start = cur = *idx;
tag_state = tag_common;
val_state = val_common;
......@@ -2095,7 +2101,6 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
back_slash = false;
cur++;
len++;
break;
}
......@@ -2104,7 +2109,6 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
tag_state = tag_lqoute;
}
cur += 1;
len += 1;
break;
} else if (*cur == 'L') {
line_len = strlen(*idx);
......@@ -2122,7 +2126,6 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
tag_state = tag_lqoute;
}
cur += 2;
len += 2;
break;
}
}
......@@ -2131,8 +2134,7 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
case '\\':
back_slash = true;
cur++;
len++;
break;
continue;
case ',':
kv_done = true;
break;
......@@ -2146,7 +2148,6 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
default:
cur++;
len++;
}
break;
......@@ -2160,7 +2161,6 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
back_slash = false;
cur++;
len++;
break;
} else if (double_quote == true) {
if (*cur != ' ' && *cur != ',' && *cur != '\0') {
......@@ -2182,13 +2182,11 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
case '\\':
back_slash = true;
cur++;
len++;
break;
continue;
case '"':
double_quote = true;
cur++;
len++;
break;
case '\0':
......@@ -2199,7 +2197,6 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
default:
cur++;
len++;
}
break;
......@@ -2217,9 +2214,8 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
goto error;
}
back_slash = false;
cur++;
len++;
back_slash = false;
cur++;
break;
}
......@@ -2235,7 +2231,6 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
}
cur += 1;
len += 1;
break;
} else if (*cur == 'L') {
line_len = strlen(*idx);
......@@ -2252,12 +2247,10 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
if (cur + 1 == *idx + 1) {
val_state = val_lqoute;
cur += 2;
len += 2;
} else {
/* MUST at the end of string */
if (cur + 2 >= *idx + line_len) {
cur += 2;
len += 2;
*is_last_kv = true;
kv_done = true;
} else {
......@@ -2271,7 +2264,6 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
}
cur += 2;
len += 2;
kv_done = true;
}
}
......@@ -2284,8 +2276,7 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
case '\\':
back_slash = true;
cur++;
len++;
break;
continue;
case ',':
kv_done = true;
......@@ -2300,7 +2291,6 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
default:
cur++;
len++;
}
break;
......@@ -2311,10 +2301,11 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
ret = TSDB_CODE_TSC_LINE_SYNTAX_ERROR;
goto error;
}
if (*cur == '"') {
start++;
}
back_slash = false;
cur++;
len++;
break;
} else if (double_quote == true) {
if (*cur != ' ' && *cur != ',' && *cur != '\0') {
......@@ -2336,13 +2327,11 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
case '\\':
back_slash = true;
cur++;
len++;
break;
continue;
case '"':
double_quote = true;
cur++;
len++;
break;
case '\0':
......@@ -2353,7 +2342,6 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
default:
cur++;
len++;
}
break;
......@@ -2362,24 +2350,35 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
}
}
if (start < cur) {
if (bufSize <= len + (cur - start)) {
bufSize *= 2;
char *tmp = realloc(value, bufSize);
if (tmp == NULL) {
ret = TSDB_CODE_TSC_OUT_OF_MEMORY;
goto error;
}
value = tmp;
}
memcpy(value + len, start, cur - start); // [start, cur)
len += cur - start;
start = cur;
}
if (kv_done == true) {
break;
}
}
if (len == 0 || ret != TSDB_CODE_SUCCESS) {
free(pKV->key);
pKV->key = NULL;
return TSDB_CODE_TSC_LINE_SYNTAX_ERROR;
ret = TSDB_CODE_TSC_LINE_SYNTAX_ERROR;
goto error;
}
value = calloc(len + 1, 1);
memcpy(value, start, len);
value[len] = '\0';
if (!convertSmlValueType(pKV, value, len, info, isTag)) {
tscError("SML:0x%"PRIx64" Failed to convert sml value string(%s) to any type",
info->id, value);
free(value);
ret = TSDB_CODE_TSC_INVALID_VALUE;
goto error;
}
......@@ -2389,7 +2388,8 @@ static int32_t parseSmlValue(TAOS_SML_KV *pKV, const char **idx,
return ret;
error:
//free previous alocated key field
//free previously allocated buffer and key field
free(value);
free(pKV->key);
pKV->key = NULL;
return ret;
......
......@@ -444,7 +444,7 @@ int tscSendMsgToServer(SSqlObj *pSql) {
if ((rpcMsg.msgType == TSDB_MSG_TYPE_SUBMIT) && (tsShortcutFlag & TSDB_SHORTCUT_RB_RPC_SEND_SUBMIT)) {
rpcFreeCont(rpcMsg.pCont);
return TSDB_CODE_FAILED;
return TSDB_CODE_RPC_SHORTCUT;
}
......@@ -3376,7 +3376,9 @@ int tscRenewTableMeta(SSqlObj *pSql) {
pSql->rootObj->retryReason = pSql->retryReason;
SSqlObj *rootSql = pSql->rootObj;
pthread_mutex_lock(&rootSql->mtxSubs);
tscFreeSubobj(rootSql);
pthread_mutex_unlock(&rootSql->mtxSubs);
tfree(rootSql->pSubs);
tscResetSqlCmd(&rootSql->cmd, true, rootSql->self);
......
......@@ -1760,6 +1760,10 @@ void tscFreeSqlObj(SSqlObj* pSql) {
tscFreeSubobj(pSql);
if (pSql && (pSql == pSql->rootObj)) {
pthread_mutex_destroy(&pSql->mtxSubs);
}
pSql->signature = NULL;
pSql->fp = NULL;
tfree(pSql->sqlstr);
......
......@@ -60,7 +60,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_APP_NOT_READY TAOS_DEF_ERROR_CODE(0, 0x0014) //"Database not ready"
#define TSDB_CODE_RPC_FQDN_ERROR TAOS_DEF_ERROR_CODE(0, 0x0015) //"Unable to resolve FQDN"
#define TSDB_CODE_RPC_INVALID_VERSION TAOS_DEF_ERROR_CODE(0, 0x0016) //"Invalid app version"
#define TSDB_CODE_RPC_CONN_BROKEN TAOS_DEF_ERROR_CODE(0, 0x0017) //"connection is broken"
#define TSDB_CODE_RPC_SHORTCUT TAOS_DEF_ERROR_CODE(0, 0x0017) //"Shortcut"
//common & util
#define TSDB_CODE_COM_OPS_NOT_SUPPORT TAOS_DEF_ERROR_CODE(0, 0x0100) //"Operation not supported"
......
Subproject commit f84cb6e51556d8030585128c2b252aa2a6453328
......@@ -63,7 +63,7 @@ ELSE ()
COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../inc CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -ldflags "-s -w -X github.com/taosdata/taosadapter/version.Version=${taos_version} -X github.com/taosdata/taosadapter/version.CommitID=${taosadapter_commit_sha1}"
COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../inc CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -o taosadapter-debug -ldflags "-X github.com/taosdata/taosadapter/version.Version=${taos_version} -X github.com/taosdata/taosadapter/version.CommitID=${taosadapter_commit_sha1}"
INSTALL_COMMAND
COMMAND wget -c https://github.com/upx/upx/releases/download/v3.96/upx-3.96-${PLATFORM_ARCH_STR}_linux.tar.xz -O ${CMAKE_CURRENT_SOURCE_DIR}/upx.tar.xz && tar -xvJf ${CMAKE_CURRENT_SOURCE_DIR}/upx.tar.xz -C ${CMAKE_CURRENT_SOURCE_DIR} --strip-components 1 > /dev/null && ${CMAKE_CURRENT_SOURCE_DIR}/upx taosadapter || :
COMMAND wget -c https://github.com/upx/upx/releases/download/v3.96/upx-3.96-${PLATFORM_ARCH_STR}_linux.tar.xz -O $ENV{HOME}/upx.tar.xz && tar -xvJf $ENV{HOME}/upx.tar.xz -C $ENV{HOME} --strip-components 1 > /dev/null && $ENV{HOME}/upx taosadapter || :
COMMAND cmake -E copy taosadapter ${CMAKE_BINARY_DIR}/build/bin
COMMAND cmake -E make_directory ${CMAKE_BINARY_DIR}/test/cfg/
COMMAND cmake -E copy ./example/config/taosadapter.toml ${CMAKE_BINARY_DIR}/test/cfg/
......
......@@ -2814,6 +2814,7 @@ static bool notContainSessionOrStateWindow(SQueryAttr *pQueryAttr) { return !(pQ
static int32_t updateBlockLoadStatus(SQueryAttr *pQuery, int32_t status) {
bool hasFirstLastFunc = false;
bool hasOtherFunc = false;
bool hasCount = false;
if (status == BLK_DATA_ALL_NEEDED || status == BLK_DATA_DISCARD) {
return status;
......@@ -2829,6 +2830,8 @@ static int32_t updateBlockLoadStatus(SQueryAttr *pQuery, int32_t status) {
if (functionId == TSDB_FUNC_FIRST_DST || functionId == TSDB_FUNC_LAST_DST) {
hasFirstLastFunc = true;
} else if(functionId == TSDB_FUNC_COUNT) {
hasCount = true;
} else {
hasOtherFunc = true;
}
......@@ -2836,7 +2839,7 @@ static int32_t updateBlockLoadStatus(SQueryAttr *pQuery, int32_t status) {
if (hasFirstLastFunc && status == BLK_DATA_NO_NEEDED) {
if(!hasOtherFunc) {
return BLK_DATA_DISCARD;
return hasCount ? BLK_DATA_NO_NEEDED : BLK_DATA_DISCARD;
} else {
return BLK_DATA_ALL_NEEDED;
}
......
......@@ -1873,4 +1873,4 @@ bool rpcSaveSendInfo(int64_t rpcRid, void** ppContext) {
taosReleaseRef(tsRpcRefId, rpcRid);
return true;
}
}
\ No newline at end of file
......@@ -68,8 +68,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_RPC_INVALID_TIME_STAMP, "Client and server's t
TAOS_DEFINE_ERROR(TSDB_CODE_APP_NOT_READY, "Database not ready")
TAOS_DEFINE_ERROR(TSDB_CODE_RPC_FQDN_ERROR, "Unable to resolve FQDN")
TAOS_DEFINE_ERROR(TSDB_CODE_RPC_INVALID_VERSION, "Invalid app version")
TAOS_DEFINE_ERROR(TSDB_CODE_RPC_CONN_BROKEN, "Connection broken")
TAOS_DEFINE_ERROR(TSDB_CODE_RPC_SHORTCUT, "Shortcut")
//common & util
TAOS_DEFINE_ERROR(TSDB_CODE_COM_OPS_NOT_SUPPORT, "Operation not supported")
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......@@ -88,4 +87,4 @@
}]
}]
}]
}
\ No newline at end of file
}
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......@@ -84,4 +83,4 @@
"tags": [{"type": "TIMESTAMP"},{"type": "INT"}, {"type": "BIGINT"}, {"type": "FLOAT"}, {"type": "DOUBLE"}, {"type": "SMALLINT"}, {"type": "TINYINT"}, {"type": "BOOL"}, {"type": "NCHAR","len": 17, "count":1}, {"type": "UINT"}, {"type": "UBIGINT"}, {"type": "UTINYINT"}, {"type": "USMALLINT"}, {"type": "BINARY", "len": 19, "count":1}]
}]
}]
}
\ No newline at end of file
}
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......@@ -84,4 +83,4 @@
"tags": [{"type": "INT"}, {"type": "BIGINT"}, {"type": "FLOAT"}, {"type": "DOUBLE"}, {"type": "SMALLINT"}, {"type": "TINYINT"}, {"type": "BOOL"}, {"type": "NCHAR","len": 17, "count":1}, {"type": "UINT"}, {"type": "UBIGINT"}, {"type": "UTINYINT"}, {"type": "USMALLINT"}, {"type": "BINARY", "len": 19, "count":1}]
}]
}]
}
\ No newline at end of file
}
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -27,7 +27,6 @@
"minRows": 100,
"maxRows": 4096,
"comp": 2,
"walLevel": 1,
"cachelast": 0,
"quorum": 1,
"fsync": 3000,
......
......@@ -27,7 +27,6 @@
"minRows": 100,
"maxRows": 4096,
"comp": 2,
"walLevel": 1,
"cachelast": 0,
"quorum": 1,
"fsync": 3000,
......
......@@ -27,7 +27,6 @@
"minRows": 100,
"maxRows": 4096,
"comp": 2,
"walLevel": 1,
"cachelast": 0,
"quorum": 1,
"fsync": 3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -27,7 +27,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......@@ -84,4 +83,4 @@
"tags": [{"type": "TIMESTAMP"},{"type": "INT"}, {"type": "BIGINT"}, {"type": "FLOAT"}, {"type": "DOUBLE"}, {"type": "SMALLINT"}, {"type": "TINYINT"}, {"type": "BOOL"}, {"type": "NCHAR","len": 17, "count":1}, {"type": "UINT"}, {"type": "UBIGINT"}, {"type": "UTINYINT"}, {"type": "USMALLINT"}, {"type": "BINARY", "len": 19, "count":1}]
}]
}]
}
\ No newline at end of file
}
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":1,
"quorum":1,
"fsync":3000,
......@@ -59,4 +58,4 @@
"tags": [{"type": "TIMESTAMP"},{"type": "INT"}, {"type": "BIGINT"}, {"type": "FLOAT"}, {"type": "DOUBLE"}, {"type": "SMALLINT"}, {"type": "TINYINT"}, {"type": "BOOL"}, {"type": "NCHAR"}, {"type": "UINT"}, {"type": "UBIGINT"}, {"type": "UTINYINT"}, {"type": "USMALLINT"}, {"type": "BINARY"}]
}]
}]
}
\ No newline at end of file
}
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -196,12 +196,19 @@ class TDTestCase:
self._conn.schemaless_insert([
"sts,t1=abc,t2=ab\"c,t3=ab\\,c,t4=ab\\=c,t5=ab\\ c c1=3i64,c3=L\"passitagin\",c2=true,c4=5f64,c5=5f64,c6=\"abc\" 1626006833640000000",
"sts,t1=abc c1=3i64,c2=false,c3=L\"{\\\"date\\\":\\\"2020-01-01 08:00:00.000\\\",\\\"temperature\\\":20}\",c6=\"ab\\\\c\" 1626006833640000000"
"sts,t1=abc c1=3i64,c2=false,c3=L\"{\\\"date\\\":\\\"2020-01-01 08:00:00.000\\\",\\\"temperature\\\":20}\",c6=\"ab\\\\c\" 1626006833640000000",
"type_json5,__deviceId__=10 index=0,jsonAttri$j=\"{\\\"jsonC\\\":\\\"0\\\"}\" 1626006833640000001"
], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value)
tdSql.query('select tbname from sts')
tdSql.checkRows(2)
tdSql.query("select * from sts")
tdSql.checkData(1, 2, '''{"date":"2020-01-01 08:00:00.000","temperature":20}''')
tdSql.query("select * from type_json5")
tdSql.checkData(0, 2, '''{"jsonC":"0"}''')
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -74,7 +74,6 @@ class TDTestCase:
"minRows": 100,
"maxRows": 4096,
"comp": 2,
"walLevel": 1,
"cachelast": 0,
"quorum": 1,
"fsync": 3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -65,7 +65,6 @@ class TDTestCase:
"minRows": 100,
"maxRows": 4096,
"comp": 2,
"walLevel": 1,
"cachelast": 0,
"quorum": 1,
"fsync": 3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -25,7 +25,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -25,7 +25,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -25,7 +25,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -25,7 +25,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -25,7 +25,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -17,8 +17,7 @@
"cache": 16,
"blocks": 8,
"precision": "ms",
"update": 0,
"maxtablesPerVnode": 1000
"update": 0
},
"super_tables": [{
"name": "stb01",
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -27,7 +27,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -27,7 +27,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -25,7 +25,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -25,7 +25,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......
......@@ -26,7 +26,6 @@
"minRows": 100,
"maxRows": 4096,
"comp":2,
"walLevel":1,
"cachelast":0,
"quorum":1,
"fsync":3000,
......