Commit 8bb9edbf authored by Ganlin Zhao

Merge branch '3.0' into fix/maxmin_type

@@ -2,7 +2,7 @@
 # taos-tools
 ExternalProject_Add(taos-tools
   GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
-  GIT_TAG 2dba49c
+  GIT_TAG 3588b3d
   SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
   BINARY_DIR ""
   #BUILD_IN_SOURCE TRUE
......
@@ -52,7 +52,7 @@ Start TDengine service and execute `taosBenchmark` (formerly named `taosdemo`) i
 taosBenchmark
 ```
-This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `Campbell`, `Cupertino`, `Los Angeles`, `Mountain View`, `Palo Alto`, `San Diego`, `San Francisco`, `San Jose`, `Santa Clara` or `Sunnyvale`.
+This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `California.Campbell`, `California.Cupertino`, `California.LosAngeles`, `California.MountainView`, `California.PaloAlto`, `California.SanDiego`, `California.SanFrancisco`, `California.SanJose`, `California.SantaClara` or `California.Sunnyvale`.
 The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required to create the deployment depends on your hardware. On most modern servers, the deployment is created in ten to twenty seconds.
@@ -74,10 +74,10 @@ Query the average, maximum, and minimum values of all 100 million rows of data:
 SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
 ```
-Query the number of rows whose `location` tag is `San Francisco`:
+Query the number of rows whose `location` tag is `California.SanFrancisco`:
 ```sql
-SELECT COUNT(*) FROM test.meters WHERE location = "San Francisco";
+SELECT COUNT(*) FROM test.meters WHERE location = "California.SanFrancisco";
 ```
 Query the average, maximum, and minimum values of all rows whose `groupId` tag is `10`:
......
@@ -221,7 +221,7 @@ Start TDengine service and execute `taosBenchmark` (formerly named `taosdemo`) i
 taosBenchmark
 ```
-This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `Campbell`, `Cupertino`, `Los Angeles`, `Mountain View`, `Palo Alto`, `San Diego`, `San Francisco`, `San Jose`, `Santa Clara` or `Sunnyvale`.
+This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `California.Campbell`, `California.Cupertino`, `California.LosAngeles`, `California.MountainView`, `California.PaloAlto`, `California.SanDiego`, `California.SanFrancisco`, `California.SanJose`, `California.SantaClara` or `California.Sunnyvale`.
 The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required to create the deployment depends on your hardware. On most modern servers, the deployment is created in ten to twenty seconds.
@@ -243,10 +243,10 @@ Query the average, maximum, and minimum values of all 100 million rows of data:
 SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
 ```
-Query the number of rows whose `location` tag is `San Francisco`:
+Query the number of rows whose `location` tag is `California.SanFrancisco`:
 ```sql
-SELECT COUNT(*) FROM test.meters WHERE location = "San Francisco";
+SELECT COUNT(*) FROM test.meters WHERE location = "California.SanFrancisco";
 ```
 Query the average, maximum, and minimum values of all rows whose `groupId` tag is `10`:
......
@@ -155,15 +155,15 @@ async fn demo(taos: &Taos, db: &str) -> Result<(), Error> {
     let inserted = taos.exec_many([
         // create super table
         "CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) \
-         TAGS (`groupid` INT, `location` BINARY(16))",
+         TAGS (`groupid` INT, `location` BINARY(24))",
         // create child table
-        "CREATE TABLE `d0` USING `meters` TAGS(0, 'Los Angles')",
+        "CREATE TABLE `d0` USING `meters` TAGS(0, 'California.LosAngles')",
         // insert into child table
         "INSERT INTO `d0` values(now - 10s, 10, 116, 0.32)",
         // insert with NULL values
         "INSERT INTO `d0` values(now - 8s, NULL, NULL, NULL)",
         // insert and automatically create table with tags if not exists
-        "INSERT INTO `d1` USING `meters` TAGS(1, 'San Francisco') values(now - 9s, 10.1, 119, 0.33)",
+        "INSERT INTO `d1` USING `meters` TAGS(1, 'California.SanFrancisco') values(now - 9s, 10.1, 119, 0.33)",
         // insert many records in a single sql
         "INSERT INTO `d1` values (now-8s, 10, 120, 0.33) (now - 6s, 10, 119, 0.34) (now - 4s, 11.2, 118, 0.322)",
     ]).await?;
......
@@ -38,12 +38,12 @@ public class SubscribeDemo {
             statement.executeUpdate("create database " + DB_NAME);
             statement.executeUpdate("use " + DB_NAME);
             statement.executeUpdate(
-                    "CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT) TAGS (`groupid` INT, `location` BINARY(16))");
-            statement.executeUpdate("CREATE TABLE `d0` USING `meters` TAGS(0, 'Los Angles')");
+                    "CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT) TAGS (`groupid` INT, `location` BINARY(24))");
+            statement.executeUpdate("CREATE TABLE `d0` USING `meters` TAGS(0, 'California.LosAngles')");
             statement.executeUpdate("INSERT INTO `d0` values(now - 10s, 0.32, 116)");
             statement.executeUpdate("INSERT INTO `d0` values(now - 8s, NULL, NULL)");
             statement.executeUpdate(
-                    "INSERT INTO `d1` USING `meters` TAGS(1, 'San Francisco') values(now - 9s, 10.1, 119)");
+                    "INSERT INTO `d1` USING `meters` TAGS(1, 'California.SanFrancisco') values(now - 9s, 10.1, 119)");
             statement.executeUpdate(
                     "INSERT INTO `d1` values (now-8s, 10, 120) (now - 6s, 10, 119) (now - 4s, 11.2, 118)");
             // create topic
@@ -75,4 +75,4 @@ public class SubscribeDemo {
             }
             timer.cancel();
         }
-}
\ No newline at end of file
+}
@@ -16,7 +16,7 @@ class MockDataSource implements Iterator {
     private int currentTbId = -1;
     // mock values
-    String[] location = {"LosAngeles", "SanDiego", "Hollywood", "Compton", "San Francisco"};
+    String[] location = {"California.LosAngeles", "California.SanDiego", "California.SanJose", "California.Campbell", "California.SanFrancisco"};
     float[] current = {8.8f, 10.7f, 9.9f, 8.9f, 9.4f};
     int[] voltage = {119, 116, 111, 113, 118};
     float[] phase = {0.32f, 0.34f, 0.33f, 0.329f, 0.141f};
@@ -50,4 +50,4 @@ class MockDataSource implements Iterator {
         return sb.toString();
     }
-}
\ No newline at end of file
+}
@@ -3,11 +3,11 @@ import time
 class MockDataSource:
     samples = [
-        "8.8,119,0.32,LosAngeles,0",
-        "10.7,116,0.34,SanDiego,1",
-        "9.9,111,0.33,Hollywood,2",
-        "8.9,113,0.329,Compton,3",
-        "9.4,118,0.141,San Francisco,4"
+        "8.8,119,0.32,California.LosAngeles,0",
+        "10.7,116,0.34,California.SanDiego,1",
+        "9.9,111,0.33,California.SanJose,2",
+        "8.9,113,0.329,California.Campbell,3",
+        "9.4,118,0.141,California.SanFrancisco,4"
     ]
     def __init__(self, tb_name_prefix, table_count):
......
@@ -12,7 +12,7 @@ async fn main() -> anyhow::Result<()> {
     // bind table name and tags
     stmt.set_tbname_tags(
         "d1001",
-        &[Value::VarChar("San Fransico".into()), Value::Int(2)],
+        &[Value::VarChar("California.SanFransico".into()), Value::Int(2)],
     )?;
     // bind values.
     let values = vec![
......
@@ -19,13 +19,13 @@ struct Record {
 async fn prepare(taos: Taos) -> anyhow::Result<()> {
     let inserted = taos.exec_many([
         // create child table
-        "CREATE TABLE `d0` USING `meters` TAGS(0, 'Los Angles')",
+        "CREATE TABLE `d0` USING `meters` TAGS(0, 'California.LosAngles')",
         // insert into child table
         "INSERT INTO `d0` values(now - 10s, 10, 116, 0.32)",
         // insert with NULL values
         "INSERT INTO `d0` values(now - 8s, NULL, NULL, NULL)",
         // insert and automatically create table with tags if not exists
-        "INSERT INTO `d1` USING `meters` TAGS(1, 'San Francisco') values(now - 9s, 10.1, 119, 0.33)",
+        "INSERT INTO `d1` USING `meters` TAGS(1, 'California.SanFrancisco') values(now - 9s, 10.1, 119, 0.33)",
         // insert many records in a single sql
         "INSERT INTO `d1` values (now-8s, 10, 120, 0.33) (now - 6s, 10, 119, 0.34) (now - 4s, 11.2, 118, 0.322)",
     ]).await?;
@@ -48,7 +48,7 @@ async fn main() -> anyhow::Result<()> {
         format!("CREATE DATABASE `{db}`"),
         format!("USE `{db}`"),
         // create super table
-        format!("CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) TAGS (`groupid` INT, `location` BINARY(16))"),
+        format!("CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) TAGS (`groupid` INT, `location` BINARY(24))"),
         // create topic for subscription
         format!("CREATE TOPIC tmq_meters with META AS DATABASE {db}")
     ])
......
@@ -14,14 +14,14 @@ async fn main() -> anyhow::Result<()> {
     ]).await?;
     let inserted = taos.exec("INSERT INTO
-        power.d1001 USING power.meters TAGS('San Francisco', 2)
+        power.d1001 USING power.meters TAGS('California.SanFrancisco', 2)
             VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000)
             ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
-        power.d1002 USING power.meters TAGS('San Francisco', 3)
+        power.d1002 USING power.meters TAGS('California.SanFrancisco', 3)
             VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
-        power.d1003 USING power.meters TAGS('Los Angeles', 2)
+        power.d1003 USING power.meters TAGS('California.LosAngeles', 2)
             VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
-        power.d1004 USING power.meters TAGS('Los Angeles', 3)
+        power.d1004 USING power.meters TAGS('California.LosAngeles', 3)
             VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)").await?;
     assert_eq!(inserted, 8);
......
@@ -52,7 +52,7 @@ taos>
 $ taosBenchmark
 ```
-This command automatically creates the `meters` supertable in the `test` database, with 10,000 subtables named `d0` to `d9999` under it. Each table has 10,000 records, and each record has four columns: `ts`, `current`, `voltage`, and `phase`, with timestamps from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table carries the tags `location` and `groupId`: groupId is set to 1 to 10, and location is set to `Campbell`, `Cupertino`, `Los Angeles`, `Mountain View`, `Palo Alto`, `San Diego`, `San Francisco`, `San Jose`, `Santa Clara` or `Sunnyvale`.
+This command automatically creates the `meters` supertable in the `test` database, with 10,000 subtables named `d0` to `d9999` under it. Each table has 10,000 records, and each record has four columns: `ts`, `current`, `voltage`, and `phase`, with timestamps from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table carries the tags `location` and `groupId`: groupId is set to 1 to 10, and location is set to `California.Campbell`, `California.Cupertino`, `California.LosAngeles`, `California.MountainView`, `California.PaloAlto`, `California.SanDiego`, `California.SanFrancisco`, `California.SanJose`, `California.SantaClara` or `California.Sunnyvale`.
 This command inserts 100 million records in a short time. The exact time depends on the hardware, but even on an ordinary PC server it usually takes only ten to twenty seconds.
@@ -74,10 +74,10 @@ SELECT COUNT(*) FROM test.meters;
 SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
 ```
-Query the total number of records where location = "San Francisco":
+Query the total number of records where location = "California.SanFrancisco":
 ```sql
-SELECT COUNT(*) FROM test.meters WHERE location = "San Francisco";
+SELECT COUNT(*) FROM test.meters WHERE location = "California.SanFrancisco";
 ```
 Query the average, maximum, and minimum values of all records where groupId = 10:
......
@@ -223,7 +223,7 @@ Query OK, 2 row(s) in set (0.003128s)
 $ taosBenchmark
 ```
-This command automatically creates the `meters` supertable in the `test` database, with 10,000 subtables named `d0` to `d9999` under it. Each table has 10,000 records, and each record has four columns: `ts`, `current`, `voltage`, and `phase`, with timestamps from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table carries the tags `location` and `groupId`: groupId is set to 1 to 10, and location is set to `Campbell`, `Cupertino`, `Los Angeles`, `Mountain View`, `Palo Alto`, `San Diego`, `San Francisco`, `San Jose`, `Santa Clara` or `Sunnyvale`.
+This command automatically creates the `meters` supertable in the `test` database, with 10,000 subtables named `d0` to `d9999` under it. Each table has 10,000 records, and each record has four columns: `ts`, `current`, `voltage`, and `phase`, with timestamps from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table carries the tags `location` and `groupId`: groupId is set to 1 to 10, and location is set to `California.Campbell`, `California.Cupertino`, `California.LosAngeles`, `California.MountainView`, `California.PaloAlto`, `California.SanDiego`, `California.SanFrancisco`, `California.SanJose`, `California.SantaClara` or `California.Sunnyvale`.
 This command inserts 100 million records in a short time. The exact time depends on the hardware, but even on an ordinary PC server it usually takes only ten to twenty seconds.
@@ -245,10 +245,10 @@ SELECT COUNT(*) FROM test.meters;
 SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters;
 ```
-Query the total number of records where location = "San Francisco":
+Query the total number of records where location = "California.SanFrancisco":
 ```sql
-SELECT COUNT(*) FROM test.meters WHERE location = "California.SanFrancisco";
+SELECT COUNT(*) FROM test.meters WHERE location = "California.SanFrancisco";
 ```
 Query the average, maximum, and minimum values of all records where groupId = 10:
......
@@ -155,15 +155,15 @@ async fn demo(taos: &Taos, db: &str) -> Result<(), Error> {
     let inserted = taos.exec_many([
         // create super table
         "CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) \
-         TAGS (`groupid` INT, `location` BINARY(16))",
+         TAGS (`groupid` INT, `location` BINARY(24))",
         // create child table
-        "CREATE TABLE `d0` USING `meters` TAGS(0, 'Los Angles')",
+        "CREATE TABLE `d0` USING `meters` TAGS(0, 'California.LosAngles')",
        // insert into child table
         "INSERT INTO `d0` values(now - 10s, 10, 116, 0.32)",
         // insert with NULL values
         "INSERT INTO `d0` values(now - 8s, NULL, NULL, NULL)",
         // insert and automatically create table with tags if not exists
-        "INSERT INTO `d1` USING `meters` TAGS(1, 'San Francisco') values(now - 9s, 10.1, 119, 0.33)",
+        "INSERT INTO `d1` USING `meters` TAGS(1, 'California.SanFrancisco') values(now - 9s, 10.1, 119, 0.33)",
         // insert many records in a single sql
         "INSERT INTO `d1` values (now-8s, 10, 120, 0.33) (now - 6s, 10, 119, 0.34) (now - 4s, 11.2, 118, 0.322)",
     ]).await?;
......
@@ -1201,6 +1201,7 @@ typedef struct {
   int16_t sstTrigger;
   int16_t hashPrefix;
   int16_t hashSuffix;
+  int32_t tsdbPageSize;
 } SCreateVnodeReq;
 int32_t tSerializeSCreateVnodeReq(void* buf, int32_t bufLen, SCreateVnodeReq* pReq);
......
@@ -31,7 +31,6 @@ typedef struct SSchedMsg {
   void *thandle;
 } SSchedMsg;
 typedef struct {
   char   label[TSDB_LABEL_LEN];
   tsem_t emptySem;
@@ -48,7 +47,6 @@ typedef struct {
   void *pTimer;
 } SSchedQueue;
 /**
  * Create a thread-safe ring-buffer based task queue and return the instance. A thread
  * pool will be created to consume the messages in the queue.
@@ -57,7 +55,7 @@ typedef struct {
  * @param label the label of the queue
  * @return the created queue scheduler
  */
-void *taosInitScheduler(int32_t capacity, int32_t numOfThreads, const char *label, SSchedQueue* pSched);
+void *taosInitScheduler(int32_t capacity, int32_t numOfThreads, const char *label, SSchedQueue *pSched);
 /**
  * Create a thread-safe ring-buffer based task queue and return the instance.
@@ -83,7 +81,7 @@ void taosCleanUpScheduler(void *queueScheduler);
  * @param queueScheduler the queue scheduler instance
  * @param pMsg the message for the task
  */
-void taosScheduleTask(void *queueScheduler, SSchedMsg *pMsg);
+int taosScheduleTask(void *queueScheduler, SSchedMsg *pMsg);
 #ifdef __cplusplus
 }
......
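The change above turns `taosScheduleTask` from `void` into an `int`-returning function so a caller can notice when its message was dropped. As a rough, non-authoritative sketch of how the declarations in this header fit together (the `fp` callback field of `SSchedMsg`, the header name, and the capacity/thread-count/label values are assumptions for illustration, not part of this commit):

```c
#include "tsched.h"  /* assumed header name for the declarations above */

/* Hypothetical task body; the `fp` callback field and its signature are
 * assumed here, only `msg` and `thandle` appear in this commit. */
static void demoTask(SSchedMsg *pMsg) {
  /* consume pMsg->msg / pMsg->thandle */
}

static int demoSchedule(void) {
  SSchedQueue sched = {0};
  /* capacity 1024, 2 worker threads, label "demo" -- illustrative values */
  void *pQueue = taosInitScheduler(1024, 2, "demo", &sched);
  if (pQueue == NULL) return -1;

  SSchedMsg msg = {0};
  msg.fp = demoTask;   /* assumed callback field */
  msg.msg = NULL;
  msg.thandle = NULL;

  /* With this commit, a non-zero return means the queue was not ready or
   * was already stopped, so the caller can clean up instead of leaking. */
  int ret = taosScheduleTask(pQueue, &msg);

  taosCleanUpScheduler(pQueue);
  return ret;
}
```

This is the failure path that the `taosAsyncExec` and `processMsgFromServer` hunks later in this commit start to handle.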
@@ -1399,7 +1399,12 @@ void processMsgFromServer(void* parent, SRpcMsg* pMsg, SEpSet* pEpSet) {
   arg->msg = *pMsg;
   arg->pEpset = tEpSet;
-  taosAsyncExec(doProcessMsgFromServer, arg, NULL);
+  if (0 != taosAsyncExec(doProcessMsgFromServer, arg, NULL)) {
+    tscError("failed to sched msg to tsc, tsc ready to quit");
+    rpcFreeCont(pMsg->pCont);
+    taosMemoryFree(arg->pEpset);
+    taosMemoryFree(arg);
+  }
 }
 TAOS* taos_connect_auth(const char* ip, const char* user, const char* auth, const char* db, uint16_t port) {
......
@@ -102,9 +102,10 @@ static const SSysDbTableSchema userDBSchema[] = {
   {.name = "wal_retention_size", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = true},
   {.name = "wal_roll_period", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true},
   {.name = "wal_segment_size", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = true},
-  {.name = "sst_trigger", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT, .sysInfo = true},
+  {.name = "stt_trigger", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT, .sysInfo = true},
   {.name = "table_prefix", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT, .sysInfo = true},
   {.name = "table_suffix", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT, .sysInfo = true},
+  {.name = "tsdb_pagesize", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true},
 };
 static const SSysDbTableSchema userFuncSchema[] = {
......
@@ -3782,6 +3782,7 @@ int32_t tSerializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *pR
   if (tEncodeI16(&encoder, pReq->sstTrigger) < 0) return -1;
   if (tEncodeI16(&encoder, pReq->hashPrefix) < 0) return -1;
   if (tEncodeI16(&encoder, pReq->hashSuffix) < 0) return -1;
+  if (tEncodeI32(&encoder, pReq->tsdbPageSize) < 0) return -1;
   tEndEncode(&encoder);
@@ -3857,6 +3858,7 @@ int32_t tDeserializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *
   if (tDecodeI16(&decoder, &pReq->sstTrigger) < 0) return -1;
   if (tDecodeI16(&decoder, &pReq->hashPrefix) < 0) return -1;
   if (tDecodeI16(&decoder, &pReq->hashSuffix) < 0) return -1;
+  if (tDecodeI32(&decoder, &pReq->tsdbPageSize) < 0) return -1;
   tEndDecode(&decoder);
   tDecoderClear(&decoder);
......
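For orientation only, a minimal sketch of round-tripping the extended request through the two functions in this hunk. The wrapper name, the buffer size, and the assumption that a negative return signals failure are illustrative, not taken from the commit:

```c
#include "tmsg.h"  /* assumed header exposing SCreateVnodeReq and the two functions */

static int32_t demoRoundTripCreateVnodeReq(void) {
  SCreateVnodeReq req = {0};
  req.tsdbPageSize = 4;  /* expressed in KB; vmGenerateVnodeCfg multiplies by 1024 */

  char    buf[4096] = {0};
  int32_t len = tSerializeSCreateVnodeReq(buf, sizeof(buf), &req);
  if (len < 0) return -1;  /* assumed error convention */

  SCreateVnodeReq out = {0};
  if (tDeserializeSCreateVnodeReq(buf, len, &out) < 0) return -1;

  /* out.tsdbPageSize now carries the same value as req.tsdbPageSize;
   * real code would also release anything the decoder allocated. */
  return 0;
}
```

Both hunks append `tsdbPageSize` after every previously encoded field, mirroring the field added at the end of `SCreateVnodeReq` in the header hunk earlier in this commit.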
@@ -173,6 +173,7 @@ static void vmGenerateVnodeCfg(SCreateVnodeReq *pCreate, SVnodeCfg *pCfg) {
   pCfg->hashMethod = pCreate->hashMethod;
   pCfg->hashPrefix = pCreate->hashPrefix;
   pCfg->hashSuffix = pCreate->hashSuffix;
+  pCfg->tsdbPageSize = pCreate->tsdbPageSize * 1024;
   pCfg->standby = pCfg->standby;
   pCfg->syncCfg.myIndex = pCreate->selfIndex;
@@ -222,9 +223,11 @@ int32_t vmProcessCreateVnodeReq(SVnodeMgmt *pMgmt, SRpcMsg *pMsg) {
     return -1;
   }
-  dInfo("vgId:%d, start to create vnode, tsma:%d standby:%d cacheLast:%d cacheLastSize:%d sstTrigger:%d",
-        createReq.vgId, createReq.isTsma, createReq.standby, createReq.cacheLast, createReq.cacheLastSize,
-        createReq.sstTrigger);
+  dInfo(
+      "vgId:%d, start to create vnode, tsma:%d standby:%d cacheLast:%d cacheLastSize:%d sstTrigger:%d "
+      "tsdbPageSize:%d",
+      createReq.vgId, createReq.isTsma, createReq.standby, createReq.cacheLast, createReq.cacheLastSize,
+      createReq.sstTrigger, createReq.tsdbPageSize);
   dInfo("vgId:%d, hashMethod:%d begin:%u end:%u prefix:%d surfix:%d", createReq.vgId, createReq.hashMethod,
         createReq.hashBegin, createReq.hashEnd, createReq.hashPrefix, createReq.hashSuffix);
   vmGenerateVnodeCfg(&createReq, &vnodeCfg);
......
@@ -308,6 +308,7 @@ typedef struct {
   int16_t hashPrefix;
   int16_t hashSuffix;
   int16_t sstTrigger;
+  int32_t tsdbPageSize;
   int32_t numOfRetensions;
   SArray* pRetensions;
   int32_t walRetentionPeriod;
......
@@ -31,7 +31,7 @@
 #include "systable.h"
 #define DB_VER_NUMBER 1
-#define DB_RESERVE_SIZE 58
+#define DB_RESERVE_SIZE 54
 static SSdbRaw *mndDbActionEncode(SDbObj *pDb);
 static SSdbRow *mndDbActionDecode(SSdbRaw *pRaw);
@@ -128,6 +128,7 @@ static SSdbRaw *mndDbActionEncode(SDbObj *pDb) {
   SDB_SET_INT16(pRaw, dataPos, pDb->cfg.sstTrigger, _OVER)
   SDB_SET_INT16(pRaw, dataPos, pDb->cfg.hashPrefix, _OVER)
   SDB_SET_INT16(pRaw, dataPos, pDb->cfg.hashSuffix, _OVER)
+  SDB_SET_INT32(pRaw, dataPos, pDb->cfg.tsdbPageSize, _OVER)
   SDB_SET_RESERVE(pRaw, dataPos, DB_RESERVE_SIZE, _OVER)
   SDB_SET_DATALEN(pRaw, dataPos, _OVER)
@@ -214,10 +215,20 @@ static SSdbRow *mndDbActionDecode(SSdbRaw *pRaw) {
   SDB_GET_INT16(pRaw, dataPos, &pDb->cfg.sstTrigger, _OVER)
   SDB_GET_INT16(pRaw, dataPos, &pDb->cfg.hashPrefix, _OVER)
   SDB_GET_INT16(pRaw, dataPos, &pDb->cfg.hashSuffix, _OVER)
+  SDB_GET_INT32(pRaw, dataPos, &pDb->cfg.tsdbPageSize, _OVER)
   SDB_GET_RESERVE(pRaw, dataPos, DB_RESERVE_SIZE, _OVER)
   taosInitRWLatch(&pDb->lock);
+  if (pDb->cfg.tsdbPageSize <= TSDB_MIN_TSDB_PAGESIZE) {
+    mInfo("db:%s, tsdbPageSize set from %d to default %d", pDb->name, pDb->cfg.tsdbPageSize,
+          TSDB_DEFAULT_TSDB_PAGESIZE);
+  }
+  if (pDb->cfg.sstTrigger <= TSDB_MIN_STT_TRIGGER) {
+    mInfo("db:%s, sstTrigger set from %d to default %d", pDb->name, pDb->cfg.sstTrigger, TSDB_DEFAULT_SST_TRIGGER);
+  }
   terrno = 0;
 _OVER:
@@ -262,6 +273,7 @@ static int32_t mndDbActionUpdate(SSdb *pSdb, SDbObj *pOld, SDbObj *pNew) {
   pOld->cfg.cacheLast = pNew->cfg.cacheLast;
   pOld->cfg.replications = pNew->cfg.replications;
   pOld->cfg.sstTrigger = pNew->cfg.sstTrigger;
+  pOld->cfg.tsdbPageSize = pNew->cfg.tsdbPageSize;
   taosWUnLockLatch(&pOld->lock);
   return 0;
 }
@@ -341,6 +353,7 @@ static int32_t mndCheckDbCfg(SMnode *pMnode, SDbCfg *pCfg) {
   if (pCfg->sstTrigger < TSDB_MIN_STT_TRIGGER || pCfg->sstTrigger > TSDB_MAX_STT_TRIGGER) return -1;
   if (pCfg->hashPrefix < TSDB_MIN_HASH_PREFIX || pCfg->hashPrefix > TSDB_MAX_HASH_PREFIX) return -1;
   if (pCfg->hashSuffix < TSDB_MIN_HASH_SUFFIX || pCfg->hashSuffix > TSDB_MAX_HASH_SUFFIX) return -1;
+  if (pCfg->tsdbPageSize < TSDB_MIN_TSDB_PAGESIZE || pCfg->tsdbPageSize > TSDB_MAX_TSDB_PAGESIZE) return -1;
   terrno = 0;
   return terrno;
@@ -377,6 +390,7 @@ static void mndSetDefaultDbCfg(SDbCfg *pCfg) {
   if (pCfg->sstTrigger <= 0) pCfg->sstTrigger = TSDB_DEFAULT_SST_TRIGGER;
   if (pCfg->hashPrefix < 0) pCfg->hashPrefix = TSDB_DEFAULT_HASH_PREFIX;
   if (pCfg->hashSuffix < 0) pCfg->hashSuffix = TSDB_DEFAULT_HASH_SUFFIX;
+  if (pCfg->tsdbPageSize <= 0) pCfg->tsdbPageSize = TSDB_DEFAULT_TSDB_PAGESIZE;
 }
 static int32_t mndSetCreateDbRedoLogs(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVgObj *pVgroups) {
@@ -496,6 +510,7 @@ static int32_t mndCreateDb(SMnode *pMnode, SRpcMsg *pReq, SCreateDbReq *pCreate,
     .sstTrigger = pCreate->sstTrigger,
     .hashPrefix = pCreate->hashPrefix,
     .hashSuffix = pCreate->hashSuffix,
+    .tsdbPageSize = pCreate->tsdbPageSize,
   };
   dbObj.cfg.numOfRetensions = pCreate->numOfRetensions;
@@ -1726,6 +1741,9 @@ static void mndDumpDbInfoData(SMnode *pMnode, SSDataBlock *pBlock, SDbObj *pDb,
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
     colDataAppend(pColInfo, rows, (const char *)&pDb->cfg.hashSuffix, false);
+    pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
+    colDataAppend(pColInfo, rows, (const char *)&pDb->cfg.tsdbPageSize, false);
   }
   taosMemoryFree(buf);
......
@@ -237,6 +237,7 @@ void *mndBuildCreateVnodeReq(SMnode *pMnode, SDnodeObj *pDnode, SDbObj *pDb, SVg
   createReq.sstTrigger = pDb->cfg.sstTrigger;
   createReq.hashPrefix = pDb->cfg.hashPrefix;
   createReq.hashSuffix = pDb->cfg.hashSuffix;
+  createReq.tsdbPageSize = pDb->cfg.tsdbPageSize;
   for (int32_t v = 0; v < pVgroup->replica; ++v) {
     SReplica *pReplica = &createReq.replicas[v];
......
@@ -115,6 +115,7 @@ int vnodeEncodeConfig(const void *pObj, SJson *pJson) {
   if (tjsonAddIntegerToObject(pJson, "hashMethod", pCfg->hashMethod) < 0) return -1;
   if (tjsonAddIntegerToObject(pJson, "hashPrefix", pCfg->hashPrefix) < 0) return -1;
   if (tjsonAddIntegerToObject(pJson, "hashSuffix", pCfg->hashSuffix) < 0) return -1;
+  if (tjsonAddIntegerToObject(pJson, "tsdbPageSize", pCfg->tsdbPageSize) < 0) return -1;
   if (tjsonAddIntegerToObject(pJson, "syncCfg.replicaNum", pCfg->syncCfg.replicaNum) < 0) return -1;
   if (tjsonAddIntegerToObject(pJson, "syncCfg.myIndex", pCfg->syncCfg.myIndex) < 0) return -1;
@@ -215,7 +216,7 @@ int vnodeDecodeConfig(const SJson *pJson, void *pObj) {
   tjsonGetNumberValue(pJson, "wal.level", pCfg->walCfg.level, code);
   if (code < 0) return -1;
   tjsonGetNumberValue(pJson, "sstTrigger", pCfg->sttTrigger, code);
-  if (code < 0) return -1;
+  if (code < 0) pCfg->sttTrigger = TSDB_DEFAULT_SST_TRIGGER;
   tjsonGetNumberValue(pJson, "hashBegin", pCfg->hashBegin, code);
   if (code < 0) return -1;
   tjsonGetNumberValue(pJson, "hashEnd", pCfg->hashEnd, code);
@@ -223,9 +224,9 @@ int vnodeDecodeConfig(const SJson *pJson, void *pObj) {
   tjsonGetNumberValue(pJson, "hashMethod", pCfg->hashMethod, code);
   if (code < 0) return -1;
   tjsonGetNumberValue(pJson, "hashPrefix", pCfg->hashPrefix, code);
-  if (code < 0) return -1;
+  if (code < 0) pCfg->hashPrefix = TSDB_DEFAULT_HASH_PREFIX;
   tjsonGetNumberValue(pJson, "hashSuffix", pCfg->hashSuffix, code);
-  if (code < 0) return -1;
+  if (code < 0) pCfg->hashSuffix = TSDB_DEFAULT_HASH_SUFFIX;
   tjsonGetNumberValue(pJson, "syncCfg.replicaNum", pCfg->syncCfg.replicaNum, code);
   if (code < 0) return -1;
@@ -255,6 +256,7 @@ int vnodeDecodeConfig(const SJson *pJson, void *pObj) {
   }
   tjsonGetNumberValue(pJson, "tsdbPageSize", pCfg->tsdbPageSize, code);
+  if (code < 0) pCfg->tsdbPageSize = TSDB_DEFAULT_TSDB_PAGESIZE * 1024;
   return 0;
 }
......
@@ -1303,7 +1303,7 @@ SNode* createShowStmtWithCond(SAstCreateContext* pCxt, ENodeType type, SNode* pD
                               EOperatorType tableCondType) {
   CHECK_PARSER_STATUS(pCxt);
   if (needDbShowStmt(type) && NULL == pDbName) {
-    snprintf(pCxt->pQueryCxt->pMsg, pCxt->pQueryCxt->msgLen, "db not specified");
+    snprintf(pCxt->pQueryCxt->pMsg, pCxt->pQueryCxt->msgLen, "database not specified");
     pCxt->errCode = TSDB_CODE_PAR_SYNTAX_ERROR;
     return NULL;
   }
......
@@ -1283,6 +1283,36 @@ static int32_t rewriteCountStar(STranslateContext* pCxt, SFunctionNode* pCount)
   return code;
 }
+static bool isCountTbname(SFunctionNode* pFunc) {
+  if (FUNCTION_TYPE_COUNT != pFunc->funcType || 1 != LIST_LENGTH(pFunc->pParameterList)) {
+    return false;
+  }
+  SNode* pPara = nodesListGetNode(pFunc->pParameterList, 0);
+  return (QUERY_NODE_FUNCTION == nodeType(pPara) && FUNCTION_TYPE_TBNAME == ((SFunctionNode*)pPara)->funcType);
+}
+// count(tbname) is rewritten as count(ts) for scannning optimization
+static int32_t rewriteCountTbname(STranslateContext* pCxt, SFunctionNode* pCount) {
+  SFunctionNode* pTbname = (SFunctionNode*)nodesListGetNode(pCount->pParameterList, 0);
+  const char* pTableAlias = NULL;
+  if (LIST_LENGTH(pTbname->pParameterList) > 0) {
+    pTableAlias = ((SValueNode*)nodesListGetNode(pTbname->pParameterList, 0))->literal;
+  }
+  STableNode* pTable = NULL;
+  int32_t code = findTable(pCxt, pTableAlias, &pTable);
+  if (TSDB_CODE_SUCCESS == code) {
+    SColumnNode* pCol = (SColumnNode*)nodesMakeNode(QUERY_NODE_COLUMN);
+    if (NULL == pCol) {
+      code = TSDB_CODE_OUT_OF_MEMORY;
+    } else {
+      setColumnInfoBySchema((SRealTableNode*)pTable, ((SRealTableNode*)pTable)->pMeta->schema, -1, pCol);
+      NODES_DESTORY_LIST(pCount->pParameterList);
+      code = nodesListMakeAppend(&pCount->pParameterList, (SNode*)pCol);
+    }
+  }
+  return code;
+}
 static bool hasInvalidFuncNesting(SNodeList* pParameterList) {
   bool hasInvalidFunc = false;
   nodesWalkExprs(pParameterList, haveVectorFunction, &hasInvalidFunc);
@@ -1318,6 +1348,9 @@ static int32_t translateAggFunc(STranslateContext* pCxt, SFunctionNode* pFunc) {
   if (isCountStar(pFunc)) {
     return rewriteCountStar(pCxt, pFunc);
   }
+  if (isCountTbname(pFunc)) {
+    return rewriteCountTbname(pCxt, pFunc);
+  }
   return TSDB_CODE_SUCCESS;
 }
......
@@ -35,6 +35,8 @@ TEST_F(PlanOptimizeTest, scanPath) {
   run("SELECT LAST(c1) FROM t1 WHERE ts BETWEEN '2022-7-29 11:10:10' AND '2022-7-30 11:10:10' INTERVAL(10S) "
       "FILL(LINEAR)");
+  run("SELECT COUNT(TBNAME) FROM t1");
 }
 TEST_F(PlanOptimizeTest, pushDownCondition) {
......
@@ -134,8 +134,7 @@ int32_t taosAsyncExec(__async_exec_fn_t execFn, void* execParam, int32_t* code)
   schedMsg.thandle = execParam;
   schedMsg.msg = code;
-  taosScheduleTask(&pTaskQueue, &schedMsg);
-  return 0;
+  return taosScheduleTask(&pTaskQueue, &schedMsg);
 }
 void destroySendMsgInfo(SMsgSendInfo* pMsgBody) {
@@ -472,5 +471,3 @@ int32_t cloneDbVgInfo(SDBVgInfo* pSrc, SDBVgInfo** pDst) {
   return TSDB_CODE_SUCCESS;
 }
@@ -185,10 +185,10 @@ int tdbPagerWrite(SPager *pPager, SPage *pPage) {
   // ref page one more time so the page will not be release
   tdbRefPage(pPage);
   tdbDebug("pcache/mdirty page %p/%d/%d", pPage, TDB_PAGE_PGNO(pPage), pPage->id);
+  /*
   // Set page as dirty
   pPage->isDirty = 1;
-  /*
   // Add page to dirty list(TODO: NOT use O(n^2) algorithm)
   for (ppPage = &pPager->pDirty; (*ppPage) && TDB_PAGE_PGNO(*ppPage) < TDB_PAGE_PGNO(pPage);
        ppPage = &((*ppPage)->pDirtyNext)) {
@@ -260,6 +260,7 @@ int tdbPagerCommit(SPager *pPager, TXN *pTxn) {
     pPage->isDirty = 0;
+    // tRBTreeDrop(&pPager->rbt, (SRBTreeNode *)pPage);
     tdbPCacheRelease(pPager->pCache, pPage, pTxn);
   }
@@ -345,15 +346,19 @@ int tdbPagerAbort(SPager *pPager, TXN *pTxn) {
   }
   */
   // 3, release the dirty pages
-  for (pPage = pPager->pDirty; pPage; pPage = pPager->pDirty) {
-    pPager->pDirty = pPage->pDirtyNext;
-    pPage->pDirtyNext = NULL;
+  SRBTreeIter  iter = tRBTreeIterCreate(&pPager->rbt, 1);
+  SRBTreeNode *pNode = NULL;
+  while ((pNode = tRBTreeIterNext(&iter)) != NULL) {
+    pPage = (SPage *)pNode;
     pPage->isDirty = 0;
+    // tRBTreeDrop(&pPager->rbt, (SRBTreeNode *)pPage);
     tdbPCacheRelease(pPager->pCache, pPage, pTxn);
   }
+  tRBTreeCreate(&pPager->rbt, pageCmpFn);
   // 4, remove the journal file
   tdbOsClose(pPager->jfd);
   tdbOsRemove(pPager->jFileName);
......
@@ -374,10 +374,12 @@ void cliHandleResp(SCliConn* conn) {
   if (pCtx == NULL && CONN_NO_PERSIST_BY_APP(conn)) {
     tDebug("%s except, conn %p read while cli ignore it", CONN_GET_INST_LABEL(conn), conn);
+    transFreeMsg(transMsg.pCont);
     return;
   }
   if (CONN_RELEASE_BY_SERVER(conn) && transMsg.info.ahandle == NULL) {
     tDebug("%s except, conn %p read while cli ignore it", CONN_GET_INST_LABEL(conn), conn);
+    transFreeMsg(transMsg.pCont);
     return;
   }
@@ -393,7 +395,7 @@ void cliHandleResp(SCliConn* conn) {
   }
   if (CONN_NO_PERSIST_BY_APP(conn)) {
-    addConnToPool(pThrd->pool, conn);
+    return addConnToPool(pThrd->pool, conn);
   }
   uv_read_start((uv_stream_t*)conn->stream, cliAllocRecvBufferCb, cliRecvCb);
......
@@ -149,18 +149,18 @@ void *taosProcessSchedQueue(void *scheduler) {
   return NULL;
 }
-void taosScheduleTask(void *queueScheduler, SSchedMsg *pMsg) {
+int taosScheduleTask(void *queueScheduler, SSchedMsg *pMsg) {
   SSchedQueue *pSched = (SSchedQueue *)queueScheduler;
   int32_t      ret = 0;
   if (pSched == NULL) {
     uError("sched is not ready, msg:%p is dropped", pMsg);
-    return;
+    return -1;
   }
   if (atomic_load_8(&pSched->stop)) {
     uError("sched is already stopped, msg:%p is dropped", pMsg);
-    return;
+    return -1;
   }
   if ((ret = tsem_wait(&pSched->emptySem)) != 0) {
@@ -185,6 +185,7 @@ void taosScheduleTask(void *queueScheduler, SSchedMsg *pMsg) {
     uFatal("post %s fullSem failed(%s)", pSched->label, strerror(errno));
     ASSERT(0);
   }
+  return ret;
 }
 void taosCleanUpScheduler(void *param) {
@@ -192,11 +193,11 @@ void taosCleanUpScheduler(void *param) {
   if (pSched == NULL) return;
   uDebug("start to cleanup %s schedQsueue", pSched->label);
   atomic_store_8(&pSched->stop, 1);
   taosMsleep(200);
   for (int32_t i = 0; i < pSched->numOfThreads; ++i) {
     if (taosCheckPthreadValid(pSched->qthread[i])) {
       tsem_post(&pSched->fullSem);
@@ -220,7 +221,7 @@ void taosCleanUpScheduler(void *param) {
   if (pSched->queue) taosMemoryFree(pSched->queue);
   if (pSched->qthread) taosMemoryFree(pSched->qthread);
-  //taosMemoryFree(pSched);
+  // taosMemoryFree(pSched);
 }
 // for debug purpose, dump the scheduler status every 1min.
......
@@ -22,9 +22,9 @@ from util.dnodes import *
 class TDTestCase:
     def caseDescription(self):
-        '''
+        """
         [TD-13823] taosBenchmark test cases
-        '''
+        """
         return
     def init(self, conn, logSql):
@@ -34,19 +34,19 @@ class TDTestCase:
     def getPath(self, tool="taosBenchmark"):
         selfPath = os.path.dirname(os.path.realpath(__file__))
-        if ("community" in selfPath):
-            projPath = selfPath[:selfPath.find("community")]
+        if "community" in selfPath:
+            projPath = selfPath[: selfPath.find("community")]
         else:
-            projPath = selfPath[:selfPath.find("tests")]
+            projPath = selfPath[: selfPath.find("tests")]
         paths = []
         for root, dirs, files in os.walk(projPath):
-            if ((tool) in files):
+            if (tool) in files:
                 rootRealPath = os.path.dirname(os.path.realpath(root))
-                if ("packaging" not in rootRealPath):
+                if "packaging" not in rootRealPath:
                     paths.append(os.path.join(root, tool))
                     break
-        if (len(paths) == 0):
+        if len(paths) == 0:
             tdLog.exit("taosBenchmark not found!")
             return
         else:
@@ -55,7 +55,7 @@ class TDTestCase:
     def run(self):
         binPath = self.getPath()
-        cmd = "%s -n 100 -t 100 -y" %binPath
+        cmd = "%s -n 100 -t 100 -y" % binPath
         tdLog.info("%s" % cmd)
         os.system("%s" % cmd)
         tdSql.execute("use test")
@@ -77,14 +77,16 @@ class TDTestCase:
         tdSql.checkData(4, 3, "TAG")
         tdSql.checkData(5, 0, "location")
         tdSql.checkData(5, 1, "VARCHAR")
-        tdSql.checkData(5, 2, 16)
+        tdSql.checkData(5, 2, 24)
         tdSql.checkData(5, 3, "TAG")
         tdSql.query("select count(*) from test.meters where groupid >= 0")
         tdSql.checkData(0, 0, 10000)
-        tdSql.query("select count(*) from test.meters where location = 'San Francisco' or location = 'Los Angles' or location = 'San Diego' or location = 'San Jose' or \
-            location = 'Palo Alto' or location = 'Campbell' or location = 'Mountain View' or location = 'Sunnyvale' or location = 'Santa Clara' or location = 'Cupertino' ")
+        tdSql.query(
+            "select count(*) from test.meters where location = 'California.SanFrancisco' or location = 'California.LosAngles' or location = 'California.SanDiego' or location = 'California.SanJose' or \
+            location = 'California.PaloAlto' or location = 'California.Campbell' or location = 'California.MountainView' or location = 'California.Sunnyvale' or location = 'California.SantaClara' or location = 'California.Cupertino' "
+        )
         tdSql.checkData(0, 0, 10000)
     def stop(self):
......
@@ -81,6 +81,7 @@ class TDTestCase:
             'binary':self.binary_val,
             'nchar':self.nchar_val
         }
     def insert_base_data(self,col_type,tbname,rows,base_data):
         for i in range(rows):
             if col_type.lower() == 'tinyint':
@@ -290,6 +291,9 @@ class TDTestCase:
         self.delete_data_ntb()
         self.delete_data_ctb()
         self.delete_data_stb()
+        tdDnodes.stoptaosd(1)
+        tdDnodes.starttaosd(1)
+        self.delete_data_ntb()
     def stop(self):
         tdSql.close()
         tdLog.success("%s successfully executed" % __file__)
......
@@ -154,10 +154,12 @@ class TDTestCase:
         up_bool = random.randint(0,100)%2
         up_float = random.uniform(constant.FLOAT_MIN,constant.FLOAT_MAX)
         up_double = random.uniform(constant.DOUBLE_MIN*(1E-300),constant.DOUBLE_MAX*(1E-300))
-        binary_length = random.randint(0,self.str_length)
-        nchar_length = random.randint(0,self.str_length)
-        up_binary = tdCom.getLongName(binary_length)
-        up_nchar = tdCom.getLongName(nchar_length)
+        binary_length = []
+        for i in range(self.str_length+1):
+            binary_length.append(i)
+        nchar_length = []
+        for i in range(self.str_length+1):
+            nchar_length.append(i)
         for col_name,col_type in column_dict.items():
             if tb_type == 'ntb':
                 tdSql.execute(f'create table {tbname} (ts timestamp,{col_name} {col_type})')
@@ -188,9 +190,13 @@ class TDTestCase:
             elif col_type.lower() == 'double':
                 self.update_and_check_data(tbname,col_name,col_type,up_double,dbname)
             elif 'binary' in col_type.lower():
-                self.update_and_check_data(tbname,col_name,col_type,up_binary,dbname)
+                for i in binary_length:
+                    up_binary = tdCom.getLongName(i)
+                    self.update_and_check_data(tbname,col_name,col_type,up_binary,dbname)
             elif 'nchar' in col_type.lower():
-                self.update_and_check_data(tbname,col_name,col_type,up_nchar,dbname)
+                for i in nchar_length:
+                    up_nchar = tdCom.getLongName(i)
+                    self.update_and_check_data(tbname,col_name,col_type,up_nchar,dbname)
             elif col_type.lower() == 'timestamp':
                 self.update_and_check_data(tbname,col_name,col_type,self.ts+1,dbname)
         tdSql.execute(f'insert into {tbname} values({self.ts},null)')
......