Commit bdb29386 authored by Pengcheng Tang

Add support for gptransfer to transfer leaf partition pair

Gptransfer previously supported transferring data only for top-level partitioned
tables. This adds support for transferring individual leaf partitions.
The leaf partition pair must have the same column layout and column types
(the table and column names can differ), and must also have the same partition
type and criteria (list and range partitions are currently supported).

Authors: Pengcheng Tang and Chumki Roy
Parent 17db8287
......@@ -523,7 +523,7 @@ Feature: Validate command line arguments
@meta
Scenario: Metadata-only restore
Given the database is running
And database "fullbkdb" is created if not exists
And database "fullbkdb" is created if not exists on host "None" with port "PGPORT" with user "None"
And there is schema "schema_heap" exists in "fullbkdb"
And there is a "heap" table "schema_heap.heap_table" with compression "None" in "fullbkdb" with data
When the user runs "gpcrondump -a -x fullbkdb"
......@@ -539,7 +539,7 @@ Feature: Validate command line arguments
@meta
Scenario: Metadata-only restore with global objects (-G)
Given the database is running
And database "fullbkdb" is created if not exists
And database "fullbkdb" is created if not exists on host "None" with port "PGPORT" with user "None"
And there is schema "schema_heap" exists in "fullbkdb"
And there is a "heap" table "schema_heap.heap_table" with compression "None" in "fullbkdb" with data
And the user runs "psql -c 'CREATE ROLE foo_user' fullbkdb"
......@@ -904,7 +904,7 @@ Feature: Validate command line arguments
And there is a "ao" partition table "ao_part_table" with compression "quicklz" in "fullbkdb" with data
And there is a "co" partition table "co_part_table" with compression "None" in "fullbkdb" with data
And there is a backupfile of tables "co_part_table" in "fullbkdb" exists for validation
And there is a file "exclude_file" with tables "public.heap_table,public.ao_part_table"
And there is a file "exclude_file" with tables "public.heap_table|public.ao_part_table"
When the user runs "gpcrondump -a -x fullbkdb --exclude-table-file exclude_file"
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
......@@ -922,7 +922,7 @@ Feature: Validate command line arguments
And there is a "ao" partition table "ao_part_table" with compression "quicklz" in "fullbkdb" with data
And there is a "co" partition table "co_part_table" with compression "None" in "fullbkdb" with data
And there is a backupfile of tables "ao_part_table,heap_table" in "fullbkdb" exists for validation
And there is a file "include_file" with tables "public.heap_table,public.ao_part_table"
And there is a file "include_file" with tables "public.heap_table|public.ao_part_table"
When the user runs "gpcrondump -a -x fullbkdb --table-file include_file"
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
......@@ -1025,7 +1025,7 @@ Feature: Validate command line arguments
Scenario: Incremental backup of Non-public schema
Given the database is running
And there are no "dirty_backup_list" tempfiles
And database "schematestdb" is created if not exists
And database "schematestdb" is created if not exists on host "None" with port "PGPORT" with user "None"
And there is schema "pepper" exists in "schematestdb"
And there is a "heap" table "pepper.heap_table" with compression "None" in "schematestdb" with data
And there is a "ao" table "pepper.ao_table" with compression "None" in "schematestdb" with data
......@@ -2424,7 +2424,7 @@ Feature: Validate command line arguments
Scenario: Incremental table filter gpdbrestore with different schema for same tablenames
Given the database is running
And there are no backup files
And database "schematestdb" is created if not exists
And database "schematestdb" is created if not exists on host "None" with port "PGPORT" with user "None"
And there is schema "pepper" exists in "schematestdb"
And there are "2" "heap" tables "public.heap_table" with data in "schematestdb"
And there is a "ao" partition table "ao_part_table" with compression "None" in "schematestdb" with data
......@@ -4046,7 +4046,7 @@ Feature: Validate command line arguments
And there is a "heap" table "heap_table" with compression "None" in "fullbkdb" with data
And there is a "ao" partition table "ao_part_table" with compression "quicklz" in "fullbkdb" with data
And there is a "co" partition table "co_part_table" with compression "None" in "fullbkdb" with data
And there is a file "include_file_with_whitespace" with tables "public.heap_table ,public.ao_part_table"
And there is a file "include_file_with_whitespace" with tables "public.heap_table |public.ao_part_table"
And there is a backupfile of tables "heap_table,ao_part_table" in "fullbkdb" exists for validation
When the user runs "gpcrondump -a -x fullbkdb --table-file include_file_with_whitespace"
Then gpcrondump should return a return code of 0
......@@ -4142,7 +4142,7 @@ Feature: Validate command line arguments
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
And all the data from "testdb" is saved for verification
And there is a file "restore_file" with tables "public.heap_table,pepper.ao_table,public.co_table"
And there is a file "restore_file" with tables "public.heap_table|pepper.ao_table|public.co_table"
And the database "testdb" does not exist
And database "testdb" exists
And there is schema "pepper" exists in "testdb"
......@@ -4191,7 +4191,7 @@ Feature: Validate command line arguments
And the timestamp from gpcrondump is stored
And the timestamp from gpcrondump is stored in a list
And all the data from "testdb" is saved for verification
And there is a file "restore_file" with tables "public.heap_table,public.ao_table,public.co_table,public.ao_part_table"
And there is a file "restore_file" with tables "public.heap_table|public.ao_table|public.co_table|public.ao_part_table"
And the database "testdb" does not exist
And database "testdb" exists
And the user runs "gpdbrestore --table-file restore_file -a" with the stored timestamp
......@@ -4256,7 +4256,7 @@ Feature: Validate command line arguments
And all the data from "testdb" is saved for verification
And the database "testdb" does not exist
And database "testdb" exists
And there is a file "restore_file" with tables "public.ao_table,public.ext_tab"
And there is a file "restore_file" with tables "public.ao_table|public.ext_tab"
And the user runs "gpdbrestore --table-file restore_file -a" with the stored timestamp
And gpdbrestore should return a return code of 0
And verify that there is a "ao" table "public.ao_table" in "testdb" with data
......@@ -4293,7 +4293,7 @@ Feature: Validate command line arguments
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
And all the data from "testdb" is saved for verification
And there is a file "restore_file" with tables "public.ao_table,public.ao_index_table,public.heap_table"
And there is a file "restore_file" with tables "public.ao_table|public.ao_index_table|public.heap_table"
When table "public.ao_index_table" is dropped in "testdb"
And table "public.ao_table" is dropped in "testdb"
And table "public.heap_table" is dropped in "testdb"
......@@ -4341,7 +4341,7 @@ Feature: Validate command line arguments
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
And all the data from "testdb" is saved for verification
And there is a file "restore_file" with tables "public.ao_table,public.ao_index_table,public.heap_table,public.heap_table2"
And there is a file "restore_file" with tables "public.ao_table|public.ao_index_table|public.heap_table|public.heap_table2"
And the database "testdb" does not exist
And database "testdb" exists
And there is a trigger function "heap_trigger_func" on table "public.heap_table" in "testdb"
......@@ -4386,7 +4386,7 @@ Feature: Validate command line arguments
When the index "bitmap_co_index" in "testdb" is dropped
And the index "bitmap_ao_index" in "testdb" is dropped
And the user runs "psql -c 'CREATE INDEX bitmap_ao_index_new ON public.ao_index_table USING bitmap(column3);' testdb"
Then there is a file "restore_file" with tables "public.ao_table,public.ao_index_table,public.heap_table"
Then there is a file "restore_file" with tables "public.ao_table|public.ao_index_table|public.heap_table"
And the user runs "gpdbrestore --table-file restore_file -a" with the stored timestamp
And gpdbrestore should return a return code of 0
And verify that there is a "ao" table "public.ao_table" in "testdb" with data
......@@ -4441,7 +4441,7 @@ Feature: Validate command line arguments
And the timestamp from gpcrondump is stored
And the timestamp from gpcrondump is stored in a list
And all the data from "testdb" is saved for verification
And there is a file "restore_file" with tables "pepper.heap_table,pepper.ao_table,public.co_table,pepper.ao_part_table"
And there is a file "restore_file" with tables "pepper.heap_table|pepper.ao_table|public.co_table|pepper.ao_part_table"
And table "pepper.heap_table" is dropped in "testdb"
And table "pepper.ao_table" is dropped in "testdb"
And table "public.co_table" is dropped in "testdb"
......@@ -5861,7 +5861,7 @@ Feature: Validate command line arguments
And there is a "heap" table "schema_heap1.heap_table1" with compression "None" in "fullbkdb" with data
And there is a "ao" partition table "schema_ao.ao_part_table" with compression "quicklz" in "fullbkdb" with data
And there is a backupfile of tables "schema_heap.heap_table, schema_ao.ao_part_table, schema_heap1.heap_table1" in "fullbkdb" exists for validation
And there is a file "exclude_file" with tables "schema_heap1,schema_ao"
And there is a file "exclude_file" with tables "schema_heap1|schema_ao"
When the user runs "gpcrondump -a -x fullbkdb --exclude-schema-file exclude_file"
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
......@@ -5883,7 +5883,7 @@ Feature: Validate command line arguments
And there is a "heap" table "schema_heap1.heap_table1" with compression "None" in "fullbkdb" with data
And there is a "ao" partition table "schema_ao.ao_part_table" with compression "quicklz" in "fullbkdb" with data
And there is a backupfile of tables "schema_heap.heap_table, schema_ao.ao_part_table, schema_heap1.heap_table1" in "fullbkdb" exists for validation
And there is a file "include_file" with tables "schema_heap,schema_ao"
And there is a file "include_file" with tables "schema_heap|schema_ao"
When the user runs "gpcrondump -a -x fullbkdb --schema-file include_file"
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
......@@ -6116,7 +6116,7 @@ Feature: Validate command line arguments
And there is a "heap" table "schema_heap.heap_table" with compression "None" in "fullbkdb" with data
And there is a "ao" partition table "schema_ao.ao_part_table" with compression "quicklz" in "fullbkdb" with data
And there is a backupfile of tables "schema_heap.heap_table, schema_ao.ao_part_table" in "fullbkdb" exists for validation
And there is a file "include_file" with tables "schema_heap.heap_table,schema_ao.ao_part_table"
And there is a file "include_file" with tables "schema_heap.heap_table|schema_ao.ao_part_table"
When the user runs "gpcrondump -a -x fullbkdb --table-file include_file"
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
......@@ -6135,7 +6135,7 @@ Feature: Validate command line arguments
And there is a "heap" table "schema_heap.heap_table" with compression "None" in "fullbkdb" with data
And there is a "ao" partition table "schema_ao.ao_part_table" with compression "quicklz" in "fullbkdb" with data
And there is a backupfile of tables "schema_heap.heap_table, schema_ao.ao_part_table" in "fullbkdb" exists for validation
And there is a file "include_file" with tables "schema_heap.heap_table,schema_ao.ao_part_table"
And there is a file "include_file" with tables "schema_heap.heap_table|schema_ao.ao_part_table"
When the user runs "gpcrondump -a -x fullbkdb --table-file include_file"
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
......
......@@ -70,7 +70,7 @@ Feature: gpfdist configure timeout value
@wip
Scenario: gpfdist select from simple external table
Given the database is running
And database "gpfdistdb" is created if not exists
And database "gpfdistdb" is created if not exists on host "None" with port "PGPORT" with user "None"
And the external table "read_ext_data" does not exist in "gpfdistdb"
And the "gpfdist" process is killed
Given the directory "extdata" exists in current working directory
......@@ -89,7 +89,7 @@ Feature: gpfdist configure timeout value
@42nonsolsuse
Scenario: simple writable external table test
Given the database is running
And database "gpfdistdb" is created if not exists
And database "gpfdistdb" is created if not exists on host "None" with port "PGPORT" with user "None"
And the external table "write_ext_data" does not exist in "gpfdistdb"
And the "gpfdist" process is killed
Given the directory "extdata" exists in current working directory
......@@ -277,7 +277,7 @@ Feature: gpfdist configure timeout value
@writable_external_table_encoding_conversion
Scenario: guarantee right encoding conversion when write into external table
Given the database is running
And database "gpfdistdb" is created if not exists
And database "gpfdistdb" is created if not exists on host "None" with port "PGPORT" with user "None"
And the "gpfdist" process is killed
And the external table "wet_encoding_conversion" does not exist in "gpfdistdb"
And the directory "extdata" exists in current working directory
......
......@@ -394,7 +394,7 @@ Feature: NetBackup Integration with GPDB
And there is a "ao" partition table "ao_part_table" with compression "quicklz" in "fullbkdb" with data
And there is a "co" partition table "co_part_table" with compression "None" in "fullbkdb" with data
And there is a backupfile of tables "co_part_table" in "fullbkdb" exists for validation
And there is a file "exclude_file" with tables "public.heap_table,public.ao_part_table"
And there is a file "exclude_file" with tables "public.heap_table|public.ao_part_table"
When the user runs "gpcrondump -a -x fullbkdb --exclude-table-file exclude_file --netbackup-block-size 2048" using netbackup
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
......@@ -414,7 +414,7 @@ Feature: NetBackup Integration with GPDB
And there is a "ao" partition table "ao_part_table" with compression "quicklz" in "fullbkdb" with data
And there is a "co" partition table "co_part_table" with compression "None" in "fullbkdb" with data
And there is a backupfile of tables "ao_part_table,heap_table" in "fullbkdb" exists for validation
And there is a file "include_file" with tables "public.heap_table,public.ao_part_table"
And there is a file "include_file" with tables "public.heap_table|public.ao_part_table"
When the user runs "gpcrondump -a -x fullbkdb --table-file include_file --netbackup-block-size 2048" using netbackup
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
......@@ -1420,7 +1420,7 @@ Feature: NetBackup Integration with GPDB
And there is a "heap" table "heap_table" with compression "None" in "fullbkdb" with data
And there is a "ao" partition table "ao_part_table" with compression "quicklz" in "fullbkdb" with data
And there is a "co" partition table "co_part_table" with compression "None" in "fullbkdb" with data
And there is a file "include_file_with_whitespace" with tables "public.heap_table ,public.ao_part_table"
And there is a file "include_file_with_whitespace" with tables "public.heap_table|public.ao_part_table"
And there is a backupfile of tables "heap_table,ao_part_table" in "fullbkdb" exists for validation
When the user runs "gpcrondump -a -x fullbkdb --table-file include_file_with_whitespace --netbackup-block-size 2048" using netbackup
Then gpcrondump should return a return code of 0
......@@ -1475,7 +1475,7 @@ Feature: NetBackup Integration with GPDB
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
And all the data from "testdb" is saved for verification
And there is a file "restore_file" with tables "public.heap_table,pepper.ao_table,public.co_table"
And there is a file "restore_file" with tables "public.heap_table|pepper.ao_table|public.co_table"
And the database "testdb" does not exist
And database "testdb" exists
And there is schema "pepper" exists in "testdb"
......@@ -1546,7 +1546,7 @@ Feature: NetBackup Integration with GPDB
And all the data from "testdb" is saved for verification
And the database "testdb" does not exist
And database "testdb" exists
And there is a file "restore_file" with tables "public.ao_table,public.ext_tab"
And there is a file "restore_file" with tables "public.ao_table|public.ext_tab"
When the user runs gpdbrestore with the stored timestamp and options "--table-file restore_file --netbackup-block-size 2048" without -e option using netbackup
Then gpdbrestore should return a return code of 0
And verify that there is a "ao" table "public.ao_table" in "testdb" with data
......@@ -1586,7 +1586,7 @@ Feature: NetBackup Integration with GPDB
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
And all the data from "testdb" is saved for verification
And there is a file "restore_file" with tables "public.ao_table,public.ao_index_table,public.heap_table"
And there is a file "restore_file" with tables "public.ao_table|public.ao_index_table|public.heap_table"
When table "public.ao_index_table" is dropped in "testdb"
And table "public.ao_table" is dropped in "testdb"
And table "public.heap_table" is dropped in "testdb"
......@@ -1637,7 +1637,7 @@ Feature: NetBackup Integration with GPDB
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
And all the data from "testdb" is saved for verification
And there is a file "restore_file" with tables "public.ao_table,public.ao_index_table,public.heap_table,public.heap_table2"
And there is a file "restore_file" with tables "public.ao_table|public.ao_index_table|public.heap_table|public.heap_table2"
And the database "testdb" does not exist
And database "testdb" exists
And there is a trigger function "heap_trigger_func" on table "public.heap_table" in "testdb"
......@@ -1685,7 +1685,7 @@ Feature: NetBackup Integration with GPDB
When the index "bitmap_co_index" in "testdb" is dropped
And the index "bitmap_ao_index" in "testdb" is dropped
And the user runs "psql -c 'CREATE INDEX bitmap_ao_index_new ON public.ao_index_table USING bitmap(column3);' testdb"
Then there is a file "restore_file" with tables "public.ao_table,public.ao_index_table,public.heap_table"
Then there is a file "restore_file" with tables "public.ao_table|public.ao_index_table|public.heap_table"
When the user runs gpdbrestore with the stored timestamp and options "--table-file restore_file --netbackup-block-size 4096" without -e option using netbackup
Then gpdbrestore should return a return code of 0
And verify that there is a "ao" table "public.ao_table" in "testdb" with data
......@@ -2502,7 +2502,7 @@ Feature: NetBackup Integration with GPDB
Given the database is running
And the netbackup params have been parsed
And there are no "dirty_backup_list" tempfiles
And database "schematestdb" is created if not exists
And database "schematestdb" is created if not exists on host "None" with port "0" with user "None"
And there is schema "pepper" exists in "schematestdb"
And there is a "heap" table "pepper.heap_table" with compression "None" in "schematestdb" with data
And there is a "ao" table "pepper.ao_table" with compression "None" in "schematestdb" with data
......@@ -3207,7 +3207,7 @@ Feature: NetBackup Integration with GPDB
Given the database is running
And the netbackup params have been parsed
And there are no backup files
And database "schematestdb" is created if not exists
And database "schematestdb" is created if not exists on host "None" with port "0" with user "None"
And there is schema "pepper" exists in "schematestdb"
And there are "2" "heap" tables "public.heap_table" with data in "schematestdb"
And there is a "ao" partition table "ao_part_table" with compression "None" in "schematestdb" with data
......@@ -4250,7 +4250,7 @@ Feature: NetBackup Integration with GPDB
And the timestamp from gpcrondump is stored
And the timestamp from gpcrondump is stored in a list
And all the data from "testdb" is saved for verification
And there is a file "restore_file" with tables "public.heap_table,public.ao_table,public.co_table,public.ao_part_table"
And there is a file "restore_file" with tables "public.heap_table|public.ao_table|public.co_table|public.ao_part_table"
And the database "testdb" does not exist
And database "testdb" exists
When the user runs gpdbrestore with the stored timestamp and options "--table-file restore_file --netbackup-block-size 4096" using netbackup
......@@ -4307,7 +4307,7 @@ Feature: NetBackup Integration with GPDB
And the timestamp from gpcrondump is stored
And the timestamp from gpcrondump is stored in a list
And all the data from "testdb" is saved for verification
And there is a file "restore_file" with tables "pepper.heap_table,pepper.ao_table,public.co_table,pepper.ao_part_table"
And there is a file "restore_file" with tables "pepper.heap_table|pepper.ao_table|public.co_table|pepper.ao_part_table"
And table "pepper.heap_table" is dropped in "testdb"
And table "pepper.ao_table" is dropped in "testdb"
And table "public.co_table" is dropped in "testdb"
......@@ -4697,7 +4697,7 @@ Feature: NetBackup Integration with GPDB
And there is a "heap" table "schema_heap1.heap_table1" with compression "None" in "fullbkdb" with data
And there is a "ao" partition table "schema_ao.ao_part_table" with compression "quicklz" in "fullbkdb" with data
And there is a backupfile of tables "schema_heap.heap_table, schema_ao.ao_part_table, schema_heap1.heap_table1" in "fullbkdb" exists for validation
And there is a file "exclude_file" with tables "schema_heap1,schema_ao"
And there is a file "exclude_file" with tables "schema_heap1|schema_ao"
When the user runs "gpcrondump -a -x fullbkdb --exclude-schema-file exclude_file --netbackup-block-size 4096" using netbackup
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
......@@ -4721,7 +4721,7 @@ Feature: NetBackup Integration with GPDB
And there is a "heap" table "schema_heap1.heap_table1" with compression "None" in "fullbkdb" with data
And there is a "ao" partition table "schema_ao.ao_part_table" with compression "quicklz" in "fullbkdb" with data
And there is a backupfile of tables "schema_heap.heap_table, schema_ao.ao_part_table, schema_heap1.heap_table1" in "fullbkdb" exists for validation
And there is a file "include_file" with tables "schema_heap,schema_ao"
And there is a file "include_file" with tables "schema_heap|schema_ao"
When the user runs "gpcrondump -a -x fullbkdb --schema-file include_file --netbackup-block-size 4096" using netbackup
Then gpcrondump should return a return code of 0
And the timestamp from gpcrondump is stored
......
CREATE TABLE heap_employee(id int, rank int, gender char(1));
-- list partition type, column key: gender, rank
DROP TABLE IF EXISTS employee;
CREATE TABLE employee(id int, rank int, gender char(1))
DISTRIBUTED BY (id)
PARTITION BY list (gender, rank)
(PARTITION girls VALUES (('F', 2)),
PARTITION boys VALUES (('M', 1)),
DEFAULT PARTITION other );
-- list partition type, column key: rank, gender
DROP TABLE IF EXISTS employee;
CREATE TABLE employee(id int, rank int, gender char(1))
DISTRIBUTED BY (id)
PARTITION BY list (rank, gender)
(PARTITION girls VALUES ((2, 'F')),
PARTITION boys VALUES ((1, 'M')),
DEFAULT PARTITION other );
-- create column oriented ao table
DROP TABLE IF EXISTS employee;
CREATE TABLE employee(id int, rank int, gender char(1))
WITH (appendonly=true, orientation=column)
DISTRIBUTED BY (id)
PARTITION BY list (gender)
(PARTITION girls VALUES ('F'),
PARTITION boys VALUES ('M'),
DEFAULT PARTITION other );
-- range partition on rank
DROP TABLE IF EXISTS employee;
CREATE TABLE employee(id int, rank int, gender char(1))
DISTRIBUTED BY (id)
PARTITION BY RANGE (rank)
( START (1) INCLUSIVE
END (3) EXCLUSIVE
EVERY (1) );
-- range partition on erank
DROP TABLE IF EXISTS e_employee;
CREATE TABLE e_employee(eid int, erank int, egender char(1))
DISTRIBUTED BY (eid)
PARTITION BY RANGE (erank)
( START (1) INCLUSIVE
END (3) EXCLUSIVE
EVERY (1) );
-- create row oriented ao table
DROP TABLE IF EXISTS employee;
CREATE TABLE employee(id int, rank int, gender char(1))
WITH (appendonly=true, orientation=row)
DISTRIBUTED BY (id)
PARTITION BY list (gender)
(PARTITION girls VALUES ('F'),
PARTITION boys VALUES ('M'),
DEFAULT PARTITION other );
INSERT INTO e_employee VALUES(1, 1, 'F');
INSERT INTO e_employee VALUES(2, 2, 'M');
INSERT INTO employee VALUES(1, 2, 'F');
INSERT INTO employee VALUES(2, 1, 'M');
INSERT INTO schema1.employee VALUES(1, 2, 'F');
INSERT INTO schema1.employee VALUES(2, 1, 'M');
-- list partition type, column key: gender
DROP TABLE IF EXISTS employee;
CREATE TABLE employee(id int, rank int, gender char(1))
DISTRIBUTED BY (id)
PARTITION BY list (gender)
(PARTITION girls VALUES ('F'),
PARTITION boys VALUES ('M'),
DEFAULT PARTITION other );
-- list partition type, column key: gender
DROP TABLE IF EXISTS schema1.employee;
DROP SCHEMA IF EXISTS schema1;
CREATE SCHEMA schema1;
CREATE TABLE schema1.employee(id int, rank int, gender char(1))
DISTRIBUTED BY (id)
PARTITION BY list (gender)
(PARTITION girls VALUES ('F'),
PARTITION boys VALUES ('M'),
DEFAULT PARTITION other );
-- list partition type, column key: id
DROP TABLE IF EXISTS employee;
CREATE TABLE employee(id int, rank int, gender char(1))
DISTRIBUTED BY (id)
PARTITION BY list (id)
(PARTITION main VALUES (1),
PARTITION private VALUES (2),
DEFAULT PARTITION other );
-- list partition type, column key: gender
DROP TABLE IF EXISTS employee;
CREATE TABLE employee(id int, gender char(1))
DISTRIBUTED BY (id)
PARTITION BY list (gender)
(PARTITION girls VALUES ('F'),
PARTITION boys VALUES ('M'),
DEFAULT PARTITION other );
-- list partition type, column key: rank
DROP TABLE IF EXISTS employee;
CREATE TABLE employee(id int, rank int, gender char(1))
DISTRIBUTED BY (id)
PARTITION BY list (rank)
(PARTITION main VALUES (1),
PARTITION private VALUES (2),
DEFAULT PARTITION other );
-- list partition type, column key: gender
DROP TABLE IF EXISTS employee;
CREATE TABLE employee(id int, rank int, gender char(1))
DISTRIBUTED BY (id)
PARTITION BY list (gender)
(PARTITION girls VALUES ('G'),
PARTITION boys VALUES ('B'),
DEFAULT PARTITION other );
-- range partition type, column key: id
DROP TABLE IF EXISTS employee;
CREATE TABLE employee(id int, rank int, gender char(1))
DISTRIBUTED BY (id)
PARTITION BY RANGE (id)
( START (1) INCLUSIVE
END (3) EXCLUSIVE
EVERY (1) );
-- partition table
DROP TABLE IF EXISTS sales;
CREATE TABLE sales (id int, year int, month int, day int,
region text)
DISTRIBUTED BY (id)
PARTITION BY RANGE (month)
(START (1) END (3) EVERY (1),
DEFAULT PARTITION other_months );
-- two levels partition table, range(date) and list(region)
DROP TABLE IF EXISTS sales;
CREATE TABLE sales (trans_id int, date date, amount decimal(9,2), region text)
DISTRIBUTED BY (trans_id)
PARTITION BY RANGE (date)
SUBPARTITION BY LIST (region)
SUBPARTITION TEMPLATE
(
SUBPARTITION asia VALUES ('asia'),
DEFAULT SUBPARTITION other_regions)
(START (date '2011-01-01') INCLUSIVE
END (date '2011-03-01') EXCLUSIVE
EVERY (INTERVAL '1 month'),
DEFAULT PARTITION outlying_dates );
-- two levels partition, list part followed by range partition
DROP TABLE IF EXISTS sales;
CREATE TABLE sales (trans_id int, sdate date, amount decimal(9,2), region text)
DISTRIBUTED BY (trans_id)
PARTITION BY LIST (region)
SUBPARTITION BY RANGE (sdate)
SUBPARTITION TEMPLATE
(START (date '2011-01-01') INCLUSIVE
END (date '2011-03-01') EXCLUSIVE
EVERY (INTERVAL '1 month'),
DEFAULT SUBPARTITION outlying_dates )
(
PARTITION asia VALUES ('asia'),
DEFAULT PARTITION other_regions);
-- multi level partition, list part followed by range partition, with 5 subpartitions
CREATE TABLE sales (trans_id int, sdate date, amount decimal(9,2), region text)
DISTRIBUTED BY (trans_id)
PARTITION BY LIST (region)
SUBPARTITION BY RANGE (sdate)
SUBPARTITION TEMPLATE
(START (date '2011-01-01') INCLUSIVE
END (date '2011-03-01') EXCLUSIVE
EVERY (INTERVAL '1 month'),
DEFAULT SUBPARTITION outlying_dates )
( PARTITION usa VALUES ('usa'),
PARTITION other_asia VALUES ('asia'),
DEFAULT PARTITION other_regions);
-- multi level partition, list part followed by range partition, every 2 months
CREATE TABLE dest_sales (dest_trans_id int, dest_date date, dest_amount decimal(9,2), dest_region text)
DISTRIBUTED BY (dest_trans_id)
PARTITION BY LIST (dest_region)
SUBPARTITION BY RANGE (dest_date)
SUBPARTITION TEMPLATE
(START (date '2011-01-01') INCLUSIVE
END (date '2011-05-01') EXCLUSIVE
EVERY (INTERVAL '2 months'),
DEFAULT SUBPARTITION outlying_dates )
(
PARTITION asia VALUES ('asia'),
DEFAULT PARTITION other_regions);
-- two levels partition table, range(trans_id) and list(region)
DROP TABLE IF EXISTS sales;
CREATE TABLE sales (trans_id int, date date, src_amount decimal(9,2), region text)
DISTRIBUTED BY (trans_id)
PARTITION BY RANGE (trans_id)
SUBPARTITION BY LIST (region)
SUBPARTITION TEMPLATE
(
SUBPARTITION asia VALUES ('asia'),
DEFAULT SUBPARTITION other_regions)
(START (1) INCLUSIVE
END (3) EXCLUSIVE
EVERY (1),
DEFAULT PARTITION outlying_dates );
-- two levels partition, list part followed by range partition
DROP TABLE IF EXISTS sales;
CREATE TABLE sales (trans_id int, sdate date, amount decimal(9,2), region text)
DISTRIBUTED BY (trans_id)
PARTITION BY LIST (region)
SUBPARTITION BY RANGE (sdate)
SUBPARTITION TEMPLATE
(START (date '2011-01-01') INCLUSIVE
END (date '2011-05-01') EXCLUSIVE
EVERY (INTERVAL '1 month'),
DEFAULT SUBPARTITION outlying_dates )
(
PARTITION asia VALUES ('asia'),
DEFAULT PARTITION other_regions);
-- two levels partition table, range(year) and range(month)
DROP TABLE IF EXISTS sales;
CREATE TABLE sales (id int, year int, month int, day int,
region text)
DISTRIBUTED BY (id)
PARTITION BY RANGE (year)
SUBPARTITION BY RANGE (month)
SUBPARTITION TEMPLATE (
START (1) END (3) EVERY (1),
DEFAULT SUBPARTITION other_months )
(START (2002) END (2004) EVERY (1),
DEFAULT PARTITION outlying_years );
......@@ -73,20 +73,32 @@ def impl(context, dbconn, version):
@given('database "{dbname}" exists')
@then('database "{dbname}" exists')
def impl(context, dbname):
create_database(context, dbname)
@given('database "{dbname}" is created if not exists')
@then('database "{dbname}" is created if not exists')
def impl(context, dbname):
create_database_if_not_exists(context, dbname)
@given('database "{dbname}" is created if not exists on host "{HOST}" with port "{PORT}" with user "{USER}"')
@then('database "{dbname}" is created if not exists on host "{HOST}" with port "{PORT}" with user "{USER}"')
def impl(context, dbname, HOST, PORT, USER):
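# HOST, PORT and USER are the names of environment variables (for example "PGPORT")
# that are resolved at runtime; passing the literal string "None" resolves to None via
# os.environ.get, so the helper falls back to the default local connection.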
host = os.environ.get(HOST)
port = int(os.environ.get(PORT))
user = os.environ.get(USER)
create_database_if_not_exists(context, dbname, host, port, user)
@when('the database "{dbname}" does not exist')
@given('the database "{dbname}" does not exist')
@then('the database "{dbname}" does not exist')
def impl(context, dbname):
drop_database_if_exists(context, dbname)
@when('the database "{dbname}" does not exist on host "{HOST}" with port "{PORT}" with user "{USER}"')
@given('the database "{dbname}" does not exist on host "{HOST}" with port "{PORT}" with user "{USER}"')
@then('the database "{dbname}" does not exist on host "{HOST}" with port "{PORT}" with user "{USER}"')
def impl(context, dbname, HOST, PORT, USER):
host = os.environ.get(HOST)
port = int(os.environ.get(PORT))
user = os.environ.get(USER)
drop_database_if_exists(context, dbname, host, port, user)
@given('the database "{dbname}" does not exist with connection "{dbconn}"')
@when('the database "{dbname}" does not exist with connection "{dbconn}"')
@then('the database "{dbname}" does not exist with connection "{dbconn}"')
......@@ -94,6 +106,13 @@ def impl(context, dbname, dbconn):
command = '%s -c \'drop database if exists %s;\''%(dbconn, dbname)
run_command(context, command)
@given('the database "{dbname}" exists with connection "{dbconn}"')
@when('the database "{dbname}" exists with connection "{dbconn}"')
@then('the database "{dbname}" exists with connection "{dbconn}"')
def impl(context, dbname, dbconn):
command = '%s -c \'create database %s;\''%(dbconn, dbname)
run_command(context, command)
def get_segment_hostlist():
gparray = GpArray.initFromCatalog(dbconn.DbURL())
segment_hostlist = sorted(gparray.get_hostlist(includeMaster=False))
......@@ -152,19 +171,31 @@ def impl(context, table_list, dbname):
for t in tables:
truncate_table(dbname, t.strip())
def populate_regular_table_data(context, tabletype, table_name, compression_type, dbname, rowcount=1094):
create_database_if_not_exists(context, dbname)
drop_table_if_exists(context, table_name=table_name, dbname=dbname)
def populate_regular_table_data(context, tabletype, table_name, compression_type, dbname, rowcount=1094, with_data=False, host=None, port=0, user=None):
create_database_if_not_exists(context, dbname, host=host, port=port, user=user)
drop_table_if_exists(context, table_name=table_name, dbname=dbname, host=host, port=port, user=user)
if compression_type == "None":
create_partition(context, table_name, tabletype, dbname, compression_type=None, partition=False, rowcount=rowcount)
create_partition(context, table_name, tabletype, dbname, compression_type=None, partition=False,
rowcount=rowcount, with_data=with_data, host=host, port=port, user=user)
else:
create_partition(context, table_name, tabletype, dbname, compression_type, partition=False, rowcount=rowcount)
create_partition(context, table_name, tabletype, dbname, compression_type, partition=False,
rowcount=rowcount, with_data=with_data, host=host, port=port, user=user)
@given('there is a "{tabletype}" table "{table_name}" with compression "{compression_type}" in "{dbname}" with data')
@when('there is a "{tabletype}" table "{table_name}" with compression "{compression_type}" in "{dbname}" with data')
@then('there is a "{tabletype}" table "{table_name}" with compression "{compression_type}" in "{dbname}" with data')
def impl(context, tabletype, table_name, compression_type, dbname):
populate_regular_table_data(context, tabletype, table_name, compression_type, dbname)
populate_regular_table_data(context, tabletype, table_name, compression_type, dbname, with_data=True)
@given('there is a "{tabletype}" table "{table_name}" with compression "{compression_type}" in "{dbname}" with data "{with_data}" on host "{HOST}" with port "{PORT}" with user "{USER}"')
@when('there is a "{tabletype}" table "{table_name}" with compression "{compression_type}" in "{dbname}" with data "{with_data}" on host "{HOST}" with port "{PORT}" with user "{USER}"')
@then('there is a "{tabletype}" table "{table_name}" with compression "{compression_type}" in "{dbname}" with data "{with_data}" on host "{HOST}" with port "{PORT}" with user "{USER}"')
def impl(context, tabletype, table_name, compression_type, dbname, with_data, HOST, PORT, USER):
host = os.environ.get(HOST)
port = int(os.environ.get(PORT))
user = os.environ.get(USER)
with_data = (with_data == 'True')  # bool() of a non-empty string such as 'False' would be True
populate_regular_table_data(context, tabletype, table_name, compression_type, dbname, 10, with_data, host, port, user)
@when('the partition table "{table_name}" in "{dbname}" is populated with similar data')
def impl(context, table_name, dbname):
......@@ -190,9 +221,9 @@ def impl(context, tabletype, table_name, compression_type, dbname):
create_database_if_not_exists(context, dbname)
drop_table_if_exists(context, table_name=table_name, dbname=dbname)
if compression_type == "None":
create_partition(context, table_name, tabletype, dbname)
create_partition(context, tablename=table_name, storage_type=tabletype, dbname=dbname, with_data=True)
else:
create_partition(context, table_name, tabletype, dbname, compression_type)
create_partition(context, tablename=table_name, storage_type=tabletype, dbname=dbname, with_data=True, compression_type=compression_type)
@given('there is a mixed storage partition table "{tablename}" in "{dbname}" with data')
def impl(context, tablename, dbname):
......@@ -926,15 +957,13 @@ def impl(context, table_list, dbname):
def impl(context, tname, dbname, nrows):
check_row_count(tname, dbname, int(nrows))
@then('verify that table "{tname}" in "{dbname}" has same data on source and destination system')
def impl(context, tname, dbname):
print 'verifying data integrity'
match_table_select(context, tname, dbname)
@then('verify that table "{src_tname}" in database "{src_dbname}" of source system has same data with table "{dest_tname}" in database "{dest_dbname}" of destination system with options "{options}"')
def impl(context, src_tname, src_dbname, dest_tname, dest_dbname, options):
match_table_select(context, src_tname, src_dbname, dest_tname, dest_dbname, options)
@then('verify that table "{tname}" in "{dbname}" has same data on source and destination system with order by {orderby}')
def impl(context, tname, dbname, orderby):
print 'verifying data integrity'
match_table_select(context, tname, dbname, orderby)
@then('verify that table "{src_tname}" in database "{src_dbname}" of source system has same data with table "{dest_tname}" in database "{dest_dbname}" of destination system with order by "{orderby}"')
def impl(context, src_tname, src_dbname, dest_tname, dest_dbname, orderby):
match_table_select(context, src_tname, src_dbname, dest_tname, dest_dbname, orderby)
@then('verify that partitioned tables "{table_list}" in "{dbname}" have {num_parts} partitions')
@then('verify that partitioned tables "{table_list}" in "{dbname}" have {num_parts} partitions in partition level "{partitionlevel}"')
......@@ -1041,7 +1070,7 @@ def impl(context, filename):
os.remove(table_file)
def create_table_file_locally(context, filename, table_list, location=os.getcwd()):
tables = table_list.split(',')
tables = table_list.split('|')
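# table entries are now pipe-separated rather than comma-separated, presumably so that a
# single entry can itself contain a comma (for example a "source_table, destination_table"
# pair used for leaf partition transfers)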
file_path = os.path.join(location, filename)
with open(file_path, 'w') as fp:
for t in tables:
......
......@@ -181,11 +181,11 @@ def getRow(dbname, exec_sql):
return result
def check_db_exists(dbname):
def check_db_exists(dbname, host=None, port=0, user=None):
LIST_DATABASE_SQL = 'select datname from pg_database'
results = []
with dbconn.connect(dbconn.DbURL(dbname='template1')) as conn:
results = []
with dbconn.connect(dbconn.DbURL(hostname=host, username=user, port=port, dbname='template1')) as conn:
curs = dbconn.execSQL(conn, LIST_DATABASE_SQL)
results = curs.fetchall()
......@@ -195,23 +195,28 @@ def check_db_exists(dbname):
return False
def create_database_if_not_exists(context, dbname):
if not check_db_exists(dbname):
create_database(context, dbname)
def create_database_if_not_exists(context, dbname, host=None, port=0, user=None):
if not check_db_exists(dbname, host, port, user):
create_database(context, dbname, host, port, user)
def create_database(context, dbname):
def create_database(context, dbname=None, host=None, port=0, user=None):
LOOPS = 10
if host == None or port == 0 or user == None:
createdb_cmd = 'createdb %s' % dbname
else:
createdb_cmd = 'psql -h %s -p %d -U %s -d template1 -c "create database %s"' % (host,
port, user, dbname)
for i in range(LOOPS):
context.exception = None
run_gpcommand(context, 'createdb %s' % dbname)
run_command(context, createdb_cmd)
if context.exception:
time.sleep(1)
continue
if check_db_exists(dbname):
if check_db_exists(dbname, host, port, user):
return
time.sleep(1)
......@@ -219,7 +224,7 @@ def create_database(context, dbname):
if context.exception:
raise context.exception
raise Exception("createdb for '%s' failed after %d attempts" % (dbname, LOOPS))
raise Exception("create database for '%s' failed after %d attempts" % (dbname, LOOPS))
def clear_all_saved_data_verify_files(context):
current_dir = os.getcwd()
......@@ -298,7 +303,8 @@ def check_partition_table_exists(context, dbname, schemaname, table_name, table_
return False
return check_table_exists(context, dbname, partitions[0][0].strip(), table_type)
def check_table_exists(context, dbname, table_name, table_type=None):
def check_table_exists(context, dbname, table_name, table_type=None, host=None, port=0, user=None):
SQL = """
select oid::regclass, relkind, relstorage, reloptions \
from pg_class \
......@@ -306,7 +312,7 @@ def check_table_exists(context, dbname, table_name, table_type=None):
""" % table_name
table_row = None
with dbconn.connect(dbconn.DbURL(dbname=dbname)) as conn:
with dbconn.connect(dbconn.DbURL(hostname=host, port=port, username=user, dbname=dbname)) as conn:
try:
table_row = dbconn.execSQLForSingletonRow(conn, SQL)
except Exception as e:
......@@ -352,26 +358,26 @@ def drop_external_table_if_exists(context, table_name, dbname):
if check_table_exists(context, table_name=table_name, dbname=dbname, table_type='external'):
drop_external_table(context, table_name=table_name, dbname=dbname)
def drop_table_if_exists(context, table_name, dbname):
if check_table_exists(context, table_name=table_name, dbname=dbname):
drop_table(context, table_name=table_name, dbname=dbname)
def drop_table_if_exists(context, table_name, dbname, host=None, port=0, user=None):
if check_table_exists(context, table_name=table_name, dbname=dbname, host=host, port=port, user=user):
drop_table(context, table_name=table_name, dbname=dbname, host=host, port=port, user=user)
def drop_external_table(context, table_name, dbname):
def drop_external_table(context, table_name, dbname, host=None, port=0, user=None):
SQL = 'drop external table %s' % table_name
with dbconn.connect(dbconn.DbURL(dbname=dbname)) as conn:
with dbconn.connect(dbconn.DbURL(hostname=host, port=port, username=user, dbname=dbname)) as conn:
dbconn.execSQL(conn, SQL)
conn.commit()
if check_table_exists(context, table_name=table_name, dbname=dbname, table_type='external'):
if check_table_exists(context, table_name=table_name, dbname=dbname, table_type='external', host=host, port=port, user=user):
raise Exception('Unable to successfully drop the table %s' % table_name)
def drop_table(context, table_name, dbname):
def drop_table(context, table_name, dbname, host=None, port=0, user=None):
SQL = 'drop table %s' % table_name
with dbconn.connect(dbconn.DbURL(dbname=dbname)) as conn:
with dbconn.connect(dbconn.DbURL(hostname=host, username=user, port=port, dbname=dbname)) as conn:
dbconn.execSQL(conn, SQL)
conn.commit()
if check_table_exists(context, table_name=table_name, dbname=dbname):
if check_table_exists(context, table_name=table_name, dbname=dbname, host=host, port=port, user=user):
raise Exception('Unable to successfully drop the table %s' % table_name)
def check_schema_exists(context, schema_name, dbname):
......@@ -546,7 +552,7 @@ def drop_partition(context, partitionnum, tablename, dbname):
dbconn.execSQL(conn, alter_table_str)
conn.commit()
def create_partition(context, tablename, storage_type, dbname, compression_type=None, partition=True, rowcount=1094):
def create_partition(context, tablename, storage_type, dbname, compression_type=None, partition=True, rowcount=1094, with_data=True, host=None, port=0, user=None):
interval = '1 year'
......@@ -574,11 +580,12 @@ def create_partition(context, tablename, storage_type, dbname, compression_type=
create_table_str = create_table_str + ";"
with dbconn.connect(dbconn.DbURL(dbname=dbname)) as conn:
with dbconn.connect(dbconn.DbURL(hostname=host, port=port, username=user, dbname=dbname)) as conn:
dbconn.execSQL(conn, create_table_str)
conn.commit()
populate_partition(tablename, PARTITION_START_DATE, dbname, 0, rowcount)
if with_data:
populate_partition(tablename, PARTITION_START_DATE, dbname, 0, rowcount, host, port, user)
# same data size as populate partition, but different values
def populate_partition_diff_data_same_eof(tablename, dbname):
......@@ -587,12 +594,12 @@ def populate_partition_diff_data_same_eof(tablename, dbname):
def populate_partition_same_data(tablename, dbname):
populate_partition(tablename, PARTITION_START_DATE, dbname, 0)
def populate_partition(tablename, start_date, dbname, data_offset, rowcount=1094):
def populate_partition(tablename, start_date, dbname, data_offset, rowcount=1094, host=None, port=0, user=None):
insert_sql_str = "insert into %s select i+%d, 'backup', i + date '%s' from generate_series(0,%d) as i" %(tablename, data_offset, start_date, rowcount)
insert_sql_str += "; insert into %s select i+%d, 'restore', i + date '%s' from generate_series(0,%d) as i" %(tablename, data_offset, start_date, rowcount)
with dbconn.connect(dbconn.DbURL(dbname=dbname)) as conn:
with dbconn.connect(dbconn.DbURL(hostname=host, port=port, username=user, dbname=dbname)) as conn:
dbconn.execSQL(conn, insert_sql_str)
conn.commit()
......@@ -643,13 +650,18 @@ def create_int_table(context, table_name, table_type='heap', dbname='testdb'):
if result != NROW:
raise Exception('Integer table creation was not successful. Expected %d does not match %d' %(NROW, result))
def drop_database(context, dbname):
def drop_database(context, dbname, host=None, port=0, user=None):
LOOPS = 10
if host == None or port == 0 or user == None:
dropdb_cmd = 'dropdb %s' % dbname
else:
dropdb_cmd = 'psql -h %s -p %d -U %s -d template1 -c "drop database %s"' % (host,
port, user, dbname)
for i in range(LOOPS):
context.exception = None
run_gpcommand(context, 'dropdb %s' % dbname)
run_gpcommand(context, dropdb_cmd)
if context.exception:
time.sleep(1)
......@@ -665,9 +677,9 @@ def drop_database(context, dbname):
raise Exception('db exists after dropping: %s' % dbname)
def drop_database_if_exists(context, dbname):
if check_db_exists(dbname):
drop_database(context, dbname)
def drop_database_if_exists(context, dbname=None, host=None, port=0, user=None):
if check_db_exists(dbname, host=host, port=port, user=user):
drop_database(context, dbname, host=host, port=port, user=user)
def run_on_all_segs(context, dbname, query):
gparray = GpArray.initFromCatalog(dbconn.DbURL())
......@@ -753,17 +765,21 @@ def check_row_count(tablename, dbname, nrows):
def check_empty_table(tablename, dbname):
check_row_count(tablename, dbname, 0)
def match_table_select(context, tablename, dbname, orderby=None):
def match_table_select(context, src_tablename, src_dbname, dest_tablename, dest_dbname, orderby=None, options=''):
if orderby != None :
check_query = 'psql -d %s -c \'select * from %s order by %s\'' % (dbname, tablename, orderby)
command = 'psql -p $GPTRANSFER_SOURCE_PORT -h $GPTRANSFER_SOURCE_HOST -U $GPTRANSFER_SOURCE_USER -d %s -c \'select * from %s order by %s\''%(dbname, tablename, orderby)
check_query = 'psql -d %s -c \'select * from %s order by %s\' %s' % (dest_dbname, dest_tablename, orderby, options)
command = '''psql -p $GPTRANSFER_SOURCE_PORT -h $GPTRANSFER_SOURCE_HOST -U $GPTRANSFER_SOURCE_USER -d %s
-c \'select * from %s order by %s\' %s''' % (src_dbname, src_tablename, orderby, options)
else:
check_query = 'psql -d %s -c \'select * from %s\'' % (dbname, tablename)
command = 'psql -p $GPTRANSFER_SOURCE_PORT -h $GPTRANSFER_SOURCE_HOST -U $GPTRANSFER_SOURCE_USER -d %s -c \'select * from %s\''%(dbname, tablename)
check_query = 'psql -d %s -c \'select * from %s\' %s' % (dest_dbname, dest_tablename, options)
command = '''psql -p $GPTRANSFER_SOURCE_PORT -h $GPTRANSFER_SOURCE_HOST -U $GPTRANSFER_SOURCE_USER -d %s
-c \'select * from %s\' %s''' % (src_dbname, src_tablename, options)
(rc, out1, err) = run_cmd(check_query)
(rc, out2, err) = run_cmd(command)
if out2 != out1:
raise Exception('table %s in database %s between source and destination system does not match'%(tablename,dbname))
raise Exception('table %s in database %s on the source system does not match table %s in database %s on the destination system.' % (
src_tablename, src_dbname, dest_tablename, dest_dbname))
def get_master_hostname(dbname='template1'):
master_hostname_sql = "select distinct hostname from gp_segment_configuration where content=-1 and role='p'"
......
This diff is collapsed.
......@@ -18,6 +18,7 @@ gptransfer
[--skip-existing | --truncate | --drop]
[--analyze] [--validate=<type> ] [-x] [--dry-run]
[--schema-only ]
[--partition-transfer ]
[--no-final-count]
[--source-host=<source_host> [--source-port=<source_port>]
......@@ -71,6 +72,12 @@ following types of operations:
When you specify a destination database, the source database tables are
copied into the specified destination database.
For partitioned tables, you can specify the --partition-transfer and -f
options to copy specific leaf child partitions of partitioned tables from
a source database to a destination database. The leaf child partitions are
the lowest level partitions of a partitioned table. If a specified child
partition is not a leaf child partition, the utility returns an error.
If an invalid set of gptransfer options are specified, or if a specified
source table or database does not exist, gptransfer returns an error and
quits. No data is copied.
......@@ -241,7 +248,7 @@ OPTIONS
If the source database does not exist, gptransfer returns an error and
quits. If a destination database does not exist a database is created.
Not valid with the --full, -f, or -t options.
Not valid with the --full, -f, -t, or --partition-transfer options.
Alternatively, specify the -t or -f option to copy a specified set of
tables.
......@@ -278,7 +285,7 @@ OPTIONS
If destination database does not exist, it is created.
Not valid with the --full option.
Not valid with the --full or --partition-transfer option.
--dest-host=<dest_host>
......@@ -308,7 +315,7 @@ OPTIONS
At most, only one of the options can be specified --skip-existing,
--truncate, or --drop. If one of them is not specified and the table
exists in the destination system, gptransfer returns an error and quits.
Not valid with the --full option.
Not valid with the --full or --partition-transfer option.
--dry-run
......@@ -369,7 +376,7 @@ OPTIONS
You cannot specify views, or system catalog tables.
Not valid with the --full option.
Not valid with the --full or --partition-transfer option.
You can specify the --dry-run option to test the command. The -v option,
displays and logs the excluded tables.
......@@ -400,7 +407,8 @@ OPTIONS
--source-map-file option, the --dest-host option, and if necessary, the
other destination system options.
The --full option cannot be specified with the -t, -d, or -f options.
The --full option cannot be specified with the -t, -d, -f, or
--partition-transfer options.
A full migration copies all database objects including, tables, indexes,
views, users, roles, functions, and resource queues for all user defined
......@@ -442,6 +450,34 @@ OPTIONS
The default is to compare the row count of tables copied to the destination
databases with the tables in the source database.
--partition-transfer
Specify this option with the -f option to copy leaf child partition tables
of partitioned tables from a source database to a destination database.
The text file specified by the -f option contains a list of fully qualified
leaf child partition table names. Each line lists one source and one
destination table name as a comma-separated pair, in the following format:
fully_qualified_source_table_name, fully_qualified_destination_table_name
If a line specifies only one table name, the source table name and the
destination table name are assumed to be the same.
Wildcard characters are not supported.
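For example (the table and host names below are illustrative only), an input file
for a leaf partition transfer might contain lines such as:

  public.sales_1_prt_2, public.dest_sales_1_prt_2
  public.employee_1_prt_boys

and could be passed to the utility together with the usual connection options, for
example: gptransfer --partition-transfer -f partition_file --source-host=smdw --dest-host=dmdw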
If the table is not a leaf child partition, the utility returns an error and
no files are transferred. The destination partitioned table must exist and
the following characteristics must be the same for the partitioned tables in the source
and destination databases:
Number of columns
Column order
Column data types
Partitioning criteria for the leaf child partition (partition type, partition
column, and partition values)
This option is not valid with the -d, --dest-database, --drop, --full, -F, -t,
-T, or --schema-only options.
-q | --quiet
......@@ -474,7 +510,7 @@ OPTIONS
If you specify tables with the -t or -f option with --schema-only,
gptransfer creates only the tables and indexes. No data is transferred.
Not valid with the --truncate option.
Not valid with the --truncate or --partition-transfer option.
Because of the overhead required to set up parallel transfers, the
--schema-only option not recommended when transferring information for a
......@@ -563,7 +599,7 @@ OPTIONS
If you specify the -d option to copy all the tables from a database, you
do not need to specify individual tables from the database.
Not valid with the --full, -d, or -f option.
Not valid with the --full, -d, -f, or --partition-transfer option.
-T <db.schema.table>
......@@ -586,7 +622,7 @@ OPTIONS
If a source table does not exist, gptransfer displays a warning.
Not valid with the --full option.
Not valid with the --full or --partition-transfer option.
You can specify the --dry-run option to test the command. The -v option
displays and logs the excluded tables.
......