@backup_and_restore_restores
Feature: Validate command line arguments
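    # Restore-side scenarios: each reads the backup timestamps recorded by an
    # earlier backup run from json ("the old timestamps are read from json"),
    # runs gpdbrestore (or gp_restore) against them, and verifies the restored
    # databases, tables, and metadata.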

    Scenario: 1 Dirty table list check on recreating a table with same data and contents
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that plan file has latest timestamp for "public.ao_table"

    Scenario: 2 Simple Incremental Backup
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-L"
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should print "Table public.ao_index_table" to stdout
        And gpdbrestore should print "Table public.ao_index_table_comp" to stdout
        And gpdbrestore should print "Table public.ao_part_table" to stdout
        And gpdbrestore should print "Table public.ao_part_table_comp" to stdout
        And gpdbrestore should print "Table public.part_external" to stdout
        And gpdbrestore should print "Table public.ao_table" to stdout
        And gpdbrestore should print "Table public.ao_table_comp" to stdout
        And gpdbrestore should print "Table public.co_index_table" to stdout
        And gpdbrestore should print "Table public.co_index_table_comp" to stdout
        And gpdbrestore should print "Table public.co_part_table" to stdout
        And gpdbrestore should print "Table public.co_part_table_comp" to stdout
        And gpdbrestore should print "Table public.co_table" to stdout
        And gpdbrestore should print "Table public.co_table_comp" to stdout
        And gpdbrestore should print "Table public.heap_index_table" to stdout
        And gpdbrestore should print "Table public.heap_part_table" to stdout
        And gpdbrestore should print "Table public.heap_table" to stdout
        And gpdbrestore should print "Table public.part_mixed_1" to stdout
        And database "bkdb2" is dropped and recreated
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that partitioned tables "ao_part_table, co_part_table, heap_part_table" in "bkdb2" have 6 partitions
        And verify that partitioned tables "ao_part_table_comp, co_part_table_comp" in "bkdb2" have 6 partitions
        And verify that partitioned tables "part_external" in "bkdb2" have 5 partitions in partition level "0"
        And verify that partitioned tables "ao_part_table, co_part_table_comp" in "bkdb2" has 0 empty partitions
        And verify that partitioned tables "co_part_table, ao_part_table_comp" in "bkdb2" has 0 empty partitions
        And verify that partitioned tables "heap_part_table" in "bkdb2" has 0 empty partitions
        And verify that there is a "heap" table "public.heap_table" in "bkdb2"
        And verify that there is a "heap" table "public.heap_index_table" in "bkdb2"
        And verify that there is partition "1" of "ao" partition table "ao_part_table" in "bkdb2" in "public"
        And verify that there is partition "1" of "co" partition table "co_part_table_comp" in "bkdb2" in "public"
        And verify that there is partition "1" of "heap" partition table "heap_part_table" in "bkdb2" in "public"
        And verify that there is partition "2" of "heap" partition table "heap_part_table" in "bkdb2" in "public"
        And verify that there is partition "3" of "heap" partition table "heap_part_table" in "bkdb2" in "public"
        And verify that there is partition "1" of mixed partition table "part_mixed_1" with storage_type "c"  in "bkdb2" in "public"
        And verify that there is partition "2" in partition level "0" of mixed partition table "part_external" with storage_type "x"  in "bkdb2" in "public"
        And verify that the data of the dirty tables under " " in "bkdb2" is validated after restore
        And verify that the distribution policy of all the tables in "bkdb2" are validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb2"

    Scenario: 3 Incremental Backup with -u option
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb3"
        Then the user runs gp_restore with the stored timestamp and subdir in "bkdb3" and backup_dir "/tmp"
        And gp_restore should return a return code of 0
        And verify that there is a "heap" table "public.heap_table" in "bkdb3"
        And verify that there is a "ao" table "public.ao_table" in "bkdb3"
        And verify that the data of the dirty tables under "/tmp" in "bkdb3" is validated after restore
        And verify that the distribution policy of all the tables in "bkdb3" are validated after restore

    Scenario: 4 gpdbrestore with -R for full dump
        Given the old timestamps are read from json
        Then the user runs gpdbrestore with "-R" option in path "/tmp/4"
        Then gpdbrestore should return a return code of 0
        And verify that the data of "2" tables in "bkdb4" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb4"

    Scenario: 5 gpdbrestore with -R for incremental dump
        Given the old timestamps are read from json
        Then the user runs gpdbrestore with "-R" option in path "/tmp/5"
        And gpdbrestore should print "-R is not supported for restore with incremental timestamp" to stdout

    Scenario: 5a Full Backup and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that there is a "heap" table "public.heap_table" in "bkdb5a" with data
        And verify that there is a "ao" table "public.ao_part_table" in "bkdb5a" with data
        And verify that the "report" file in " " dir does not contain "ERROR"
        And verify that the "status" file in " " dir does not contain "ERROR"
        And verify that there is a constraint "check_constraint_no_domain" in "bkdb5a"
        And verify that there is a constraint "check_constraint_with_domain" in "bkdb5a"
        And verify that there is a constraint "unique_constraint" in "bkdb5a"
        And verify that there is a constraint "foreign_key" in "bkdb5a"
        And verify that there is a rule "myrule" in "bkdb5a"
        And verify that there is a trigger "mytrigger" in "bkdb5a"
        And verify that there is an index "my_unique_index" in "bkdb5a"

    Scenario: 6 Metadata-only restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-m"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "heap" table "schema_heap.heap_table" in "bkdb6"
        And the table names in "bkdb6" is stored
        And tables in "bkdb6" should not contain any data

    Scenario: 7 Metadata-only restore with global objects (-G)
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-m -G"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "heap" table "schema_heap.heap_table" in "bkdb7"
        And the table names in "bkdb7" is stored
        And tables in "bkdb7" should not contain any data
        And verify that a role "foo%userWITHCAPS" exists in database "bkdb7"
        And the user runs "psql -c 'DROP ROLE "foo%userWITHCAPS"' bkdb7"

    Scenario: 8 gpdbrestore -L with Full Backup
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-L"
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should print "Table public.ao_part_table" to stdout
        And gpdbrestore should print "Table public.heap_table" to stdout

    Scenario: 11 Backup and restore with -G only
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-G only"
        Then gpdbrestore should return a return code of 0
        And verify that a role "foo_user" exists in database "bkdb11"
        And verify that there is no table "public.heap_table" in "bkdb11"
        And the user runs "psql -c 'DROP ROLE foo_user' bkdb11"

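    # The @valgrind scenarios only run the restore binaries under valgrind to
    # flag memory errors; they make no assertions beyond running the command.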
    @valgrind
    Scenario: 12 Valgrind test of gp_restore for incremental backup
        Given the old timestamps are read from json
        Then the user runs valgrind with "gp_restore" and options "-i --gp-i --gp-l=p -d bkdb12 --gp-c"

    @valgrind
    Scenario: 13 Valgrind test of gp_restore_agent for incremental backup
        Given the old timestamps are read from json
        Then the user runs valgrind with "gp_restore_agent" and options "--gp-c /bin/gunzip -s --post-data-schema-only --target-dbid 1 -d bkdb13"

    Scenario: 14 Full Backup with option -t and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that there is a "heap" table "public.heap_table" in "bkdb14" with data
        And verify that there is no table "public.ao_part_table" in "bkdb14"

    Scenario: 15 Full Backup with option -T and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then verify that the "report" file in " " dir contains "Backup Type: Full"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "ao" table "public.ao_part_table" in "bkdb15" with data
        And verify that there is no table "public.heap_table" in "bkdb15"

    Scenario: 16 Full Backup with option --exclude-table-file and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that there is a "co" table "public.co_part_table" in "bkdb16" with data
        And verify that there is no table "public.ao_part_table" in "bkdb16"
        And verify that there is no table "public.heap_table" in "bkdb16"

    Scenario: 17 Full Backup with option --table-file and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that there is a "ao" table "public.ao_part_table" in "bkdb17" with data
        And verify that there is a "heap" table "public.heap_table" in "bkdb17" with data
        And verify that there is no table "public.co_part_table" in "bkdb17"

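    # Scenarios 18-20 cover incremental plan files: gpdbrestore writes a plan
    # file recording which backup timestamp each table is restored from, and
    # scenario 19 compares it against the expected file data/bar_plan1.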
    Scenario: 18 plan file creation in directory
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        Then "plan" file should be created under " "

    Scenario: 19 Simple Plan File Test
        Given the old timestamps are read from json
        And the timestamp labels for scenario "19" are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        Then "plan" file should be created under " "
        And the plan file for scenario "19" is validated against "data/bar_plan1"

    Scenario: 20 No plan file generated
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        Then "plan" file should not be created under " "

    Scenario: 21 Schema only restore of incremental backup
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb21"
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And tables names in database "bkdb21" should be identical to stored table names in file "part_table_names"

    Scenario: 22 Simple Incremental Backup with AO/CO statistics w/ filter
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--noaostats"
        Then gpdbrestore should return a return code of 0
        And verify that there are "0" tuples in "bkdb22" for table "public.ao_index_table"
        And verify that there are "0" tuples in "bkdb22" for table "public.ao_table"
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T public.ao_table"
        Then gpdbrestore should return a return code of 0
        And verify that there are "0" tuples in "bkdb22" for table "public.ao_index_table"
        And verify that there are "8760" tuples in "bkdb22" for table "public.ao_table"

    Scenario: 23 Simple Incremental Backup with TRUNCATE
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that the data of "21" tables in "bkdb23" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb23"

    Scenario: 24 Simple Incremental Backup to test ADD COLUMN
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that the data of "23" tables in "bkdb24" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb24"

    Scenario: 25 Non compressed incremental backup
        Given the old timestamps are read from json
        Then the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that there is no table "testschema.heap_table" in "bkdb25"
        And verify that the data of "11" tables in "bkdb25" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb25"
        And verify that the plan file is created for the latest timestamp

    Scenario: 26 Rollback Insert
        Given the old timestamps are read from json
        And the timestamp labels for scenario "26" are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And the plan file for scenario "26" is validated against "data/bar_plan2"
        And verify that the data of "3" tables in "bkdb26" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb26"

    Scenario: 27 Rollback Truncate Table
        Given the old timestamps are read from json
        And the timestamp labels for scenario "27" are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And the plan file for scenario "27" is validated against "data/bar_plan2"
        And verify that the data of "3" tables in "bkdb27" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb27"

    Scenario: 28 Rollback Alter table
        Given the old timestamps are read from json
        And the timestamp labels for scenario "28" are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And the plan file for scenario "28" is validated against "data/bar_plan2"
        And verify that the data of "3" tables in "bkdb28" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb28"

    Scenario: 29 Verify gpdbrestore -s option works with full backup
        Given the old timestamps are read from json
        When the user runs "gpdbrestore -e -s bkdb29 -a"
        Then gpdbrestore should return a return code of 0
        Then verify that the data of "2" tables in "bkdb29" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb29"
        And verify that database "bkdb29-2" does not exist

    Scenario: 30 Verify gpdbrestore -s option works with incremental backup
        Given the old timestamps are read from json
        When the user runs "gpdbrestore -e -s bkdb30 -a"
        Then gpdbrestore should return a return code of 0
        And verify that the data of "3" tables in "bkdb30" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb30"
        And verify that database "bkdb30-2" does not exist

    Scenario: 31 gpdbrestore -u option with full backup
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-u /tmp"
        Then gpdbrestore should return a return code of 0
        Then verify that the data of "2" tables in "bkdb31" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb31"

    Scenario: 32 gpdbrestore -u option with incremental backup
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-u /tmp"
        Then gpdbrestore should return a return code of 0
        Then verify that the data of "3" tables in "bkdb32" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb32"

    Scenario: 33 gpcrondump -x with multiple databases
        Given the old timestamps are read from json
        When the user runs "gpdbrestore -e -s bkdb33 -a"
        Then gpdbrestore should return a return code of 0
        And the user runs "gpdbrestore -e -s bkdb33-2 -a"
        Then gpdbrestore should return a return code of 0
        Then verify that the data of "2" tables in "bkdb33" is validated after restore
        And verify that the data of "2" tables in "bkdb33-2" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb33"
        And verify that the tuple count of all appendonly tables are consistent in "bkdb33-2"

    Scenario: 34 gpdbrestore with --table-file option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--table-file /tmp/table_file_foo"
        Then gpdbrestore should return a return code of 0
        And verify that the data of "2" tables in "bkdb34" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb34"
        And verify that the restored table "public.ao_table" in database "bkdb34" is analyzed
        And verify that the restored table "public.co_table" in database "bkdb34" is analyzed
        Then the file "/tmp/table_file_foo" is removed from the system

    Scenario: 35 Incremental restore with extra full backup
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that the data of "3" tables in "bkdb35" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb35"

    Scenario: 36 gpcrondump should not track external tables
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that the data of "4" tables in "bkdb36" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb36"
        And verify that there is no "public.ext_tab" in the "dirty_list" file in " "
        And verify that there is no "public.ext_tab" in the "table_list" file in " "
        Then the file "/tmp/ext_tab" is removed from the system

    Scenario: 37 Full backup with -T option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-T public.ao_index_table"
        Then gpdbrestore should return a return code of 0
        Then verify that there is no table "public.ao_part_table" in "fullbkdb37"
        And verify that there is no table "public.heap_table" in "fullbkdb37"
        And verify that there is a "ao" table "public.ao_index_table" in "fullbkdb37" with data

    Scenario: 38 gpdbrestore with -T option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-T public.ao_index_table -a"
        Then gpdbrestore should return a return code of 0
        Then verify that there is no table "public.ao_part_table" in "bkdb38"
        And verify that there is no table "public.heap_table" in "bkdb38"
        And verify that there is a "ao" table "public.ao_index_table" in "bkdb38" with data

    Scenario: 39 Full backup and restore with -T and --truncate
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb39"
        And there is a "ao" table "public.ao_index_table" in "bkdb39" with data
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T public.ao_index_table --truncate"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "ao" table "public.ao_index_table" in "bkdb39" with data
        And verify that the restored table "public.ao_index_table" in database "bkdb39" is analyzed
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T public.ao_part_table"
        And the user runs gpdbrestore without -e with the stored timestamp and options "-T public.ao_part_table_1_prt_p1_2_prt_1 --truncate"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "ao" table "public.ao_part_table" in "bkdb39" with data

    Scenario: 40 Full backup and restore with -T and --truncate with dropped table
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb40"
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T public.heap_table --truncate"
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should print "Skipping truncate of bkdb40.public.heap_table because the relation does not exist" to stdout
        And verify that there is a "heap" table "public.heap_table" in "bkdb40" with data

    Scenario: 41 Full backup -T with truncated table
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb41"
        And there is a "ao" partition table "public.ao_part_table" in "bkdb41" with data
        When the user truncates "public.ao_part_table_1_prt_p2_2_prt_3" tables in "bkdb41"
        And the user runs gpdbrestore without -e with the stored timestamp and options "-T public.ao_part_table_1_prt_p2_2_prt_3"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "ao" table "public.ao_part_table_1_prt_p2_2_prt_3" in "bkdb41" with data
        And verify that the restored table "public.ao_part_table_1_prt_p2_2_prt_3" in database "bkdb41" is analyzed

    Scenario: 42 Full backup -T with no schema name supplied
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb42"
        When the user runs gpdbrestore -e with the stored timestamp and options "-T ao_index_table -a"
        Then gpdbrestore should return a return code of 2
        Then gpdbrestore should print "No schema name supplied" to stdout

    Scenario: 43 Full backup with gpdbrestore -T for DB with FUNCTION having DROP SQL
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb43"
        When the user runs gpdbrestore -e with the stored timestamp and options "-T public.ao_index_table -a"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "ao" table "public.ao_index_table" in "bkdb43" with data

    Scenario: 44 Incremental restore with table filter
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-T public.ao_table -T public.co_table"
        Then gpdbrestore should return a return code of 0
        And verify that exactly "2" tables in "bkdb44" have been restored

    Scenario: 45 Incremental restore with invalid table filter
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-T public.heap_table -T public.invalid -q"
        Then gpdbrestore should return a return code of 2
        And gpdbrestore should print "Tables \[\'public.invalid\'\] not found in backup" to stdout

    Scenario: 46 gpdbrestore -L with -u option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-L -u /tmp"
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should print "Table public.ao_part_table" to stdout
        And gpdbrestore should print "Table public.heap_table" to stdout

    Scenario: 47 gpdbrestore -b with -u option for Full timestamp
        Given the old timestamps are read from json
        Then the user runs gpdbrestore on dump date directory with options "-u /tmp/47"
        Then gpdbrestore should return a return code of 0
        And verify that the data of "11" tables in "bkdb47" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb47"

    Scenario: 48 gpdbrestore with -s and -u options for full backup
        Given the old timestamps are read from json
        When the user runs "gpdbrestore -e -s bkdb48 -u /tmp -a"
        Then gpdbrestore should return a return code of 0
        And verify that the data of "11" tables in "bkdb48" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb48"

    Scenario: 49 gpdbrestore with -s and -u options for incremental backup
        Given the old timestamps are read from json
        When the user runs "gpdbrestore -e -s bkdb49 -u /tmp -a"
        Then gpdbrestore should return a return code of 0
        And verify that the data of "12" tables in "bkdb49" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb49"

    Scenario: 50 gpdbrestore -b option should display the timestamps in sorted order
        Given the old timestamps are read from json
        Then the user runs gpdbrestore -e with the date directory
        And the timestamps should be printed in sorted order

    Scenario: 51 gpdbrestore -R option should display the timestamps in sorted order
        Given the old timestamps are read from json
        Then the user runs gpdbrestore with "-R" option in path "/tmp"
        And the timestamps should be printed in sorted order

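    # The @scale scenarios restore metadata first with gp_restore, then run
    # gpdbrestore with --noplan so that only the dirty tables for the stored
    # timestamp are restored; the remaining tables are expected to have no rows.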
    @scale
    Scenario: 52 Dirty File Scale Test
        Given the old timestamps are read from json
        Then database "bkdb52" is dropped and recreated
        When the user runs gp_restore with the stored timestamp and subdir for metadata only in "bkdb52"
        Then gp_restore should return a return code of 0
        When the user runs gpdbrestore without -e with the stored timestamp and options "--noplan"
        Then gpdbrestore should return a return code of 0
        And verify that tables "public.ao_table_3, public.ao_table_4, public.ao_table_5, public.ao_table_6" in "bkdb52" has no rows
        And verify that tables "public.ao_table_7, public.ao_table_8, public.ao_table_9, public.ao_table_10" in "bkdb52" has no rows
        And verify that the data of the dirty tables under " " in "bkdb52" is validated after restore

    @scale
    Scenario: 53 Dirty File Scale Test for partitions
        Given the old timestamps are read from json
        Then database "bkdb53" is dropped and recreated
        When the user runs gp_restore with the stored timestamp and subdir for metadata only in "bkdb53"
        Then gp_restore should return a return code of 0
        When the user runs gpdbrestore without -e with the stored timestamp and options "--noplan"
        Then gpdbrestore should return a return code of 0
        And verify that tables "public.ao_table_1_prt_p1_2_prt_3, public.ao_table_1_prt_p2_2_prt_1" in "bkdb53" has no rows
        And verify that tables "public.ao_table_1_prt_p2_2_prt_2, public.ao_table_1_prt_p2_2_prt_3" in "bkdb53" has no rows
        And verify that the data of the dirty tables under " " in "bkdb53" is validated after restore

    Scenario: 54 Test gpcrondump and gpdbrestore verbose option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--verbose"
        Then gpdbrestore should return a return code of 0
        And verify that the data of "2" tables in "bkdb54" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb54"

    Scenario: 55 Incremental table filter gpdbrestore with different schema for same tablenames
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-T public.ao_part_table -T testschema.ao_part_table"
        Then gpdbrestore should return a return code of 0
        And verify that there is no table "public.ao_part_table1_1_prt_p1_2_prt_3" in "bkdb55"
        And verify that there is no table "public.ao_part_table1_1_prt_p2_2_prt_3" in "bkdb55"
        And verify that there is no table "public.ao_part_table1_1_prt_p1_2_prt_2" in "bkdb55"
        And verify that there is no table "public.ao_part_table1_1_prt_p2_2_prt_2" in "bkdb55"
        And verify that there is no table "public.ao_part_table1_1_prt_p1_2_prt_1" in "bkdb55"
        And verify that there is no table "public.ao_part_table1_1_prt_p2_2_prt_1" in "bkdb55"
        And verify that there is no table "testschema.ao_part_table1_1_prt_p1_2_prt_3" in "bkdb55"
        And verify that there is no table "testschema.ao_part_table1_1_prt_p2_2_prt_3" in "bkdb55"
        And verify that there is no table "testschema.ao_part_table1_1_prt_p1_2_prt_2" in "bkdb55"
        And verify that there is no table "testschema.ao_part_table1_1_prt_p2_2_prt_2" in "bkdb55"
        And verify that there is no table "testschema.ao_part_table1_1_prt_p1_2_prt_1" in "bkdb55"
        And verify that there is no table "testschema.ao_part_table1_1_prt_p2_2_prt_1" in "bkdb55"

    Scenario: 56 Incremental table filter gpdbrestore with noplan option
        Given the old timestamps are read from json
        And database "bkdb56" is dropped and recreated
        When the user runs gp_restore with the stored timestamp and subdir for metadata only in "bkdb56"
        Then gp_restore should return a return code of 0
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T public.ao_part_table --noplan"
        Then gpdbrestore should return a return code of 0
        And verify that tables "public.ao_part_table_1_prt_p1_2_prt_3, public.ao_part_table_1_prt_p2_2_prt_3" in "bkdb56" has no rows
        And verify that tables "public.ao_part_table_1_prt_p1_2_prt_2, public.ao_part_table_1_prt_p2_2_prt_2" in "bkdb56" has no rows
        And verify that tables "public.ao_part_table_1_prt_p1_2_prt_1, public.ao_part_table_1_prt_p2_2_prt_1" in "bkdb56" has no rows
        And verify that tables "public.ao_part_table1_1_prt_p1_2_prt_3, public.ao_part_table1_1_prt_p2_2_prt_3" in "bkdb56" has no rows
        And verify that tables "public.ao_part_table1_1_prt_p1_2_prt_2, public.ao_part_table1_1_prt_p2_2_prt_2" in "bkdb56" has no rows
        And verify that tables "public.ao_part_table1_1_prt_p1_2_prt_1, public.ao_part_table1_1_prt_p2_2_prt_1" in "bkdb56" has no rows

    Scenario: 57 gpdbrestore list_backup option
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb57"
        When the user runs gpdbrestore without -e with the stored timestamp and options "--list-backup"
        Then gpdbrestore should return a return code of 0
        Then "plan" file should be created under " "
        And verify that the list of stored timestamps is printed to stdout
        Then "plan" file is removed under " "
        When the user runs gpdbrestore without -e with the stored timestamp and options "--list-backup -a"
        Then gpdbrestore should return a return code of 0
        Then "plan" file should be created under " "
        And verify that the list of stored timestamps is printed to stdout

    Scenario: 58 gpdbrestore list_backup option with -T table filter
        Given the old timestamps are read from json
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T public.heap_table --list-backup"
        Then gpdbrestore should return a return code of 2
        And gpdbrestore should print "Cannot specify -T and --list-backup together" to stdout

    Scenario: 59 gpdbrestore list_backup option with full timestamp
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb59"
        When the user runs gpdbrestore without -e with the stored timestamp and options "--list-backup"
        Then gpdbrestore should return a return code of 2
        And gpdbrestore should print "--list-backup is not supported for restore with full timestamps" to stdout

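    # Scenario 60 restores through named pipes created under
    # /tmp/custom_timestamps; gpdbrestore is expected to warn that dump-file
    # validation is skipped when named pipes are used, and the pipes are closed
    # at the end.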
    Scenario: 60 Incremental Backup and Restore with named pipes
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb60"
        And there is a "heap" table "public.heap_table" with compression "None" in "bkdb60" with data
        And there is a "ao" partition table "public.ao_part_table" with compression "None" in "bkdb60" with data
        Then table "public.ao_part_table" is assumed to be in dirty state in "bkdb60"
        When the named pipe script for the "restore" is run for the files under "/tmp/custom_timestamps"
        And all the data from "bkdb60" is saved for verification
        Then gpdbrestore should return a return code of 0
        And verify that the data of "10" tables in "bkdb60" is validated after restore
        When the named pipe script for the "restore" is run for the files under "/tmp/custom_timestamps"
        And the user runs gpdbrestore -e with the stored timestamp and options "-T public.ao_part_table -u /tmp/custom_timestamps"
        Then gpdbrestore should print "\[WARNING\]:-Skipping validation of tables in dump file due to the use of named pipes" to stdout
        And close all opened pipes

    Scenario: 61 Incremental Backup and Restore with -t filter for Full
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--prefix=foo"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "heap" table "public.heap_table" in "bkdb61" with data
        And verify that there is a "ao" table "public.ao_index_table" in "bkdb61" with data
        And verify that there is no table "public.ao_part_table" in "bkdb61"

    Scenario: 62 Incremental Backup and Restore with -T filter for Full
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--prefix=foo"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "ao" table "public.ao_index_table" in "bkdb62" with data
        And verify that there is no table "public.ao_part_table" in "bkdb62"
        And verify that there is no table "public.heap_table" in "bkdb62"

    Scenario: 63 Incremental Backup and Restore with --table-file filter for Full
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--prefix=foo"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "heap" table "public.heap_table" in "bkdb63" with data
        And verify that there is a "ao" table "public.ao_index_table" in "bkdb63" with data
        And verify that there is no table "public.ao_part_table" in "bkdb63"

    Scenario: 64 Incremental Backup and Restore with --exclude-table-file filter for Full
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--prefix=foo"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "ao" table "public.ao_index_table" in "bkdb64" with data
        And verify that there is no table "public.ao_part_table" in "bkdb64"
        And verify that there is no table "public.heap_table" in "bkdb64"

    Scenario: 65 Full Backup with option -T and non-existent table
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then verify that the "report" file in " " dir contains "Backup Type: Full"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "ao" table "public.ao_part_table" in "bkdb65" with data
        And verify that there is no table "public.heap_table" in "bkdb65"

    Scenario: 66 Negative test gpdbrestore -G with incremental timestamp
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-G"
        Then gpdbrestore should return a return code of 2
        And gpdbrestore should print "Unable to locate global file" to stdout

    Scenario: 67 Dump and Restore metadata
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb67"
        When the user runs "gpdbrestore -a -t 20140101010101 -u /tmp/custom_timestamps"
        Then gpdbrestore should return a return code of 0
        And the user runs """psql -c "ALTER TABLE heap_table DISABLE TRIGGER before_heap_ins_trig;" bkdb67"""
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/check_metadata.sql bkdb67 > /tmp/check_metadata.out"
        And verify that the contents of the files "/tmp/check_metadata.out" and "test/behave/mgmt_utils/steps/data/check_metadata.ans" are identical
        And the directory "/tmp/check_metadata.out" is removed or does not exist

    Scenario: 68 Restore -T for incremental dump should restore metadata/postdata objects for tablenames with English and multibyte (Chinese) characters
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb68"
        When the user runs gpdbrestore -e with the stored timestamp and options "--table-file test/behave/mgmt_utils/steps/data/include_tables_with_metadata_postdata"
        Then gpdbrestore should return a return code of 0
        When the user runs "psql -f test/behave/mgmt_utils/steps/data/select_multi_byte_char_tables.sql bkdb68"
        Then psql should print "2000" to stdout 4 times
        And verify that there is a "ao" table "public.ao_index_table" in "bkdb68" with data
        When the user runs "psql -f test/behave/mgmt_utils/steps/data/describe_multi_byte_char.sql bkdb68 > /tmp/describe_multi_byte_char_after"
        And the user runs "psql -c '\d public.ao_index_table' bkdb68 > /tmp/describe_ao_index_table_after"
        Then verify that the contents of the files "/tmp/68_describe_multi_byte_char_before" and "/tmp/describe_multi_byte_char_after" are identical
        And verify that the contents of the files "/tmp/68_describe_ao_index_table_before" and "/tmp/describe_ao_index_table_after" are identical
        And the file "/tmp/68_describe_multi_byte_char_before" is removed from the system
        And the file "/tmp/describe_multi_byte_char_after" is removed from the system
        And the file "/tmp/68_describe_ao_index_table_before" is removed from the system
        And the file "/tmp/describe_ao_index_table_after" is removed from the system

    Scenario: 69 Restore -T for full dump should restore metadata/postdata objects for tablenames with English and multibyte (Chinese) characters
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb69"
        When the user runs gpdbrestore -e with the stored timestamp and options "--table-file test/behave/mgmt_utils/steps/data/include_tables_with_metadata_postdata"
        Then gpdbrestore should return a return code of 0
        When the user runs "psql -f test/behave/mgmt_utils/steps/data/select_multi_byte_char_tables.sql bkdb69"
        Then psql should print "1000" to stdout 4 times
        And verify that there is a "ao" table "public.ao_index_table" in "bkdb69" with data
        When the user runs "psql -f test/behave/mgmt_utils/steps/data/describe_multi_byte_char.sql bkdb69 > /tmp/describe_multi_byte_char_after"
        And the user runs "psql -c '\d public.ao_index_table' bkdb69 > /tmp/describe_ao_index_table_after"
        Then verify that the contents of the files "/tmp/69_describe_multi_byte_char_before" and "/tmp/describe_multi_byte_char_after" are identical
        And verify that the contents of the files "/tmp/69_describe_ao_index_table_before" and "/tmp/describe_ao_index_table_after" are identical
        And the file "/tmp/69_describe_multi_byte_char_before" is removed from the system
        And the file "/tmp/describe_multi_byte_char_after" is removed from the system
        And the file "/tmp/69_describe_ao_index_table_before" is removed from the system
        And the file "/tmp/describe_ao_index_table_after" is removed from the system

    @fails_on_mac
    Scenario: 70 Restore -T for full dump should restore GRANT privileges for tablenames with English and multibyte (Chinese) characters
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb70"
        And the user runs """psql -c "CREATE ROLE test_gpadmin LOGIN ENCRYPTED PASSWORD 'changeme' SUPERUSER INHERIT CREATEDB CREATEROLE RESOURCE QUEUE pg_default;" bkdb70"""
        And the user runs """psql -c "CREATE ROLE customer LOGIN ENCRYPTED PASSWORD 'changeme' NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE RESOURCE QUEUE pg_default;" bkdb70"""
        And the user runs """psql -c "CREATE ROLE select_group NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE RESOURCE QUEUE pg_default;" bkdb70"""
        And the user runs """psql -c "CREATE ROLE test_group NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE RESOURCE QUEUE pg_default;" bkdb70"""
        And the user runs "psql -c 'CREATE ROLE foo_user' bkdb70"
        When the user runs gpdbrestore -e with the stored timestamp and options "--table-file test/behave/mgmt_utils/steps/data/include_tables_with_grant_permissions -u /tmp --noanalyze"
        Then gpdbrestore should return a return code of 0
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/select_multi_byte_char_tables.sql bkdb70"
        Then psql should print "1000" to stdout 4 times
        And verify that there is a "heap" table "customer.heap_index_table_1" in "bkdb70" with data
        And verify that there is a "heap" table "customer.heap_index_table_2" in "bkdb70" with data
        When the user runs "psql -c '\d customer.heap_index_table_1' bkdb70 > /tmp/describe_heap_index_table_1_after"
        And the user runs "psql -c '\dp customer.heap_index_table_1' bkdb70 > /tmp/privileges_heap_index_table_1_after"
        And the user runs "psql -c '\d customer.heap_index_table_2' bkdb70 > /tmp/describe_heap_index_table_2_after"
        And the user runs "psql -c '\dp customer.heap_index_table_2' bkdb70 > /tmp/privileges_heap_index_table_2_after"
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/describe_multi_byte_char.sql bkdb70 > /tmp/describe_multi_byte_char_after"
        Then verify that the contents of the files "/tmp/70_describe_heap_index_table_1_before" and "/tmp/describe_heap_index_table_1_after" are identical
        And verify that the contents of the files "/tmp/70_describe_heap_index_table_2_before" and "/tmp/describe_heap_index_table_2_after" are identical
        And verify that the contents of the files "/tmp/70_privileges_heap_index_table_1_before" and "/tmp/privileges_heap_index_table_1_after" are identical
        And verify that the contents of the files "/tmp/70_privileges_heap_index_table_2_before" and "/tmp/privileges_heap_index_table_2_after" are identical
        And verify that the contents of the files "/tmp/70_describe_multi_byte_char_before" and "/tmp/describe_multi_byte_char_after" are identical
        And the file "/tmp/70_describe_heap_index_table_1_before" is removed from the system
        And the file "/tmp/describe_heap_index_table_1_after" is removed from the system
        And the file "/tmp/70_privileges_heap_index_table_1_before" is removed from the system
        And the file "/tmp/privileges_heap_index_table_1_after" is removed from the system
        And the file "/tmp/70_describe_heap_index_table_2_before" is removed from the system
        And the file "/tmp/describe_heap_index_table_2_after" is removed from the system
        And the file "/tmp/70_privileges_heap_index_table_2_before" is removed from the system
        And the file "/tmp/privileges_heap_index_table_2_after" is removed from the system
        And the file "/tmp/70_describe_multi_byte_char_before" is removed from the system
        And the file "/tmp/describe_multi_byte_char_after" is removed from the system

    @fails_on_mac
    Scenario: 71 Restore -T for incremental dump should restore GRANT privileges for tablenames with English and multibyte (Chinese) characters
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb71"
        And the user runs """psql -c "CREATE ROLE test_gpadmin LOGIN ENCRYPTED PASSWORD 'changeme' SUPERUSER INHERIT CREATEDB CREATEROLE RESOURCE QUEUE pg_default;" bkdb71"""
        And the user runs """psql -c "CREATE ROLE customer LOGIN ENCRYPTED PASSWORD 'changeme' NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE RESOURCE QUEUE pg_default;" bkdb71"""
        And the user runs """psql -c "CREATE ROLE select_group NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE RESOURCE QUEUE pg_default;" bkdb71"""
        And the user runs """psql -c "CREATE ROLE test_group NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE RESOURCE QUEUE pg_default;" bkdb71"""
        When the user runs "psql -c 'CREATE ROLE foo_user' bkdb71"
        When the user runs gpdbrestore -e with the stored timestamp and options "--table-file test/behave/mgmt_utils/steps/data/include_tables_with_grant_permissions -u /tmp --noanalyze"
        Then gpdbrestore should return a return code of 0
        When the user runs "psql -f test/behave/mgmt_utils/steps/data/select_multi_byte_char_tables.sql bkdb71"
        Then psql should print "2000" to stdout 4 times
        And verify that there is a "heap" table "customer.heap_index_table_1" in "bkdb71" with data
        And verify that there is a "heap" table "customer.heap_index_table_2" in "bkdb71" with data
        When the user runs "psql -c '\d customer.heap_index_table_1' bkdb71 > /tmp/71_describe_heap_index_table_1_after"
        And the user runs "psql -c '\dp customer.heap_index_table_1' bkdb71 > /tmp/71_privileges_heap_index_table_1_after"
        And the user runs "psql -c '\d customer.heap_index_table_2' bkdb71 > /tmp/71_describe_heap_index_table_2_after"
        And the user runs "psql -c '\dp customer.heap_index_table_2' bkdb71 > /tmp/71_privileges_heap_index_table_2_after"
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/describe_multi_byte_char.sql bkdb71 > /tmp/71_describe_multi_byte_char_after"
        Then verify that the contents of the files "/tmp/71_describe_heap_index_table_1_before" and "/tmp/71_describe_heap_index_table_1_after" are identical
        And verify that the contents of the files "/tmp/71_describe_heap_index_table_2_before" and "/tmp/71_describe_heap_index_table_2_after" are identical
        And verify that the contents of the files "/tmp/71_privileges_heap_index_table_1_before" and "/tmp/71_privileges_heap_index_table_1_after" are identical
        And verify that the contents of the files "/tmp/71_privileges_heap_index_table_2_before" and "/tmp/71_privileges_heap_index_table_2_after" are identical
        And verify that the contents of the files "/tmp/71_describe_multi_byte_char_before" and "/tmp/71_describe_multi_byte_char_after" are identical
        And the file "/tmp/71_describe_heap_index_table_1_before" is removed from the system
        And the file "/tmp/71_describe_heap_index_table_1_after" is removed from the system
        And the file "/tmp/71_privileges_heap_index_table_1_before" is removed from the system
        And the file "/tmp/71_privileges_heap_index_table_1_after" is removed from the system
        And the file "/tmp/71_describe_heap_index_table_2_before" is removed from the system
        And the file "/tmp/71_describe_heap_index_table_2_after" is removed from the system
        And the file "/tmp/71_privileges_heap_index_table_2_before" is removed from the system
        And the file "/tmp/71_privileges_heap_index_table_2_after" is removed from the system
        And the file "/tmp/71_describe_multi_byte_char_before" is removed from the system
        And the file "/tmp/71_describe_multi_byte_char_after" is removed from the system

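    # Scenarios 72-79 use --redirect to restore a backup into a database with a
    # different name and then compare the redirected copy against the original
    # source database.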
    Scenario: 72 Redirected Restore Full Backup and Restore without -e option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--redirect=bkdb72-2"
        Then gpdbrestore should return a return code of 0
        And check that there is a "heap" table "public.heap_table" in "bkdb72-2" with same data from "bkdb72"
        And check that there is a "ao" table "public.ao_part_table" in "bkdb72-2" with same data from "bkdb72"

    Scenario: 73 Full Backup and Restore with -e option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--redirect=bkdb73-2"
        Then gpdbrestore should return a return code of 0
        And check that there is a "heap" table "public.heap_table" in "bkdb73-2" with same data from "bkdb73"
        And check that there is a "ao" table "public.ao_part_table" in "bkdb73-2" with same data from "bkdb73"

    Scenario: 74 Incremental Backup and Redirected Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--redirect=bkdb74-2"
        Then gpdbrestore should return a return code of 0
        And verify that the data of "11" tables in "bkdb74-2" is validated after restore from "bkdb74"

    Scenario: 75 Full backup and redirected restore with -T
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-T public.ao_index_table --redirect=bkdb75-2"
        Then gpdbrestore should return a return code of 0
        And check that there is a "ao" table "public.ao_index_table" in "bkdb75-2" with same data from "bkdb75"

    Scenario: 76 Full backup and redirected restore with -T and --truncate
        Given the old timestamps are read from json
        And the database "bkdb76-2" does not exist
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T public.ao_index_table --redirect=bkdb76-2 --truncate"
        Then gpdbrestore should return a return code of 2
        And gpdbrestore should print "Failure from truncating tables, FATAL:  database "bkdb76-2" does not exist" to stdout
        And there is a "ao" table "public.ao_index_table" in "bkdb76-2" with data
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T public.ao_index_table --redirect=bkdb76-2 --truncate"
        Then gpdbrestore should return a return code of 0
        And check that there is a "ao" table "public.ao_index_table" in "bkdb76-2" with same data from "bkdb76"

    Scenario: 77 Incremental redirected restore with table filter
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-T public.ao_table -T public.co_table --redirect=bkdb77-2"
        Then gpdbrestore should return a return code of 0
        And verify that exactly "2" tables in "bkdb77-2" have been restored from "bkdb77"

    Scenario: 78 Full Backup and Redirected Restore with --prefix option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--prefix=foo --redirect=bkdb78-2"
        Then gpdbrestore should return a return code of 0
        And check that there is a "heap" table "public.heap_table" in "bkdb78-2" with same data from "bkdb78"
        And check that there is a "ao" table "public.ao_part_table" in "bkdb78-2" with same data from "bkdb78"

    Scenario: 79 Full Backup and Redirected Restore with --prefix option for multiple databases
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--prefix=foo --redirect=bkdb79-3"
        Then gpdbrestore should return a return code of 0
        And check that there is a "heap" table "public.heap_table" in "bkdb79-3" with same data from "bkdb79-2"
        And check that there is a "ao" table "public.ao_part_table" in "bkdb79-3" with same data from "bkdb79"

    Scenario: 80 Full Backup and Restore with the master dump file missing
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 2
        And gpdbrestore should print "Unable to find .*. Skipping restore." to stdout

    Scenario: 81 Incremental Backup and Restore with the master dump file missing
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 2
        And gpdbrestore should print "Unable to find .*. Skipping restore." to stdout

    Scenario: 82 Uppercase Database Name Full Backup and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should not print "Issue with analyze of" to stdout
        And verify that there is a "heap" table "public.heap_table" in "82TESTING" with data
        And verify that there is a "ao" table "public.ao_part_table" in "82TESTING" with data

    Scenario: 83 Uppercase Database Name Full Backup and Restore using -s option with and without quotes
        Given the old timestamps are read from json
        When the user runs "gpdbrestore -s 83TESTING -e -a"
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should not print "Issue with analyze of" to stdout
        And verify that there is a "heap" table "public.heap_table" in "83TESTING" with data
        And verify that there is a "ao" table "public.ao_part_table" in "83TESTING" with data
        And the user runs "gpdbrestore -s "83TESTING" -e -a"
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should not print "Issue with analyze of" to stdout
        And verify that there is a "heap" table "public.heap_table" in "83TESTING" with data
        And verify that there is a "ao" table "public.ao_part_table" in "83TESTING" with data

    Scenario: 84 Uppercase Database Name Incremental Backup and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should not print "Issue with analyze of" to stdout
        And verify that the data of "11" tables in "84TESTING" is validated after restore

    Scenario: 85 Full backup and Restore should create the gp_toolkit schema with -e option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that the data of "10" tables in "bkdb85" is validated after restore
        And verify that the schema "gp_toolkit" exists in "bkdb85"

    Scenario: 86 Incremental backup and Restore should create the gp_toolkit schema with -e option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that the data of "11" tables in "bkdb86" is validated after restore
        And verify that the schema "gp_toolkit" exists in "bkdb86"

    Scenario: 87 Redirected Restore should create the gp_toolkit schema with or without -e option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--redirect=bkdb87-2"
        Then gpdbrestore should return a return code of 0
        Then verify that the data of "10" tables in "bkdb87-2" is validated after restore from "bkdb87"
        And verify that the schema "gp_toolkit" exists in "bkdb87-2"
        And the user runs gpdbrestore -e with the stored timestamp and options "--redirect=bkdb87-2"
        Then gpdbrestore should return a return code of 0
        And verify that the data of "10" tables in "bkdb87-2" is validated after restore from "bkdb87"
        And verify that the schema "gp_toolkit" exists in "bkdb87-2"

    Scenario: 88 gpdbrestore with noanalyze
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb88"
        When the user runs gpdbrestore -e with the stored timestamp and options "--noanalyze -a"
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should print "Analyze bypassed on request" to stdout
        And verify that the data of "10" tables in "bkdb88" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb88"

    Scenario: 89 gpdbrestore without noanalyze
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should print "Commencing analyze of bkdb89 database" to stdout
        And gpdbrestore should print "Analyze of bkdb89 completed without error" to stdout
        And verify that the data of "10" tables in "bkdb89" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb89"

    Scenario: 90 Writable Report/Status Directory Full Backup and Restore without --report-status-dir option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should not print "gp-r" to stdout
        And gpdbrestore should not print "status" to stdout
        And verify that there is a "heap" table "public.heap_table" in "bkdb90" with data
        And verify that there is a "ao" table "public.ao_part_table" in "bkdb90" with data
        And verify that report file is generated in master_data_directory
        And verify that status file is generated in segment_data_directory
        And there are no report files in "master_data_directory"
        And there are no status files in "segment_data_directory"

    Scenario: 91 Writable Report/Status Directory Full Backup and Restore with --report-status-dir option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--report-status-dir=/tmp"
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should print "gp-r" to stdout
        And gpdbrestore should print "status" to stdout
        And verify that there is a "heap" table "public.heap_table" in "bkdb91" with data
        And verify that there is a "ao" table "public.ao_part_table" in "bkdb91" with data
        And verify that report file is generated in /tmp
        And verify that status file is generated in /tmp
        And there are no report files in "/tmp"
        And there are no status files in "/tmp"
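        # Illustrative only: the run above is roughly
        #   gpdbrestore -a -t <stored_timestamp> --report-status-dir=/tmp
        # and, unlike scenario 90, the "gp-r" and "status" fragments are expected in the output
        # because the report and status files are written to /tmp instead of the data directories.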

    Scenario: 92 Writable Report/Status Directory Full Backup and Restore with -u option
        Given the old timestamps are read from json
        When the user runs "gpdbrestore -u /tmp/92_custom_timestamps -e -a -t 20150101010101"
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should print "gp-r" to stdout
        And gpdbrestore should print "status" to stdout
        And verify that there is a "heap" table "public.heap_table" in "bkdb92" with data
        And verify that there is a "ao" table "public.ao_part_table" in "bkdb92" with data
        And verify that report file is generated in /tmp/92_custom_timestamps/db_dumps/20150101
        And verify that status file is generated in /tmp/92_custom_timestamps/db_dumps/20150101

    Scenario: 93 Writable Report/Status Directory Full Backup and Restore with -u option pointing to a directory without write access
        Given the old timestamps are read from json
        When the user runs "gpdbrestore -u /tmp/custom_timestamps -e -a -t 20160101010101"
        Then gpdbrestore should return a return code of 0
        And gpdbrestore should not print "gp-r" to stdout
        And gpdbrestore should not print "--status=" to stdout
        And verify that there is a "heap" table "public.heap_table" in "bkdb93" with data
        And verify that there is a "ao" table "public.ao_part_table" in "bkdb93" with data
        And verify that report file is generated in master_data_directory
        And verify that status file is generated in segment_data_directory
        And there are no report files in "master_data_directory"
        And there are no status files in "segment_data_directory"
        And the user runs command "chmod -R 777 /tmp/custom_timestamps/db_dumps"

    Scenario: 94 Filtered Full Backup with Partition Table
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-T public.ao_part_table"
        Then gpdbrestore should return a return code of 0
        And verify that there is no table "public.heap_table" in "bkdb94"
        And verify that there is a "ao" table "public.ao_part_table" in "bkdb94" with data
        And verify that the data of "9" tables in "bkdb94" is validated after restore

    Scenario: 95 Filtered Incremental Backup with Partition Table
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-T public.ao_part_table"
        Then gpdbrestore should return a return code of 0
        And verify that there is no table "public.heap_table" in "bkdb95"
        And verify that there is a "ao" table "public.ao_part_table" in "bkdb95" with data
        And verify that the data of "9" tables in "bkdb95" is validated after restore

    Scenario: 96 gpdbrestore runs ANALYZE on restored table only
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb96"
        And there is a "heap" table "public.heap_table" in "bkdb96" with data
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T public.ao_index_table"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "ao" table "public.ao_index_table" in "bkdb96" with data
        And verify that the restored table "public.ao_index_table" in database "bkdb96" is analyzed
        And verify that the table "public.heap_table" in database "bkdb96" is not analyzed

    Scenario: 97 Full Backup with multiple -S options and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that there is no table "schema_heap.heap_table" in "bkdb97"
        And verify that there is no table "testschema.heap_table" in "bkdb97"
        And verify that there is a "ao" table "schema_ao.ao_part_table" in "bkdb97" with data

    Scenario: 98 Full Backup with option -S and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that there is no table "schema_heap.heap_table" in "bkdb98"
        And verify that there is a "ao" table "schema_ao.ao_part_table" in "bkdb98" with data

    Scenario: 99 Full Backup with option -s and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that there is a "heap" table "schema_heap.heap_table" in "bkdb99" with data
        And verify that there is no table "schema_ao.ao_part_table" in "bkdb99"

    Scenario: 100 Full Backup with option --exclude-schema-file and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that there is a "heap" table "schema_heap.heap_table" in "bkdb100" with data
        And verify that there is no table "schema_ao.ao_part_table" in "bkdb100"
        And verify that there is no table "testschema.heap_table" in "bkdb100"

    Scenario: 101 Full Backup with option --schema-file and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that there is a "heap" table "schema_heap.heap_table" in "bkdb101" with data
        And verify that there is a "ao" table "schema_ao.ao_part_table" in "bkdb101" with data
        And verify that there is no table "testschema.heap_table" in "bkdb101"

    Scenario: 106 Full Backup and Restore with option --change-schema
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb106"
        And schema "schema_ao, schema_new" exists in "bkdb106"
        And there is a "ao" partition table "schema_ao.ao_part_table" in "bkdb106" with data
        When the user runs gpdbrestore without -e with the stored timestamp and options "--change-schema=schema_new --table-file 106_include_file"
        Then gpdbrestore should return a return code of 0
        And verify that there is a table "schema_new.heap_table" of "heap" type in "bkdb106" with same data as table "schema_heap.heap_table"
        And verify that there is a table "schema_new.ao_part_table" of "ao" type in "bkdb106" with same data as table "schema_ao.ao_part_table"
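        # Illustrative only: "106_include_file" is assumed (not shown here) to list the tables to
        # restore, one schema-qualified name per line, e.g.
        #   schema_heap.heap_table
        #   schema_ao.ao_part_table
        # --change-schema=schema_new then restores those tables into schema_new, which is what the
        # two verification steps above check.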

    Scenario: 107 Incremental Backup and Restore with option --change-schema
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb107"
        And schema "schema_ao, schema_new" exists in "bkdb107"
        And there is a "ao" partition table "schema_ao.ao_part_table" in "bkdb107" with data
        When the user runs gpdbrestore without -e with the stored timestamp and options "--change-schema=schema_new --table-file 107_include_file"
        Then gpdbrestore should return a return code of 0
        And verify that there is a table "schema_new.heap_table" of "heap" type in "bkdb107" with same data as table "schema_heap.heap_table"
        And verify that there is a table "schema_new.ao_part_table" of "ao" type in "bkdb107" with same data as table "schema_ao.ao_part_table"

    Scenario: 108 Full backup and restore with statistics
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--restore-stats"
        Then gpdbrestore should return a return code of 0
        And verify that the restored table "public.heap_table" in database "bkdb108" is analyzed
        And verify that the restored table "public.ao_part_table" in database "bkdb108" is analyzed
        And database "bkdb108" is dropped and recreated
        And there is a "heap" table "public.heap_table" in "bkdb108" with data
        And there is a "ao" partition table "public.ao_part_table" in "bkdb108" with data
        When the user runs gpdbrestore -e with the stored timestamp and options "--restore-stats only"
        Then gpdbrestore should return a return code of 2
        When the user runs gpdbrestore without -e with the stored timestamp and options "--restore-stats only"
        Then gpdbrestore should return a return code of 0
        And verify that the restored table "public.heap_table" in database "bkdb108" is analyzed
        And verify that the restored table "public.ao_part_table" in database "bkdb108" is analyzed
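        # Illustrative only: "--restore-stats only" restores just the dumped statistics into an
        # already populated database, which appears to be why combining it with -e (drop and
        # recreate) is rejected with return code 2 above, while the same run without -e succeeds,
        # roughly:
        #   gpdbrestore -a -t <stored_timestamp> --restore-stats only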

    Scenario: 109 Backup and restore with statistics and table filters
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-T public.heap_index_table --noanalyze"
        Then gpdbrestore should return a return code of 0
        When the user runs gpdbrestore without -e with the stored timestamp and options "--restore-stats -T public.heap_table"
        Then gpdbrestore should return a return code of 0
        And verify that the table "public.heap_index_table" in database "bkdb109" is not analyzed
        And verify that the restored table "public.heap_table" in database "bkdb109" is analyzed

    Scenario: 110 Restoring a nonexistent table should fail with a clear error message
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-T public.heap_table2 -q"
        Then gpdbrestore should return a return code of 2
        Then gpdbrestore should print "Tables \[\'public.heap_table2\'\]" to stdout
        Then gpdbrestore should not print "Issue with 'ANALYZE' of restored table 'public.heap_table2' in 'bkdb110' database" to stdout

    Scenario: 111 Full Backup with option --schema-file with prefix option and Restore
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--prefix=foo"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "heap" table "schema_heap.heap_table" in "bkdb111" with data
        And verify that there is a "ao" table "schema_ao.ao_part_table" in "bkdb111" with data
        And verify that there is no table "testschema.heap_table" in "bkdb111"

    Scenario: 112 Simple Full Backup with AO/CO statistics w/ filter
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--noaostats"
        Then gpdbrestore should return a return code of 0
        And verify that there are "0" tuples in "bkdb112" for table "public.ao_index_table"
        And verify that there are "0" tuples in "bkdb112" for table "public.ao_table"
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T public.ao_table"
        Then gpdbrestore should return a return code of 0
        And verify that there are "0" tuples in "bkdb112" for table "public.ao_index_table"
        And verify that there are "4380" tuples in "bkdb112" for table "public.ao_table"

    Scenario: 113 Simple Full Backup with AO/CO statistics w/ filter schema
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--noaostats"
        Then gpdbrestore should return a return code of 0
        And verify that there are "0" tuples in "bkdb113" for table "public.ao_index_table"
        And verify that there are "0" tuples in "bkdb113" for table "public.ao_table"
        And verify that there are "0" tuples in "bkdb113" for table "schema_ao.ao_index_table"
        And verify that there are "0" tuples in "bkdb113" for table "schema_ao.ao_part_table"
        And verify that there are "0" tuples in "bkdb113" for table "testschema.ao_foo"
        When the user runs gpdbrestore without -e with the stored timestamp and options "-S schema_ao -S testschema"
        Then gpdbrestore should return a return code of 0
        And verify that there are "0" tuples in "bkdb113" for table "public.ao_index_table"
        And verify that there are "0" tuples in "bkdb113" for table "public.ao_table"
        And verify that there are "730" tuples in "bkdb113" for table "testschema.ao_foo_1_prt_p1_2_prt_1"
        And verify that there are "730" tuples in "bkdb113" for table "testschema.ao_foo_1_prt_p1_2_prt_2"
        And verify that there are "730" tuples in "bkdb113" for table "testschema.ao_foo_1_prt_p1_2_prt_3"
        And verify that there are "730" tuples in "bkdb113" for table "testschema.ao_foo_1_prt_p2_2_prt_1"
        And verify that there are "730" tuples in "bkdb113" for table "testschema.ao_foo_1_prt_p2_2_prt_2"
        And verify that there are "730" tuples in "bkdb113" for table "testschema.ao_foo_1_prt_p2_2_prt_3"
        And verify that there are "4380" tuples in "bkdb113" for table "schema_ao.ao_index_table"
        And verify that there are "0" tuples in "bkdb113" for table "schema_ao.ao_part_table"
        When the user runs gpdbrestore without -e with the stored timestamp and options "-S schema_ao -S testschema --truncate"
        Then gpdbrestore should return a return code of 0
        And verify that there are "0" tuples in "bkdb113" for table "public.ao_index_table"
        And verify that there are "0" tuples in "bkdb113" for table "public.ao_table"
        And verify that there are "365" tuples in "bkdb113" for table "testschema.ao_foo_1_prt_p1_2_prt_1"
        And verify that there are "365" tuples in "bkdb113" for table "testschema.ao_foo_1_prt_p1_2_prt_2"
        And verify that there are "365" tuples in "bkdb113" for table "testschema.ao_foo_1_prt_p1_2_prt_3"
        And verify that there are "365" tuples in "bkdb113" for table "testschema.ao_foo_1_prt_p2_2_prt_1"
        And verify that there are "365" tuples in "bkdb113" for table "testschema.ao_foo_1_prt_p2_2_prt_2"
        And verify that there are "365" tuples in "bkdb113" for table "testschema.ao_foo_1_prt_p2_2_prt_3"
        And verify that there are "2190" tuples in "bkdb113" for table "schema_ao.ao_index_table"
        And verify that there are "0" tuples in "bkdb113" for table "schema_ao.ao_part_table"

    Scenario: 114 Restore with --redirect option should not rely on existence of dumped database
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--redirect=bkdb114"
        Then gpdbrestore should return a return code of 0
        And the database "bkdb114" does not exist

    Scenario: 115 Database owner can be assigned to a role containing special characters
        Given the old timestamps are read from json
        And the backup test is initialized with database "bkdb115"
        When the user runs "psql -c 'CREATE ROLE "Foo%user"' -d bkdb115"
        And the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that the owner of "bkdb115" is "Foo%user"
        And database "bkdb115" is dropped and recreated
        When the user runs "psql -c 'DROP ROLE "Foo%user"' -d bkdb115"
        Then psql should return a return code of 0

    @ignore_pg_temp
    Scenario: 116 pg_temp should be ignored by gpcrondump --table-file option and -t option when given
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that there are "2190" tuples in "bkdb116" for table "public.foo4"

    Scenario: 117 Schema level restore with gpdbrestore -S option for views, sequences, and functions
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-S s1"
        Then gpdbrestore should return a return code of 0
        And verify that sequence "id_seq" exists in schema "s1" and database "schema_level_test_db"
        And verify that view "v1" exists in schema "s1" and database "schema_level_test_db"
        And verify that function "increment(integer)" exists in schema "s1" and database "schema_level_test_db"
        And verify that table "apples" exists in schema "s1" and database "schema_level_test_db"
        And the user runs command "dropdb schema_level_test_db"

    Scenario: 118 Backup a database with database-specific configuration
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify that "search_path=daisy" appears in the datconfig for database "bkdb118"
        And verify that "appendonly=true" appears in the datconfig for database "bkdb118"
        And verify that "blocksize=65536" appears in the datconfig for database "bkdb118"

    Scenario: 120 Simple full backup and restore with special characters
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And the user runs command "psql -f test/behave/mgmt_utils/steps/data/special_chars/select_from_special_table.sql "$SP_CHAR_DB" > /tmp/special_table_data.out"
        And the user runs command "dropdb "$SP_CHAR_DB""

    Scenario: 121 gpcrondump with -T option where table name, schema name and database name contain special characters
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify with backedup file "121_ao" that there is a "ao" table "$SP_CHAR_SCHEMA.$SP_CHAR_AO" in "$SP_CHAR_DB" with data
        And verify with backedup file "121_heap" that there is a "heap" table "$SP_CHAR_SCHEMA.$SP_CHAR_HEAP" in "$SP_CHAR_DB" with data
        And verify that there is no table "$SP_CHAR_CO" in "$SP_CHAR_DB"
        And the user runs command "dropdb "$SP_CHAR_DB""

    Scenario: 122 gpcrondump with --exclude-table-file option where table name, schema name and database name contain special characters
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/special_chars/create_special_database.sql template1"
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/special_chars/create_special_schema.sql template1"
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0
        And verify with backedup file "122_ao" that there is a "ao" table "$SP_CHAR_SCHEMA.$SP_CHAR_AO" in "$SP_CHAR_DB" with data
        And verify with backedup file "122_heap" that there is a "heap" table "$SP_CHAR_SCHEMA.$SP_CHAR_HEAP" in "$SP_CHAR_DB" with data
        And verify that there is no table "$SP_CHAR_CO" in "$SP_CHAR_DB"
        And the user runs command "dropdb "$SP_CHAR_DB""

    Scenario: 123 gpcrondump with --table-file option where table name, schema name and database name contain special characters
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/special_chars/create_special_database.sql template1"
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/special_chars/create_special_schema.sql template1"
        When the user runs gpdbrestore without -e with the stored timestamp and options " "
        Then gpdbrestore should return a return code of 0
        And verify with backedup file "123_ao" that there is a "ao" table "$SP_CHAR_SCHEMA.$SP_CHAR_AO" in "$SP_CHAR_DB" with data
        And verify with backedup file "123_heap" that there is a "heap" table "$SP_CHAR_SCHEMA.$SP_CHAR_HEAP" in "$SP_CHAR_DB" with data
        And verify that there is no table "$SP_CHAR_CO" in "$SP_CHAR_DB"
        And the user runs command "dropdb "$SP_CHAR_DB""

    Scenario: 124 gpcrondump with -t option where table name, schema name and database name contain special characters
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/special_chars/create_special_database.sql template1"
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/special_chars/create_special_schema.sql template1"
        When the user runs gpdbrestore without -e with the stored timestamp and options " "
        Then gpdbrestore should return a return code of 0
        And verify with backedup file "124_ao" that there is a "ao" table "$SP_CHAR_SCHEMA.$SP_CHAR_AO" in "$SP_CHAR_DB" with data
        And verify that there is no table "$SP_CHAR_CO" in "$SP_CHAR_DB"
        And the user runs command "dropdb "$SP_CHAR_DB""

    Scenario: 125 gpcrondump with --schema-file option when schema name and database name contain special characters
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        When the user runs gpdbrestore -e with the stored timestamp
        And the user runs command "psql -f test/behave/mgmt_utils/steps/data/special_chars/select_from_special_table.sql "$SP_CHAR_DB" > /tmp/special_table_data.out"
        And verify that the contents of the files "/tmp/special_table_data.out" and "/tmp/125_special_table_data.ans" are identical
        When the user runs command "dropdb "$SP_CHAR_DB""

    Scenario: 126 gpcrondump with -s option when schema name and database name contain special characters
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        When the user runs gpdbrestore -e with the stored timestamp
        And the user runs command "psql -f test/behave/mgmt_utils/steps/data/special_chars/select_from_special_table.sql "$SP_CHAR_DB" > /tmp/special_table_data.out"
        And verify that the contents of the files "/tmp/special_table_data.out" and "/tmp/126_special_table_data.ans" are identical
        When the user runs command "dropdb "$SP_CHAR_DB""

    Scenario: 127 gpcrondump with --exclude-schema-file option when schema name and database name contain special characters
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        When the user runs gpdbrestore -e with the stored timestamp
        Then verify that there is no table "$SP_CHAR_AO" in "$SP_CHAR_DB"
        And verify that there is no table "$SP_CHAR_CO" in "$SP_CHAR_DB"
        And verify that there is no table "$SP_CHAR_HEAP" in "$SP_CHAR_DB"
        When the user runs command "dropdb "$SP_CHAR_DB""

    Scenario: 128 gpcrondump with -S option when schema name and database name contain special characters
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        When the user runs gpdbrestore -e with the stored timestamp
        Then verify that there is no table "$SP_CHAR_AO" in "$SP_CHAR_DB"
        And verify that there is no table "$SP_CHAR_CO" in "$SP_CHAR_DB"
        And verify that there is no table "$SP_CHAR_HEAP" in "$SP_CHAR_DB"
        When the user runs command "dropdb "$SP_CHAR_DB""

    Scenario: 130 gpdbrestore with -T, --truncate, and --change-schema options when table name, schema name and database name contain special characters
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/special_chars/create_special_database.sql template1"
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/special_chars/create_special_schema.sql template1"
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/special_chars/add_schema.sql template1"
        And the user runs "psql -f test/behave/mgmt_utils/steps/data/special_chars/create_special_table.sql template1"
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T "$SP_CHAR_SCHEMA"."$SP_CHAR_AO" --change-schema="$SP_CHAR_SCHEMA" -S "$SP_CHAR_SCHEMA2""
        Then gpdbrestore should return a return code of 2
        And gpdbrestore should print "-S option cannot be used with --change-schema option" to stdout
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T "$SP_CHAR_SCHEMA"."$SP_CHAR_AO" --change-schema="$SP_CHAR_SCHEMA2" --truncate"
        Then gpdbrestore should return a return code of 0
        And the user runs command "psql -f  psql -c """select * from \"$SP_CHAR_SCHEMA2\".\"$SP_CHAR_AO\" order by 1""" -d "$SP_CHAR_DB"  > /tmp/table_data.out"
        And verify that the contents of the files "/tmp/130_table_data.ans" and "/tmp/table_data.out" are identical
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T "$SP_CHAR_SCHEMA"."$SP_CHAR_AO" --truncate"
        Then gpdbrestore should return a return code of 0
        And the user runs command "psql -f  psql -c """select * from \"$SP_CHAR_SCHEMA\".\"$SP_CHAR_AO\" order by 1""" -d "$SP_CHAR_DB"  > /tmp/table_data.out"
        Then verify that the contents of the files "/tmp/130_table_data.ans" and "/tmp/table_data.out" are identical
        When the user runs command "dropdb "$SP_CHAR_DB""

    Scenario: 131 gpcrondump with --incremental option when table name, schema name and database name contain special characters
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        When the user runs gpdbrestore -e with the stored timestamp
        And the user runs command "psql -f test/behave/mgmt_utils/steps/data/special_chars/select_from_special_table.sql "$SP_CHAR_DB" > /tmp/special_table_data.out"
        Then verify that the contents of the files "/tmp/special_table_data.out" and "/tmp/131_special_table_data.ans" are identical
        When the user runs command "dropdb "$SP_CHAR_DB""

    Scenario: 132 gpdbrestore with --redirect option with special db name, where table name, schema name and database name all contain special characters
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        When the user runs "psql -f test/behave/mgmt_utils/steps/data/special_chars/drop_special_database.sql template1"
        When the user runs gpdbrestore without -e with the stored timestamp and options "--redirect "$SP_CHAR_DB""
        And the user runs command "psql -f test/behave/mgmt_utils/steps/data/special_chars/select_from_special_table.sql "$SP_CHAR_DB" > /tmp/special_table_data.out"
        Then verify that the contents of the files "/tmp/special_table_data.out" and "/tmp/132_special_table_data.ans" are identical
        When the user runs command "dropdb "$SP_CHAR_DB""

    Scenario: 133 gpdbrestore schema level restore with -S option, then -S with --truncate, when schema name contains special chars
        Given the old timestamps are read from json
        And the backup test is initialized for special characters
        When the user runs gpdbrestore -e with the stored timestamp and options "-S "$SP_CHAR_SCHEMA""
        And the user runs command "psql -f test/behave/mgmt_utils/steps/data/special_chars/select_from_special_table.sql "$SP_CHAR_DB" > /tmp/special_table_data.out"
        Then verify that the contents of the files "/tmp/special_table_data.out" and "/tmp/133_special_table_data.ans" are identical
        When the user runs gpdbrestore without -e with the stored timestamp and options "-S "$SP_CHAR_SCHEMA" --truncate"
        Then gpdbrestore should return a return code of 0
        And the user runs command "psql -f test/behave/mgmt_utils/steps/data/special_chars/select_from_special_table.sql "$SP_CHAR_DB" > /tmp/special_table_data.out"
        Then verify that the contents of the files "/tmp/special_table_data.out" and "/tmp/133_special_table_data.ans" are identical

    @skip_for_gpdb_43
    Scenario: 136 Backup and restore CAST, with associated function in restored schema, base_file_name=dump_func_name
        Given the old timestamps are read from json
        # No filter
        When the user runs gpdbrestore -e with the stored timestamp
        And gpdbrestore should return a return code of 0
        Then verify that a cast exists in "bkdb136" in schema "public"
        # Table filter
        And the user runs gpdbrestore -e with the stored timestamp and options "-T public.heap_table"
        And gpdbrestore should return a return code of 0
        Then verify that a cast exists in "bkdb136" in schema "public"
        # Schema filter
        And the user runs gpdbrestore -e with the stored timestamp and options "-S public"
        And gpdbrestore should return a return code of 0
        Then verify that a cast exists in "bkdb136" in schema "public"
        # Change schema filter
        And database "bkdb136" is dropped and recreated
        And schema "newschema" exists in "bkdb136"
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T public.heap_table --change-schema newschema"
        And gpdbrestore should return a return code of 0
        Then verify that a cast exists in "bkdb136" in schema "newschema"

    @skip_for_gpdb_43
    Scenario: 137 Backup and restore CAST, with associated function in non-restored schema
        Given the old timestamps are read from json
        # Table filter
        When the user runs gpdbrestore -e with the stored timestamp and options "-T testschema.heap_table"
        Then gpdbrestore should return a return code of 0
        Then verify that a cast does not exist in "bkdb137" in schema "testschema"
        # Schema filter
        And the user runs gpdbrestore -e with the stored timestamp and options "-S testschema"
        And gpdbrestore should return a return code of 0
        Then verify that a cast does not exist in "bkdb137" in schema "testschema"
        # Change schema filter
        And database "bkdb137" is dropped and recreated
        Given schema "newschema" exists in "bkdb137"
        When the user runs gpdbrestore without -e with the stored timestamp and options "-T testschema.heap_table --change-schema newschema"
        And gpdbrestore should return a return code of 0
        Then verify that a cast does not exist in "bkdb137" in schema "newschema"

    Scenario: 138 Incremental backup and restore with restore to primary and mirror after failover
        Given the old timestamps are read from json
        When the user runs "gpdbrestore -e -s bkdb138 -a"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "ao" table "public.ao_table" in "bkdb138"
        And verify that there is a "heap" table "public.heap_table" in "bkdb138"
        And verify that the tuple count of all appendonly tables are consistent in "bkdb138"

        And user kills a primary postmaster process
        And user can start transactions
        When the user runs "gpdbrestore -e -s bkdb138-2 -a"
        Then gpdbrestore should return a return code of 0
        And verify that there is a "ao" table "public.ao_table" in "bkdb138-2"
        And verify that there is a "heap" table "public.heap_table" in "bkdb138-2"
        And verify that the tuple count of all appendonly tables are consistent in "bkdb138-2"

        When the user runs "gprecoverseg -a"
        Then gprecoverseg should return a return code of 0
        And all the segments are running
        And the segments are synchronized
        When the user runs "gprecoverseg -ra"
        Then gprecoverseg should return a return code of 0

    Scenario: 141 Backup with all GUC (system settings) set to defaults will succeed
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp
        Then gpdbrestore should return a return code of 0

        # verify that gucs are set to previous defaults
        And verify that "appendonly=true" appears in the datconfig for database "bkdb141"
        And verify that "orientation=row" appears in the datconfig for database "bkdb141"
        And verify that "blocksize=65536" appears in the datconfig for database "bkdb141"
        And verify that "checksum=false" appears in the datconfig for database "bkdb141"
        And verify that "compresslevel=4" appears in the datconfig for database "bkdb141"
        And verify that "compresstype=none" appears in the datconfig for database "bkdb141"

        # verify that the default_guc table has the default gucs
        When execute following sql in db "bkdb141" and store result in the context
            """
            select relstorage, reloptions, compresstype, columnstore, compresslevel, checksum
            from pg_class c, pg_appendonly a
            where c.oid = a.relid and c.relname = 'default_guc'
            """
        Then validate that following rows are in the stored rows
          | relstorage | reloptions                                       | compresstype | columnstore | compresslevel | checksum |
          | a          | {appendonly=true,blocksize=65536,checksum=false} |              | f           | 0             | f        |

        # verify that the role_guc_table has gucs from the role
        When execute following sql in db "bkdb141" and store result in the context
            """
            select relstorage, reloptions, compresstype, columnstore, compresslevel, checksum
            from pg_class c, pg_appendonly a
            where c.oid = a.relid and c.relname = 'role_guc_table'
            """
        Then validate that following rows are in the stored rows
          | relstorage | reloptions                                             | compresstype | columnstore | compresslevel | checksum |
          | c          | {appendonly=true,compresstype=zlib,orientation=column} | zlib         | t           | 1             | t        |

        # verify that the session_guc_table has gucs from the session
        When execute following sql in db "bkdb141" and store result in the context
            """
            select relstorage, reloptions, compresstype, columnstore, compresslevel, checksum
            from pg_class c, pg_appendonly a
            where c.oid = a.relid and c.relname = 'session_guc_table'
            """
        Then validate that following rows are in the stored rows
          | relstorage | reloptions                           | compresstype | columnstore | compresslevel | checksum |
          | c          | {appendonly=true,orientation=column} |              | t           | 0             | t        |
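        # Taken together, the three checks above confirm that tables pick up storage options from
        # the level at which they were created: the database-level defaults (verified against
        # datconfig at the top of this scenario), role-level settings, and session-level settings.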


    Scenario: 142 gpcrondump -u option with include table filtering
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-u /tmp"
        And gpdbrestore should return a return code of 0
        Then verify that the data of "2" tables in "bkdb142" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb142"

    Scenario: 143 gpcrondump -u option with exclude table filtering
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-u /tmp"
        And gpdbrestore should return a return code of 0
        Then verify that the data of "1" tables in "bkdb143" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb143"

    Scenario: 144 gpdbrestore -u option with include table filtering
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "-u /tmp -T public.ao_table"
        And gpdbrestore should return a return code of 0
        Then verify that the data of "1" tables in "bkdb144" is validated after restore
        And verify that the tuple count of all appendonly tables are consistent in "bkdb144"

    Scenario: 145 gpcrondump with -u and --prefix option
        Given the old timestamps are read from json
        When the user runs gpdbrestore -e with the stored timestamp and options "--prefix=foo --redirect=bkdb145-2 -u /tmp"
        Then gpdbrestore should return a return code of 0
        And there should be dump files under "/tmp" with prefix "foo"
        And check that there is a "heap" table "public.heap_table" in "bkdb145-2" with same data from "bkdb145"
        And check that there is a "ao" table "public.ao_part_table" in "bkdb145-2" with same data from "bkdb145"