Unverified commit 1e68270b, authored by Zhenghua Lyu, committed by GitHub

Correct and speed up some isolation2 test cases.

* Enable GDD for the "concurrent update table with varying length type" test.

The test case "test concurrent update table with varying length type"
in isolation2/concurrent_update was introduced by commit 8ae681e5. It
is meant to exercise EvalPlanQual's logic, so GDD has to be enabled.

The mistake happened because, when commit 8ae681e5 went in, GDD was
enabled by default. Later, commit 29e7f102 made GDD disabled by default
but did not handle this test case correctly.

This commit fixes this by moving the case into the environment with GDD enabled.
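
A rough sketch of the kind of case involved (this is not the exact test file;
the table name here is made up): with GDD on, the first UPDATE only takes
RowExclusiveLock, so the second session blocks on the row rather than on the
whole table, and then goes through EvalPlanQual once the first transaction
commits.

1: create table t_varlena_update (a int, b int, c text);
1: insert into t_varlena_update values (1, 1, 'test');
1: begin;
2: begin;
1: update t_varlena_update set b = b + 1, c = 'a longer varying length value' where a = 1;
-- with GDD enabled this waits on the row, not on a table-level lock
2&: update t_varlena_update set b = b + 10 where a = 1;
1: end;
-- session 2 rechecks its qual through EvalPlanQual against the updated tuple
2<:
2: end;
1: select * from t_varlena_update;
1: drop table t_varlena_update;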

* Make the concurrent_update* tests concise.

Previously, concurrent_update_epq and concurrent_update_distkey
had some cases for AO/CO tables, and some cases running without GDD.
That does not make much sense because:
  * for AO/CO tables, the UPDATE always holds ExclusiveLock on the
    table, no matter whether GDD is enabled or disabled
  * without GDD, the UPDATE always holds ExclusiveLock on the table

I believe that for these two trivial cases, just checking the lockmode
on the table should be enough, and the isolation2/lockmodes test case
already covers this (a rough sketch of such a check is shown below).
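
A rough sketch of such a lockmode check (the actual isolation2/lockmodes test
uses its own helpers and expected output; the table name below is made up):

1: create table tab_ao_lockcheck (c1 int, c2 int) with (appendonly=true);
1: insert into tab_ao_lockcheck values (1, 1);
1: begin;
1: update tab_ao_lockcheck set c2 = c2 + 1;
-- expect ExclusiveLock for an AO/CO table (and for any table when GDD is off);
-- a heap table with GDD enabled would show RowExclusiveLock instead
2: select mode, granted from pg_locks where relation = 'tab_ao_lockcheck'::regclass;
1: abort;
1: drop table tab_ao_lockcheck;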

This commit removes such cases from concurrent_update_epq and
concurrent_update_distkey. After that, all the remaining cases need
GDD to be enabled, so the two scripts can be merged, and this commit
merges them into the single test script concurrent_update.

After this, we can move the only remaining test, concurrent_update,
into the gdd suite, saving the time of two extra cluster restarts.

* Restart the whole cluster for GDD since QEs will use the GUC.

The previous commit a4b2fea3 used a trick that restarts only the
master node's postmaster instead of the whole Greenplum cluster,
which saves some time in the pipeline tests. The trick works under
the assumption that gp_enable_global_deadlock_detector is only
needed on the master. But actually, QEs also have code that checks
this GUC: ExecDelete and XactLockTableWait. So we had better restart
the whole cluster so that the GUC is also changed on the QEs.
A quick way to check this is sketched below.
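
A quick way to check this, using isolation2's utility-mode sessions (the 0U
prefix connects directly to the primary segment of content 0; this is only an
illustration, not part of the test): after a master-only restart, the master
and a segment can report different values for the GUC.

1: show gp_enable_global_deadlock_detector;
0U: show gp_enable_global_deadlock_detector;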

I found this issue while trying to use the trick from commit
a4b2fea3 for the isolation2 test case `concurrent_update`. That
case was introduced by commit 39fbfe96, which added some code that
runs on QEs and needs the GUC gp_enable_global_deadlock_detector.
Parent commit fcc3f377
-- If we enable the GDD, the lock may be downgraded to
-- RowExclusiveLock. When we UPDATE the distribution keys,
-- a SplitUpdate node will be added to the Plan, so an UPDATE
-- operator may be split into DELETE and INSERT.
-- If we UPDATE the distribution keys concurrently, the
-- DELETE operator will not execute EvalPlanQual and the
-- INSERT operator can not be *blocked*, so it will
-- generate more tuples in the tables.
-- We raise an error when the GDD is enabled and the
-- distribution keys are updated.
-- create heap table
0: show gp_enable_global_deadlock_detector;
gp_enable_global_deadlock_detector
------------------------------------
off
(1 row)
0: create table tab_update_hashcol (c1 int, c2 int) distributed by(c1);
CREATE
0: insert into tab_update_hashcol values(1,1);
INSERT 1
0: select * from tab_update_hashcol;
c1 | c2
----+----
1 | 1
(1 row)
-- test for heap table
1: begin;
BEGIN
2: begin;
BEGIN
1: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
UPDATE 1
2&: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1; <waiting ...>
1: end;
END
2<: <... completed>
UPDATE 0
2: end;
END
0: select * from tab_update_hashcol;
c1 | c2
----+----
2 | 1
(1 row)
0: drop table tab_update_hashcol;
DROP
-- create AO table
0: create table tab_update_hashcol (c1 int, c2 int) with(appendonly=true) distributed by(c1);
CREATE
0: insert into tab_update_hashcol values(1,1);
INSERT 1
0: select * from tab_update_hashcol;
c1 | c2
----+----
1 | 1
(1 row)
-- test for AO table
1: begin;
BEGIN
2: begin;
BEGIN
1: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
UPDATE 1
2&: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1; <waiting ...>
1: end;
END
2<: <... completed>
UPDATE 0
2: end;
END
0: select * from tab_update_hashcol;
c1 | c2
----+----
2 | 1
(1 row)
0: drop table tab_update_hashcol;
DROP
1q: ... <quitting>
2q: ... <quitting>
0q: ... <quitting>
-- enable gdd
-- start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v on;
! gpstop -rai;
-- end_ignore
-- create heap table
0: show gp_enable_global_deadlock_detector;
gp_enable_global_deadlock_detector
------------------------------------
on
(1 row)
0: create table tab_update_hashcol (c1 int, c2 int) distributed by(c1);
CREATE
0: insert into tab_update_hashcol values(1,1);
INSERT 1
0: select * from tab_update_hashcol;
c1 | c2
----+----
1 | 1
(1 row)
-- test for heap table
1: begin;
BEGIN
2: begin;
BEGIN
1: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
UPDATE 1
2&: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1; <waiting ...>
1: end;
END
2<: <... completed>
ERROR: concurrent updates distribution keys on the same row is not allowed (seg2 127.0.0.1:25434 pid=33685)
2: end;
END
0: select * from tab_update_hashcol;
c1 | c2
----+----
2 | 1
(1 row)
0: drop table tab_update_hashcol;
DROP
-- create AO table
0: create table tab_update_hashcol (c1 int, c2 int) with(appendonly=true) distributed by(c1);
CREATE
0: insert into tab_update_hashcol values(1,1);
INSERT 1
0: select * from tab_update_hashcol;
c1 | c2
----+----
1 | 1
(1 row)
-- test for AO table
1: begin;
BEGIN
2: begin;
BEGIN
1: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
UPDATE 1
2&: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1; <waiting ...>
1: end;
END
2<: <... completed>
UPDATE 0
2: end;
END
0: select * from tab_update_hashcol;
c1 | c2
----+----
2 | 1
(1 row)
0: drop table tab_update_hashcol;
DROP
1q: ... <quitting>
2q: ... <quitting>
0q: ... <quitting>
-- disable gdd
-- start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v off;
! gpstop -rai;
-- end_ignore
-- If we enable the GDD, the lock may be downgraded to
-- RowExclusiveLock, so UPDATE/DELETE can be executed
-- concurrently, and that may trigger the EvalPlanQual function
-- to recheck the qualifications.
-- If the subPlan has a Motion node, we can not execute
-- EvalPlanQual correctly, so we raise an error when
-- GDD is enabled and EvalPlanQual is triggered.
-- create heap table
0: show gp_enable_global_deadlock_detector;
gp_enable_global_deadlock_detector
------------------------------------
off
(1 row)
0: create table tab_update_epq1 (c1 int, c2 int) distributed randomly;
CREATE
0: create table tab_update_epq2 (c1 int, c2 int) distributed randomly;
CREATE
0: insert into tab_update_epq1 values(1,1);
INSERT 1
0: insert into tab_update_epq2 values(1,1);
INSERT 1
0: select * from tab_update_epq1;
c1 | c2
----+----
1 | 1
(1 row)
0: select * from tab_update_epq2;
c1 | c2
----+----
1 | 1
(1 row)
1: set optimizer = off;
SET
2: set optimizer = off;
SET
-- test for heap table
1: begin;
BEGIN
2: begin;
BEGIN
1: update tab_update_epq1 set c1 = c1 + 1 where c2 = 1;
UPDATE 1
2&: update tab_update_epq1 set c1 = tab_update_epq1.c1 + 1 from tab_update_epq2 where tab_update_epq1.c2 = tab_update_epq2.c2; <waiting ...>
1: end;
END
2<: <... completed>
UPDATE 1
2: end;
END
0: select * from tab_update_epq1;
c1 | c2
----+----
3 | 1
(1 row)
0: drop table tab_update_epq1;
DROP
0: drop table tab_update_epq2;
DROP
-- create AO table
0: create table tab_update_epq1 (c1 int, c2 int) with(appendonly=true) distributed randomly;
CREATE
0: create table tab_update_epq2 (c1 int, c2 int) with(appendonly=true) distributed randomly;
CREATE
0: insert into tab_update_epq1 values(1,1);
INSERT 1
0: insert into tab_update_epq2 values(1,1);
INSERT 1
0: select * from tab_update_epq1;
c1 | c2
----+----
1 | 1
(1 row)
0: select * from tab_update_epq2;
c1 | c2
----+----
1 | 1
(1 row)
-- test for AO table
1: begin;
BEGIN
2: begin;
BEGIN
1: update tab_update_epq1 set c1 = c1 + 1 where c2 = 1;
UPDATE 1
2&: update tab_update_epq1 set c1 = tab_update_epq1.c1 + 1 from tab_update_epq2 where tab_update_epq1.c2 = tab_update_epq2.c2; <waiting ...>
1: end;
END
2<: <... completed>
UPDATE 1
2: end;
END
0: select * from tab_update_epq1;
c1 | c2
----+----
3 | 1
(1 row)
0: drop table tab_update_epq1;
DROP
0: drop table tab_update_epq2;
DROP
1q: ... <quitting>
2q: ... <quitting>
0q: ... <quitting>
-- enable gdd
-- start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v on;
! gpstop -rai;
-- end_ignore
-- create heap table
0: show gp_enable_global_deadlock_detector;
gp_enable_global_deadlock_detector
------------------------------------
on
(1 row)
0: create table tab_update_epq1 (c1 int, c2 int) distributed randomly;
CREATE
0: create table tab_update_epq2 (c1 int, c2 int) distributed randomly;
CREATE
0: insert into tab_update_epq1 values(1,1);
INSERT 1
0: insert into tab_update_epq2 values(1,1);
INSERT 1
0: select * from tab_update_epq1;
c1 | c2
----+----
1 | 1
(1 row)
0: select * from tab_update_epq2;
c1 | c2
----+----
1 | 1
(1 row)
1: set optimizer = off;
SET
2: set optimizer = off;
SET
-- test for heap table
1: begin;
BEGIN
2: begin;
BEGIN
1: update tab_update_epq1 set c1 = c1 + 1 where c2 = 1;
UPDATE 1
2&: update tab_update_epq1 set c1 = tab_update_epq1.c1 + 1 from tab_update_epq2 where tab_update_epq1.c2 = tab_update_epq2.c2; <waiting ...>
1: end;
END
2<: <... completed>
ERROR: EvalPlanQual can not handle subPlan with Motion node (seg1 127.0.0.1:25433 pid=34552)
2: end;
END
0: select * from tab_update_epq1;
c1 | c2
----+----
2 | 1
(1 row)
0: drop table tab_update_epq1;
DROP
0: drop table tab_update_epq2;
DROP
-- create AO table
0: create table tab_update_epq1 (c1 int, c2 int) with(appendonly=true) distributed randomly;
CREATE
0: create table tab_update_epq2 (c1 int, c2 int) with(appendonly=true) distributed randomly;
CREATE
0: insert into tab_update_epq1 values(1,1);
INSERT 1
0: insert into tab_update_epq2 values(1,1);
INSERT 1
0: select * from tab_update_epq1;
c1 | c2
----+----
1 | 1
(1 row)
0: select * from tab_update_epq2;
c1 | c2
----+----
1 | 1
(1 row)
-- test for AO table
1: begin;
BEGIN
2: begin;
BEGIN
1: update tab_update_epq1 set c1 = c1 + 1 where c2 = 1;
UPDATE 1
2&: update tab_update_epq1 set c1 = tab_update_epq1.c1 + 1 from tab_update_epq2 where tab_update_epq1.c2 = tab_update_epq2.c2; <waiting ...>
1: end;
END
2<: <... completed>
UPDATE 1
2: end;
END
0: select * from tab_update_epq1;
c1 | c2
----+----
3 | 1
(1 row)
0: drop table tab_update_epq1;
DROP
0: drop table tab_update_epq2;
DROP
1q: ... <quitting>
2q: ... <quitting>
0q: ... <quitting>
-- disable gdd
-- start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v off;
! gpstop -rai;
-- end_ignore
@@ -28,48 +28,7 @@ UPDATE 1
DROP TABLE t_concurrent_update;
DROP
--start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v on;
20200305:08:04:41:016395 gpconfig:09c5497cf854:gpadmin-[INFO]:-completed successfully with parameters '-c gp_enable_global_deadlock_detector -v on'
! gpstop -rai;
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Starting gpstop with args: -rai
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Gathering information and validating the environment...
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Obtaining Segment details from master...
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 7.0.0-alpha.0+dev.5622.g0cc5452d2bc build dev'
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='immediate'
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Master segment instance directory=/home/gpadmin/workspace/gpdb5/gpAux/gpdemo/datadirs/qddir/demoDataDir-1
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Terminating processes for segment /home/gpadmin/workspace/gpdb5/gpAux/gpdemo/datadirs/qddir/demoDataDir-1
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Stopping master standby host 09c5497cf854 mode=immediate
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Successfully shutdown standby process on 09c5497cf854
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Targeting dbid [2, 5, 3, 6, 4, 7] for shutdown
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Commencing parallel primary segment instance shutdown, please wait...
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-0.00% of jobs completed
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-100.00% of jobs completed
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Commencing parallel mirror segment instance shutdown, please wait...
20200305:08:04:41:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-0.00% of jobs completed
20200305:08:04:42:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-100.00% of jobs completed
20200305:08:04:42:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-----------------------------------------------------
20200305:08:04:42:016472 gpstop:09c5497cf854:gpadmin-[INFO]:- Segments stopped successfully = 6
20200305:08:04:42:016472 gpstop:09c5497cf854:gpadmin-[INFO]:- Segments with errors during stop = 0
20200305:08:04:42:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-----------------------------------------------------
20200305:08:04:42:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Successfully shutdown 6 of 6 segment instances
20200305:08:04:42:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
20200305:08:04:42:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Cleaning up leftover shared memory
20200305:08:04:42:016472 gpstop:09c5497cf854:gpadmin-[INFO]:-Restarting System...
--end_ignore
-- Test that the concurrent update transaction order on the segment is reflected on the master
-- enable gdd
1: SHOW gp_enable_global_deadlock_detector;
gp_enable_global_deadlock_detector
------------------------------------
on
(1 row)
1: CREATE TABLE t_concurrent_update(a int, b int);
CREATE
1: INSERT INTO t_concurrent_update VALUES(1,1);
@@ -180,36 +139,103 @@ DROP
5q: ... <quitting>
6q: ... <quitting>
--start_ignore
! gpconfig -r gp_enable_global_deadlock_detector;
20200305:08:04:46:016977 gpconfig:09c5497cf854:gpadmin-[INFO]:-completed successfully with parameters '-r gp_enable_global_deadlock_detector'
! gpstop -rai;
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Starting gpstop with args: -rai
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Gathering information and validating the environment...
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Obtaining Segment details from master...
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 7.0.0-alpha.0+dev.5622.g0cc5452d2bc build dev'
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='immediate'
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Master segment instance directory=/home/gpadmin/workspace/gpdb5/gpAux/gpdemo/datadirs/qddir/demoDataDir-1
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Terminating processes for segment /home/gpadmin/workspace/gpdb5/gpAux/gpdemo/datadirs/qddir/demoDataDir-1
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Stopping master standby host 09c5497cf854 mode=immediate
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Successfully shutdown standby process on 09c5497cf854
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Targeting dbid [2, 5, 3, 6, 4, 7] for shutdown
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Commencing parallel primary segment instance shutdown, please wait...
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-0.00% of jobs completed
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-100.00% of jobs completed
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Commencing parallel mirror segment instance shutdown, please wait...
20200305:08:04:47:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-0.00% of jobs completed
20200305:08:04:48:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-100.00% of jobs completed
20200305:08:04:48:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-----------------------------------------------------
20200305:08:04:48:017053 gpstop:09c5497cf854:gpadmin-[INFO]:- Segments stopped successfully = 6
20200305:08:04:48:017053 gpstop:09c5497cf854:gpadmin-[INFO]:- Segments with errors during stop = 0
20200305:08:04:48:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-----------------------------------------------------
20200305:08:04:48:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Successfully shutdown 6 of 6 segment instances
20200305:08:04:48:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
20200305:08:04:48:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Cleaning up leftover shared memory
20200305:08:04:48:017053 gpstop:09c5497cf854:gpadmin-[INFO]:-Restarting System...
--end_ignore
-- Test update distkey
-- If we enable the GDD, the lock may be downgraded to
-- RowExclusiveLock. When we UPDATE the distribution keys,
-- a SplitUpdate node will be added to the Plan, so an UPDATE
-- operator may be split into DELETE and INSERT.
-- If we UPDATE the distribution keys concurrently, the
-- DELETE operator will not execute EvalPlanQual and the
-- INSERT operator can not be *blocked*, so it will
-- generate more tuples in the tables.
-- We raise an error when the GDD is enabled and the
-- distribution keys are updated.
0: create table tab_update_hashcol (c1 int, c2 int) distributed by(c1);
CREATE
0: insert into tab_update_hashcol values(1,1);
INSERT 1
0: select * from tab_update_hashcol;
c1 | c2
----+----
1 | 1
(1 row)
1: begin;
BEGIN
2: begin;
BEGIN
1: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
UPDATE 1
2&: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1; <waiting ...>
1: end;
END
2<: <... completed>
ERROR: concurrent updates distribution keys on the same row is not allowed (seg1 127.0.1.1:7003 pid=108408)
2: end;
END
0: select * from tab_update_hashcol;
c1 | c2
----+----
2 | 1
(1 row)
0: drop table tab_update_hashcol;
DROP
-- Test EvalPlanQual
-- If we enable the GDD, the lock may be downgraded to
-- RowExclusiveLock, so UPDATE/DELETE can be executed
-- concurrently, and that may trigger the EvalPlanQual function
-- to recheck the qualifications.
-- If the subPlan has a Motion node, we can not execute
-- EvalPlanQual correctly, so we raise an error when
-- GDD is enabled and EvalPlanQual is triggered.
0: create table tab_update_epq1 (c1 int, c2 int) distributed randomly;
CREATE
0: create table tab_update_epq2 (c1 int, c2 int) distributed randomly;
CREATE
0: insert into tab_update_epq1 values(1,1);
INSERT 1
0: insert into tab_update_epq2 values(1,1);
INSERT 1
0: select * from tab_update_epq1;
c1 | c2
----+----
1 | 1
(1 row)
0: select * from tab_update_epq2;
c1 | c2
----+----
1 | 1
(1 row)
1: set optimizer = off;
SET
2: set optimizer = off;
SET
1: begin;
BEGIN
2: begin;
BEGIN
1: update tab_update_epq1 set c1 = c1 + 1 where c2 = 1;
UPDATE 1
2&: update tab_update_epq1 set c1 = tab_update_epq1.c1 + 1 from tab_update_epq2 where tab_update_epq1.c2 = tab_update_epq2.c2; <waiting ...>
1: end;
END
2<: <... completed>
ERROR: EvalPlanQual can not handle subPlan with Motion node (seg0 127.0.1.1:7002 pid=108407)
2: end;
END
0: select * from tab_update_epq1;
c1 | c2
----+----
2 | 1
(1 row)
0: drop table tab_update_epq1;
DROP
0: drop table tab_update_epq2;
DROP
0q: ... <quitting>
include: helpers/server_helpers.sql;
CREATE
-- disable GDD
ALTER SYSTEM RESET gp_enable_global_deadlock_detector;
ALTER
ALTER SYSTEM RESET gp_global_deadlock_detector_period;
ALTER
-- start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v off;
20200424:15:51:08:052278 gpconfig:zlv:gpadmin-[INFO]:-completed successfully with parameters '-c gp_enable_global_deadlock_detector -v off'
! gpstop -rai;
20200424:15:51:08:052977 gpstop:zlv:gpadmin-[INFO]:-Starting gpstop with args: -rai
20200424:15:51:08:052977 gpstop:zlv:gpadmin-[INFO]:-Gathering information and validating the environment...
20200424:15:51:08:052977 gpstop:zlv:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20200424:15:51:08:052977 gpstop:zlv:gpadmin-[INFO]:-Obtaining Segment details from master...
20200424:15:51:08:052977 gpstop:zlv:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 7.0.0-alpha.0+dev.6778.g000faed10b build dev'
20200424:15:51:08:052977 gpstop:zlv:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='immediate'
20200424:15:51:08:052977 gpstop:zlv:gpadmin-[INFO]:-Master segment instance directory=/home/gpadmin/workspace/gpdb/gpAux/gpdemo/datadirs/qddir/demoDataDir-1
20200424:15:51:09:052977 gpstop:zlv:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20200424:15:51:09:052977 gpstop:zlv:gpadmin-[INFO]:-Terminating processes for segment /home/gpadmin/workspace/gpdb/gpAux/gpdemo/datadirs/qddir/demoDataDir-1
20200424:15:51:09:052977 gpstop:zlv:gpadmin-[INFO]:-Stopping master standby host zlv mode=immediate
20200424:15:51:10:052977 gpstop:zlv:gpadmin-[INFO]:-Successfully shutdown standby process on zlv
20200424:15:51:10:052977 gpstop:zlv:gpadmin-[INFO]:-Targeting dbid [2, 5, 3, 6, 4, 7] for shutdown
20200424:15:51:10:052977 gpstop:zlv:gpadmin-[INFO]:-Commencing parallel primary segment instance shutdown, please wait...
20200424:15:51:10:052977 gpstop:zlv:gpadmin-[INFO]:-0.00% of jobs completed
20200424:15:51:11:052977 gpstop:zlv:gpadmin-[INFO]:-100.00% of jobs completed
20200424:15:51:11:052977 gpstop:zlv:gpadmin-[INFO]:-Commencing parallel mirror segment instance shutdown, please wait...
20200424:15:51:11:052977 gpstop:zlv:gpadmin-[INFO]:-0.00% of jobs completed
20200424:15:51:12:052977 gpstop:zlv:gpadmin-[INFO]:-100.00% of jobs completed
20200424:15:51:12:052977 gpstop:zlv:gpadmin-[INFO]:-----------------------------------------------------
20200424:15:51:12:052977 gpstop:zlv:gpadmin-[INFO]:- Segments stopped successfully = 6
20200424:15:51:12:052977 gpstop:zlv:gpadmin-[INFO]:- Segments with errors during stop = 0
20200424:15:51:12:052977 gpstop:zlv:gpadmin-[INFO]:-----------------------------------------------------
20200424:15:51:12:052977 gpstop:zlv:gpadmin-[INFO]:-Successfully shutdown 6 of 6 segment instances
20200424:15:51:12:052977 gpstop:zlv:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
20200424:15:51:12:052977 gpstop:zlv:gpadmin-[INFO]:-Cleaning up leftover shared memory
20200424:15:51:13:052977 gpstop:zlv:gpadmin-[INFO]:-Restarting System...
-- end_ignore
-- Use a utility session on seg 0 to restart the master. This avoids the
-- situation where the session issuing the restart does not disappear
-- by itself.
1U:SELECT pg_ctl(dir, 'restart') from datadir;
pg_ctl
--------
OK
(1 row)
-- Start a new session on the master to make sure it has fully completed
-- recovery and is up and running again.
1: SHOW gp_enable_global_deadlock_detector;
gp_enable_global_deadlock_detector
------------------------------------
off
(1 row)
1: SHOW gp_global_deadlock_detector_period;
gp_global_deadlock_detector_period
------------------------------------
2min
(1 row)
@@ -3,7 +3,7 @@
-- different node with the local deadlock detector. To make the local
-- deadlock testcases stable we reset the gdd period to 2min so it should
-- not be triggered during the local deadlock tests.
ALTER SYSTEM RESET gp_global_deadlock_detector_period;
ALTER SYSTEM SET gp_global_deadlock_detector_period to '2min';
ALTER
SELECT pg_reload_conf();
pg_reload_conf
@@ -66,8 +66,9 @@ UPDATE 1
FAILED: Execution failed
20q: ... <quitting>
10<: <... completed>
ERROR: deadlock detected (seg1 127.0.1.1:25433 pid=29851)
DETAIL: Process 29851 waits for ShareLock on transaction 1009; blocked by process 29968.
Process 29968 waits for ShareLock on transaction 1008; blocked by process 29851.
ERROR: deadlock detected (seg1 127.0.1.1:7003 pid=52248)
DETAIL: Process 52248 waits for ShareLock on transaction 632; blocked by process 52265.
Process 52265 waits for ShareLock on transaction 631; blocked by process 52248.
HINT: See server log for query details.
CONTEXT: while updating tuple (0,1) in relation "t03"
10q: ... <quitting>
include: helpers/server_helpers.sql;
CREATE
-- t0r is the reference table to provide the data distribution info.
DROP TABLE IF EXISTS t0p;
DROP
@@ -62,25 +59,44 @@ SELECT segid(2,10) is not null;
t
(1 row)
-- table to just store the master's data directory path on segment.
CREATE TABLE datadir(a int, dir text);
CREATE
INSERT INTO datadir select 1,datadir from gp_segment_configuration where role='p' and content=-1;
INSERT 1
--enable GDD
-- start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v on;
20200424:15:49:54:048653 gpconfig:zlv:gpadmin-[INFO]:-completed successfully with parameters '-c gp_enable_global_deadlock_detector -v on'
ALTER SYSTEM SET gp_enable_global_deadlock_detector TO on;
ALTER
ALTER SYSTEM SET gp_global_deadlock_detector_period TO 5;
ALTER
! gpconfig -c gp_global_deadlock_detector_period -v 5;
20200424:15:49:56:049353 gpconfig:zlv:gpadmin-[INFO]:-completed successfully with parameters '-c gp_global_deadlock_detector_period -v 5'
! gpstop -rai;
20200424:15:49:56:050055 gpstop:zlv:gpadmin-[INFO]:-Starting gpstop with args: -rai
20200424:15:49:56:050055 gpstop:zlv:gpadmin-[INFO]:-Gathering information and validating the environment...
20200424:15:49:56:050055 gpstop:zlv:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20200424:15:49:56:050055 gpstop:zlv:gpadmin-[INFO]:-Obtaining Segment details from master...
20200424:15:49:56:050055 gpstop:zlv:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 7.0.0-alpha.0+dev.6778.g000faed10b build dev'
20200424:15:49:56:050055 gpstop:zlv:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='immediate'
20200424:15:49:56:050055 gpstop:zlv:gpadmin-[INFO]:-Master segment instance directory=/home/gpadmin/workspace/gpdb/gpAux/gpdemo/datadirs/qddir/demoDataDir-1
20200424:15:49:56:050055 gpstop:zlv:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20200424:15:49:56:050055 gpstop:zlv:gpadmin-[INFO]:-Terminating processes for segment /home/gpadmin/workspace/gpdb/gpAux/gpdemo/datadirs/qddir/demoDataDir-1
20200424:15:49:56:050055 gpstop:zlv:gpadmin-[INFO]:-Stopping master standby host zlv mode=immediate
20200424:15:49:58:050055 gpstop:zlv:gpadmin-[INFO]:-Successfully shutdown standby process on zlv
20200424:15:49:58:050055 gpstop:zlv:gpadmin-[INFO]:-Targeting dbid [2, 5, 3, 6, 4, 7] for shutdown
20200424:15:49:58:050055 gpstop:zlv:gpadmin-[INFO]:-Commencing parallel primary segment instance shutdown, please wait...
20200424:15:49:58:050055 gpstop:zlv:gpadmin-[INFO]:-0.00% of jobs completed
20200424:15:49:59:050055 gpstop:zlv:gpadmin-[INFO]:-100.00% of jobs completed
20200424:15:49:59:050055 gpstop:zlv:gpadmin-[INFO]:-Commencing parallel mirror segment instance shutdown, please wait...
20200424:15:49:59:050055 gpstop:zlv:gpadmin-[INFO]:-0.00% of jobs completed
20200424:15:50:00:050055 gpstop:zlv:gpadmin-[INFO]:-100.00% of jobs completed
20200424:15:50:00:050055 gpstop:zlv:gpadmin-[INFO]:-----------------------------------------------------
20200424:15:50:00:050055 gpstop:zlv:gpadmin-[INFO]:- Segments stopped successfully = 6
20200424:15:50:00:050055 gpstop:zlv:gpadmin-[INFO]:- Segments with errors during stop = 0
20200424:15:50:00:050055 gpstop:zlv:gpadmin-[INFO]:-----------------------------------------------------
20200424:15:50:00:050055 gpstop:zlv:gpadmin-[INFO]:-Successfully shutdown 6 of 6 segment instances
20200424:15:50:00:050055 gpstop:zlv:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
20200424:15:50:00:050055 gpstop:zlv:gpadmin-[INFO]:-Cleaning up leftover shared memory
20200424:15:50:01:050055 gpstop:zlv:gpadmin-[INFO]:-Restarting System...
-- end_ignore
-- Use a utility session on seg 0 to restart the master. This avoids the
-- situation where the session issuing the restart does not disappear
-- by itself.
1U:SELECT pg_ctl(dir, 'restart') from datadir;
pg_ctl
--------
OK
(1 row)
-- Start a new session on the master to make sure it has fully completed
-- recovery and is up and running again.
1: SHOW gp_enable_global_deadlock_detector;
@@ -24,6 +24,7 @@ test: packcore
# Tests on global deadlock detector
test: gdd/prepare
test: gdd/concurrent_update
test: gdd/dist-deadlock-01 gdd/dist-deadlock-04 gdd/dist-deadlock-05 gdd/dist-deadlock-06 gdd/dist-deadlock-07 gdd/dist-deadlock-102 gdd/dist-deadlock-103 gdd/dist-deadlock-104 gdd/dist-deadlock-106 gdd/dist-deadlock-upsert gdd/non-lock-105
# until we can improve below flaky case please keep it disabled
ignore: gdd/non-lock-107
@@ -217,11 +218,6 @@ test: reindex/vacuum_while_reindex_ao_bitmap reindex/vacuum_while_reindex_heap_b
# Cancel test
test: cancel_plpython
# Test concurrent UPDATE
test: concurrent_update
test: concurrent_update_distkeys
test: concurrent_update_epq
# Tests for getting numsegments in utility mode
test: upgrade_numsegments
# Memory accounting tests
-- If we enable the GDD, the lock may be downgraded to
-- RowExclusiveLock. When we UPDATE the distribution keys,
-- a SplitUpdate node will be added to the Plan, so an UPDATE
-- operator may be split into DELETE and INSERT.
-- If we UPDATE the distribution keys concurrently, the
-- DELETE operator will not execute EvalPlanQual and the
-- INSERT operator can not be *blocked*, so it will
-- generate more tuples in the tables.
-- We raise an error when the GDD is enabled and the
-- distribution keys are updated.
-- create heap table
0: show gp_enable_global_deadlock_detector;
0: create table tab_update_hashcol (c1 int, c2 int) distributed by(c1);
0: insert into tab_update_hashcol values(1,1);
0: select * from tab_update_hashcol;
-- test for heap table
1: begin;
2: begin;
1: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
2&: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
1: end;
2<:
2: end;
0: select * from tab_update_hashcol;
0: drop table tab_update_hashcol;
-- create AO table
0: create table tab_update_hashcol (c1 int, c2 int) with(appendonly=true) distributed by(c1);
0: insert into tab_update_hashcol values(1,1);
0: select * from tab_update_hashcol;
-- test for AO table
1: begin;
2: begin;
1: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
2&: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
1: end;
2<:
2: end;
0: select * from tab_update_hashcol;
0: drop table tab_update_hashcol;
1q:
2q:
0q:
-- enable gdd
-- start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v on;
! gpstop -rai;
-- end_ignore
-- create heap table
0: show gp_enable_global_deadlock_detector;
0: create table tab_update_hashcol (c1 int, c2 int) distributed by(c1);
0: insert into tab_update_hashcol values(1,1);
0: select * from tab_update_hashcol;
-- test for heap table
1: begin;
2: begin;
1: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
2&: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
1: end;
2<:
2: end;
0: select * from tab_update_hashcol;
0: drop table tab_update_hashcol;
-- create AO table
0: create table tab_update_hashcol (c1 int, c2 int) with(appendonly=true) distributed by(c1);
0: insert into tab_update_hashcol values(1,1);
0: select * from tab_update_hashcol;
-- test for AO table
1: begin;
2: begin;
1: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
2&: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
1: end;
2<:
2: end;
0: select * from tab_update_hashcol;
0: drop table tab_update_hashcol;
1q:
2q:
0q:
-- disable gdd
-- start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v off;
! gpstop -rai;
-- end_ignore
-- If we enable the GDD, the lock may be downgraded to
-- RowExclusiveLock, so UPDATE/DELETE can be executed
-- concurrently, and that may trigger the EvalPlanQual function
-- to recheck the qualifications.
-- If the subPlan has a Motion node, we can not execute
-- EvalPlanQual correctly, so we raise an error when
-- GDD is enabled and EvalPlanQual is triggered.
-- create heap table
0: show gp_enable_global_deadlock_detector;
0: create table tab_update_epq1 (c1 int, c2 int) distributed randomly;
0: create table tab_update_epq2 (c1 int, c2 int) distributed randomly;
0: insert into tab_update_epq1 values(1,1);
0: insert into tab_update_epq2 values(1,1);
0: select * from tab_update_epq1;
0: select * from tab_update_epq2;
1: set optimizer = off;
2: set optimizer = off;
-- test for heap table
1: begin;
2: begin;
1: update tab_update_epq1 set c1 = c1 + 1 where c2 = 1;
2&: update tab_update_epq1 set c1 = tab_update_epq1.c1 + 1 from tab_update_epq2 where tab_update_epq1.c2 = tab_update_epq2.c2;
1: end;
2<:
2: end;
0: select * from tab_update_epq1;
0: drop table tab_update_epq1;
0: drop table tab_update_epq2;
-- create AO table
0: create table tab_update_epq1 (c1 int, c2 int) with(appendonly=true) distributed randomly;
0: create table tab_update_epq2 (c1 int, c2 int) with(appendonly=true) distributed randomly;
0: insert into tab_update_epq1 values(1,1);
0: insert into tab_update_epq2 values(1,1);
0: select * from tab_update_epq1;
0: select * from tab_update_epq2;
-- test for AO table
1: begin;
2: begin;
1: update tab_update_epq1 set c1 = c1 + 1 where c2 = 1;
2&: update tab_update_epq1 set c1 = tab_update_epq1.c1 + 1 from tab_update_epq2 where tab_update_epq1.c2 = tab_update_epq2.c2;
1: end;
2<:
2: end;
0: select * from tab_update_epq1;
0: drop table tab_update_epq1;
0: drop table tab_update_epq2;
1q:
2q:
0q:
-- enable gdd
-- start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v on;
! gpstop -rai;
-- end_ignore
-- create heap table
0: show gp_enable_global_deadlock_detector;
0: create table tab_update_epq1 (c1 int, c2 int) distributed randomly;
0: create table tab_update_epq2 (c1 int, c2 int) distributed randomly;
0: insert into tab_update_epq1 values(1,1);
0: insert into tab_update_epq2 values(1,1);
0: select * from tab_update_epq1;
0: select * from tab_update_epq2;
1: set optimizer = off;
2: set optimizer = off;
-- test for heap table
1: begin;
2: begin;
1: update tab_update_epq1 set c1 = c1 + 1 where c2 = 1;
2&: update tab_update_epq1 set c1 = tab_update_epq1.c1 + 1 from tab_update_epq2 where tab_update_epq1.c2 = tab_update_epq2.c2;
1: end;
2<:
2: end;
0: select * from tab_update_epq1;
0: drop table tab_update_epq1;
0: drop table tab_update_epq2;
-- create AO table
0: create table tab_update_epq1 (c1 int, c2 int) with(appendonly=true) distributed randomly;
0: create table tab_update_epq2 (c1 int, c2 int) with(appendonly=true) distributed randomly;
0: insert into tab_update_epq1 values(1,1);
0: insert into tab_update_epq2 values(1,1);
0: select * from tab_update_epq1;
0: select * from tab_update_epq2;
-- test for AO table
1: begin;
2: begin;
1: update tab_update_epq1 set c1 = c1 + 1 where c2 = 1;
2&: update tab_update_epq1 set c1 = tab_update_epq1.c1 + 1 from tab_update_epq2 where tab_update_epq1.c2 = tab_update_epq2.c2;
1: end;
2<:
2: end;
0: select * from tab_update_epq1;
0: drop table tab_update_epq1;
0: drop table tab_update_epq2;
1q:
2q:
0q:
-- disable gdd
-- start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v off;
! gpstop -rai;
-- end_ignore
@@ -15,15 +15,7 @@ INSERT INTO t_concurrent_update VALUES(1,1,'test');
DROP TABLE t_concurrent_update;
--start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v on;
! gpstop -rai;
--end_ignore
-- Test that the concurrent update transaction order on the segment is reflected on the master
-- enable gdd
1: SHOW gp_enable_global_deadlock_detector;
1: CREATE TABLE t_concurrent_update(a int, b int);
1: INSERT INTO t_concurrent_update VALUES(1,1);
@@ -79,7 +71,60 @@ DROP TABLE t_concurrent_update;
5q:
6q:
--start_ignore
! gpconfig -r gp_enable_global_deadlock_detector;
! gpstop -rai;
--end_ignore
-- Test update distkey
-- If we enable the GDD, the lock may be downgraded to
-- RowExclusiveLock. When we UPDATE the distribution keys,
-- a SplitUpdate node will be added to the Plan, so an UPDATE
-- operator may be split into DELETE and INSERT.
-- If we UPDATE the distribution keys concurrently, the
-- DELETE operator will not execute EvalPlanQual and the
-- INSERT operator can not be *blocked*, so it will
-- generate more tuples in the tables.
-- We raise an error when the GDD is enabled and the
-- distribution keys are updated.
0: create table tab_update_hashcol (c1 int, c2 int) distributed by(c1);
0: insert into tab_update_hashcol values(1,1);
0: select * from tab_update_hashcol;
1: begin;
2: begin;
1: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
2&: update tab_update_hashcol set c1 = c1 + 1 where c1 = 1;
1: end;
2<:
2: end;
0: select * from tab_update_hashcol;
0: drop table tab_update_hashcol;
-- Test EvalPlanQual
-- If we enable the GDD, the lock may be downgraded to
-- RowExclusiveLock, so UPDATE/DELETE can be executed
-- concurrently, and that may trigger the EvalPlanQual function
-- to recheck the qualifications.
-- If the subPlan has a Motion node, we can not execute
-- EvalPlanQual correctly, so we raise an error when
-- GDD is enabled and EvalPlanQual is triggered.
0: create table tab_update_epq1 (c1 int, c2 int) distributed randomly;
0: create table tab_update_epq2 (c1 int, c2 int) distributed randomly;
0: insert into tab_update_epq1 values(1,1);
0: insert into tab_update_epq2 values(1,1);
0: select * from tab_update_epq1;
0: select * from tab_update_epq2;
1: set optimizer = off;
2: set optimizer = off;
1: begin;
2: begin;
1: update tab_update_epq1 set c1 = c1 + 1 where c2 = 1;
2&: update tab_update_epq1 set c1 = tab_update_epq1.c1 + 1 from tab_update_epq2 where tab_update_epq1.c2 = tab_update_epq2.c2;
1: end;
2<:
2: end;
0: select * from tab_update_epq1;
0: drop table tab_update_epq1;
0: drop table tab_update_epq2;
0q:
include: helpers/server_helpers.sql;
-- disable GDD
ALTER SYSTEM RESET gp_enable_global_deadlock_detector;
ALTER SYSTEM RESET gp_global_deadlock_detector_period;
-- start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v off;
! gpstop -rai;
-- end_ignore
-- Use a utility session on seg 0 to restart the master. This avoids the
-- situation where the session issuing the restart does not disappear
-- by itself.
1U:SELECT pg_ctl(dir, 'restart') from datadir;
-- Start a new session on the master to make sure it has fully completed
-- recovery and is up and running again.
1: SHOW gp_enable_global_deadlock_detector;
1: SHOW gp_global_deadlock_detector_period;
@@ -3,7 +3,7 @@
-- different node with the local deadlock detector. To make the local
-- deadlock testcases stable we reset the gdd period to 2min so it should
-- not be triggered during the local deadlock tests.
ALTER SYSTEM RESET gp_global_deadlock_detector_period;
ALTER SYSTEM SET gp_global_deadlock_detector_period to '2min';
SELECT pg_reload_conf();
-- start new session, which should always have newly reflected value
1: SHOW gp_global_deadlock_detector_period;
include: helpers/server_helpers.sql;
-- t0r is the reference table to provide the data distribution info.
DROP TABLE IF EXISTS t0p;
CREATE TABLE t0p (id int, val int);
@@ -49,17 +47,13 @@ SELECT segid(0,10) is not null;
SELECT segid(1,10) is not null;
SELECT segid(2,10) is not null;
-- table to just store the master's data directory path on segment.
CREATE TABLE datadir(a int, dir text);
INSERT INTO datadir select 1,datadir from gp_segment_configuration where role='p' and content=-1;
ALTER SYSTEM SET gp_enable_global_deadlock_detector TO on;
ALTER SYSTEM SET gp_global_deadlock_detector_period TO 5;
--enable GDD
-- start_ignore
! gpconfig -c gp_enable_global_deadlock_detector -v on;
! gpconfig -c gp_global_deadlock_detector_period -v 5;
! gpstop -rai;
-- end_ignore
-- Use a utility session on seg 0 to restart the master. This avoids the
-- situation where the session issuing the restart does not disappear
-- by itself.
1U:SELECT pg_ctl(dir, 'restart') from datadir;
-- Start a new session on the master to make sure it has fully completed
-- recovery and is up and running again.
1: SHOW gp_enable_global_deadlock_detector;