1. 01 Jul 2020, 1 commit
    • Make QEs not use the GUC gp_enable_global_deadlock_detector · 855f1548
      Committed by Zhenghua Lyu
      Previously, some code executed in QEs checked the value of the GUC
      gp_enable_global_deadlock_detector. The historical reason:
        Before we had GDD, UPDATE|DELETE operations could not be executed
        concurrently, so we never hit the EvalPlanQual issue or the
        concurrent split-update issue. After GDD was added, we did hit
        those issues and added code to resolve them, guarding it with
        `if (gp_enable_global_deadlock_detector)` from the very start,
        out of habit rather than necessity.
      
      In fact, QEs do not rely on it, and I tried to remove this in
      the context: https://github.com/greenplum-db/gpdb/pull/9992#pullrequestreview-402364938.
      I tried to add an assert there but, as Heikki commented, could not
      handle utility mode. To continue that idea, we can simply remove the
      check of gp_enable_global_deadlock_detector. This brings two benefits:
        1. Some users only change this GUC on master. By removing the usage
           in QEs, it is now safe to make the GUC master-only.
        2. We can bring back the technique of restarting only the master
           node's postmaster to enable GDD, which saves a lot of time in the
           pipeline. This commit also does this for the isolation2 test
           cases lockmodes and gdd/.
      
      The GitHub issue https://github.com/greenplum-db/gpdb/issues/10030 is resolved by this commit.
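
      For illustration, here is a minimal C sketch of the kind of QE-side guard
      this commit removes. It is hypothetical and simplified, not the actual gpdb
      source; the helper names and printf stand-ins are made up for the sketch.

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* Stand-in for the boolean GUC gp_enable_global_deadlock_detector. */
      static bool gp_enable_global_deadlock_detector = false;

      /* Stand-in for raising the concurrent split-update error in a QE. */
      static void
      report_concurrent_split_update(void)
      {
          printf("ERROR: concurrent update on a distribution key\n");
      }

      /* Before: the QE consulted the GUC before handling the case. */
      static void
      qe_handle_split_update_before(bool concurrent_split_update)
      {
          if (gp_enable_global_deadlock_detector && concurrent_split_update)
              report_concurrent_split_update();
      }

      /*
       * After: the QE handles the case unconditionally and never reads the
       * GUC, so the GUC only has to be correct on the master (QD).
       */
      static void
      qe_handle_split_update_after(bool concurrent_split_update)
      {
          if (concurrent_split_update)
              report_concurrent_split_update();
      }

      int
      main(void)
      {
          qe_handle_split_update_before(true); /* silently skipped: GUC is off */
          qe_handle_split_update_after(true);  /* always handled */
          return 0;
      }
      ```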
2. 26 Apr 2020, 1 commit
    • Correct and speed up some isolation2 test cases. · 1e68270b
      Committed by Zhenghua Lyu
      * Enable GDD for the "concurrent update table with varying length type" test.
      
      The test case "test concurrent update table with varying length type"
      in isolation2/concurrent_update was introduced by commit 8ae681e5. It
      is meant to exercise EvalPlanQual's logic, so GDD has to be enabled.
      
      The mistake happened because GDD was enabled by default when commit
      8ae681e5 went in. Later, commit 29e7f102 made GDD disabled by default
      but did not handle this test case correctly.
      
      This commit fixes that by moving the case into the environment with
      GDD enabled.
      
      * Make the concurrent_update* tests concise.
      
      Previously, concurrent_update_epq and concurrent_update_distkey had
      some cases for AO/CO tables and some cases running without GDD. Those
      do not make much sense because:
        * for an AO/CO table, UPDATE always holds ExclusiveLock on the
          table no matter whether GDD is enabled or disabled
        * without GDD, UPDATE always holds ExclusiveLock on the table
      
      For these two trivial cases, checking the lock mode on the table
      should be enough, and the isolation2/lockmodes test case already
      covers this (see the sketch below).
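
      As an aside, here is a minimal sketch (hypothetical, simplified, not gpdb
      source) of the lock-mode rule those two bullets describe, i.e. what the
      lockmodes test effectively verifies:

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      typedef enum { ROW_EXCLUSIVE_LOCK, EXCLUSIVE_LOCK } lockmode_t;

      /*
       * Lock mode taken by UPDATE/DELETE per the reasoning above: only a heap
       * table with GDD enabled is downgraded to RowExclusiveLock; AO/CO tables,
       * and any table when GDD is disabled, keep ExclusiveLock.
       */
      static lockmode_t
      update_lockmode(bool is_ao_or_co, bool gdd_enabled)
      {
          if (is_ao_or_co || !gdd_enabled)
              return EXCLUSIVE_LOCK;
          return ROW_EXCLUSIVE_LOCK;
      }

      static const char *
      lockmode_name(lockmode_t mode)
      {
          return mode == ROW_EXCLUSIVE_LOCK ? "RowExclusiveLock" : "ExclusiveLock";
      }

      int
      main(void)
      {
          printf("heap,  GDD on : %s\n", lockmode_name(update_lockmode(false, true)));
          printf("AO/CO, GDD on : %s\n", lockmode_name(update_lockmode(true, true)));
          printf("heap,  GDD off: %s\n", lockmode_name(update_lockmode(false, false)));
          return 0;
      }
      ```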
      
      This commit removes those cases from concurrent_update_epq and
      concurrent_update_distkey. After that, both scripts need GDD to be
      enabled, so this commit merges them into the single test script
      concurrent_update.
      
      After this, the remaining concurrent_update test can be moved into
      the gdd suite, saving the time of two extra cluster restarts.
      
      * Restart the whole cluster for GDD since QEs will use the GUC.
      
      The previous commit a4b2fea3 used a trick of restarting only the
      master node's postmaster instead of the whole Greenplum cluster,
      which saves some time in the pipeline tests. The trick works under
      the assumption that gp_enable_global_deadlock_detector is only
      needed on master. But in QEs we also have code that checks this GUC,
      in ExecDelete and XactLockTableWait, so we had better restart the
      whole cluster so that the GUC is also changed in the QEs.
      
      I found this issue while trying to apply the trick of commit
      a4b2fea3 to the isolation2 test case `concurrent_update`. That case
      was introduced by commit 39fbfe96, which added code that runs on
      QEs but needs the GUC gp_enable_global_deadlock_detector.
3. 29 Aug 2019, 1 commit
    • Avoid full cluster restarts in GDD tests and other cleanup. · a4b2fea3
      Committed by Ashwin Agrawal
      To enable or disable the GUC gp_enable_global_deadlock_detector, a
      restart is required. But this GUC is only used on master, so just
      restart master instead of the full cluster. This cuts the test time
      by about a minute. Also, in the process, remove the pg_sleep(2)
      calls: the GUCs gp_enable_global_deadlock_detector and
      gp_global_deadlock_detector_period can be set at the same time, so
      there is no need to wait separately for the config to reload.
      
      Also remove prepare-for-local, as only one test exists for local
      locks, local-deadlock-03; prepare directly inside that sql file
      instead.
4. 01 Feb 2019, 1 commit
    • Update serially when GDD is disabled · 29e7f102
      Committed by Zhang Shujie
      If the Global Deadlock Detector is enabled, the table lock for
      UPDATE may be downgraded to RowExclusiveLock, which can lead to two
      problems:
      
      1. When updating distribution keys concurrently, the SplitUpdate
         node would generate extra tuples in the table.
      2. When updating concurrently, the EvalPlanQual function may be
         triggered; when the SubPlan has a Motion node, it cannot execute
         correctly.
      
      Now we add a GUC check for GDD: if it is disabled, we execute these
      UPDATE statements serially; if it is enabled, we raise an error when
      updating concurrently.
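
      Pulling this together, a compact hypothetical sketch (not the actual
      implementation; the function and its argument are made up) of the rule
      described above, as seen from the QD:

      ```c
      #include <stdbool.h>
      #include <stdio.h>

      /* Stand-in for the GUC gp_enable_global_deadlock_detector. */
      static bool gp_enable_global_deadlock_detector = false;

      /*
       * GDD disabled: take ExclusiveLock so concurrent UPDATEs serialize and
       * neither problem above can arise.  GDD enabled: keep RowExclusiveLock
       * for concurrency, but raise an error if a concurrent update would hit
       * one of the two problems (e.g. a concurrent distribution-key update).
       */
      static void
      plan_update(bool concurrent_conflict)
      {
          if (!gp_enable_global_deadlock_detector)
          {
              printf("lock table in ExclusiveLock: UPDATEs run serially\n");
              return;
          }

          printf("lock table in RowExclusiveLock: UPDATEs may run concurrently\n");
          if (concurrent_conflict)
              printf("ERROR: concurrent update not allowed in this case\n");
      }

      int
      main(void)
      {
          plan_update(true);                         /* GDD disabled: serialize */
          gp_enable_global_deadlock_detector = true;
          plan_update(true);                         /* GDD enabled: error out  */
          return 0;
      }
      ```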
      
      Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>