1. 29 Jul 2020 (20 commits)
  2. 28 Jul 2020 (2 commits)
    • Fix flaky test isolation2:pg_basebackup_with_tablespaces (#10509) · 5783fa3a
      Paul Guo committed
      Here is the diff output of the test result.
      
       drop database some_database_without_tablespace;
       -DROP
       +ERROR:  database "some_database_without_tablespace" is being accessed by other users
       +DETAIL:  There is 1 other session using the database.
       drop tablespace some_basebackup_tablespace;
       -DROP
       +ERROR:  tablespace "some_basebackup_tablespace" is not empty
      
      The reason is that after the client connection to the database exits, the server
      needs some time to release its PGPROC in proc_exit()->ProcArrayRemove() (the
      backend process might be scheduled out, and it has to contend for the
      ProcArrayLock). During dropdb() (for the database drop), postgres calls
      CountOtherDBBackends() to check whether any session is still using the database
      (by inspecting proc->databaseId), retrying for at most 5 seconds. This test quits
      the connection to some_database_without_tablespace and then drops the database
      immediately. That is usually fine, but if the system is slow or under heavy load
      it can still make the test flaky.
      
      The issue can be simulated using gdb. The fix is to poll until the drop database
      command succeeds for the affected database. Since DROP DATABASE cannot run inside
      a transaction block, the retry loop cannot be written in plpgsql; instead it is
      implemented with the dropdb utility driven by a bash loop.
      Reviewed-by: Asim R P <pasim@vmware.com>
      (cherry picked from commit c8b00ac7)
      5783fa3a
    • docs - PL/Container 3 supports the DO command - 6.x · 66242858
      mkiyama committed
      Also, fix bad cross-ref.
      66242858
  3. 23 Jul 2020 (4 commits)
    • Change log level in ExecChooseHashTableSize · 60d50cd6
      Hubert Zhang committed
      ExecChooseHashTableSize() is a hot function: it is called not only by the
      executor but also by the planner, which invokes it when calculating the cost of
      each join path. The number of join paths grows exponentially with the number of
      tables, so avoid elog(LOG) here to keep it from flooding the log (a sketch
      follows this entry).
      
      (cherry picked from commit 6b4d93c5)
      60d50cd6
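      The change described above amounts to something like the following minimal C
      sketch; the function name, debug level, and message text are illustrative, not
      the actual commit code.

        #include "postgres.h"           /* elog(), DEBUG2 */

        /*
         * Hypothetical sketch: demote the per-call message from LOG to a debug
         * level so that planner cost estimation, which reaches this code for
         * every join path, does not flood the server log.
         */
        static void
        report_hash_table_size(int nbuckets, int nbatch)
        {
            /* before: elog(LOG, "HashJoin: nbuckets=%d nbatch=%d", ...); */
            elog(DEBUG2, "HashJoin: nbuckets=%d nbatch=%d", nbuckets, nbatch);
        }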
    • Update pre-allocated shared snapshot slot number. · 1b0195c9
      Paul Guo committed
      Previously the shared snapshot slot count was derived from max_prepared_xacts.
      Per the code comment, MaxBackends was avoided because ideally a QE would use the
      QD's MaxBackends for the slot count, and a QE's MaxBackends is usually greater
      than the QD's because a query may need multiple gangs. Using max_prepared_xacts
      is not correct, however, now that we have read-only queries and one-phase
      commit. Use MaxBackends for the shared snapshot slot count for safety, even
      though this may waste some memory (a sketch follows this entry).
      Reviewed-by: xiong-gang <gxiong@pivotal.io>
      (cherry picked from commit f6c59503)
      1b0195c9
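      A minimal sketch of the sizing change, assuming a simplified helper name; this
      is illustrative, not the commit's code.

        #include "postgres.h"
        #include "miscadmin.h"          /* MaxBackends */

        /*
         * Hypothetical helper: derive the number of pre-allocated shared
         * snapshot slots from MaxBackends instead of max_prepared_xacts.
         * Costs a little more shared memory, but is safe with read-only
         * queries and one-phase commit.
         */
        static int
        NumSharedSnapshotSlots(void)
        {
            /* before: return max_prepared_xacts; */
            return MaxBackends;
        }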
    • Limit gxact number on master with MaxBackends. · 0c57d9fc
      Paul Guo committed
      Previously this was set to max_prepared_xacts. It is used to initialize some
      2PC-related shared memory; for example, the array shmCommittedGxactArray is
      created with this length and is used to collect not-yet-"forgotten" distributed
      transactions during master/standby recovery. That array length can be too small
      because:

      1. Even if the master's max_prepared_xacts equals the segments'
      max_prepared_xacts, as is usual, some distributed transactions may use only a
      partial gang, so the total number of distributed transactions can be larger
      (even much larger) than max_prepared_xacts. The documentation says
      max_prepared_xacts should be greater than max_connections, but no code enforces
      that.

      2. The master's max_prepared_xacts may also differ from the segments'
      (the documentation does not suggest doing this, but again nothing enforces it).

      To fix this, use MaxBackends for the gxact number on the master (a sketch
      follows this entry). We could simply use the GUC max_connections (MaxBackends
      additionally counts autovacuum workers and background workers besides
      max_connections), but MaxBackends is the conservative choice, since this issue
      is annoying: the standby cannot recover, hitting the FATAL below even after a
      postgres reboot, unless the GUC max_prepared_transactions is temporarily
      increased.
      
      2020-07-17 16:48:19.178667
      CST,,,p33652,th1972721600,,,,0,,,seg-1,,,,,"FATAL","XX000","the limit of 3
      distributed transactions has been reached","It should not happen. Temporarily
      increase max_connections (need postmaster reboot) on the postgres (master or
      standby) to work around this issue and then report a bug",,,,"xlog redo at
      0/C339BA0 for Transaction/DISTRIBUTED_COMMIT: distributed commit 2020-07-17
      16:48:19.101832+08 gid = 1594975696-0000000009, gxid =
      9",,0,,"cdbdtxrecovery.c",571,"Stack trace:
      
      1    0xb3a30f postgres errstart (elog.c:558)
      2    0xc3da4d postgres redoDistributedCommitRecord (cdbdtxrecovery.c:565)
      3    0x564227 postgres <symbol not found> (xact.c:6942)
      4    0x564671 postgres xact_redo (xact.c:7080)
      5    0x56fee5 postgres StartupXLOG (xlog.c:7207)
      Reviewed-by: xiong-gang <gxiong@pivotal.io>
      (cherry picked from commit 2a961e65)
      0c57d9fc
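      A minimal sketch of the allocation change; the element type and function name
      below are stand-ins, since the real GPDB structures differ.

        #include "postgres.h"
        #include "miscadmin.h"          /* MaxBackends */
        #include "storage/shmem.h"      /* ShmemAlloc(), mul_size() */

        /* Stand-in element type; the real array element in GPDB differs. */
        typedef struct CommittedGxactSketch
        {
            uint64      gxid;
        } CommittedGxactSketch;

        static CommittedGxactSketch *shmCommittedGxactArray;

        /*
         * Hypothetical sketch: size the "committed but not yet forgotten"
         * gxact array used during master/standby recovery by MaxBackends
         * rather than max_prepared_xacts, since partial-gang transactions can
         * push the number of distributed transactions past max_prepared_xacts.
         */
        static void
        AllocCommittedGxactArray(void)
        {
            /* before: ShmemAlloc(max_prepared_xacts * sizeof(...)) */
            shmCommittedGxactArray = (CommittedGxactSketch *)
                ShmemAlloc(mul_size(MaxBackends, sizeof(CommittedGxactSketch)));
        }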
    • Make test function wait_for_replication_replay() a common UDF. · e6addf3a
      Paul Guo committed
      We need that in more than one test.
      Reviewed-by: xiong-gang <gxiong@pivotal.io>
      (cherry picked from commit af942980)
      e6addf3a
  4. 22 Jul 2020 (4 commits)
    • Correct plan of general & segmentGeneral path with volatile functions. · 5b4c4f59
      Zhenghua Lyu committed
      General and segmentGeneral locus imply that the corresponding slice produces
      the same result set no matter which segments execute it. Thus, in some cases,
      General and segmentGeneral paths can be treated like broadcast.

      But what if a general or segmentGeneral locus path contains volatile functions?
      Volatile functions, by definition, do not guarantee the same result across
      invocations, so such paths lose this property and can no longer be treated as
      *general. Previously, the Greenplum planner did not handle these cases
      correctly. Limit over a general or segmentGeneral path has the same issue.
      
      The idea of the fix is: whenever we find the pattern (a general or
      segmentGeneral locus path that contains volatile functions), we create a motion
      path above it to turn its locus into singleQE and then create a projection path
      (a sketch follows this entry). The core job then becomes choosing the places to
      check:

        1. For a single base rel, only its restrictions need to be checked; this is
           at the bottom of the planner, in the function set_rel_pathlist.
        2. When creating a join path, if the join locus is general or segmentGeneral,
           check its joinquals for volatile functions.
        3. When handling a subquery, set_subquery_pathlist is invoked; at the end of
           that function, check the targetlist and havingQual.
        4. When creating a limit path, apply the same check-and-convert logic.
        5. Correctly handle make_subplan.
      
      The ORDER BY and GROUP BY clauses are included in the targetlist and are handled
      by step 3 above.
      
      This commit also fixes DML on replicated tables. UPDATE and DELETE statements on
      a replicated table are special: they have to be dispatched to every segment to
      execute, so if they contain volatile functions in their targetlist or WHERE
      clause the statements must be rejected:

        1. The targetlist is checked in the function create_motion_path_for_upddel.
        2. The WHERE clause is handled in the query planner: when we find the pattern
           and are about to fix it, we additionally check whether we are updating or
           deleting a replicated table, and if so we reject the statement.
      
      Cherry-picked from master commit d1f9b96b to 6X.
      5b4c4f59
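      A minimal C sketch of the check described above, assuming a hypothetical helper
      turn_locus_into_singleqe() for the motion-plus-projection step; it is
      illustrative, not the commit's code.

        #include "postgres.h"
        #include "nodes/relation.h"         /* PlannerInfo, Path, List */
        #include "optimizer/clauses.h"      /* contain_volatile_functions() */
        #include "cdb/cdbpathlocus.h"       /* CdbPathLocus_IsGeneral() et al. */

        /* Hypothetical helper (not a real GPDB function): wraps the path in a
         * motion to a single QE plus a projection path, as described above. */
        extern Path *turn_locus_into_singleqe(PlannerInfo *root, Path *path);

        /*
         * If a path whose locus is General or SegmentGeneral computes volatile
         * expressions, it can no longer be treated as "general"; bring it to a
         * single QE instead.
         */
        static Path *
        maybe_fix_general_path(PlannerInfo *root, Path *path, List *exprs)
        {
            if ((CdbPathLocus_IsGeneral(path->locus) ||
                 CdbPathLocus_IsSegmentGeneral(path->locus)) &&
                contain_volatile_functions((Node *) exprs))
                return turn_locus_into_singleqe(root, path);

            return path;
        }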
    • Use postgres database for pg_rewind cleanly shutdown execution to avoid potential pg_rewind hang. · 777a4cdc
      Paul Guo committed
      During testing I encountered an incremental gprecoverseg hang. Incremental
      gprecoverseg is based on pg_rewind. If the postgres instance was not cleanly
      shut down, pg_rewind launches a single-user-mode postgres process that quits
      after crash recovery; this ensures the instance is in a consistent state before
      doing incremental recovery. I found that the single-user-mode postgres hangs
      with the stack below.
      
      #1  0x00000000008cf2d6 in PGSemaphoreLock (sema=0x7f238274a4b0, interruptOK=1 '\001') at pg_sema.c:422
      #2  0x00000000009614ed in ProcSleep (locallock=0x2c783c0, lockMethodTable=0xddb140 <default_lockmethod>) at proc.c:1347
      #3  0x000000000095a0c1 in WaitOnLock (locallock=0x2c783c0, owner=0x2cbf950) at lock.c:1853
      #4  0x0000000000958e3a in LockAcquireExtended (locktag=0x7ffde826aa60, lockmode=3, sessionLock=0 '\000', dontWait=0 '\000', reportMemoryError=1 '\001', locallockp=0x0) at lock.c:1155
      #5  0x0000000000957e64 in LockAcquire (locktag=0x7ffde826aa60, lockmode=3, sessionLock=0 '\000', dontWait=0 '\000') at lock.c:700
      #6  0x000000000095728c in LockSharedObject (classid=1262, objid=1, objsubid=0, lockmode=3) at lmgr.c:939
      #7  0x0000000000b0152b in InitPostgres (in_dbname=0x2c769f0 "template1", dboid=0, username=0x2c59340 "gpadmin", out_dbname=0x0) at postinit.c:1019
      #8  0x000000000097b970 in PostgresMain (argc=5, argv=0x2c51990, dbname=0x2c769f0 "template1", username=0x2c59340 "gpadmin") at postgres.c:4820
      #9  0x00000000007dc432 in main (argc=5, argv=0x2c51990) at main.c:241
      
      It tries to take the lock for template1 on pg_database with lockmode 3, but that
      conflicts with a lockmode 5 lock held by a dtx transaction recovered during
      startup in RecoverPreparedTransactions(). Typically the dtx transaction comes
      from "create database" (by default the template database is template1).
      
      Fix this by using the postgres database for the single-user-mode postgres run
      (a sketch follows this entry). The postgres database is already used by many
      background worker backends such as dtx recovery, gdd, and ftsprobe. With this
      change we do not need to worry about "create database" using postgres as the
      template, etc., since that will not succeed, thus avoiding the lock conflict.
      
      We might be able to fix this in InitPostgres() by bypassing the locking code in
      single-user mode, but the current fix seems safer. Note that InitPostgres() also
      locks/unlocks some other catalog tables, but almost all of them use lock mode 1
      (except pg_resqueuecapability at mode 3, per debugging output). It seems unusual
      in a real scenario for a dtx transaction to lock a catalog table with mode 8,
      which is what would conflict with mode 1. If we encounter that later, we will
      need a better (and probably non-trivial) solution; for now, fix the issue that
      was actually encountered.
      
      Note that the changes to buildMirrorSegments.py and twophase.c in this patch are
      not related to the fix itself. They do not appear to be strict bugs, but it is
      better to fix them now to avoid potential issues in the future.
      Reviewed-by: Ashwin Agrawal <aashwin@vmware.com>
      Reviewed-by: Asim R P <pasim@vmware.com>
      (cherry picked from commit 288908f3)
      777a4cdc
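      A rough C sketch of the idea, assuming a simplified command-building helper;
      the flags shown mirror a typical single-user-mode invocation and are
      illustrative, not pg_rewind's exact code.

        #include <stdio.h>
        #include <stdlib.h>

        /*
         * Hypothetical sketch: run the not-cleanly-shut-down target in
         * single-user mode against the "postgres" database instead of
         * "template1", so crash recovery cannot block on a lock held by a
         * recovered dtx transaction from "create database".
         */
        static void
        ensure_clean_shutdown(const char *exec_path, const char *datadir)
        {
            char        cmd[1024];

            /* before (sketch): "... --single ... template1 ..." */
            snprintf(cmd, sizeof(cmd),
                     "\"%s\" --single -F -D \"%s\" postgres < /dev/null",
                     exec_path, datadir);

            if (system(cmd) != 0)
            {
                fprintf(stderr, "single-user-mode crash recovery failed\n");
                exit(1);
            }
        }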
    • Fix "Too many distributed transactions for snapshot" (#10500) · af8932a0
      Paul Guo committed
      Now that a distributed transaction no longer has to use a full gang, the number
      of in-progress distributed transactions on the master can exceed
      max_prepared_xacts if that GUC is configured with a small value.
      max_prepared_xacts was used as the length of the distributed snapshot's
      inProgressXidArray, which can make distributed snapshot creation fail with "Too
      many distributed transactions for snapshot" when the system is under heavy 2PC
      load. Fix this by using GetMaxSnapshotXidCount() for the length of
      inProgressXidArray, following the setting on the master (a sketch follows this
      entry).
      
      This fixes github issue https://github.com/greenplum-db/gpdb/issues/10057
      
      No new test for this, since isolation2:prepare_limit already covers it. (I
      encountered this issue when backporting a PR that introduces
      isolation2:prepare_limit, so this needs to be pushed first, followed by the
      backporting PR.)
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
      af8932a0
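      A minimal sketch of the sizing change; the element typedef and function name
      are stand-ins, not GPDB's actual declarations.

        #include "postgres.h"
        #include "storage/procarray.h"   /* GetMaxSnapshotXidCount() */

        typedef uint64 DistributedXid;   /* stand-in for GPDB's distributed xid type */

        /*
         * Hypothetical sketch: allocate the distributed snapshot's in-progress
         * array using GetMaxSnapshotXidCount() instead of max_prepared_xacts,
         * so heavy 2PC load cannot overflow it.
         */
        static DistributedXid *
        alloc_inprogress_xid_array(int *maxCount)
        {
            *maxCount = GetMaxSnapshotXidCount();   /* before: max_prepared_xacts */
            return (DistributedXid *) palloc0(*maxCount * sizeof(DistributedXid));
        }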
    • Fix cdbpath_dedup_fixup does not consider merge append path. · 0085ad2a
      Zhenghua Lyu committed
      Greenplum uses a unique-row-id path as one candidate for implementing semijoins.
      It was introduced long ago, but GPDB 6 upgraded the kernel to Postgres 9.4 and
      introduced many new path types and plan nodes that cdbpath_dedup_fixup did not
      account for. A typical issue: https://github.com/greenplum-db/gpdb/issues/9427

      On the master branch, Heikki's commit 9628a332 refactored this part of the code,
      so master is fine. 4X and 5X do not have many of the new kinds of plan node and
      path node, so they are also fine.

      Backporting commit 9628a332 to 6X is very hard because 9.4 has no concept of a
      Path's target list, and removing this kind of path entirely would be overkill.
      So the policy is to fix these cases one by one as they are reported (a sketch
      follows this entry).
      0085ad2a
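      A minimal C sketch of the shape of such a fix: the dedup fixup code walks a path
      tree, and each path type it can encounter needs an explicit case. The walker
      name and case bodies here are hypothetical.

        #include "postgres.h"
        #include "nodes/relation.h"     /* Path and path node tags (9.4-era) */

        /*
         * Hypothetical walker: handle merge-append paths the same way append
         * paths are handled, instead of falling through to an error.
         */
        static void
        dedup_fixup_walker(Path *path)
        {
            switch (nodeTag(path))
            {
                case T_AppendPath:
                    /* recurse into each subpath */
                    break;
                case T_MergeAppendPath:
                    /* newly handled: recurse into each subpath, like Append */
                    break;
                default:
                    /* other path types handled elsewhere */
                    break;
            }
        }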
  5. 21 Jul 2020 (2 commits)
  6. 20 Jul 2020 (1 commit)
  7. 17 Jul 2020 (5 commits)
    • Add debugging code in shared snapshot code and tweak the shared snapshot code a bit. · e1e2a9bf
      Paul Guo committed
      Notably, we want the shared snapshot information to be dumped when the
      "snapshot collision" error is hit; it has been seen in a real scenario and is
      hard to debug (a sketch follows this entry).
      
      (cherry picked from commit ee2d4641)
      e1e2a9bf
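      A minimal sketch of the kind of debugging output this adds; the slot layout and
      names below are hypothetical stand-ins for GPDB's shared snapshot structures.

        #include "postgres.h"

        /* Hypothetical slot layout; the real shared snapshot slot differs. */
        typedef struct SnapshotSlotSketch
        {
            int         slotid;
            uint32      xid;
            int         writer_pid;
        } SnapshotSlotSketch;

        /* Log every slot so a "snapshot collision" can be diagnosed post hoc. */
        static void
        dump_shared_snapshot_slots(const SnapshotSlotSketch *slots, int nslots)
        {
            int         i;

            for (i = 0; i < nslots; i++)
                elog(LOG, "shared snapshot slot %d: slotid=%d xid=%u writer_pid=%d",
                     i, slots[i].slotid, slots[i].xid, slots[i].writer_pid);
        }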
    • Add debugging code for the "latch already owned" error. · 81fdd6c5
      Paul Guo committed
      We have seen such a case on a stable release, but it is hard to debug from the
      message alone, so provide more details in the error message (a sketch follows
      this entry).
      81fdd6c5
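      A minimal sketch of what "more details" could look like, assuming the fix
      reports the current owner's pid; the wrapper name and message are illustrative,
      not the commit's exact change.

        #include "postgres.h"
        #include "miscadmin.h"          /* MyProcPid */
        #include "storage/latch.h"      /* Latch, owner_pid */

        /*
         * Hypothetical wrapper around taking latch ownership: include the
         * current owner's pid in the error instead of the bare message.
         */
        static void
        own_latch_checked(Latch *latch)
        {
            if (latch->owner_pid != 0)
                elog(ERROR, "latch already owned by pid %d", latch->owner_pid);
            latch->owner_pid = MyProcPid;
        }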
    • Do not allocate MemoryPoolManager from a memory pool · 086a41c2
      Jesse Zhang committed
      Our implementations of memory pools have a hidden dependency on _the_
      global memory pool manager: typically GPOS_NEW and GPOS_DELETE will
      reach for the memory pool manager singleton. This makes GPOS_DELETE on a
      memory pool manager undefined behavior because we call member functions
      on an object after its destructor finishes.
      
      On the Postgres 12 merge branch, this manifests itself in a crash during
      initdb. More concerning is that it only crashed when we set max
      connections and shared buffers to a specific number.
      086a41c2
    • docs - update utility docs with IP/hostname information. (#10379) · d19ca264
      Mel Kiyama committed
      * docs - update utility docs with IP/hostname information.
      
      Add information to gpinitsystem, gpaddmirrors, and gpexpand ref. docs
      --Information about using hostnames vs. IP addresses
      --Information about configuring hosts that have multiple NICs
      
      Also updated some examples in gpinitsystem
      
      * docs - review comment updates. Add more information from dev.
      
      * docs - change examples to show valid configurations that support failover.
      Also fix typos and make minor edits.
      
      * docs - updates based on review comments.
      d19ca264
    • docs - greenplumr input.signature (#10477) · d741099a
      Lisa Owen committed
      d741099a
  8. 16 Jul 2020 (2 commits)
    • Fix flaky test case 'gpcopy' · f2da25de
      Pengzhou Tang committed
      The failing test checks that the command "copy lineitem to '/tmp/abort.csv'" can
      be cancelled after the COPY is dispatched to the QEs. To verify this, it checks
      that /tmp/abort.csv has fewer rows than lineitem.
      
      The cancel logic in the code is:

      The QD dispatches the COPY command to the QEs; if the QD then gets a cancel
      interrupt, it sends a cancel request to the QEs. However, the QD keeps receiving
      data from the QEs even after it has received the cancel interrupt; it relies on
      the QEs to receive the cancel request and explicitly stop copying data to the
      QD.

      Obviously, the QEs may have already copied out all of their data to the QD
      before they receive the cancel request, so the test cannot guarantee that
      /tmp/abort.csv has fewer rows than lineitem.

      To fix this, just verify that the COPY command is aborted with 'ERROR:  canceling
      statement due to user request'; the row-count verification is pointless here.
      
      Cherry-picked from commit 9480d631 on master.
      f2da25de
    • [Refactor] Pull out KHeap into CKHeap.h · 2dabf684
      Ashuka Xue committed
      Pull the binary heap implementation out into its own templated header file.
      2dabf684