1. 08 May 2020, 2 commits
    • Fix a spinlock leak for fault injector · d1b10d64
      Pengzhou Tang committed
      This is a backport from master commit 2a7b2bf6.
    • Reduce flakiness of orafce dbms_pipe test · 3af91fa2
      Soumyadeep Chakraborty committed
      The dbms_pipe_session_{A,B} tests are flaky in CI as it can so happen
      that session B calls receiveFrom() before session A can even call
      createImplicitPipe(). This leads to flaky test failures such as:
      
      --- /tmp/build/e18b2f02/gpdb_src/gpcontrib/orafce/expected/dbms_pipe_session_B.out	2020-04-20 17:02:27.270832458 +0000
      +++ /tmp/build/e18b2f02/gpdb_src/gpcontrib/orafce/results/dbms_pipe_session_B.out	2020-04-20 17:02:27.278832994 +0000
      @@ -7,14 +7,6 @@
      
       -- Receives messages sent via an implicit pipe
       SELECT receiveFrom('named_pipe');
      -NOTICE:  RECEIVE 11: Message From Session A
      -NOTICE:  RECEIVE 12: 01-01-2013
      -NOTICE:  RECEIVE 13: Tue Jan 01 09:00:00 2013 PST
      -NOTICE:  RECEIVE 23: \201
      -NOTICE:  RECEIVE 24: (2,rob)
      -NOTICE:  RECEIVE 9: 12345
      -NOTICE:  RECEIVE 9: 12345.6789
      -NOTICE:  RECEIVE 9: 99999999999
        receivefrom
       -------------
      
      @@ -152,12 +144,13 @@
       ORDER BY 1;
             name      | items | limit | private |      owner
       ----------------+-------+-------+---------+-----------------
      + named_pipe     |     9 |    10 | f       |
        pipe_name_3    |     1 |       | f       |
        private_pipe_1 |     0 |    10 | t       | pipe_test_owner
        private_pipe_2 |     9 |    10 | t       | pipe_test_owner
        public_pipe_3  |     0 |    10 | f       |
        public_pipe_4  |     0 |    10 | f       |
      -(5 rows)
      +(6 rows)
      
      This commit introduces an explicit sleep at the start of session B to
      give session A a better chance to run.
      Co-authored-by: Jesse Zhang <jzhang@pivotal.io>
      (cherry picked from commit 985c5e2b)
  2. 07 May 2020, 3 commits
    • Add LockTagTypeNames for distributed xid · 38ca0dc2
      dh-cloud committed
      Commit 13a1f66f forgot to modify the corresponding LockTagTypeNames array. 
      This commit fixes this.
    • Improve handling of -I input file · e055c67d
      Bhuvnesh Chaudhary committed
      Previously, gpinitsystem did not allow the user to specify a hostname
      and address for each segment in the input file used with -I; it only
      accepted one value per segment and used it for both hostname and
      address.
      
      This commit changes the behavior so that the user can specify both
      hostname and address. If the user specifies only the address (such as by
      using an old config file), it will preserve the old behavior and set
      both hostname and address to that value. It also adds a few tests around
      input file parsing so SET_VAR is more resilient to further refactors.
      
      The specific changes involved are the following:
      
      1) Change SET_VAR to be able to parse either the old format (address
      only) or new format (host and address) of the segment array
      representation.
      2) Move SET_VAR from gpinitsystem to gp_bash_functions.sh and remove the
      redundant copy in gpcreateseg.sh.
      3) Remove a hardcoded "~0" in QD_PRIMARY_ARRAY in gpinitsystem,
      representing a replication port value, that was left over from 5X.
      4) Improve the check for the number of fields in the segment array
      representation.
      
      Also, remove the use of the ignore-warning flag and use [[ ]] for the
      IGNORE_WARNING check.
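      The dual-format parsing described in point 1) can be sketched as
      follows. This is a minimal Python illustration, not the actual
      SET_VAR bash code, and the exact field order is an assumption for
      the example.

```python
def parse_segment_line(line):
    """Sketch of SET_VAR's dual-format parsing (illustrative field layout,
    not the exact gpinitsystem field order):
      old format: address~port~datadir~dbid~content
      new format: hostname~address~port~datadir~dbid~content
    With the old format, hostname falls back to the address, preserving
    the previous behavior."""
    fields = line.strip().split("~")
    if len(fields) == 6:
        hostname, address, port, datadir, dbid, content = fields
    elif len(fields) == 5:
        address, port, datadir, dbid, content = fields
        hostname = address  # old behavior: hostname mirrors address
    else:
        raise ValueError("unexpected number of fields: %d" % len(fields))
    return {"hostname": hostname, "address": address, "port": int(port),
            "datadir": datadir, "dbid": int(dbid), "content": int(content)}
```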
    • gpinitsystem: update catalog with correct hostname · 91771f4f
      Bhuvnesh Chaudhary committed
      Previously, gpinitsystem was incorrectly filling the hostname field of each
      segment in gp_segment_configuration with the segment's address. This commit
      changes it to correctly resolve hostnames and update the catalog accordingly.
      
      This reverts commit 12ef7352, Revert "gpinitsystem: update catalog with
      correct hostname".
         Commit message from 12ef7352:
         The commit requires some additional tweaks to the input file logic for
         backwards compatibility purposes, so we're reverting this until the full
         fix is ready.
  3. 06 May 2020, 1 commit
    • 6X Backport: Enable parallel writes for Foreign Data Wrappers · 86f6c666
      Francisco Guerrero committed
      This commit enables parallel writes for Foreign Data Wrappers. This
      feature is currently missing from the FDW framework: while parallel
      scans are supported, parallel writes are not. FDW parallel writes are
      analogous to writing to writable external tables that run on all
      segments.
      
      One caveat is that in the external table framework, writable tables
      support a distribution policy:
      
          CREATE WRITABLE EXTERNAL TABLE foo (id int)
          LOCATION ('....')
          FORMAT 'CSV'
          DISTRIBUTED BY (id);
      
      In foreign tables, the distribution policy cannot be defined during the
      table definition, so we assume random distribution for all foreign
      tables.
      
      Parallel writes are enabled only when the foreign table's
      exec_location is set to FTEXECLOCATION_ALL_SEGMENTS. For foreign
      tables that run on the master or on any segment, the current policy
      behavior remains.
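      The exec_location gate can be sketched like this (a sketch with
      illustrative names; the real FTEXECLOCATION_* handling in GPDB's
      planner differs in detail):

```python
# Illustrative values; GPDB defines exec_location as an enum in its FDW support.
FTEXECLOCATION_MASTER = "master"
FTEXECLOCATION_ANY = "any"
FTEXECLOCATION_ALL_SEGMENTS = "all_segments"

def write_policy_for_foreign_table(exec_location):
    """Only tables executing on all segments get a parallel (random
    distribution) write policy; master/any keep the existing
    single-writer behavior."""
    if exec_location == FTEXECLOCATION_ALL_SEGMENTS:
        return "random"  # parallel writes on every segment
    return "entry"       # unchanged: write from a single location
```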
  4. 05 May 2020, 1 commit
  5. 02 May 2020, 2 commits
  6. 01 May 2020, 4 commits
  7. 30 Apr 2020, 2 commits
  8. 29 Apr 2020, 4 commits
  9. 28 Apr 2020, 6 commits
    • Fix a bug that reader gang always fails due to missing writer gang. (#9828) · 7a665560
      Paul Guo committed
      The reason is that a newly created reader gang would fail on the QE
      due to the missing writer gang process in the locking code, and a
      retry would fail again for the same reason, since the cached writer
      gang is still used: the QD does not know or check the real libpq
      network status. See below for the repro case.
      
      Fix this by checking the error message and resetting all gangs when
      that message is seen, similar to the logic that checks the
      startup/recovery message in the gang-creation function. Other fixes
      are possible, e.g. checking the writer gang's network status, but
      those turned out to be ugly after trying.
      
      create table t1(f1 int, f2 text);
      <kill -9 one idle QE>
      
      insert into t1 values(2),(1),(5);
      ERROR:  failed to acquire resources on one or more segments
      DETAIL:  FATAL:  reader could not find writer proc entry, lock [0,1260] AccessShareLock 0 (lock.c:874)
       (seg0 192.168.235.128:7002)
      
      insert into t1 values(2),(1),(5);
       ERROR:  failed to acquire resources on one or more segments
       DETAIL:  FATAL:  reader could not find writer proc entry, lock [0,1260] AccessShareLock 0 (lock.c:874)
        (seg0 192.168.235.128:7002)
      
      <-- Above query fails again.
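      The fix can be sketched as a check on the returned error text that
      invalidates all cached gangs (the message is abridged from the
      DETAIL above; function names are illustrative, not the actual GPDB
      symbols):

```python
WRITER_GONE_MSG = "reader could not find writer proc entry"

def handle_gang_create_failure(error_detail, reset_all_gangs):
    """If the QE reports that the writer process is gone, the cached
    writer gang is stale: reset all gangs so a retry creates fresh ones
    instead of failing again against the dead writer."""
    if WRITER_GONE_MSG in error_detail:
        reset_all_gangs()
        return True   # caller may retry with newly created gangs
    return False      # unrelated error: propagate as before
```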
      
      Cherry-picked from 24f16417 and a0a5b4d5
    • Let Fts tolerate the in-progress 'starting up' case on primary nodes. · 5222ad86
      Paul Guo committed
      Commit d453a4aa implemented that for the crash recovery case (not
      marking the node down and thus not promoting the mirror). It seems
      that we should do that for the usual "starting up" case also (i.e.
      CAC_STARTUP), besides the existing "in recovery mode" case (i.e.
      CAC_RECOVERY).
      
      We've seen FTS promote a "starting up" primary during isolation2
      testing due to 'pg_ctl restart'. In this patch we check recovery
      progress for both CAC_STARTUP and CAC_RECOVERY during the FTS probe,
      which avoids this.
      Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
      
      cherry-picked from d71b3afd
      
      On master the commit message was eliminated by mistake. Added back on gpdb6.
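      The probe-side decision can be sketched as follows (illustrative
      states; the real check uses GPDB's CAC_* codes and its
      recovery-progress bookkeeping):

```python
CAC_OK, CAC_STARTUP, CAC_RECOVERY = range(3)

def fts_should_mark_down(state, recovery_making_progress):
    """Tolerate a primary that is starting up or in recovery as long as
    it is making progress; only mark it down (allowing mirror promotion)
    when recovery is stuck."""
    if state in (CAC_STARTUP, CAC_RECOVERY):
        return not recovery_making_progress
    return False  # a primary accepting connections is not marked down here
```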
    • Remove forceEos mechanism for TCP interconnect · 7cf0ac40
      Pengzhou Tang committed
      In the TCP interconnect, the sender used to force an EOS message to
      the receiver in two cases:
      1. cancelUnfinished is true in mppExecutorFinishup.
      2. an error occurs.
      
      For case 1, the comment says: to finish a cursor, the QD used to send
      a cancel to the QEs; the QEs then set the cancelUnfinished flag and
      did a normal executor finish-up. We now use the QueryFinishPending
      mechanism to stop a cursor, so the case 1 logic has been invalid for
      a long time.
      
      For case 2, the purpose was: when an error occurs, we force an EOS to
      the receiver so that the receiver doesn't report an interconnect
      error, and the QD then checks the dispatch results and reports the
      errors from the QEs. From the interconnect's point of view we have
      read to the end of the query with no interconnect error. This logic
      has two problems:
      1. It doesn't work for initplans: an initplan does not check the
      dispatch results and throw the errors, so when an error occurs in the
      QEs for an initplan, the QD cannot notice it.
      2. It doesn't work for cursors, for example:
         DECLARE c1 cursor for select i from t1 where i / 0 = 1;
         FETCH all from c1;
         FETCH all from c1;
      None of the FETCH commands report errors, which is not expected.
      
      This commit removes the forceEos mechanism. For case 2, the receiver
      will now report an interconnect error without forceEos; this is OK
      because when multiple errors are reported from the QEs, the QD is
      inclined to report the non-interconnect error.
    • Remove redundant 'hasError' flag in TeardownTCPInterconnect · d093c024
      Pengzhou Tang committed
      This flag duplicates 'forceEOS'; 'forceEOS' can also tell whether an
      error occurred.
    • Fix a race condition in flushBuffer · 3e1cc863
      Pengzhou Tang committed
      flushBuffer() is used to send packets through the TCP interconnect.
      Before sending, it first checks whether the receiver has stopped or
      torn down the interconnect; however, there is a window between
      checking and sending in which the receiver may tear down the
      interconnect and close the peer, so send() will report an error. To
      resolve this, we recheck whether the receiver stopped or tore down
      the interconnect in that window, and don't error out in that case.
      Reviewed-by: Jinbao Chen <jinchen@pivotal.io>
      Reviewed-by: Hao Wu <hawu@pivotal.io>
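      The recheck pattern can be sketched as follows (a Python sketch with
      callbacks standing in for the real connection-state checks; the
      actual flushBuffer() is C code in the GPDB interconnect):

```python
def flush_buffer(send, receiver_stopped, packet):
    """Send one packet, tolerating a receiver that tears down the
    interconnect between our check and the send() call."""
    if receiver_stopped():
        return False                 # receiver already gone: drop silently
    try:
        send(packet)
        return True
    except ConnectionError:
        # The race window: the receiver may have closed the peer after
        # our first check. Recheck before treating this as an error.
        if receiver_stopped():
            return False
        raise                        # a genuine interconnect error
```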
    • Fix interconnect hung issue · 7c90c04f
      Pengzhou Tang committed
      We have hit the interconnect hang issue many times in many cases, all
      with the same pattern: the downstream interconnect motion senders
      keep sending tuples, blind to the fact that the upstream nodes have
      finished and quit execution earlier; the QD then gets enough tuples
      and waits for all QEs to quit, which causes a deadlock.
      
      Many nodes may quit execution early, e.g. LIMIT, Hash Join, Nest
      Loop. To resolve the hang, they need to stop the interconnect stream
      explicitly by calling ExecSquelchNode(); however, we cannot do that
      for rescan cases, in which data might be lost, e.g. commit 2c011ce4.
      For rescan cases, we tried using QueryFinishPending to stop the
      senders in commit 02213a73 and let the senders check this flag and
      quit. That commit has its own problems: firstly, QueryFinishPending
      can only be set by the QD, so it doesn't work for INSERT or UPDATE
      cases; secondly, it only lets the senders detect the flag and quit
      the loop in a rude way (without sending the EOS to their receivers),
      so the receivers may still be stuck receiving tuples.
      
      This commit first reverts the QueryFinishPending method.
      
      To resolve the hang, we move TeardownInterconnect ahead of
      cdbdisp_checkDispatchResult, which guarantees that the interconnect
      stream is stopped before waiting for and checking the status of the
      QEs.
      
      For UDPIFC, TeardownInterconnect() removes the interconnect entries;
      any packets for this interconnect context will be treated as 'past'
      packets and acked with the STOP flag.
      
      For TCP, TeardownInterconnect() closes all connections with its
      children; the children will treat any readable data in the
      connection, including the closure itself, as a STOP message.
      
      A test case is not included; both commits 2c011ce4 and 02213a73
      contain one.
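      The reordering amounts to the following (a trivial sketch; the names
      stand in for TeardownInterconnect and cdbdisp_checkDispatchResult):

```python
def finish_query(teardown_interconnect, check_dispatch_result):
    """Tear down the interconnect first so stuck motion senders see a
    STOP and can quit, then wait for and check the QEs."""
    teardown_interconnect()    # unblocks senders still pushing tuples
    check_dispatch_result()    # now safe to wait for all QEs to exit
```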
  10. 27 Apr 2020, 2 commits
    • Fix a bug that two-phase sub-transaction is considered as one-phase. · 48ffabce
      Paul Guo committed
      QD backend should not forget whether a sub transaction performed writes
      
      QD backend process can avoid two-phase commit overhead if it knows that no QEs
      involved in this transaction or any of its sub transactions performed any
      writes. Previously, if a sub transaction performed write on one or more QEs, it
      was remembered in that sub transaction's global state. However, the sub
      transaction state was lost after sub transaction commit. That resulted in QD
      not performing two-phase commit at the end of top transaction.
      
      In fact, regardless of the transaction nesting level, we only need to remember
      whether a write was performed by a sub transaction. Therefore, use a backend
      global variable, instead of current transaction state to record this
      information.
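      The idea can be sketched with a backend-global flag that survives
      sub-transaction commit (illustrative names, not the actual GPDB
      variables):

```python
seen_qe_writes = False  # backend-global: not tied to any transaction level

def record_qe_write():
    """Called whenever a QE performs a write on behalf of this backend."""
    global seen_qe_writes
    seen_qe_writes = True

def commit_subtransaction(subxact_state):
    # Previously the 'performed writes' bit lived in subxact_state and was
    # discarded here, losing the information before top-level commit.
    subxact_state.clear()

def needs_two_phase_commit():
    """At top-level commit: two-phase is needed iff any (sub)transaction wrote."""
    return seen_qe_writes
```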
      Reviewed-by: Gang Xiong <gxiong@pivotal.io>
      Reviewed-by: Hao Wu <gfphoenix78@gmail.com>
      Reviewed-by: Asim R P <apraveen@pivotal.io>
    • Enlarge timeout in isolation2:pg_ctl UDF (#9991) · 994cc7ce
      Paul Guo committed
      Currently this UDF might report a false positive if the node is still starting
      up after timeout since currently pg_ctl returns 0 for this case. This behavior
      is changed in upstream with the below patch:
      
      commit f13ea95f
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Wed Jun 28 17:31:24 2017 -0400
      
          Change pg_ctl to detect server-ready by watching status in postmaster.pid.
      
      We've seen some test flakiness due to this issue, since 'pg_ctl
      restart' sometimes needs more time in the pipeline (by default the
      pg_ctl timeout is 60 seconds). Yesterday I found, on a hung job, a
      primary that needed ~4 minutes to finish recovery during 'pg_ctl
      restart' (the test is ao_same_trans_truncate_crash, which enables
      fsync; even though it launches a checkpoint before 'pg_ctl restart',
      the restart still needs a lot of time).
      
      Enlarge the pg_ctl timeout to 600 seconds and add a check of pg_ctl's
      stdout before returning OK in the UDF (this check could be removed
      after the PG 12 merge finishes, so I added a FIXME there).
      
      Here is the output of the pg_ctl experiment:
      
      $ pg_ctl -l postmaster.log -D /data/gpdb7/gpAux/gpdemo/datadirs/dbfast1/demoDataDir0 -w -m immediate restart -t 1
      waiting for server to shut down.... done
      server stopped
      waiting for server to start.... stopped waiting
      server is still starting up
      $ echo $?
      0
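      The UDF's extra check can be sketched as follows: trust the exit
      code only when stdout does not contain the "still starting up"
      message shown above (a sketch; the actual UDF lives in the
      isolation2 framework):

```python
def pg_ctl_restart_ok(output, returncode):
    """Old pg_ctl can exit 0 while the server is still starting up, so
    the exit code alone is not trustworthy: also scan its stdout."""
    if returncode != 0:
        return False
    return "server is still starting up" not in output
```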
      Reviewed-by: Asim R P <apraveen@pivotal.io>
      
      Cherry-picked from 934d87c6
  11. 25 Apr 2020, 3 commits
    • docs - updates for PostGIS 2.5.4 (#9989) · 591d1a59
      Mel Kiyama committed
      * docs - updates for PostGIS 2.5.4
      
      -Updated version 2.1.5 -> 2.5.4
      -Updated references to PostGIS topics
      -Removed geometry and geography limitation
      -Added enhancement for postgis_manager.sh script - support installing PostGIS in a non-default schema
      -Added examples of updating postgis.enable_outdb_rasters and postgis.gdal_enabled_drivers for a GPDB session
      -Remove ANALYZE limitation for user-defined data types
      -Added a note that the _postgis_index_extent function is not supported
      -Added a note that the <-> operator returns centroid/centroid distance
      -Changed the name of the PostGIS aggregate function ST_MemCollect to ST_Collect
      
      * docs - fix typo
      
      * docs - minor changes to topic hierarchy.
      
      * docs - one more minor hierarchy change.
      
      * docs - update external URLs
      
      * docs - fix typo, minor cleanup
      
      * docs - found strange character
    • docs - add Note about host requirements to segment mirror overview (#9971) · 1a959ff4
      Mel Kiyama committed
      * docs - add Note about host requirements to segment mirror overview
      
      -also edited some related topics and changed some links
      -fixed a bad link to the gpcheck utility (removed in GPDB 6)
      
      * docs - minor edit
      Co-authored-by: David Yozie <dyozie@pivotal.io>
    • docs - add information about view dependency (#9952) · f373c526
      Mel Kiyama committed
      * docs - add information about view dependency
      
      -example queries to find dependency information
      -catalog table information
      -best practices
      
      * docs - minor edits and fixes.
      
      * docs - fix minor typo.
      
      * docs - another minor edit.
  12. 24 Apr 2020, 2 commits
  13. 23 Apr 2020, 3 commits
  14. 21 Apr 2020, 1 commit
    • Fix getDtxCheckPointInfo to contain all committed transactions (#9942) · 05913102
      Hao Wu committed
      Half-committed transactions in shmCommittedGxactArray were omitted.
      The bug could cause data loss/inconsistency: if transaction T1 failed
      commit-prepared for some reason, and T1 has been committed on the
      master and the other segments but isn't appended to the checkpoint
      record, then DTX recovery can't retrieve the transaction and run
      recovery-commit-prepared, and the prepared transaction on the segment
      is aborted.
      Co-authored-by: Gang Xiong <gxiong@pivotal.io>
  15. 20 Apr 2020, 4 commits
    • Improve efficiency in pg_lock_status() · 66e7f812
      Zhenghua Lyu committed
      Allocate the memory of CdbPgResults.pg_results with palloc0() instead
      of calloc(), and free the memory after use.
      
      The CdbPgResults.pg_results array that is returned from various dispatch
      functions is allocated by cdbdisp_returnResults() via calloc(), but in
      most cases the memory is not free()-ed after use.
      
      To avoid memory leak, the array is now allocated with palloc0() and
      recycled with pfree().
      
      Track which row and which result set are being processed in the
      function context of pg_lock_status(), so that an inefficient inner
      loop can be eliminated.
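      The eliminated inner loop corresponds to resuming iteration from a
      saved position rather than rescanning from the start on each call; a
      Python sketch of the idea (the real code is C in pg_lock_status()):

```python
def pg_lock_status_rows(result_sets):
    """Yield every row once by remembering the current (result set, row)
    position, as the function context does across pg_lock_status() calls,
    instead of rescanning all result sets for each output row."""
    set_idx, row_idx = 0, 0            # saved in the function context
    while set_idx < len(result_sets):
        rows = result_sets[set_idx]
        if row_idx < len(rows):
            yield rows[row_idx]
            row_idx += 1
        else:
            set_idx, row_idx = set_idx + 1, 0
```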
      
      Author: Fang Zheng <zhengfang.xjtu@gmail.com>
    • Do not push Volatile funcs below aggs · 89308890
      Sambitesh Dash committed
      Consider the scenario below
      
      ```
      create table tenk1 (c1 int, ten int);
      create temp sequence ts1;
      explain select * from (select distinct ten from tenk1) ss where ten < 10 + nextval('ts1') order by 1;
      ```
      
      The filter outside the subquery is a candidate to be pushed below the
      'distinct' in the subquery. But since 'nextval' is a volatile
      function, we should not push it.
      
      Volatile functions give different results with each execution. We
      don't want aggs to use the result of a volatile function before it is
      necessary. We do this for all aggs: DISTINCT and GROUP BY.
      
      Also see commit 6327f25d.
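      The planner-side rule amounts to a volatility gate on the qual before
      it may move below the aggregation. Sketched here with a hard-coded
      function set for illustration; the real check walks the expression
      tree (e.g. PostgreSQL's contain_volatile_functions()):

```python
VOLATILE_FUNCS = {"nextval", "random", "timeofday"}  # illustrative subset

def can_push_below_agg(funcs_in_qual):
    """A qual may be pushed below DISTINCT/GROUP BY only if every function
    it calls is non-volatile; pushing a volatile qual would evaluate it a
    different number of times and change the results."""
    return all(f not in VOLATILE_FUNCS for f in funcs_in_qual)
```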
    • Fix memory leak and bug in checkpointer process (#9766) · fa07e427
      Hao Wu committed
      1. The CurrentMemoryContext in CreateCheckPoint is a long-lived
      memory context owned by the checkpointer process, and it is reset
      only when an error occurs, so any allocation that is not freed turns
      into a huge memory leak at the OS level.
      2. The prepared transactions are not appended as an extension to the
      checkpoint WAL record, which introduces a bug. It occurs when:
        1) T1 does some DML and is prepared.
        2) T2 runs a checkpoint.
        3) seg0 crashes before transaction T1 successfully runs `COMMIT PREPARED`,
           but all other segments have committed successfully.
        4) When running local recovery, seg0 doesn't know T1 is prepared
           and needs to be committed again.
      This is different from the master branch, which uses files to record
      whether any prepared transactions exist and what they are.
      Reviewed-by: Ning Yu <nyu@pivotal.io>
      Reviewed-by: Gang Xiong <gxiong@pivotal.io>
    • Fix a bug when setting DistributedLogShared->oldestXmin · e246d777
      xiong-gang committed
      The shared oldestXmin (DistributedLogShared->oldestXmin) may be
      updated concurrently. It should only ever be set to a higher value,
      because a higher xmin can belong to another distributed log segment
      whose older segments might already be truncated.
      
      For Example: txA and txB call DistributedLog_AdvanceOldestXmin concurrently.
      
      ```
      txA and txB: both hold shared DistributedLogTruncateLock.
      
      txA: set the DistributedLogShared->oldestXmin to XminA. TransactionIdToSegment(XminA) = 0009
      
      txB: set the DistributedLogShared->oldestXmin to XminB. TransactionIdToSegment(XminB) = 0008
      
      txA: truncate segment 0008, 0007...
      ```
      
      After that, DistributedLogShared->oldestXmin == XminB, which is on
      the removed segment 0008. Subsequent GetSnapshotData() calls will
      fail because SimpleLruReadPage will error out.
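      The fix amounts to advancing the shared value monotonically under a
      lock; a Python sketch (GPDB operates on the shared struct in C, and
      real xid comparison must account for wraparound, e.g. via
      TransactionIdFollows):

```python
import threading

_oldest_xmin = 0    # stands in for DistributedLogShared->oldestXmin
_lock = threading.Lock()

def advance_oldest_xmin(new_xmin):
    """Only move oldestXmin forward: a concurrent updater holding an
    older xmin must not regress the value onto a distributed-log segment
    that may already have been truncated."""
    global _oldest_xmin
    with _lock:
        if new_xmin > _oldest_xmin:
            _oldest_xmin = new_xmin
        return _oldest_xmin
```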
      Co-authored-by: dh-cloud <60729713+dh-cloud@users.noreply.github.com>