1. 12 May 2020, 2 commits
    • Fix multiple definition linker error · 5b13a9ca
      Committed by Jesse Zhang
      Looks like we were missing an "extern" in two places. While I was at it,
      also tidy up guc_gp.c by moving the definition of Debug_resource_group
      into cdbvars.c, and add declaration of
      gp_encoding_check_locale_compatibility to cdbvars.h.
      
      This was uncovered by building with GCC 10 and Clang 11, where
      -fno-common is the new default [1][2][3] (vis-à-vis -fcommon). I could
      also reproduce this by turning on "-fno-common" in older releases of GCC
      and Clang.
      
      We were relying on a myth (or legacy compiler behavior, rather) that C
      tentative definitions act _just like_ declarations -- in plain English:
      missing an "extern" in a global variable declaration-wannabe wouldn't
      harm you, as long as you don't put an initial value after it.
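
      A minimal sketch of the failure mode (the variable name is borrowed from
      the message above; the real GUC's type and placement may differ):

      ```c
      /* a.c */
      int Debug_resource_group;        /* tentative definition: no "extern", no initializer */

      /* b.c */
      int Debug_resource_group;        /* second tentative definition of the same symbol */

      /*
       * With the old -fcommon default, both objects emit a "common" symbol and
       * the linker quietly merges them.  With -fno-common (the GCC 10 / Clang 11
       * default), each object carries a strong definition, so linking a.o and
       * b.o fails with "multiple definition of `Debug_resource_group'".
       *
       * The fix: exactly one definition, declarations everywhere else.
       */

      /* cdbvars.h */
      extern int Debug_resource_group; /* declaration only, no storage */

      /* cdbvars.c */
      int Debug_resource_group;        /* the single definition */
      ```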
      
      This resolves #10072.
      
      [1] "3.17 Options for Code Generation Conventions: -fcommon"
      https://gcc.gnu.org/onlinedocs/gcc-10.1.0/gcc/Code-Gen-Options.html#index-tentative-definitions
      [2] "Porting to GCC 10" https://gcc.gnu.org/gcc-10/porting_to.html
      [3] "[Driver] Default to -fno-common for all targets" https://reviews.llvm.org/D75056
      
      (cherry picked from commit ee7eb0e8)
    • Limit DPE stats to groups with unresolved partition selectors (#9988) · dddd8366
      Committed by Hans Zeller
      DPE stats are computed when we have a dynamic partition selector that's
      applied on another child of a join. The current code continues to use
      DPE stats even for the common ancestor join and nodes above it, but
      those nodes aren't affected by the partition selector.
      
      Regular Memo groups pick the best expression among several to compute
      stats, which makes row count estimates more reliable. We don't have
      that luxury with DPE stats, so they are often less reliable.
      
      By minimizing the places where we use DPE stats, we should overall get
      more reliable row count estimates with DPE stats enabled.
      
      The fix also ignores DPE stats with row counts greater than the group
      stats: partition selectors only eliminate partitions, so they cannot
      increase the row count.
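
      A sketch of that rule in C (illustrative only, not the ORCA source):

      ```c
      /* Partition selectors can only filter rows away, so a DPE estimate that
       * exceeds the regular group estimate is implausible and is discarded. */
      static double
      derive_dpe_rows(double dpe_rows, double group_rows)
      {
          if (dpe_rows > group_rows)
              return group_rows;   /* ignore DPE stats, fall back to group stats */
          return dpe_rows;
      }
      ```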
  2. 09 May 2020, 11 commits
  3. 08 May 2020, 3 commits
    • Fix possible crash in COPY FROM on QEs · e9c00b80
      Committed by Zhenghua Lyu
      Target partitions need new ResultRelInfos, which override the previous
      estate->es_result_relation_info in NextCopyFromExecute(). The new
      ResultRelInfo may leave its resultSlot as NULL. If SREH is on, parsing
      errors are caught and we loop back to parse another row; however,
      estate->es_result_relation_info has already been changed. This can cause
      a crash.
      
      Reproduce:
      
      ```sql
      CREATE TABLE partdisttest(id INT, t TIMESTAMP, d VARCHAR(4))
      DISTRIBUTED BY (id)
      PARTITION BY RANGE (t)
      (
        PARTITION p2020 START ('2020-01-01'::TIMESTAMP) END ('2021-01-01'::TIMESTAMP),
        DEFAULT PARTITION extra
      );
      
      COPY partdisttest FROM STDIN LOG ERRORS SEGMENT REJECT LIMIT 2;
      1	'2020-04-15'	abcde
      1	'2020-04-15'	abc
      \.
      ```
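
      A hypothetical sketch of the pattern described above (the types and
      helper are simplified stand-ins, not the actual NextCopyFromExecute()
      code):

      ```c
      #include <stdbool.h>

      /* Stand-ins for the real executor structs. */
      typedef struct ResultRelInfo { void *resultSlot; } ResultRelInfo;
      typedef struct EState { ResultRelInfo *es_result_relation_info; } EState;

      /* Route one row to its target partition; restore the previous result
       * relation when SREH swallows a parse error, so the next iteration never
       * starts out pointing at a partition ResultRelInfo whose resultSlot was
       * never set up. */
      static bool
      copy_one_row(EState *estate, ResultRelInfo *partition, bool parse_ok)
      {
          ResultRelInfo *saved = estate->es_result_relation_info;

          estate->es_result_relation_info = partition;
          if (!parse_ok)
          {
              /* SREH caught the error; undo the switch before looping back */
              estate->es_result_relation_info = saved;
              return false;
          }
          return true;
      }
      ```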
      Authored-by: ggbq <taos.alias@outlook.com>
    • Fix a spinlock leak for fault injector · d1b10d64
      Committed by Pengzhou Tang
      This is a backport of commit 2a7b2bf6 from master.
    • Reduce flakiness of orafce dbms_pipe test · 3af91fa2
      Committed by Soumyadeep Chakraborty
      The dbms_pipe_session_{A,B} tests are flaky in CI because session B may
      call receiveFrom() before session A has even called
      createImplicitPipe(). This leads to flaky test failures such as:
      
      --- /tmp/build/e18b2f02/gpdb_src/gpcontrib/orafce/expected/dbms_pipe_session_B.out	2020-04-20 17:02:27.270832458 +0000
      +++ /tmp/build/e18b2f02/gpdb_src/gpcontrib/orafce/results/dbms_pipe_session_B.out	2020-04-20 17:02:27.278832994 +0000
      @@ -7,14 +7,6 @@
      
       -- Receives messages sent via an implicit pipe
       SELECT receiveFrom('named_pipe');
      -NOTICE:  RECEIVE 11: Message From Session A
      -NOTICE:  RECEIVE 12: 01-01-2013
      -NOTICE:  RECEIVE 13: Tue Jan 01 09:00:00 2013 PST
      -NOTICE:  RECEIVE 23: \201
      -NOTICE:  RECEIVE 24: (2,rob)
      -NOTICE:  RECEIVE 9: 12345
      -NOTICE:  RECEIVE 9: 12345.6789
      -NOTICE:  RECEIVE 9: 99999999999
        receivefrom
       -------------
      
      @@ -152,12 +144,13 @@
       ORDER BY 1;
             name      | items | limit | private |      owner
       ----------------+-------+-------+---------+-----------------
      + named_pipe     |     9 |    10 | f       |
        pipe_name_3    |     1 |       | f       |
        private_pipe_1 |     0 |    10 | t       | pipe_test_owner
        private_pipe_2 |     9 |    10 | t       | pipe_test_owner
        public_pipe_3  |     0 |    10 | f       |
        public_pipe_4  |     0 |    10 | f       |
      -(5 rows)
      +(6 rows)
      
      This commit introduces an explicit sleep at the start of session B to
      give session A a better chance to run.
      Co-authored-by: Jesse Zhang <jzhang@pivotal.io>
      (cherry picked from commit 985c5e2b)
  4. 07 May 2020, 3 commits
    • Add LockTagTypeNames for distributed xid · 38ca0dc2
      Committed by dh-cloud
      Commit 13a1f66f forgot to update the corresponding LockTagTypeNames
      array; this commit fixes that.
    • Improve handling of -I input file · e055c67d
      Committed by Bhuvnesh Chaudhary
      Previously, gpinitsystem did not allow the user to specify a hostname
      and address for each segment in the input file used with -I; it only
      accepted one value per segment and used it for both hostname and
      address.
      
      This commit changes the behavior so that the user can specify both
      hostname and address. If the user specifies only the address (such as by
      using an old config file), it will preserve the old behavior and set
      both hostname and address to that value. It also adds a few tests around
      input file parsing so SET_VAR is more resilient to further refactors.
      
      The specific changes involved are the following:
      
      1) Change SET_VAR to be able to parse either the old format (address
      only) or new format (host and address) of the segment array
      representation.
      2) Move SET_VAR from gpinitsystem to gp_bash_functions.sh and remove the
      redundant copy in gpcreateseg.sh.
      3) Remove a hardcoded "~0" in QD_PRIMARY_ARRAY in gpinitsystem,
      representing a replication port value, that was left over from 5X.
      4) Improve the check for the number of fields in the segment array
      representation.
      
      Also, remove the use of the ignore-warning flag and use [[ ]] for the
      IGNORE_WARNING check.
    • gpinitsystem: update catalog with correct hostname · 91771f4f
      Committed by Bhuvnesh Chaudhary
      Previously, gpinitsystem was incorrectly filling the hostname field of each
      segment in gp_segment_configuration with the segment's address. This commit
      changes it to correctly resolve hostnames and update the catalog accordingly.
      
      This reverts commit 12ef7352, Revert "gpinitsystem: update catalog with
      correct hostname".
         Commit message from 12ef7352:
         The commit requires some additional tweaks to the input file logic for
         backwards compatibility purposes, so we're reverting this until the full
         fix is ready.
  5. 06 May 2020, 1 commit
    • 6X Backport: Enable parallel writes for Foreign Data Wrappers · 86f6c666
      Committed by Francisco Guerrero
      This commit enables parallel writes for Foreign Data Wrappers. The
      feature is currently missing from the FDW framework: parallel scans are
      supported, but parallel writes are not. FDW parallel writes are
      analogous to writing to writable external tables that run on all
      segments.
      
      One caveat is that in the external table framework, writable tables
      support a distribution policy:
      
          CREATE WRITABLE EXTERNAL TABLE foo (id int)
          LOCATION ('....')
          FORMAT 'CSV'
          DISTRIBUTED BY (id);
      
      In foreign tables, the distribution policy cannot be defined during the
      table definition, so we assume random distribution for all foreign
      tables.
      
      Parallel writes are enabled only when the foreign table's exec_location
      is set to FTEXECLOCATION_ALL_SEGMENTS. For foreign tables that run on
      the master or on any segment, the current policy behavior remains.
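
      A simplified sketch of that gating (the enum and helper below are
      stand-ins, not the actual FDW framework code):

      ```c
      #include <stdbool.h>

      /* Stand-in for the execution-location choices mentioned above. */
      typedef enum
      {
          FTEXECLOCATION_ANY,            /* master or any one segment */
          FTEXECLOCATION_MASTER,         /* master only */
          FTEXECLOCATION_ALL_SEGMENTS    /* every segment */
      } FtExecLocation;

      /* Parallel (all-segment) writes kick in only for all-segment foreign
       * tables; master/any keep the existing single-writer behavior. */
      static bool
      use_parallel_foreign_write(FtExecLocation exec_location)
      {
          return exec_location == FTEXECLOCATION_ALL_SEGMENTS;
      }
      ```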
  6. 05 May 2020, 1 commit
  7. 02 May 2020, 2 commits
  8. 01 May 2020, 4 commits
  9. 30 April 2020, 2 commits
  10. 29 April 2020, 4 commits
  11. 28 April 2020, 6 commits
    • Fix a bug where the reader gang always fails due to a missing writer gang (#9828) · 7a665560
      Committed by Paul Guo
      The reason is that a newly created reader gang would fail on the QE due
      to the missing writer gang process in the locking code, and a retry would
      fail again for the same reason, since the cached writer gang is still
      used: the QD does not know or check the real libpq network status. See
      below for the repro case.
      
      Fix this by checking the error message and resetting all gangs when that
      message is seen, similar to the logic that checks the startup/recovery
      message in the gang creation function. We could have other fixes, e.g.
      checking the writer gang's network status, but those turned out to be
      ugly after trying them.
      
      create table t1(f1 int, f2 text);
      <kill -9 one idle QE>
      
      insert into t1 values(2),(1),(5);
      ERROR:  failed to acquire resources on one or more segments
      DETAIL:  FATAL:  reader could not find writer proc entry, lock [0,1260] AccessShareLock 0 (lock.c:874)
       (seg0 192.168.235.128:7002)
      
      insert into t1 values(2),(1),(5);
       ERROR:  failed to acquire resources on one or more segments
       DETAIL:  FATAL:  reader could not find writer proc entry, lock [0,1260] AccessShareLock 0 (lock.c:874)
        (seg0 192.168.235.128:7002)
      
      <-- Above query fails again.
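
      A rough sketch of the error-message check described in the fix above
      (the helper name is a stand-in, not the actual gang-management code):

      ```c
      #include <stdbool.h>
      #include <string.h>

      /* In the same spirit as the existing startup/recovery-message check in
       * the gang creation path: recognize this specific QE error and have the
       * caller drop all cached gangs, so a retry builds a fresh writer gang
       * plus readers instead of reusing the stale writer gang. */
      static bool
      should_reset_all_gangs(const char *qe_error_message)
      {
          return strstr(qe_error_message,
                        "reader could not find writer proc entry") != NULL;
      }
      ```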
      
      Cherry-picked from 24f16417 and a0a5b4d5
    • Let FTS tolerate the in-progress 'starting up' case on primary nodes · 5222ad86
      Committed by Paul Guo
      Commit d453a4aa implemented this for the crash recovery case (not
      marking the node down and therefore not promoting the mirror). We should
      do the same for the usual "starting up" case (i.e. CAC_STARTUP), in
      addition to the existing "in recovery mode" case (i.e. CAC_RECOVERY).
      
      We've seen FTS promote the mirror of a "starting up" primary during
      isolation2 testing due to 'pg_ctl restart'. In this patch we check
      recovery progress for both CAC_STARTUP and CAC_RECOVERY during the FTS
      probe and thus avoid this.
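
      A simplified sketch of the check (the enum values are the ones named
      above; the helper is a stand-in, not the actual FTS probe code):

      ```c
      #include <stdbool.h>

      /* Subset of the postmaster connection-acceptance states named above. */
      typedef enum
      {
          CAC_OK,
          CAC_STARTUP,     /* postmaster still starting up */
          CAC_RECOVERY     /* crash recovery in progress */
      } CAC_state;

      /* During an FTS probe, treat "starting up" like "in recovery": check
       * the primary's recovery progress instead of marking it down and
       * promoting the mirror. */
      static bool
      fts_should_check_recovery_progress(CAC_state cac)
      {
          return cac == CAC_STARTUP || cac == CAC_RECOVERY;
      }
      ```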
      Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
      
      cherry-picked from d71b3afd
      
      On master the commit message was eliminated by mistake. Added back on gpdb6.
    • Remove forceEos mechanism for TCP interconnect · 7cf0ac40
      Committed by Pengzhou Tang
      In the TCP interconnect, the sender used to force an EOS message to the
      receiver in two cases:
      1. cancelUnfinished is true in mppExecutorFinishup.
      2. an error occurs.
      
      For case 1, the comment says: to finish a cursor, the QD used to send
      a cancel to the QEs; the QEs then set the cancelUnfinished flag and did
      a normal executor finish-up. We now use the QueryFinishPending mechanism
      to stop a cursor, so the case 1 logic has been invalid for a long time.
      
      For case 2, the purpose is: when an error occurs, we force an EOS to
      the receiver so that the receiver doesn't report an interconnect error;
      the QD will then check the dispatch results and report the errors from
      the QEs. From the interconnect's point of view, we have selected to the
      end of the query with no interconnect error. This logic has two
      problems:
      1. it doesn't work for initplans: an initplan will not check the
      dispatch results and throw the errors, so when an error occurs in the
      QEs for the initplan, the QD cannot notice it.
      2. it doesn't work for cursors, for example:
         DECLARE c1 cursor for select i from t1 where i / 0 = 1;
         FETCH all from c1;
         FETCH all from c1;
      None of the FETCH commands report errors, which is not expected.
      
      This commit removes the forceEos mechanism. For case 2, the receiver
      will now report an interconnect error without forceEos; this is OK
      because when multiple errors are reported from the QEs, the QD is
      inclined to report the non-interconnect error.
    • Remove redundant 'hasError' flag in TeardownTCPInterconnect · d093c024
      Committed by Pengzhou Tang
      This flag duplicates 'forceEOS', which can also tell whether an error
      occurred or not.
    • Fix a race condition in flushBuffer · 3e1cc863
      Committed by Pengzhou Tang
      flushBuffer() is used to send packets through the TCP interconnect.
      Before sending, it first checks whether the receiver has stopped or torn
      down the interconnect; however, there is a window between checking and
      sending in which the receiver may tear down the interconnect and close
      the peer, so send() will report an error. To resolve this, we recheck
      whether the receiver stopped or tore down the interconnect when that
      happens and don't error out in that case.
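
      A sketch of that recheck pattern (simplified, with stand-in helpers; not
      the actual ic_tcp.c code):

      ```c
      #include <stdbool.h>

      /* Stand-ins for the real interconnect state checks and send call. */
      static bool receiver_stopped_or_torn_down(void) { return false; }
      static long send_packet(const char *buf, long len) { (void) buf; return len; }

      /* Returns false when the receiver is gone (a normal stop), true on
       * success; only a send failure with the receiver still alive is a real
       * error. */
      static bool
      flush_buffer(const char *buf, long len)
      {
          if (receiver_stopped_or_torn_down())
              return false;                 /* stop sending, not an error */

          if (send_packet(buf, len) < 0)
          {
              /* The receiver may have torn down the interconnect in the window
               * between the check above and the send; recheck before erroring. */
              if (receiver_stopped_or_torn_down())
                  return false;
              /* otherwise: report a genuine interconnect error here */
          }
          return true;
      }
      ```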
      Reviewed-by: Jinbao Chen <jinchen@pivotal.io>
      Reviewed-by: Hao Wu <hawu@pivotal.io>
    • Fix interconnect hang issue · 7c90c04f
      Committed by Pengzhou Tang
      We have hit interconnect hang issues many times in many cases, all with
      the same pattern: the downstream interconnect motion senders keep
      sending tuples, blind to the fact that upstream nodes have already
      finished and quit execution; the QD has received enough tuples and
      waits for all QEs to quit, which causes a deadlock.
      
      Many nodes may quit execution early, e.g. LIMIT, Hash Join, Nested
      Loop. To resolve the hang, they need to stop the interconnect stream
      explicitly by calling ExecSquelchNode(); however, we cannot do that for
      rescan cases, in which data might be lost, e.g. commit 2c011ce4. For
      rescan cases, we tried using QueryFinishPending to stop the senders in
      commit 02213a73, letting the senders check this flag and quit. That
      commit has its own problems: firstly, QueryFinishPending can only be
      set by the QD, so it doesn't work for INSERT or UPDATE cases; secondly,
      that commit only lets the senders detect the flag and quit the loop in
      a rude way (without sending the EOS to the receiver), so the receiver
      may still be stuck receiving tuples.
      
      This commit first reverts the QueryFinishPending method.
      
      To resolve the hang, we move TeardownInterconnect ahead of
      cdbdisp_checkDispatchResult, so the interconnect stream is guaranteed
      to be stopped before we wait on and check the status of the QEs.
      
      For UDPIFC, TeardownInterconnect() removes the ic entries; any packets
      for this interconnect context will be treated as 'past' packets and
      acked with the STOP flag.
      
      For TCP, TeardownInterconnect() closes all connections with its
      children; the children treat any readable data on the connection,
      including the closure itself, as a STOP message.
      
      A test case is not included; both commit 2c011ce4 and commit 02213a73
      contain one.
  12. 27 April 2020, 1 commit
    • Fix a bug where a two-phase sub-transaction is treated as one-phase · 48ffabce
      Committed by Paul Guo
      QD backend should not forget whether a sub transaction performed writes
      
      A QD backend process can avoid two-phase commit overhead if it knows
      that no QE involved in this transaction or any of its sub-transactions
      performed any writes. Previously, if a sub-transaction performed a write
      on one or more QEs, that was remembered in the sub-transaction's state.
      However, that state was lost after the sub-transaction committed, which
      resulted in the QD not performing two-phase commit at the end of the top
      transaction.
      
      In fact, regardless of the transaction nesting level, we only need to
      remember whether a write was performed. Therefore, use a backend-global
      variable, instead of the current transaction state, to record this
      information.
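
      A minimal sketch of the approach (variable and function names are
      stand-ins, not the actual xact.c code):

      ```c
      #include <stdbool.h>

      /* Backend-global: survives sub-transaction commit, unlike per-subxact
       * transaction state. */
      static bool TopXactExecutedQEWrites = false;

      /* Called whenever a QE reports a write, at any nesting level. */
      void
      RememberQEWrite(void)
      {
          TopXactExecutedQEWrites = true;
      }

      /* Deliberately empty: the flag must NOT be cleared on subxact commit,
       * which is exactly the information that used to get lost. */
      void
      AtSubCommit_QEWrites(void)
      {
      }

      /* The top-level commit consults the flag to decide on two-phase commit. */
      bool
      TopTransactionNeedsTwoPhase(void)
      {
          return TopXactExecutedQEWrites;
      }

      /* Reset only at the end of the top-level transaction. */
      void
      AtEOXact_QEWrites(void)
      {
          TopXactExecutedQEWrites = false;
      }
      ```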
      Reviewed-by: Gang Xiong <gxiong@pivotal.io>
      Reviewed-by: Hao Wu <gfphoenix78@gmail.com>
      Reviewed-by: Asim R P <apraveen@pivotal.io>