1. 29 Sep 2020, 1 commit
  2. 26 Sep 2020, 1 commit
  3. 22 Sep 2020, 1 commit
  4. 21 Sep 2020, 1 commit
    • Fix interconnect hung issue (#10757) · 3bef5530
      Committed by Jinbao Chen
      We have hit the interconnect hang issue many times in many cases, and
      all of them share the same pattern: the downstream interconnect motion
      senders keep sending tuples, blind to the fact that the upstream nodes
      have already finished and quit execution; the QD has received enough
      tuples and waits for all QEs to quit, which causes a deadlock.
      
      Many nodes may quit execution early, e.g. LIMIT, Hash Join, Nested
      Loop. To resolve the hang, they need to stop the interconnect stream
      explicitly by calling ExecSquelchNode(); however, we cannot do that for
      rescan cases, in which data might be lost, e.g. commit 2c011ce4. For
      rescan cases, we tried using QueryFinishPending to stop the senders in
      commit 02213a73, letting the senders check this flag and quit. That
      commit has its own problems: firstly, QueryFinishPending can only be
      set by the QD, so it does not work for INSERT or UPDATE cases;
      secondly, it only lets the senders detect the flag and quit the loop in
      a rude way (without sending the EOS to the receiver), so the receiver
      may still be stuck receiving tuples.
      
      This commit first reverts the QueryFinishPending method.
      
      To resolve the hang, we move TeardownInterconnect ahead of
      cdbdisp_checkDispatchResult, which guarantees that the interconnect
      stream is stopped before we wait for and check the status of the QEs.
      
      For UDPIFC, TeardownInterconnect() removes the interconnect entries;
      any packets for this interconnect context are treated as 'past'
      packets and are acked with the STOP flag.
      
      For TCP, TeardownInterconnect() closes all connections with its
      children; the children treat any readable data in the connection,
      including the closure itself, as a STOP message.
      
      This commit is backported from master commit ec1d9a70.
      3bef5530
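
      The ordering described above follows a general pattern: shut the data
      channel down before waiting for the workers that feed it, so a sender
      that is still pushing data gets an explicit stop signal instead of
      blocking forever. Below is a minimal, self-contained sketch of that
      pattern using plain POSIX pipes and threads; it is an analogy only and
      does not use the GPDB interconnect APIs.

      ```
      #include <pthread.h>
      #include <signal.h>
      #include <stdio.h>
      #include <unistd.h>

      /* Producer: keeps writing "tuples" until the reader closes its end. */
      static void *producer(void *arg)
      {
          int fd = *(int *) arg;
          const char tuple[] = "row\n";

          for (;;)
          {
              if (write(fd, tuple, sizeof(tuple)) < 0)
                  break;      /* EPIPE: the consumer tore the channel down */
          }
          close(fd);
          return NULL;
      }

      int main(void)
      {
          int fds[2];
          char buf[64];
          pthread_t tid;

          signal(SIGPIPE, SIG_IGN);   /* write() fails with EPIPE instead of killing us */
          pipe(fds);
          pthread_create(&tid, NULL, producer, &fds[1]);

          read(fds[0], buf, sizeof(buf));   /* consume "enough" tuples, then stop early */

          /* Tear the channel down FIRST ... */
          close(fds[0]);
          /* ... and only then wait for the sender; the reverse order can deadlock. */
          pthread_join(tid, NULL);
          puts("done");
          return 0;
      }
      ```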
  5. 19 Sep 2020, 2 commits
    • Refactor query string truncation on top of 889ba39e · e393c88b
      Committed by Asim R P
      Commit 889ba39e fixed the query string truncation in the dispatcher to
      make it locale-aware.  This patch refactors that change so as to avoid
      accessing the string beyond its length.
      
      Reviewed by: Heikki, Ning Yu and Polina Bungina
      
      (cherry picked from commit abf6b330)
      e393c88b
    • Fix query string truncation while dispatching to QE · b76d049b
      Committed by Polina Bungina
      Execution of a sufficiently long query containing multi-byte characters
      can cause incorrect truncation of the query string. Incorrect
      truncation implies an occasional cut through the middle of a multi-byte
      character and (with log_min_duration_statement set to 0) a subsequent
      write of an invalid symbol to the segment logs. Such a broken character
      in the logs causes problems when fetching log info from the
      gp_toolkit.__gp_log_segment_ext table: queries fail with the error
      "ERROR: invalid byte sequence for encoding...".
      This is caused by the buildGpQueryString function in `cdbdisp_query.c`,
      which prepares the query text for dispatch to the QEs. It does not take
      character boundaries into account when truncation is necessary (i.e.
      when the text is longer than QUERY_STRING_TRUNCATE_SIZE).
      
      (cherry picked from commit f31600e9)
      b76d049b
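
      A minimal, self-contained sketch of the kind of multibyte-aware
      truncation described above, assuming UTF-8 input; the helper names
      (utf8_char_len, truncate_query) are illustrative and this is not the
      actual buildGpQueryString code.

      ```
      #include <stdio.h>
      #include <string.h>

      /* Length in bytes of the UTF-8 character starting at *s (1..4). */
      static size_t utf8_char_len(const unsigned char *s)
      {
          if (*s < 0x80) return 1;
          if ((*s & 0xE0) == 0xC0) return 2;
          if ((*s & 0xF0) == 0xE0) return 3;
          if ((*s & 0xF8) == 0xF0) return 4;
          return 1;                       /* invalid byte: treat as a single byte */
      }

      /*
       * Copy at most 'limit' bytes of 'query' into 'dst', but never cut a
       * multi-byte character in half: stop at the last complete character
       * that fits.  'dst' must have room for limit + 1 bytes.
       */
      static void truncate_query(char *dst, const char *query, size_t limit)
      {
          size_t used = 0;

          while (query[used] != '\0')
          {
              size_t clen = utf8_char_len((const unsigned char *) &query[used]);

              if (used + clen > limit)
                  break;                  /* next character would not fit whole */
              used += clen;
          }
          memcpy(dst, query, used);
          dst[used] = '\0';
      }

      int main(void)
      {
          char buf[8];

          /* "héllo" is 6 bytes in UTF-8; a 2-byte limit must not split 'é'. */
          truncate_query(buf, "h\xC3\xA9llo", 2);
          printf("%s (%zu bytes)\n", buf, strlen(buf));
          /* prints "h (1 bytes)"; a naive 2-byte cut would keep half of 'é' */
          return 0;
      }
      ```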
  6. 18 Sep 2020, 2 commits
    • Don't dispatch client_encoding to QE · 9a6cd1ee
      Committed by xiong-gang
      When client_encoding is dispatched to the QEs, error messages generated
      on the QEs are converted to client_encoding, but the QD assumes they
      are in the server encoding, which leads to corruption.
      
      This was fixed on 6X in a6c9b4, but this backport skips the gpcopy
      changes since 5X doesn't support the 'COPY ... ENCODING' syntax.
      
      Fix issue: https://github.com/greenplum-db/gpdb/issues/10815
      9a6cd1ee
    • Align Orca relhasindex behavior with Planner (#10788) · 8083a046
      Committed by David Kimura
      Function `RelationGetIndexList()` does not filter out invalid indexes;
      that responsibility is left to the caller (e.g. `get_relation_info()`).
      The issue is that Orca was not checking index validity.
      
      This commit also introduces an optimization to Orca that is already used
      in Planner whereby we first check relhasindex before checking pg_index.
      
      (cherry picked from commit b011c351)
      8083a046
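
      A self-contained sketch of the pattern described above, using
      hypothetical stand-in types (IndexInfoLite, RelInfo) rather than the
      real Orca/planner structures: consult the cheap relhasindex flag first,
      and skip indexes that are not valid.

      ```
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      typedef struct IndexInfoLite
      {
          unsigned oid;
          bool     is_valid;          /* mirrors pg_index.indisvalid */
      } IndexInfoLite;

      typedef struct RelInfo
      {
          bool           has_index;   /* mirrors pg_class.relhasindex */
          size_t         nindexes;
          IndexInfoLite *indexes;     /* what a RelationGetIndexList()-style call returns */
      } RelInfo;

      /*
       * Collect the indexes usable for planning: bail out early when the
       * relation has no indexes at all, and filter out invalid ones, since
       * the index-list lookup does not do that for us.
       */
      static size_t usable_indexes(const RelInfo *rel, unsigned *out, size_t outsz)
      {
          size_t n = 0;

          if (!rel->has_index)
              return 0;               /* cheap check first, skip pg_index entirely */

          for (size_t i = 0; i < rel->nindexes && n < outsz; i++)
          {
              if (!rel->indexes[i].is_valid)
                  continue;           /* e.g. left over from a failed concurrent index build */
              out[n++] = rel->indexes[i].oid;
          }
          return n;
      }

      int main(void)
      {
          IndexInfoLite idx[] = { {1001, true}, {1002, false} };
          RelInfo rel = { true, 2, idx };
          unsigned out[8];
          size_t n = usable_indexes(&rel, out, 8);

          printf("%zu usable index(es), first oid %u\n", n, n ? out[0] : 0);
          return 0;
      }
      ```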
  7. 17 Sep 2020, 2 commits
    • Do not read a persistent tuple after it is freed · 5f765a8e
      Committed by Asim R P
      This bug was found in a production environment where a vacuum on
      gp_persistent_relation was running concurrently with a backend
      performing end-of-xact filesystem operations, and the GUC
      debug_persistent_print was enabled.
      
      The *_ReadTuple() function was called on a persistent TID after the
      corresponding tuple had been deleted with a frozen transaction ID.  The
      concurrent vacuum recycled the tuple, which led to a SIGSEGV when the
      backend tried to access values from the tuple.
      
      Fix it by skipping the debug log message in the case where the
      persistent tuple is being freed (transitioning to the FREE state).  All
      other state transitions are still logged.
      
      In the absence of a concurrent vacuum, things worked just fine because
      the *_ReadTuple() interface reads tuples from persistent tables
      directly using the TID.
      5f765a8e
    • Skip FK check when doing relation truncate · b50c134b
      Committed by Weinan WANG
      GPDB does not support foreign keys, but it keeps the FK grammar in DDL,
      since that reduces the manual workload of migrating databases from
      other systems. Hence, the FK check is not needed for the TRUNCATE
      command; get rid of it.
      b50c134b
  8. 11 Sep 2020, 1 commit
  9. 10 Sep 2020, 5 commits
    • Add .git-blame-ignore-revs · 9b8a2a2f
      Committed by Jesse Zhang
      This file will be used to record commits to be ignored by default by
      git-blame (the user still has to opt in). It is intended to include
      large (generally automated) reformatting or renaming commits.
      
      (cherry picked from commit b19e6abb)
      9b8a2a2f
    • gpstart: skip filespace checks for standby when unreachable · 0ceee69f
      Committed by Kalen Krempely
      When the standby is unreachable and the user proceeds with startup,
      gpstart fails to start when temporary or transaction files have been
      moved to a non-default filespace.
      
      To determine when the standby is unreachable, fetch_tli was reworked to
      raise a StandbyUnreachable exception, and the standby is not started if
      it is unreachable.
      Co-authored-by: Bhuvnesh Chaudhary <bchaudhary@vmware.com>
      0ceee69f
    • behave: test that gpstart continues if standby is unreachable · e037b5ba
      Committed by Jacob Champion
      Add a failing behave test to ensure that gpstart prompts and continues
      successfully if the standby host is unreachable. The subsequent commit
      will fix the test case.
      Co-authored-by: Kalen Krempely <kkrempely@vmware.com>
      e037b5ba
    • 706acd7b
    • Allow direct dispatch in Orca if predicate on column gp_segment_id (#10679) (#10785) · b52d5b9e
      Committed by David Kimura
      This approach special-cases gp_segment_id enough to include the column
      as a distributed column constraint. It also updates the direct dispatch
      info to be aware of gp_segment_id, which holds the raw value of the
      segment where the data resides. This is different from other columns,
      which hash the datum value to decide where the data resides.
      
      After this change, the following example shows a Gather Motion from 2
      segments on a 3-segment demo cluster.
      
      ```
      CREATE TABLE t(a int, b int) DISTRIBUTED BY (a);
      EXPLAIN SELECT gp_segment_id, * FROM t WHERE gp_segment_id=1 or gp_segment_id=2;
                                        QUERY PLAN
      -------------------------------------------------------------------------------
       Gather Motion 2:1  (slice1; segments: 2)  (cost=0.00..431.00 rows=1 width=12)
         ->  Seq Scan on t  (cost=0.00..431.00 rows=1 width=12)
               Filter: ((gp_segment_id = 1) OR (gp_segment_id = 2))
       Optimizer: Pivotal Optimizer (GPORCA)
      (4 rows)
      
      ```
      
      (cherry picked from commit 10e2b2d9)
      
      * Bump ORCA version to 3.110.0
      b52d5b9e
  10. 09 Sep 2020, 4 commits
  11. 04 Sep 2020, 1 commit
  12. 03 Sep 2020, 5 commits
    • Fix formatting issue in answer file · 13baea67
      Committed by Hubert Zhang
      13baea67
    • Bump ORCA version to 3.109, add test cases for corr subq with LOJs (#10512) · c023d9db
      Committed by Hans Zeller
      * Add test cases for correlated subqueries with outer joins
      
      We found several problems with outer references in outer joins and
      related areas, especially when using optimizer_join_order = exhaustive2.
      
      Adding some tests. Please note that due to some remaining problems in
      both ORCA and planner, the tests contain some FIXMEs.
      
      * Bump ORCA version to 3.109.0
      c023d9db
    • Using lwlock to protect resgroup slot in session state · 1e24b618
      Committed by Hubert Zhang
      Resource groups used to access the resGroupSlot in SessionState without
      a lock. That is correct as long as a session only accesses its own
      resGroupSlot. But since we introduced the runaway feature, we need to
      traverse the session array to find the top consumer session when the
      red zone is reached. This requires that:
      1. the runaway detector hold the shared resgroup lock, so that a
      resGroupSlot cannot be detached from a session concurrently while the
      red zone is being handled;
      2. a normal session hold the exclusive lock when modifying the
      resGroupSlot in its SessionState.
      Reviewed-by: Ning Yu <nyu@pivotal.io>
      
      (cherry picked from commit a4cb06b4)
      1e24b618
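
      A self-contained sketch of this reader/writer locking pattern, using a
      POSIX rwlock as a stand-in for the resgroup LWLock; the names below
      (resgroup_lock, Session, find_top_consumer) are illustrative, not the
      real GPDB symbols.

      ```
      #include <pthread.h>
      #include <stdio.h>

      #define MAX_SESSIONS 4

      typedef struct Session
      {
          int slot_memory;            /* stand-in for resGroupSlot memory usage */
          int has_slot;
      } Session;

      static Session sessions[MAX_SESSIONS];
      static pthread_rwlock_t resgroup_lock = PTHREAD_RWLOCK_INITIALIZER;

      /* Runaway detector: traverses all sessions, so it takes the shared lock. */
      static int find_top_consumer(void)
      {
          int top = -1, top_mem = -1;

          pthread_rwlock_rdlock(&resgroup_lock);
          for (int i = 0; i < MAX_SESSIONS; i++)
          {
              if (sessions[i].has_slot && sessions[i].slot_memory > top_mem)
              {
                  top_mem = sessions[i].slot_memory;
                  top = i;
              }
          }
          pthread_rwlock_unlock(&resgroup_lock);
          return top;
      }

      /* A normal session: detaches its own slot, so it takes the exclusive lock. */
      static void detach_slot(int i)
      {
          pthread_rwlock_wrlock(&resgroup_lock);
          sessions[i].has_slot = 0;
          sessions[i].slot_memory = 0;
          pthread_rwlock_unlock(&resgroup_lock);
      }

      int main(void)
      {
          sessions[0] = (Session){ .slot_memory = 10, .has_slot = 1 };
          sessions[1] = (Session){ .slot_memory = 42, .has_slot = 1 };

          printf("top consumer: session %d\n", find_top_consumer());   /* 1 */
          detach_slot(1);
          printf("top consumer: session %d\n", find_top_consumer());   /* 0 */
          return 0;
      }
      ```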
    • Fix resource group runaway rounding issue · e9223710
      Committed by Hubert Zhang
      When calculating safeChunksThreshold for the runaway logic in resource
      groups, we used to divide by 100 to get the number of safe chunks. This
      can cause small chunk counts to be rounded down to zero. Fix it by
      storing safeChunksThreshold100 (100 times the real safe chunk count)
      and doing the division on the fly.
      Reviewed-by: Ning Yu <nyu@pivotal.io>
      (cherry picked from commit 757184f9)
      e9223710
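
      A small, self-contained illustration of the rounding problem and the
      fix described above; the numbers, the comparison direction, and the
      variable names are illustrative rather than the actual resource group
      code.

      ```
      #include <stdio.h>

      int main(void)
      {
          int total_chunks = 30;        /* a small memory quota, in chunks */
          int safe_percent = 2;         /* fraction of the quota considered "safe" */

          /* Old approach: pre-divide by 100.  With small quotas the integer
           * division rounds the threshold down to zero. */
          int safe_chunks = total_chunks * safe_percent / 100;            /* == 0 */

          /* Fixed approach: store the threshold scaled by 100 and keep the
           * division implicit by comparing scaled values on the fly. */
          int safe_chunks_threshold100 = total_chunks * safe_percent;     /* == 60 */

          int free_chunks = 0;

          printf("under threshold, pre-divided: %s\n",
                 free_chunks < safe_chunks ? "yes" : "no");               /* no (0 < 0) */
          printf("under threshold, scaled:      %s\n",
                 free_chunks * 100 < safe_chunks_threshold100 ? "yes" : "no"); /* yes (0 < 60) */
          return 0;
      }
      ```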
    • Correctly use atomic variable in ResGroupControl.freeChunks. (#8434) · 1557fd13
      Committed by Paul Guo
      This variable was accessed with a mix of atomic API calls and direct
      reads/writes. That is usually not wrong in practice, but it is not a
      good implementation, since 1) it relies on the compiler and hardware to
      guarantee the correctness of the direct accesses, and 2) the code is
      not graceful.

      Change it to use the atomic API functions everywhere.
      Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
      (cherry picked from commit f59307f5)
      1557fd13
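
      A self-contained sketch of the consistent style, using C11 stdatomic
      rather than the PostgreSQL atomics wrappers; `free_chunks` below is a
      stand-in for ResGroupControl.freeChunks.

      ```
      #include <stdatomic.h>
      #include <stdio.h>

      /* Stand-in for ResGroupControl.freeChunks: every access below goes
       * through the atomic API, instead of mixing atomic ops with plain
       * reads/writes as the old code did. */
      static atomic_int free_chunks;

      static int reserve_chunk(void)
      {
          int old = atomic_fetch_sub(&free_chunks, 1);
          if (old <= 0)
          {
              atomic_fetch_add(&free_chunks, 1);  /* undo: nothing was free */
              return 0;
          }
          return 1;
      }

      static void release_chunk(void)
      {
          atomic_fetch_add(&free_chunks, 1);
      }

      int main(void)
      {
          atomic_store(&free_chunks, 2);          /* not "free_chunks = 2" */

          /* Two of the three reservations succeed (evaluation order may vary). */
          printf("%d %d %d\n", reserve_chunk(), reserve_chunk(), reserve_chunk());
          release_chunk();
          printf("free: %d\n", atomic_load(&free_chunks));  /* not a plain read */
          return 0;
      }
      ```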
  13. 29 Aug 2020, 2 commits
    • Fix double deduction of FREEABLE_BATCHFILE_METADATA · 567025bd
      Committed by Jesse Zhang
      Earlier, we always deducted FREEABLE_BATCHFILE_METADATA inside
      closeSpillFile(), regardless of whether the spill file was already
      suspended. This deduction is already performed inside
      suspendSpillFiles(). The double accounting leads to
      hashtable->mem_for_metadata becoming negative, and we get:

      FailedAssertion("!(hashtable->mem_for_metadata > 0)", File: "execHHashagg.c", Line: 2019)
      Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
      567025bd
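
      A minimal, self-contained sketch of the accounting guard, with a
      hypothetical struct and constants rather than the real execHHashagg.c
      code: deduct the metadata only once, when the file has not already been
      suspended.

      ```
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      #define FREEABLE_BATCHFILE_METADATA 16  /* illustrative size, in bytes */

      typedef struct SpillFile
      {
          bool suspended;
      } SpillFile;

      static long mem_for_metadata = 2 * FREEABLE_BATCHFILE_METADATA;

      static void suspend_spill_file(SpillFile *f)
      {
          if (!f->suspended)
          {
              mem_for_metadata -= FREEABLE_BATCHFILE_METADATA;
              f->suspended = true;
          }
      }

      static void close_spill_file(SpillFile *f)
      {
          /* Only deduct if suspendSpillFiles()-style code has not already done
           * so; deducting unconditionally here is the double accounting. */
          if (!f->suspended)
              mem_for_metadata -= FREEABLE_BATCHFILE_METADATA;
          assert(mem_for_metadata >= 0);
      }

      int main(void)
      {
          SpillFile a = { false }, b = { false };

          suspend_spill_file(&a);   /* deducted once here ... */
          close_spill_file(&a);     /* ... and not again here */
          close_spill_file(&b);     /* deducted here, since b was never suspended */
          printf("mem_for_metadata = %ld\n", mem_for_metadata);   /* 0 */
          return 0;
      }
      ```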
    • Fix assert condition in spill_hash_table() · 679ed508
      Committed by Jesse Zhang
      This commit fixes the assertion failure reported in issue #9902
      (https://github.com/greenplum-db/gpdb/issues/9902):
      
      FailedAssertion("!(hashtable->nbuckets > spill_set->num_spill_files)", File: "execHHashagg.c", Line: 1355)
      
      hashtable->nbuckets can actually end up being equal to
      spill_set->num_spill_files, which causes the failure. This is because:
      
      hashtable->nbuckets is set from HashAggTableSizes->nbuckets, which can
      end up being equal to gp_hashagg_default_nbatches; refer to:
      nbuckets = Max(nbuckets, gp_hashagg_default_nbatches);

      Also, spill_set->num_spill_files is set from
      HashAggTableSizes->nbatches, which is in turn set to
      gp_hashagg_default_nbatches.
      
      Thus, these two entities can be equal.
      Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
      (cherry picked from commit 067bb350)
      679ed508
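
      A tiny, self-contained illustration of why the strict assertion was
      wrong, using plain assert(); the GPDB names are kept only as local
      variable names, and the relaxed comparison is shown for illustration
      (see commit 067bb350 for the actual change).

      ```
      #include <assert.h>
      #include <stdio.h>

      #define Max(a, b) ((a) > (b) ? (a) : (b))

      int main(void)
      {
          int gp_hashagg_default_nbatches = 32;

          /* Both values can legitimately collapse to the same default ... */
          int nbuckets = Max(4, gp_hashagg_default_nbatches);        /* 32 */
          int num_spill_files = gp_hashagg_default_nbatches;         /* 32 */

          /* ... so a strict check like assert(nbuckets > num_spill_files)
           * fires on a perfectly valid state, while the relaxed form holds. */
          assert(nbuckets >= num_spill_files);

          printf("nbuckets=%d num_spill_files=%d\n", nbuckets, num_spill_files);
          return 0;
      }
      ```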
  14. 27 Aug 2020, 3 commits
    • Error out when changing datatype of column with constraint. (#10712) · 9ebc0423
      Committed by (Jerome)Junfeng Yang
      Raise a meaningful error message for this case.
      GPDB doesn't support ALTER TYPE on a primary key or unique constraint
      column, because it requires drop-and-recreate logic. The drop is
      currently performed only on the master, which leads to an error when
      recreating the index (since the recreate is dispatched to the segments,
      where the old constraint index still exists).

      This fixes issue https://github.com/greenplum-db/gpdb/issues/10561.
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
      (cherry picked from commit 32446a32)
      9ebc0423
    • Fix assertion failures in BackoffSweeper · c9f2a816
      Committed by ggbq
      The previous commits ab74e1c6 and c7befb1d did not completely solve the
      race condition: they did not account for the last iteration of the
      while/for loop, which could result in a failed assertion in the
      following loop. This patch moves the check to the end of the for loop,
      which is safe because the first iteration can never trigger
      Assert(activeWeight > 0.0).

      Another race condition can trigger the assertion
      Assert(gl->numFollowersActive > 0). Consider this situation:
      
          Backend A, B belong to the same statement.
      
          Timestamp1: backend A's leader is A, backend B's leader is B.
      
          Timestamp2: backend A's numFollowersActive remains zero due to timeout.
      
          Timestamp3: Sweeper calculates leader B's numFollowersActive to 1.
      
          Timestamp4: backend B changes its leader to A even though A is inactive.
      
      We stop sweeping for this race condition just like commit ab74e1c6 did.
      
      Both Assert(activeWeight > 0.0) and Assert(gl->numFollowersActive > 0)
      are removed.
      
      (cherry picked from commit b1c19196)
      c9f2a816
    • Minimize the race condition in BackoffSweeper() · a3233b6b
      Committed by Pengzhou Tang
      There is a long-standing race condition in BackoffSweeper() which
      triggers an error and then triggers a further assertion failure because
      sweeperInProgress is not reset to false.

      This commit doesn't resolve the race condition fundamentally with a
      lock or another implementation, because the whole backoff mechanism
      does not require accurate control, so skipping some sweeps is fine for
      now. We also downgrade the log level to DEBUG because a restart of the
      sweeper backend is unnecessary.
      
      (cherry picked from commit ab74e1c6)
      a3233b6b
  15. 26 Aug 2020, 4 commits
    • PANIC when the shared memory is corrupted · 4f5a2c23
      Committed by xiong-gang
      shmNumGxacts and shmGxactArray are accessed under the protection of
      shmControlLock; this commit adds some defensive code to PANIC as early
      as possible when the shared memory is corrupted.
      4f5a2c23
    • Fix dblink's libpq issue on gpdb 5X (#10695) · ed85fe85
      Committed by Xiaoran Wang
      * Fix dblink's libpq issue
      
      When using dblink to connect to a postgres database, it reports the
      following error:
      unsupported frontend protocol 28675.0: server supports 2.0 to 3.0
      
      Even though dblink.so is dynamically linked to libpq.so, which is
      compiled with the option -DFRONTEND, when it is loaded into gpdb and
      run, it uses the backend libpq that is compiled together with the
      postgres program, and reports the error above. So we define FRONTEND
      before including libpq-fe.h.
      
      * dblink can't be built on Mac
      ed85fe85
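
      A sketch of the include-order fix described above (the surrounding
      dblink source is omitted): defining FRONTEND before pulling in
      libpq-fe.h selects the frontend libpq declarations rather than the
      backend ones.

      ```
      /* Must come before any libpq header so the frontend definitions are
       * used, not the backend ones compiled into the postgres binary. */
      #define FRONTEND 1

      #include "libpq-fe.h"
      ```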
    • Fix gp_error_handling makefile · b6970883
      Committed by Xiaoran Wang
      b6970883
    • Harden analyzedb further against dropped/recreated tables (#10704) · 62013e67
      Committed by Chris Hajas
      Commit 445fc7cc hardened some parts of analyzedb. However, it missed a
      couple of cases.
      
      1) When the statement to get the modcount from the pg_aoseg table
      failed due to a dropped table, the transaction was also terminated.
      This caused further modcount queries to fail, so while those tables
      were analyzed, the run would error out and not properly record the
      modcount. Therefore, we now restart the transaction when it errors.
      
      2) If the table is dropped and then recreated while analyzedb is running
      (or some other mechanism that results in the table being successfully
      analyzed, but the pg_aoseg table did not exist during the initial
      check), the logic to update the modcount may fail. Now, we skip the
      update for the table if this occurs. In this case, the modcount would
      not be recorded and the next analyzedb run will consider the table
      modified (or dirty) and re-analyze it, which is the desired behavior.
      
      Note: This isn't as hardened as gpdb master/6X due to improvements in
      newer versions of pygresql, so there does exist a window where dropped
      tables still cause analyzedb to fail.
      62013e67
  16. 25 Aug 2020, 1 commit
    • Fix unexpected corruption of persistent filespace table (#10623) · 424e382a
      Committed by Tang Pengzhou
      For a segment whose primary is down and whose mirror has been promoted
      to primary, running gp_remove_segment_mirror removes the mirror of the
      segment, and we see the mirror-related fields cleaned up in
      gp_persistent_filespace_node. But when we run gp_remove_segment_mirror
      for the same segment again, the primary-related fields are also cleaned
      up; this is wrong and not expected.
      
      Such a case was observed in production when gprecoverseg -F was
      interrupted in the middle of __updateSystemConfigRemoveAddMirror() and
      run again.
      Reviewed-by: Asim R P <pasim@vmware.com>
      424e382a
  17. 13 Aug 2020, 2 commits
    • Modify error context callback functions to not assume that they can fetch · dc572635
      Committed by (Jerome)Junfeng Yang
      catalog entries via SearchSysCache and related operations.  Although, at the
      time that these callbacks are called by elog.c, we have not officially aborted
      the current transaction, it still seems rather risky to initiate any new
      catalog fetches.  In all these cases the needed information is readily
      available in the caller and so it's just a matter of a bit of extra notation
      to pass it to the callback.
      
      Per crash report from Dennis Koegel.  I've concluded that the real fix for
      his problem is to clear the error context stack at entry to proc_exit, but
      it still seems like a good idea to make the callbacks a bit less fragile
      for other cases.
      
      Backpatch to 8.4.  We could go further back, but the patch doesn't apply
      cleanly.  In the absence of proof that this fixes something and isn't just
      paranoia, I'm not going to expend the effort.
      
      (cherry picked from commit a836abe9)
      Note: the changes from the above commit in `inline_set_returning_function`
      are not included because the function does not exist in 5X right now.
      Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
      dc572635
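
      A self-contained sketch of the pattern the commit applies, using a
      simplified stand-in for PostgreSQL's ErrorContextCallback machinery
      rather than the real elog.c API: the caller captures the name it
      already has and passes it to the callback, so the error path never has
      to go back to the catalog.

      ```
      #include <stdio.h>

      /* Simplified stand-in for PostgreSQL's ErrorContextCallback chain. */
      typedef struct ErrCtxCallback
      {
          struct ErrCtxCallback *previous;
          void (*callback)(void *arg);
          void *arg;
      } ErrCtxCallback;

      static ErrCtxCallback *error_context_stack = NULL;

      static void report_error(const char *msg)
      {
          fprintf(stderr, "ERROR: %s\n", msg);
          /* Run the context callbacks; they must not do risky lookups here. */
          for (ErrCtxCallback *c = error_context_stack; c != NULL; c = c->previous)
              c->callback(c->arg);
      }

      /* The callback formats from data handed to it by the caller ... */
      static void function_context_callback(void *arg)
      {
          fprintf(stderr, "CONTEXT: SQL function \"%s\"\n", (const char *) arg);
          /* ... instead of re-fetching the function name from the catalog,
           * which may be unsafe while an error is being reported. */
      }

      int main(void)
      {
          const char *fname = "my_func";      /* the caller already knows this */

          ErrCtxCallback cb = { error_context_stack, function_context_callback,
                                (void *) fname };
          error_context_stack = &cb;

          report_error("division by zero");

          error_context_stack = cb.previous;  /* pop on the way out */
          return 0;
      }
      ```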
  18. 12 Aug 2020, 1 commit
    • Print CTID when we detect wrong data distribution for UPDATE|DELETE. · 324b7834
      Committed by Zhenghua Lyu
      When an UPDATE or DELETE statement errors out because the CTID does not
      belong to the local segment, we should also print the CTID of the
      tuple, so that it is much easier to locate the wrongly distributed
      data via:
        `select * from t where gp_segment_id = xxx and ctid='(aaa,bbb)'`.
      324b7834
  19. 03 Aug 2020, 1 commit
    • Resolve high `CacheMemoryContext` usage for `ANALYZE` on large partition table. (#10555) · 3d41c361
      Committed by (Jerome)Junfeng Yang
      In some cases, the merge-stats logic for a root partition table can
      consume very high memory in CacheMemoryContext.
      This can lead to `Canceling query because of high VMEM usage` when
      concurrently ANALYZEing partition tables.

      For example, suppose there are several root partition tables, each with
      thousands of leaf tables, and these are all wide tables with hundreds
      of columns.
      When analyze()/auto_stats() processes leaf tables concurrently,
      `leaf_parts_analyzed` consumes lots of memory (catalog cache entries
      for pg_statistic and pg_attribute) under
      CacheMemoryContext in each backend, which can hit the protective VMEM
      limit.
      In `leaf_parts_analyzed`, a single backend's leaf table analysis for a
      root partition table may add cache entries for up to
      number_of_leaf_tables * number_of_columns tuples from pg_statistic and
      number_of_leaf_tables * number_of_columns tuples from pg_attribute.
      Setting the GUC `optimizer_analyze_root_partition` or
      `optimizer_analyze_enable_merge_of_leaf_stats` to false skips the merge
      of stats for the root table, so `leaf_parts_analyzed` is not executed.
      
      To resolve this issue:
      1. When checking whether merge stats are available for a root table in
      `leaf_parts_analyzed`, first check whether all leaf tables have been
      ANALYZEd; if un-ANALYZEd leaf tables still exist, return quickly to
      avoid touching each leaf table's pg_attribute and pg_statistic entries
      (this saves a lot of time).
      Also, don't rely on the system catalog cache; use an index scan to
      fetch the stats tuple, to avoid one-off cache usage (in common cases).

      2. When merging stats in `merge_leaf_stats`, don't rely on the system
      catalog cache; use an index scan to fetch the stats tuple.
      
      There are side effects of not relying on the system catalog cache
      (all of which are **rare** situations):
      1. INSERT/UPDATE/COPY on several leaf tables under the **same root
      partition** table in the **same session**, when all leaf tables are
      already **analyzed**, will be much slower, since auto_stats calls
      `leaf_parts_analyzed` whenever a leaf table gets updated and we no
      longer rely on the system catalog cache.
      (`set optimizer_analyze_enable_merge_of_leaf_stats=false` avoids this.)

      2. ANALYZEing the same root table several times in the same session is
      much slower than before, since we don't rely on the system catalog
      cache.

      This solution seems to improve ANALYZE performance, and it also ensures
      that ANALYZE no longer hits the memory issue.
      
      (cherry picked from commit 533a47dd)
      3d41c361
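
      A self-contained sketch of the first optimization above, with
      hypothetical types and names rather than the actual leaf_parts_analyzed
      code: do one cheap pass over the leaves and bail out before doing any
      expensive per-column work if some leaf is not yet analyzed.

      ```
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      typedef struct Leaf
      {
          bool analyzed;
          int  ncols;
      } Leaf;

      /* Expensive step: stands in for fetching per-column stats tuples. */
      static void merge_leaf_stats(const Leaf *leaf)
      {
          for (int c = 0; c < leaf->ncols; c++)
          {
              /* imagine an index scan of pg_statistic for this column here */
          }
      }

      static bool try_merge_root_stats(const Leaf *leaves, size_t nleaves)
      {
          /* Cheap pass first: if any leaf is un-analyzed, merging is pointless,
           * so return before touching any per-column data. */
          for (size_t i = 0; i < nleaves; i++)
              if (!leaves[i].analyzed)
                  return false;

          for (size_t i = 0; i < nleaves; i++)
              merge_leaf_stats(&leaves[i]);
          return true;
      }

      int main(void)
      {
          Leaf leaves[] = { { true, 300 }, { false, 300 }, { true, 300 } };

          if (!try_merge_root_stats(leaves, 3))
              printf("skipped merge: not all leaves are analyzed yet\n");
          return 0;
      }
      ```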