1. 14 Aug 2020, 4 commits
    • Don't mutate fdw_private on building CDB plan · fcf6fc6d
      Committed by Denis Smirnov
      Currently FDW expects that a foreign plan's fdw_private field
      contains a private list of DefElem nodes (options) that should
      not be mutated by CDB. We don't try to mutate it on master, but we
      still do on 6X, which causes "ERROR:  unrecognized node type: 920".
      Backport the described behaviour from master.
      
      Steps to reproduce:
      
      create extension file_fdw;
      create server file foreign data wrapper file_fdw;
      create foreign table t_file (a int) server file options(
      mpp_execute 'all segments', filename '/tmp/1_<SEGID>.csv',
      format 'csv');
      \! echo '0' > /tmp/1_0.csv
      \! echo '1' > /tmp/1_1.csv
      \! echo '2' > /tmp/1_2.csv
      select count(*) from t_file;
      ERROR:  unrecognized node type: 920 (nodeFuncs.c:2932)
    • Fix url stored in error log, when reading from gpfdist external table. · 66fa86e5
      Committed by Heikki Linnakangas
      The URL stored in the 'filename' field of the error log was incorrect:
      
          postgres=# SELECT filename FROM gp_read_error_log('sreh_ext_err_tbl');
                                                filename
          ------------------------------------------------------------------------------------
           (null) [/home/heikki/git-sandbox-gpdb/master/src/test/regress/data/bad_data1.data]
           (null) [/home/heikki/git-sandbox-gpdb/master/src/test/regress/data/bad_data1.data]
          ...
      
      On 5X_STABLE, the URL is stored in place of the '(null)'. This got broken
      in 6X_STABLE. The root cause is that we no longer keep the 'url' in the
      CopyState->filename field for gpfdist external tables.
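
      For context, a hedged sketch of how such an error log is produced
      (host, port and data file below are illustrative assumptions, not taken
      from the commit):

        CREATE EXTERNAL TABLE sreh_ext_err_tbl (a int, b text)
        LOCATION ('gpfdist://localhost:8080/bad_data1.data')
        FORMAT 'text'
        LOG ERRORS SEGMENT REJECT LIMIT 10;

        SELECT count(*) FROM sreh_ext_err_tbl;  -- rejected rows land in the error log
        SELECT filename FROM gp_read_error_log('sreh_ext_err_tbl');
        -- with this fix, 'filename' again contains the gpfdist URL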
      
      Fixes https://github.com/greenplum-db/gpdb/issues/10542
      Reviewed-by: (Jerome)Junfeng Yang <jeyang@pivotal.io>
    • ic-proxy: enable in release builds · 14331c7e
      Committed by Ning Yu
      The Greenplum release builds are configured without any extra configure
      flags; they only use the ones specified in gpAux/Makefile, so we have to
      enable ic-proxy in that file to include the ic-proxy feature in release
      builds.
      Reviewed-by: Shaoqi Bai <bshaoqi@vmware.com>
      (cherry picked from commit 48978756)
    • Check if RelOptInfo exists before replacing subplan for grouping set query · f913cba2
      Committed by Weinan WANG
      In grouping set cases, we need to replace our private subplan in the
      plan tree. But the subplan's RelOptInfo will not be created if the
      subplan does not have any relation in its fromlist.
      
      Check the pointers in simple_rte_array and simple_rel_array to make
      sure they are not null before the replacement.
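
      Illustrative only (not a confirmed reproducer, just the shape of query
      this code path concerns: a grouping-set aggregate combined with a
      subplan that has no relation in its fromlist):

        SELECT a, sum(b), (SELECT 42) AS c
        FROM t
        GROUP BY GROUPING SETS ((a), ());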
  2. 13 Aug 2020, 3 commits
    • Rename faultInjectorSlots field for clarity. · fd5f11ba
      Committed by Heikki Linnakangas
    • Make Fault Injection sites cheaper, when no faults have been activated. · 81b0e5fd
      Committed by Heikki Linnakangas
      Fault injection is expected to be *very* cheap; we even enable it in
      production builds. That's why I was very surprised when I saw 'perf' report
      that FaultInjector_InjectFaultIfSet() was consuming about 10% of CPU time
      in a performance test I was running on my laptop. I tracked it to the
      FaultInjector_InjectFaultIfSet() call in standard_ExecutorRun(). It gets
      called for every tuple between 10000 and 1000000, on every segment.
      
      Why is FaultInjector_InjectFaultIfSet() so expensive? It has a quick exit
      in it, when no faults have been activated, but before reaching the quick
      exit it calls strlen() on the arguments. That's not cheap. And the function
      call isn't completely negligible on hot code paths, either.
      
      To fix, turn FaultInjector_InjectFaultIfSet() into a macro that's only a
      few instructions long in the fast path. That should be cheap enough.
      Reviewed-by: Ashwin Agrawal <aashwin@vmware.com>
      Reviewed-by: Jesse Zhang <jzhang@pivotal.io>
      Reviewed-by: Asim R P <pasim@vmware.com>
    • Allow static partition selection for lossy casts in ORCA · 0f0cecc7
      Committed by Divyesh Vanjare
      For a table partitioned by a timestamp column, a query such as
        SELECT * FROM my_table WHERE ts::date = '2020-05-10'
      should only scan a few partitions.
      
      ORCA previously supported only implicit casts for partition selection.
      This commit extends ORCA to also support a subset of lossy (assignment)
      casts that are order-preserving (increasing) functions. This improves
      ORCA's ability to eliminate partitions and produce faster plans (see the
      SQL sketch after the details below).
      
      To ensure correctness, the additional supported functions are captured
      in an allow-list in gpdb::IsFuncAllowedForPartitionSelection(), which
      includes some in-built lossy casts such as ts::date, float::int etc.
      
      Details:
       - For list partitions, we compare our predicate with each distinct
         value in the list to determine if the partition has to be
         selected/eliminated. Hence, none of the operators need to be changed
         for list partition selection
      
       - For range partition selection, we check bounds of each partition and
         compare it with the predicates to determine if partition has to be
         selected/eliminated.
      
         A partition such as [1, 2) shouldn't be selected for float = 2.0, but
         should be selected for float::int = 2.  We change the logic to handle
         equality predicates differently when lossy casts are present (ub: upper
         bound, lb: lower bound):
      
         if (lossy cast on partition col):
           (lb::int <= 2) and (ub::int >= 2)
         else:
           ((lb <= 2 and inclusive lb) or (lb < 2))
           and
           ((ub >= 2 and inclusive ub ) or (ub > 2))
      
        - CMDFunctionGPDB now captures whether or not a cast is a lossy cast
          supported by ORCA for partition selection. This is then used in
          Expr2DXL translation to identify how partitions should be selected.
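
      A hedged SQL sketch of the kind of query this helps (table, column and
      partition layout are illustrative assumptions, not taken from the commit):

        -- Range-partitioned table; ts is the partition column.
        CREATE TABLE my_table (id int, ts timestamp)
        DISTRIBUTED BY (id)
        PARTITION BY RANGE (ts)
        (START ('2020-05-01'::timestamp) END ('2020-06-01'::timestamp)
         EVERY (INTERVAL '1 day'));

        -- The predicate applies a lossy (assignment) cast to the partition
        -- column; with this change ORCA can statically eliminate all but the
        -- partitions whose bounds may contain '2020-05-10' after the cast.
        EXPLAIN SELECT * FROM my_table WHERE ts::date = '2020-05-10';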
  3. 12 Aug 2020, 3 commits
    • Fix compilation without libuv's uv.h header. · f16160ae
      Committed by Heikki Linnakangas
      ic_proxy_backend.h includes libuv's uv.h header, and ic_proxy_backend.h
      was being included in ic_tcp.c, even when compiling with
      --disable-ic-proxy.
    • Fix potential panic in visibility check code. (#10589) · 398ac2b7
      Committed by Paul Guo
      We've seen a panic case on gpdb 6 with a stack trace as below:
      
      3  markDirty (isXmin=0 '\000', tuple=0x7effe221b3c0, relation=0x0, buffer=16058) at tqual.c:105
      4  SetHintBits (xid=<optimized out>, infomask=1024, rel=0x0, buffer=16058, tuple=0x7effe221b3c0) at tqual.c:199
      5  HeapTupleSatisfiesMVCC (relation=0x0, htup=<optimized out>, snapshot=0x15f0dc0 <CatalogSnapshotData>, buffer=16058) at tqual.c:1200
      6  0x00000000007080a8 in systable_recheck_tuple (sysscan=sysscan@entry=0x2e85940, tup=tup@entry=0x2e859e0) at genam.c:462
      7  0x000000000078753b in findDependentObjects (object=0x2e856e0, flags=<optimized out>, stack=0x0, targetObjects=0x2e85b40, pendingObjects=0x2e856b0,
         depRel=0x7fff2608adc8) at dependency.c:793
      8  0x00000000007883c7 in performMultipleDeletions (objects=objects@entry=0x2e856b0, behavior=DROP_RESTRICT, flags=flags@entry=0) at dependency.c:363
      9  0x0000000000870b61 in RemoveRelations (drop=drop@entry=0x2e85000) at tablecmds.c:1313
      10 0x0000000000a85e48 in ExecDropStmt (stmt=stmt@entry=0x2e85000, isTopLevel=isTopLevel@entry=0 '\000') at utility.c:1765
      11 0x0000000000a87d03 in ProcessUtilitySlow (parsetree=parsetree@entry=0x2e85000,
      
      The reason is that we pass a NULL relation to the visibility check code, which
      might use the relation variable to determine whether the hint bit should be set.
      Let's pass the correct relation variable even if it might not be used in the end.
      
      I'm not able to reproduce the issue locally, so I cannot provide a test case,
      but this is surely a potential issue.
      Reviewed-by: Ashwin Agrawal <aashwin@vmware.com>
      (cherry picked from commit 85811692)
    • Print CTID when we detect wrong data distribution for UPDATE|DELETE. · f7dcbb5a
      Committed by Zhenghua Lyu
      When an update or delete statement errors out because the CTID does
      not belong to the local segment, we should also print out the CTID
      of the tuple so that it is much easier to locate the wrongly
      distributed data via:
        `select * from t where gp_segment_id = xxx and ctid='(aaa,bbb)'`.
  4. 10 Aug 2020, 2 commits
  5. 08 Aug 2020, 5 commits
  6. 07 Aug 2020, 4 commits
  7. 06 Aug 2020, 3 commits
  8. 05 Aug 2020, 4 commits
  9. 04 Aug 2020, 3 commits
  10. 03 Aug 2020, 7 commits
    • 9f305aa8
    • Change the unit of the GUC from kb to mb · 3ef5e267
      Committed by Gang Xiong
    • Make max_slot_wal_keep_size work on 6X · ea69506b
      Committed by Gang Xiong
      1. Change the GUC unit from MB to KB, as 6X doesn't have GUC_UNIT_MB.
      2. The upstream commit added 3 fields to the system view
         'pg_replication_slots'; this commit removes that change since we cannot
         make catalog changes on 6X.
      3. Upstream uses 'slot->active_pid' to identify the process that acquired
         the replication slot; this commit adds 'walsnd' to 'ReplicationSlot' to
         do the same.
      4. Upstream uses a condition variable to wait for the walsender to exit;
         this commit uses WalSndWaitStoppingOneWalSender, as we don't have
         condition variables on 6X.
      5. Add test cases.
    • Allow users to limit storage reserved by replication slots · 7a274622
      Committed by Alvaro Herrera
      Replication slots are useful to retain data that may be needed by a
      replication system.  But experience has shown that allowing them to
      retain excessive data can lead to the primary failing because of running
      out of space.  This new feature allows the user to configure a maximum
      amount of space to be reserved using the new option
      max_slot_wal_keep_size.  Slots that overrun that space are invalidated
      at checkpoint time, enabling the storage to be released.
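
      For illustration, a hedged sketch of using the new option (the value and
      the postgresql.conf route are assumptions, not part of this commit):

        -- postgresql.conf (assumed): max_slot_wal_keep_size = 10GB
        SHOW max_slot_wal_keep_size;
        -- Slots that fall further behind than this are invalidated at the next
        -- checkpoint, releasing the retained WAL.
        SELECT slot_name, active, restart_lsn FROM pg_replication_slots;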
      
      Author: Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>
      Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
      Reviewed-by: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>
      Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
      Discussion: https://postgr.es/m/20170228.122736.123383594.horiguchi.kyotaro@lab.ntt.co.jp
    • Add "FILL_MISSING_FIELDS" option for gpload. · 7afdd72c
      Committed by Wen Lin
      This reverts commit 7118e8ac.
    • Resolve high `CacheMemoryContext` usage for `ANALYZE` on large partition table. (#10554) · f8c8265a
      Committed by (Jerome)Junfeng Yang
      In some cases, the merge-stats logic for a root partition table may
      consume a large amount of memory in CacheMemoryContext.
      This may lead to `Canceling query because of high VMEM usage` when
      partition tables are ANALYZEd concurrently.
      
      For example, suppose there are several root partition tables, each with
      thousands of leaf tables, and these are all wide tables that may contain
      hundreds of columns. When analyze()/auto_stats() runs on leaf tables
      concurrently, `leaf_parts_analyzed` consumes lots of memory (catalog
      cache entries for pg_statistic and pg_attribute) under CacheMemoryContext
      in each backend, which may hit the protective VMEM limit.
      In `leaf_parts_analyzed`, a single backend's leaf-table analysis for a
      root partition table may add up to
      number_of_leaf_tables * number_of_columns cached tuples from pg_statistic
      and the same number from pg_attribute.
      Setting the GUC `optimizer_analyze_root_partition` or
      `optimizer_analyze_enable_merge_of_leaf_stats` to false skips the merge
      of stats for the root table, and `leaf_parts_analyzed` is not executed.
      
      To resolve this issue:
      1. When checking whether merge stats are possible for a root table in
      `leaf_parts_analyzed`, check whether all leaf tables are ANALYZEd first;
      if un-ANALYZEd leaf tables still exist, return quickly to avoid touching
      pg_attribute and pg_statistic for every column of every leaf table (this
      saves a lot of time). Also, don't rely on the system catalog cache; use
      the index to fetch the stats tuple, to avoid one-time cache usage (in
      common cases).
      
      2. When merging stats in `merge_leaf_stats`, don't rely on the system
      catalog cache; use the index to fetch the stats tuple.
      
      There are side effects of not relying on the system catalog cache (all of
      them **rare** situations):
      1. Insert/update/copy into several leaf tables under the **same
      root partition** table in the **same session**, when all leaf tables are
      **analyzed**, will be much slower, since auto_stats calls
      `leaf_parts_analyzed` once a leaf table gets updated and we no longer
      rely on the system catalog cache.
      (`set optimizer_analyze_enable_merge_of_leaf_stats=false` avoids this.)
      
      2. ANALYZEing the same root table several times in the same session is
      much slower than before, since we don't rely on the system catalog cache.
      
      Overall, this solution improves the performance of ANALYZE, and it also
      keeps ANALYZE from hitting the memory issue.
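
      For illustration, the workaround mentioned above as SQL (the table name
      is hypothetical):

        -- Skip merging leaf stats into the root so that leaf_parts_analyzed is
        -- not executed at all (per the GUCs named in the message above).
        SET optimizer_analyze_enable_merge_of_leaf_stats = off;
        ANALYZE my_partitioned_table;  -- hypothetical root partition table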
      
      (cherry picked from commit 533a47dd)
    • ic-proxy: handle early coming BYE correctly · bd8959f6
      Committed by Ning Yu
      In a query that contains multiple init/sub plans, the packets of the
      second subplan might be received while the first is still being
      processed in ic-proxy mode; this is because in ic-proxy mode a local
      host handshake is used instead of the global one.
      
      To distinguish the packets of different subplans, especially the
      early-coming ones, we must stop handling on the BYE immediately and
      pass any unhandled early-coming packets to the successor or the
      placeholder.
      
      This fixes the random hanging during the ICW parallel group of
      qp_functions_in_from.  No new test is added.
      Co-authored-by: Hubert Zhang <hzhang@pivotal.io>
      Co-authored-by: Ning Yu <nyu@pivotal.io>
      (cherry picked from commit 79ff4e62)
  11. 01 Aug 2020, 1 commit
    • gpinitsystem: use new 6-field ARRAY format internally for QD and QEs · 27038bd4
      Committed by bhuvnesh chaudhary
      The initialization file (passed as gpinitsystem -I <file>) can have two
      formats: legacy (5-field) and new (6-field, which adds the HOST_ADDRESS).
      
      This commit fixes a bug in which an internal sorting routine that matched
      a primary with its corresponding mirror assumed that <file> was always
      in the new format.  The fix is to convert any input <file> to the new
      format by rewriting the QD_ARRAY, PRIMARY_ARRAY and MIRROR_ARRAY to
      have 6 fields.  We also always use '~' as the separator instead of ':'
      for consistency.
      
      The bug fixed is that a 5-field <file> was being sorted numerically,
      causing either the hostname (on a multi-host cluster) or the port (on
      a single-host cluster) to be used as the sort key instead of the content.
      This could result in the primary and its corresponding mirror being
      created on different contents, which fortunately hit an internal error
      check.
      
      Unit tests and a behave test have been added as well.  The behave test
      uses a demo cluster to validate that a legacy gpinitsystem initialization
      file format (e.g. one that has 5 fields) successfully creates a
      Greenplum database.
      Co-authored-by: David Krieger <dkrieger@vmware.com>
  12. 31 Jul 2020, 1 commit
    • Correct and stabilize some replication tests · 15dd8027
      Committed by Ashwin Agrawal
      Add pg_stat_clear_snapshot() in functions looping over
      gp_stat_replication / pg_stat_replication to refresh the result every
      time the query is run as part of the same transaction. Without
      pg_stat_clear_snapshot(), the query result is not refreshed for
      pg_stat_activity nor for the xx_stat_replication functions on multiple
      invocations inside a transaction, so in its absence the tests become
      flaky (see the sketch below).
      
      Also, the tests commit_blocking_on_standby and dtx_recovery_wait_lsn were
      initially committed with wrong expectations and hence did not test the
      intended behavior. They now reflect the correct expectation.
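
      A minimal sketch of the pattern described above (the function name and
      the 'streaming' check are illustrative assumptions):

        CREATE OR REPLACE FUNCTION wait_for_streaming() RETURNS void AS $$
        DECLARE
            n int;
        BEGIN
            LOOP
                -- Discard the cached stats snapshot so each iteration sees
                -- fresh gp_stat_replication data within the same transaction.
                PERFORM pg_stat_clear_snapshot();
                SELECT count(*) INTO n FROM gp_stat_replication
                 WHERE state = 'streaming';
                EXIT WHEN n > 0;
                PERFORM pg_sleep(0.1);
            END LOOP;
        END;
        $$ LANGUAGE plpgsql;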
      
      (cherry picked from commit c565e988)