1. 06 Jul 2018, 10 commits
    • Fix create gang failure on dns lookup error on down mirrors. · dd861e72
      Committed by Jialun
      If a segment exists in gp_segment_configuration but its IP address
      cannot be resolved, we run into a runtime error on gang creation:
      
          ERROR:  could not translate host name "segment-0a", port "40000" to
          address: Name or service not known (cdbutil.c:675)
      
      This happens even if segment-0a is a mirror and is marked as down.  With
      this error, queries cannot be executed; gpstart and gpstop will also
      fail.
      
      One way to trigger the issue:
      
      - create a cluster with multiple segments;
      - remove sdw1's dns entry from /etc/hosts on mdw;
      - kill postgres primary process on sdw1;
      
      FTS can detect this error and automatically switch to the mirror, but
      queries still cannot be executed.
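      The shape of the fix can be sketched as follows: skip address resolution for segments already marked down, so a stale or missing DNS entry on a failed mirror no longer aborts gang creation. This is an illustrative Python sketch; the field names, status codes, and error text are assumptions, not the actual cdbutil.c implementation.

```python
import socket

def resolve_segments(segments):
    """Resolve addresses for segments that will join the gang.

    Segments marked down (e.g. mirrors FTS has already failed over from)
    are skipped entirely, so an unresolvable hostname on a down mirror
    no longer aborts gang creation. Hypothetical field names.
    """
    resolved = []
    for seg in segments:
        if seg["status"] == "d":        # down: skip the DNS lookup entirely
            continue
        try:
            addr = socket.getaddrinfo(seg["hostname"], seg["port"])[0][4][0]
        except socket.gaierror as e:
            raise RuntimeError(
                'could not translate host name "%s", port "%s" to address: %s'
                % (seg["hostname"], seg["port"], e)
            )
        resolved.append((seg["hostname"], addr))
    return resolved
```

      With this ordering, the status check comes before any name lookup, mirroring the reproduction steps above: a down mirror with a deleted /etc/hosts entry simply drops out of the gang.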
    • docs - update system catalog maintenance information. (#5179) · dbee1c3c
      Committed by Mel Kiyama
      * docs - update system catalog maintenance information.
      
      --Updated Admin. Guide and Best Practices for running REINDEX, VACUUM, and ANALYZE
      --Added note to REINDEX reference about running ANALYZE after REINDEX.
      
      * docs - edits for system catalog maintenance updates
      
      * docs - update recommendation for running vacuum and analyze.
      
      Update based on dev input.
    • 5dedc72c
    • docs - add foreign data wrapper-related sql ref pages (#5209) · 525656dd
      Committed by Lisa Owen
      * docs - add foreign data wrapper-related ref pages
      
      * remove CREATE SERVER example referencing default fdw
      
      * edits from david, and his -> their
    • Check all AO segment files during concurrent AO VACUUM · cd45683d
      Committed by Jimmy Yih
      We currently exit VACUUM early when there is a concurrent operation on
      an AO relation. Instead of exiting early, go through the rest of the
      AO segment files to see if they have crossed the threshold for
      compaction.
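      The change described above amounts to continuing the scan instead of returning on the first busy segment file. A minimal sketch, assuming hypothetical field names and a hypothetical hidden-tuple-ratio threshold (not GPDB's actual compaction formula):

```python
def segfiles_to_compact(segfiles, threshold_ratio=0.1):
    """Scan every AO segment file and collect those that crossed the
    compaction threshold, instead of bailing out on the first segfile
    busy with a concurrent operation. Illustrative only."""
    candidates = []
    for sf in segfiles:
        if sf["in_use_by_concurrent_op"]:
            continue                      # skip, but keep scanning the rest
        if sf["total_tuples"] == 0:
            continue                      # nothing to compact
        hidden_ratio = sf["dead_tuples"] / sf["total_tuples"]
        if hidden_ratio >= threshold_ratio:
            candidates.append(sf["segno"])
    return candidates
```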
    • Invalidate AppendOnlyHash cache entry at end of TRUNCATE · 19c48cb4
      Committed by Jimmy Yih
      TRUNCATE will rewrite the relation by creating a temporary table and
      swapping it with the real relation. For AO, this includes the
      auxiliary tables, which is concerning for the AO relation's pg_aoseg
      table, which holds information on whether an AO segment file is
      available for write or waiting to be compacted/dropped. Since we do
      not currently invalidate the AppendOnlyHash cache entry, the entry
      could have invisible leaks in its AOSegfileStatus array that will be
      stuck in state AOSEG_STATE_AWAITING_DROP. These leaks will persist
      until the user evicts the cache entry, either by not using the table
      (allowing another AO table to cache itself in that slot) or by
      restarting the database. We fix this issue by invalidating the cache
      entry at the end of TRUNCATE on AO relations.
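      The invalidation pattern can be sketched with a toy cache keyed by relation OID. The class and function names here are hypothetical stand-ins; the real AppendOnlyHash lives in shared memory and is managed by C code:

```python
class AppendOnlyHash:
    """Toy stand-in for the shared cache of per-segfile state, keyed by
    relation OID. Only illustrates why TRUNCATE must invalidate it."""
    def __init__(self):
        self._entries = {}

    def get_entry(self, relid):
        # Build (or return cached) per-segfile status for the relation.
        return self._entries.setdefault(relid, {"segfile_states": {}})

    def invalidate(self, relid):
        self._entries.pop(relid, None)

def truncate_ao_relation(cache, relid):
    """After TRUNCATE rewrites the relation (and its pg_aoseg auxiliary
    table), any cached AOSegfileStatus is stale; drop the entry so it
    cannot leak states stuck in AOSEG_STATE_AWAITING_DROP."""
    # ... swap in the freshly rewritten relfilenode here ...
    cache.invalidate(relid)
```

      The same reasoning applies to the AT_SetDistributedBy fix in the next commit, which rewrites the table outside the usual ATRewriteTable path.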
    • Invalidate AppendOnlyHash cache entry for AT_SetDistributedBy cases · 9a088849
      Committed by Jimmy Yih
      ALTER TABLE commands that are tagged as AT_SetDistributedBy require a
      gather motion and do their own variation of creating a temporary table
      for CTAS (basically bypassing the usual ATRewriteTable, which actually
      does do AppendOnlyHash cache entry invalidation). Without the
      AppendOnlyHash cache entry invalidation, the entry could have
      invisible leaks in its AOSegfileStatus array that will be stuck in
      state AOSEG_STATE_AWAITING_DROP. These leaks will persist until the
      user evicts the cache entry, either by not using the table (allowing
      another AO table to cache itself in that slot) or by restarting the
      database. We fix this issue by invalidating the cache entry at the
      end of AT_SetDistributedBy ALTER TABLE cases.
    • Fix schema in rangefuncs_cdb ICG test · 1a8bd0ad
      Committed by Jimmy Yih
      The schema is named differently from the one being used in the
      search_path, so all the tables, views, functions, etc. were
      incorrectly being created in the public schema.
    • Remove code duplication in hyperloglog code · 9c456084
      Committed by Omer Arap
      We had significant code duplication between the hyperloglog extension
      and the utility library that we use in the analyze-related code. This
      commit removes the duplication as well as a significant amount of dead
      code. It also fixes some compiler warnings and some Coverity issues.
      
      This commit also puts the hyperloglog functions in a separate schema
      which is non-modifiable by non-superusers.
      Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
    • e839f9a2
  2. 04 Jul 2018, 7 commits
  3. 03 Jul 2018, 6 commits
  4. 02 Jul 2018, 1 commit
  5. 30 Jun 2018, 7 commits
    • fix problematic xrefs · 7966ba4e
      Committed by David Yozie
    • Extra docs for the pushdown feature (#5193) · ac917ce0
      Committed by Ivan Leskin
      * Extra docs for gp_external_enable_filter_pushdown
      
      Add extra documentation for 'gp_external_enable_filter_pushdown' and the pushdown feature in the PXF extension.
      
      * Minor doc text fixes
      
      Minor documentation text fixes, proposed by @dyozie.
      
      * Clarify the pushdown support by PXF
      
      Add the following information:
      * List the PXF connectors that support pushdown;
      * State that the GPDB PXF extension supports pushdown;
      * Add a list of conditions that need to be fulfilled for the pushdown feature to work when the PXF protocol is used.
      
      * Correct the list of PXF connectors with pushdown
      
      * State that the Hive and HBase PXF connectors support filter predicate pushdown;
      * Remove references to the JDBC and Apache Ignite PXF connectors, as proposed by @dyozie (these are not officially supported by Greenplum).
    • db95eb96
    • Update the fsync test to allow room for up to 7 buffers to be flushed. · c97870fc
      Committed by Ashwin Agrawal
      The number of fsync buffers synced to disk varies based on how hint
      bits get updated. For example, the pg_tablespace global table buffer is
      sometimes flushed and sometimes not, depending on which tests were
      executed before this test.
    • Remove rd_issyscat from RelationData. · f56b5fb2
      Committed by Ashwin Agrawal
      Greenplum added rd_issyscat to the Relation structure. Its only usage
      is in markDirty(), to decide whether a buffer should be marked dirty.
      rd_issyscat is set by checking whether the relation name starts with
      "pg_", which is in any case a very loose test.
      
      Modified instead to base the check on oid < FirstNormalObjectId or, to
      cover pg_aoseg tables, RelationGetNamespace(relation) ==
      PG_AOSEGMENT_NAMESPACE. This allows us to remove the extra variable.
      
      This patch is not trying to change the intent of the GUC
      `gp_disable_tuple_hints`. That is an altogether different discussion.
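      The replacement predicate can be sketched as follows. FirstNormalObjectId (16384) is the standard PostgreSQL boundary below which OIDs are reserved for built-in objects; the PG_AOSEGMENT_NAMESPACE value here is illustrative, not the actual catalog constant:

```python
FirstNormalObjectId = 16384    # first OID assigned to user-created objects (PG constant)
PG_AOSEGMENT_NAMESPACE = 6104  # pg_aoseg namespace OID; value here is illustrative

def is_system_catalog(rel_oid, rel_namespace):
    """Replacement for the old name-prefix ("pg_") test: treat a relation
    as a system catalog if its OID is below FirstNormalObjectId, or if it
    lives in the pg_aoseg namespace (covering AO auxiliary tables).
    A sketch of the commit's check, not the actual C code."""
    return rel_oid < FirstNormalObjectId or rel_namespace == PG_AOSEGMENT_NAMESPACE
```

      Unlike the name-prefix test, this cannot be fooled by a user table that merely happens to be named with a "pg_" prefix.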
    • Remove irrelevant comments from sql test file · a8f6260e
      Committed by Shreedhar Hardikar
    • Fix 'no parameter found for initplan subquery' · f50e5daf
      Committed by Shreedhar Hardikar
      The issue happens because of constant folding in the testexpr of the
      SUBPLAN expression node. The testexpr may be reduced to a const, and any
      PARAMs previously used in the testexpr disappear; however, the subplan
      still remains.
      
      This behavior is similar in upstream Postgres 10 and may be a
      performance consideration. Leaving that aside for now, the constant
      folding produces an elog(ERROR) when the plan has subplans and no
      PARAMs are used. This check in `addRemoteExecParamsToParamList()` uses
      `context.params`, which computes the PARAMs used in the plan, and `nIntPrm
      = list_length(root->glob->paramlist)`, which is the number of PARAMs
      declared/created.
      Given the ERROR messages generated, the above check makes no sense,
      especially since it won't even trip for the InitPlan bug (mentioned in
      the comments) as long as there is at least one PARAM in the query.
      
      This commit removes this check since it doesn't correctly capture the
      intent.
      
      In theory, it could be replaced by one specifically aimed at
      InitPlans, that is, find all the param ids used by InitPlans and then
      make sure they are used in the plan. But we already do this and
      remove any unused initplans in `remove_unused_initplans()`. So I don't
      see the point of adding that.
      
      Fixes #2839
  6. 29 Jun 2018, 6 commits
    • Fix typos in documentation and code comments · 996853cb
      Committed by Daniel Gustafsson
    • Fix incremental analyze for non-matching attnums · ef39e0d0
      Committed by Omer Arap
      To merge stats in incremental analyze for the root partition, we use
      leaf tables' statistics. In commit b28d0297, we fixed an issue where a
      child's attnum did not match the root table's attnum for the same
      column. After we fixed that issue with a test, that test also exposed a
      bug in the analyze code.
      
      This commit fixes the issue in analyze using a fix similar to
      b28d0297.
    • ci: pr_pipeline: Separate sync_tools from compilation (#5214) · f9dd6ba0
      Committed by Lisa Oakley
      This is related to the work we have done to fix the sles11 and windows
      compilation failures.
      Co-authored-by: Lisa Oakley <loakley@pivotal.io>
      Co-authored-by: Alexandra Wang <lewang@pivotal.io>
    • Fix querying stats for largest child · b28d0297
      Committed by Omer Arap
      Previously, we would use the root table's information to acquire stats
      from the `syscache`, which returns no result. The reason it does not
      return any result is that we query the syscache using the `inh` field,
      which is set to true for the root table and false for the leaf tables.
      
      Another issue, which is less evident, is the possibility of mismatching
      `attnum`s for the root and leaf tables after running specific scenarios.
      When we delete a column and then split a partition, unchanged and old
      partitions preserve the old attnums, while newly created partitions
      have increasing attnums with no gaps. If we query the syscache using
      the root's attnum for that column, we would get wrong stats for that
      specific column. Passing the root's `inh` hid the issue of having
      wrong stats.
      
      This commit fixes the issue by getting the attribute name using the
      root's attnum and using it to acquire the correct attnum for the
      largest leaf partition.
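      The name-based remapping described above can be sketched as follows, with dicts of attribute name to attnum standing in for pg_attribute lookups (a hypothetical helper, not the commit's actual C function):

```python
def leaf_attnum_for(root_attnum, root_atts, leaf_atts):
    """Map a root partition's attnum to the matching leaf attnum by
    attribute *name*, since DROP COLUMN followed by SPLIT PARTITION can
    leave the root and newly created leaves with different attnums for
    the same column."""
    name = next(n for n, a in root_atts.items() if a == root_attnum)
    return leaf_atts[name]
```

      For example, if column b was dropped, the root keeps a gap in its attnums while a freshly split leaf renumbers its columns without gaps, so looking up by attnum directly would hit the wrong column.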
    • Perform analyze on specific table in spilltodisk test. · 37c75753
      Committed by Ashwin Agrawal
      No need for a database-scope analyze; only the specific table needs to
      be analyzed for the test.
    • Restrict max_wal_senders guc to 1 in GPDB. · db53d8cf
      Committed by Ashwin Agrawal
      GPDB currently supports only 1 replica. FTS and other components need
      to be adapted to support 1:n replication; until then, restrict the
      max_wal_senders GUC to 1. Later, when the code can handle it, the
      maximum value of the GUC can be raised.
      
      Also, remove the setting of max_wal_senders in postmaster, which was
      added earlier for dealing with filerep/walrep co-existence.
  7. 27 Jun 2018, 3 commits