1. 22 Jan 2019, 1 commit
    • Fix gppkg error when master and standby master are in the same node · 1f33759b
      Authored by Haozhou Wang
      If both the master and the standby master are configured on the same
      node, the gppkg utility reports an error when uninstalling a gppkg.
      This is because the gppkg utility assumes the master and standby
      master are on different nodes, which is not necessarily true in a
      test environment.
      
      This patch fixes the issue: when the master and standby master are on
      the same node, we skip installing/uninstalling the gppkg on the
      standby master node.
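      A minimal sketch in C of the skip decision described above; the helper
      name and host handling are hypothetical, not the actual gppkg code:

        #include <stdio.h>
        #include <string.h>
        #include <stdbool.h>

        /* Hypothetical helper: decide whether the standby master needs its own
         * gppkg install/uninstall step.  When the standby runs on the same host
         * as the master, the package has already been handled on that node. */
        static bool
        standby_needs_package_step(const char *master_host, const char *standby_host)
        {
            if (standby_host == NULL)
                return false;              /* no standby configured */
            if (strcmp(master_host, standby_host) == 0)
                return false;              /* same node: skip to avoid the error */
            return true;
        }

        int
        main(void)
        {
            printf("%d\n", standby_needs_package_step("mdw", "mdw"));  /* 0: skip */
            printf("%d\n", standby_needs_package_step("mdw", "smdw")); /* 1: run  */
            return 0;
        }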
  2. 21 Jan 2019, 2 commits
    • Remove GPDB_93_MERGE_FIXME (#6699) · d286b105
      Authored by Shaoqi Bai
      The code was added to handle the case where FTS sends a promote message: the mirror creates the PROMOTE file and is signaled to promote. If FTS sends promote again while the mirror's promotion is still in progress, the PROMOTE file is created again, and it is left behind on the promoted mirror, which is now acting as primary.
      So, if a basebackup was taken from this primary to create a new mirror, it included the PROMOTE file and the new mirror auto-promoted on creation, which is incorrect. Hence, code was added for FTS to detect and delete a leftover PROMOTE file, and pg_basebackup was changed to exclude the PROMOTE file from the copy.
      
      Given that background, the upstream commit that always deletes the PROMOTE file on postmaster start covers this case: even if a PROMOTE file gets created after mirror promotion and gets copied over by pg_basebackup, there is no risk of auto-promotion on mirror startup. So we can safely remove this code now.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      Reviewed-by: Paul Guo <pguo@pivotal.io>
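      A rough C sketch of the startup-time cleanup referred to above; the file
      name and function are illustrative stand-ins, not the exact upstream code:

        #include <stdio.h>
        #include <unistd.h>
        #include <errno.h>

        /* Illustrative stand-in for the promote signal file in the data dir. */
        #define PROMOTE_SIGNAL_FILE "promote"

        /* Run once at postmaster start: remove any leftover promote signal file
         * (for example one copied over by pg_basebackup), so a freshly created
         * mirror cannot auto-promote when it first starts up. */
        static void
        remove_stale_promote_file(void)
        {
            if (unlink(PROMOTE_SIGNAL_FILE) < 0 && errno != ENOENT)
                fprintf(stderr, "could not remove \"%s\"\n", PROMOTE_SIGNAL_FILE);
        }

        int
        main(void)
        {
            remove_stale_promote_file();
            return 0;
        }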
    • Use the right rel for largest_child_relation(). · 8712da1e
      Authored by Richard Guo
      Function largest_child_relation() is used to find the largest child
      relation of an inherited/partitioned relation, recursively. Previously
      we passed the wrong rel as its parameter.
      
      This patch looks up in root->simple_rel_array the right rel for
      largest_child_relation(). It also replaces several rt_fetch calls with
      lookups in root->simple_rte_array.
      
      This patch fixes #6599.
      Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
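      The gist of the lookup, sketched in C against the planner structures
      named above (assumes the PostgreSQL backend headers; not the actual patch):

        /* Sketch only: compiles inside the backend, not as a standalone program. */
        #include "postgres.h"
        #include "nodes/relation.h"
        #include "parser/parsetree.h"

        /* Fetch the RelOptInfo and RangeTblEntry for a range-table index directly
         * from the arrays kept in PlannerInfo, instead of calling rt_fetch() and
         * passing an unrelated rel to largest_child_relation(). */
        static void
        lookup_by_rti(PlannerInfo *root, Index rti)
        {
            RelOptInfo    *rel = root->simple_rel_array[rti];
            RangeTblEntry *rte = root->simple_rte_array[rti];

            /* rel is what largest_child_relation() should receive;
             * rte replaces rt_fetch(rti, root->parse->rtable). */
            (void) rel;
            (void) rte;
        }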
  3. 19 Jan 2019, 8 commits
    • docs - reorg pxf content, add multi-server, objstore content (#6736) · f601572d
      Authored by Lisa Owen
      * docs - reorg pxf content, add multi-server, objstore content
      
      * misc edits, SERVER not optional
      
      * add server, remove creds from examples
      
      * address comments from alexd
      
      * most edits requested by david
      
      * add Minio to table column name
      
      * edits from review with pxf team (start)
      
      * clear text credentials, reorg objstore cfg page
      
      * remove steps with XXX placeholder
      
      * add MapR to supported hadoop distro list
      
      * more objstore config updates
      
      * address objstore comments from alex
      
      * one parquet data type mapping table, misc edits
      
      * misc edits from david
      
      * add mapr hadoop config step, misc edits
      
      * fix formatting
      
      * clarify copying libs for MapR
      
      * fix pxf links on CREATE EXTERNAL TABLE page
      
      * misc edits
      
      * mapr paths may differ based on version in use
      
      * misc edits, use full topic name
      
      * update OSS book for pxf subnav restructure
    • Fix race-condition of pg_xlog delete during pg_basebackup · ab2ec2b4
      Authored by David Kimura
      This commit addresses a race condition where the pg_xlog directory
      could go missing during xlog streaming. The race exists only with
      --forceoverwrite and --xlog stream. In stream mode, pg_basebackup forks
      one process to populate the pg_xlog directory with new transaction log
      files and another process to receive and untar the base directory
      contents. Force overwrite removes an existing pg_xlog directory before
      copying contents from the tar file. It is problematic if the untar
      process deletes the xlog directory while the stream process is trying
      to write to it.
      
      To avoid this situation in forceoverwrite mode, the deletion of
      pg_xlog now happens before the stream and untar processes are started.
      This allows the untar process to skip the deletion of pg_xlog.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
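      A simplified C sketch of the new ordering, assuming plain POSIX fork();
      the worker functions are placeholders, not the real pg_basebackup code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/wait.h>

        /* Placeholders for the real work done by pg_basebackup. */
        static void remove_existing_pg_xlog(void) { puts("removing pg_xlog (forceoverwrite)"); }
        static void stream_xlog(void)             { puts("streaming WAL into pg_xlog"); }
        static void untar_base_directory(void)    { puts("untarring base dir, pg_xlog untouched"); }

        int
        main(void)
        {
            /* Delete pg_xlog once, up front, before either worker starts, so the
             * untar worker never removes a directory the streamer is writing to. */
            remove_existing_pg_xlog();

            pid_t streamer = fork();
            if (streamer < 0)
            {
                perror("fork");
                return 1;
            }
            if (streamer == 0)
            {
                stream_xlog();
                _exit(0);
            }

            untar_base_directory();
            waitpid(streamer, NULL, 0);
            return 0;
        }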
    • Refactor reloptions construction · 677a08ef
      Authored by Daniel Gustafsson
      Use the CStringGetTextDatum() construct when generating the reloptions
      array in order to improve readability. This patch started out by trying
      to remove duplication in calculating the string length but turned into
      a refactoring of the Datum creation instead.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
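      Roughly the construct the refactoring moves to, sketched in C; assumes
      the PostgreSQL backend environment, and the option strings are made up:

        /* Sketch only: backend-side code, not a standalone program. */
        #include "postgres.h"
        #include "catalog/pg_type.h"
        #include "utils/array.h"
        #include "utils/builtins.h"

        /* Build a text[] Datum of reloption strings.  CStringGetTextDatum()
         * turns each C string into a text Datum in one readable step, rather
         * than hand-computing string lengths and filling varlenas manually. */
        static Datum
        build_reloptions_array(void)
        {
            Datum opts[2];

            opts[0] = CStringGetTextDatum("appendonly=true");
            opts[1] = CStringGetTextDatum("compresslevel=5");

            return PointerGetDatum(construct_array(opts, 2, TEXTOID,
                                                   -1, false, 'i'));
        }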
    • Use VERBOSE setting for HLL ANALYZE logging · e34f741f
      Authored by Daniel Gustafsson
      Running ANALYZE with the HLL computation produces a lot of LOG messages
      which are geared more towards troubleshooting than general-purpose log
      files. Fold these under ANALYZE VERBOSE so they don't clutter up log
      files on production systems unless explicitly asked for.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
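      The usual pattern for gating such messages on VERBOSE, shown as a small
      C sketch (the message text and level choice are illustrative):

        /* Sketch only: backend-side code. */
        #include "postgres.h"

        /* Emit per-relation HLL details at INFO only when ANALYZE VERBOSE was
         * requested; otherwise keep them at a debug level so production log
         * files stay quiet. */
        static void
        report_hll_estimate(bool verbose, double ndistinct)
        {
            int elevel = verbose ? INFO : DEBUG2;

            elog(elevel, "HLL ndistinct estimate: %.0f", ndistinct);
        }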
    • Add QuickLZ compression wrappers to gpcontrib (#6718) · fd73773b
      Authored by Bradford Boyle
      - Added with-quicklz configure flag
      - Added quicklz gpcontrib directory with C wrapper functions and SQL installation file
      - Added simple quicklz functional tests
      - Added #undef HAVE_LIBQUICKLZ to pg_config.h.win32.
        This is to parallel the recent change in pg_config.h.in that adds
        quicklz. pg_config.h.win32 should be autogenerated, but isn't in
        practice.
      Co-authored-by: Jimmy Yih <jyih@pivotal.io>
      Co-authored-by: Ben Christel <bchristel@pivotal.io>
      Co-authored-by: David Sharp <dsharp@pivotal.io>
    • CI: Only centos 6/7 release candidates · fa63e7ab
      Authored by Venkatesh Raghavan
      For GPDB 6 Beta, CentOS 6/7 need to be passing for the same commit
      for it to be a valid release candidate.
      Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
    • Bump ORCA version to v3.23.0 · a4e95e1f
      Authored by Sambitesh Dash
    • pg_isready: handle PQPING_MIRROR_READY and reassign exported constant · 46c83a66
      Authored by Jacob Champion
      The GPDB-specific constant PQPING_MIRROR_READY, which indicates that a
      mirror is ready for replication, was not handled in pg_isready.
      
      Additionally, the value we selected for PQPING_MIRROR_READY might at one
      point in the future conflict with upstream libpq, which would be a pain
      to untangle. Try to avoid that situation by increasing the value.
      Co-authored-by: Shoaib Lari <slari@pivotal.io>
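      A hedged C sketch of handling the extra status in a pg_isready-style
      check; it only compiles against GPDB's libpq (which defines
      PQPING_MIRROR_READY), and the exit code chosen for it is hypothetical:

        #include <stdio.h>
        #include "libpq-fe.h"

        /* Map a PQping() result to an exit status, including the GPDB-specific
         * "mirror is ready for replication" answer. */
        static int
        ping_to_exit_status(PGPing rv)
        {
            switch (rv)
            {
                case PQPING_OK:           return 0;
                case PQPING_REJECT:       return 1;
                case PQPING_NO_RESPONSE:  return 2;
                case PQPING_MIRROR_READY: return 64;  /* hypothetical mapping */
                case PQPING_NO_ATTEMPT:
                default:                  return 3;
            }
        }

        int
        main(void)
        {
            printf("%d\n", ping_to_exit_status(PQping("host=localhost port=5432")));
            return 0;
        }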
  4. 18 Jan 2019, 6 commits
    • cee0a6f5
    • Remove flaky pg_basebackup scenario · 497d1112
      Authored by David Kimura
      There was a race condition where the fault could be triggered
      unexpectedly by a WAL sender, independent of pg_basebackup being run.
      We could fix it to be more deterministic by incrementing the
      wait-for-triggered count, but the test as a whole didn't seem to add
      much value.
      Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
    • pg_rewind: Update tests to create separate datadirs for each test (#6689) · ac3cf6e0
      Authored by David Kimura
      Prior to this commit, the test recreated the tmp_check_* directory for
      each running test. This would lead to losing the datadir for the
      failing test if it wasn't the last one. This commit creates a new
      directory specific to each test and cleans up the artifacts of
      previous passing tests.
      Co-authored-by: David Kimura <dkimura@pivotal.io>
      Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
    • Bump ORCA version to 3.22.0 · 6cb95608
      Authored by Abhijit Subramanya
      Co-authored-by: Chris Hajas <chajas@pivotal.io>
    • Update the default value of optimizer_penalize_broadcast_threshold. · 873b657b
      Authored by Abhijit Subramanya
      This commit sets the default value of the GUC
      optimizer_penalize_broadcast_threshold to 100000. We have seen many
      cases where a plan with a broadcast was chosen due to underestimation
      of cardinality, when a Redistribute motion would have been better. So
      this commit penalizes broadcast when the number of rows is greater
      than 100000, so that Redistribute is favored in this case. We have
      tested the change on the perf pipeline and do not see any regression.
      Co-authored-by: Chris Hajas <chajas@pivotal.io>
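      The intent, reduced to a toy C cost adjustment; the function and the
      penalty formula are hypothetical, and ORCA's actual cost model is far
      more involved:

        #include <stdio.h>

        /* Toy illustration of the GUC's effect: once the estimated row count of
         * a broadcast exceeds the threshold, inflate its cost so a Redistribute
         * motion wins even when cardinality was underestimated. */
        static double
        broadcast_cost(double base_cost, double est_rows, double threshold)
        {
            if (threshold > 0 && est_rows > threshold)
                return base_cost * (est_rows / threshold);  /* hypothetical penalty */
            return base_cost;
        }

        int
        main(void)
        {
            printf("%.1f\n", broadcast_cost(100.0, 50000.0, 100000.0));   /* unchanged */
            printf("%.1f\n", broadcast_cost(100.0, 500000.0, 100000.0));  /* penalized */
            return 0;
        }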
    • Remove broken test for an ancient bug. · 568ee859
      Authored by Heikki Linnakangas
      I looked up this issue in the old JIRA instance:
      
      > MPP-8014: bitmap indexes create entries in gp_distribution_policy
      >
      > postgres=# \d bar
      >       Table "public.bar"
      >  Column |  Type   | Modifiers
      > --------+---------+-----------
      >  i      | integer |
      > Distributed by: (i)
      >
      > postgres=# create index bitmap_idx on bar using bitmap(i);
      > CREATE INDEX
      > postgres=# select localoid::regclass, * from gp_distribution_policy;
      >           localoid          | localoid | attrnums
      > ----------------------------+----------+----------
      >  bar                        |    16398 | {1}
      >  pg_bitmapindex.pg_bm_16415 |    16416 |
      > (2 rows)
      
      So the problem was that we created a gp_distribution_policy entry for
      the auxiliary heap table of the bitmap index. We no longer do that;
      this bug was fixed 9 years ago. But the test we have in mpp8014 would
      not fail even if the bug reappeared! Let's remove the test, as it's
      useless in its current form. It would be nice to have a proper test
      for that bug, but it doesn't seem very likely to reappear any time
      soon, so it doesn't seem worth the effort.
      
      Fixes https://github.com/greenplum-db/gpdb/issues/6315
  5. 17 Jan 2019, 13 commits
  6. 16 Jan 2019, 10 commits