1. 27 Jan 2019, 1 commit
  2. 26 Jan 2019, 10 commits
    • H
      Fix expected output for \di output change. · a2c7ccc9
      Heikki Linnakangas committed
      After commit 56bb376c, \di no longer prints the Storage column. I failed
      to change the 'bfv_partition' test's expected output accordingly.
      a2c7ccc9
    • H
      Fix assertion failure in \di+ and add tests. · 56bb376c
      Heikki Linnakangas committed
      The 'translate_columns' array must be at least as large as the number of
      columns in the result set that is passed to printQuery(). We had added one
      column, "Storage", in GPDB, so we must make the array larger, too.
      
      This is a bit fragile, and would go wrong if there were any translated
      columns after the GPDB-added column. But there aren't, and we don't really
      do translation in GPDB anyway, so this seems good enough.
      
      The Storage column isn't actually interesting for indexes, so omit it
      for \di.
      
      Add a bunch of tests: for the \di+ that was hitting the assertion, as well
      as \d commands, to exercise the Storage column.
      
      Fixes github issue https://github.com/greenplum-db/gpdb/issues/6792
      Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
      Reviewed-by: Jimmy Yih <jyih@pivotal.io>
      Reviewed-by: Jesse Zhang <jzhang@pivotal.io>
      56bb376c
    • A
      CI: change gpexpand job dependency to icw_gporca_centos6. · 322c4602
      Ashwin Agrawal committed
      The icw_gporca_centos6 job generates the icw_gporca_centos6_dump. gpexpand
      takes icw_gporca_centos6_dump as input, so make it depend on just that
      particular job instead of all the ICW jobs. This makes the gpexpand job
      the same as the pg_upgrade job. Also, importantly, it marks the real
      dependency instead of a perceived one.
      322c4602
    • A
      gpexpand: need to start and stop the primaries with convertMasterDataDirToSegment. · c596cca7
      Ashwin Agrawal committed
      This partially reverts commit b597bfa8, as the primaries need to be
      started once using convertMasterDataDirToSegment.
      c596cca7
    • B
      Fix mem leak · d2d1c209
      Bhuvnesh Chaudhary committed
      d2d1c209
    • A
      Remove BitmapHeapPath and UniquePath check in cdbpath_cost_motion() · 014a9d6a
      Alexandra Wang committed
      For cost estimation of a MotionPath node, we calculate the rows as
      (subpath->rows * cdbpath_segments) for CdbPathLocus_IsReplicated() when
      the subpath is not an IndexPath, BitmapHeapPath, UniquePath, or
      BitmapAppendOnlyPath (the last of which was completely removed in
      db516347). Previously, for the above-mentioned nodes we always calculated
      the rows as subpath->rows. The reason why those Paths were special is
      unknown; the logic has always been there, used to live in cdbpath_rows(),
      and was refactored as part of commit b2411b59. Therefore remove the checks
      altogether and calculate the rows the same way for all
      CdbPathLocus_IsReplicated() subpaths. We have already removed the
      IndexPath check as part of the 94_STABLE merge.
      
      With this update, we only see one plan change in ICG.
      Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
      014a9d6a
    • A
      Remove tinc references. · b6af8d7b
      Ashwin Agrawal committed
      b6af8d7b
    • A
      Remove TINC framework and tests. · 7fa85902
      Ashwin Agrawal committed
      All the tests have been ported out of this framework and nothing runs
      these tests in CI anymore.
      7fa85902
    • A
      Remove tinc from concourse and pipeline files. · cd733c64
      Ashwin Agrawal committed
      This also removes the last remaining walrep job using tinc from the
      pipeline file. Those tests are broken anyway and can't be run. The plan is
      to port the relevant ones to regress or behave.
      cd733c64
    • A
      gpexpand: remove redundant creation of mirrors. · b597bfa8
      Ashwin Agrawal committed
      gpexpand runs `_gp_expand.sync_new_mirrors()` at the end, after updating
      the catalog, which runs `gprecoverseg -aF`. But it was also calling
      `buildSegmentInfoForNewSegment()` as part of add_segments(), which creates
      the primaries, and was calling `ConfigureNewSegment()` for the mirrors,
      which ran pg_basebackup internally. So the end result was that each mirror
      was created twice: first via pg_basebackup and later via gprecoverseg -aF.
      
      Hence, modify gpexpand to create just the primaries as part of
      `_gp_expand.add_segments()` and let `_gp_expand.sync_new_mirrors()` do the
      mirror creation. Spotted the redundancy while browsing the code.
      b597bfa8
  3. 25 Jan 2019, 13 commits
  4. 24 Jan 2019, 16 commits
    • D
      Tidy up error reporting in gp_sparse_vector a little · 9208b8f3
      Daniel Gustafsson committed
      This cleans up the error messages in the sparse vector code a little by
      ensuring they mostly conform to the style guide for error handling. Also
      fixes a nearby typo and removes commented-out elogs which are clearly dead
      code.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      9208b8f3
    • D
      Fix up incorrect license statements for module · 41e8ed17
      Daniel Gustafsson committed
      The gp_sparse_vector module was covered by the relicensing done as
      part of the Greenplum open sourcing, but a few mentions of previous
      licensing remained in the code. The legal situation of this code has
      been reviewed by Pivotal legal and is cleared, so remove incorrect
      statements and replace with the standard copyright file headers.
      
      This also cleans up a few comments while at it.
      
      Reviewed-by: Cyrus Wadia
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      41e8ed17
    • D
      Remove unused header file · 0c1add31
      Daniel Gustafsson committed
      The last use of the float_specials.h header was removed shortly after this
      contrib module was imported in 2010, and the header has been dead code
      since. Remove it.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      0c1add31
    • D
      Remove unused and inline single-use functions · f1277a5c
      Daniel Gustafsson committed
      This removes a few unused functions, and inlines the function body of
      another one which only had a single caller. It also properly marks a few
      functions as static.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      f1277a5c
    • D
      Stabilize gp_sparse_vector test · ccfe82b7
      Daniel Gustafsson committed
      Remove a redundant test on array_agg which didn't have stable output, and
      remove an ORDER BY to let atmsort deal with differences instead.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      ccfe82b7
    • D
      Allocate histogram with palloc to avoid memleak · 0d4908ee
      Daniel Gustafsson committed
      The histogram structure was allocated statically via malloc(), but it had
      no data retention between calls; the static allocation was purely a
      micro-optimization to avoid the cost of repeated allocations. This led to
      the allocated memory leaking, as it's not cleaned up automatically. Fix by
      palloc()ing the memory instead and taking the cost of repeated allocation.
      
      Also ensure that allocated memory is properly cleaned up in failure cases.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      0d4908ee
    • D
      d613a759
    • D
      Fix memory management in gp_sparse_vector · 970a0395
      Daniel Gustafsson committed
      palloc() is guaranteed to only return on successful allocation, so there
      is no need to check its result. ereport(ERROR, ...) is guaranteed never to
      return, and cleans up on its way out, so pfree()ing after an ereport() is
      not just unreachable code; it would be a double-free if it were reached.
      
      Also add proper checks on the malloc() and strdup() calls, as those can
      fail under memory pressure and must be checked by the programmer.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      970a0395
    • A
      pg_dump: free temporary variable qualTmpExtTable · 68b11d46
      Adam Lee committed
      This part of the code is not covered by the PR pipeline; it was tested
      manually.
      68b11d46
    • A
      pg_dump: fix dropping temp external table failure on CI · ed940ab6
      Adam Lee committed
      fmtQualifiedId() and fmtId() share the same buffer, so we cannot call
      either of them again until we are finished with the previous result.
      ed940ab6
    • M
      Don't choose indexscan when you need a motion for a subplan (#6665) · cd055f99
      Melanie committed
      
      When you have a subquery under a SUBLINK that might get pulled up, you
      should not allow indexscans to be chosen for the relation which is the
      range table for the subquery. If that relation is distributed and the
      subquery is pulled up, you will need to redistribute or broadcast that
      relation and materialize it on the segments, and cdbparallelize will not
      add a motion and materialize an indexscan, so you cannot use an indexscan
      in these cases.
      You can't materialize an indexscan because it will materialize only one
      tuple at a time, and when you compare that to the param you get from the
      relation on the segments, you can get wrong results.
      
      Because we don't pick indexscan very often, we don't see this issue very
      often. It takes a subquery referring to a distributed table in a subplan
      which, during planning, gets pulled up, and then, when adding paths, the
      indexscan is cheapest.
      Co-authored-by: Adam Berlin <aberlin@pivotal.io>
      Co-authored-by: Jinbao Chen <jinchen@pivotal.io>
      Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
      cd055f99
    • D
      Remove stale replication slots on mirrors. · fa09dd80
      David Kimura committed
      Stale replication slots can exist on mirrors that were once acting as
      primaries. In this case restart_lsn holds a non-zero value used in the
      past replication slot setup. The stale replication slot will continue to
      retain xlog on the mirror, which is problematic and unnecessary.
      
      This patch drops the internal replication slot on startup of a mirror.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      Co-authored-by: Alexandra Wang <lewang@pivotal.io>
      fa09dd80
    • J
      Prevent int128 from requiring more than MAXALIGN alignment. · d74dd56a
      Jesse Zhang committed
      We backported 128-bit integer support to speed up aggregates (commits
      8122e143 and 959277a4) from upstream 9.6 into Greenplum (in
      commits 9b164486 and 325e6fcd). However, we forgot to also port a
      follow-up fix postgres/postgres@7518049980b, mostly because it's nuanced
      and hard to reproduce.
      
      There are two ways to tell the brokenness:
      
      1. On a lucky day, tests would fail on my workstation, but not my laptop (or
         vice versa).
      
       2. If you stare at the generated code for `int8_avg_combine` (and friends),
         you'll notice the compiler uses "aligned" instructions like `movaps` and
         `movdqa` (on AMD64).
      
      Today's my lucky day.
      
      Original commit message from postgres/postgres@7518049980b (by Tom Lane):
      
      > Our initial work with int128 neglected alignment considerations, an
      > oversight that came back to bite us in bug #14897 from Vincent Lachenal.
      > It is unsurprising that int128 might have a 16-byte alignment requirement;
      > what's slightly more surprising is that even notoriously lax Intel chips
      > sometimes enforce that.
      
      > Raising MAXALIGN seems out of the question: the costs in wasted disk and
      > memory space would be significant, and there would also be an on-disk
      > compatibility break.  Nor does it seem very practical to try to allow some
      > data structures to have more-than-MAXALIGN alignment requirement, as we'd
      > have to push knowledge of that throughout various code that copies data
      > structures around.
      
      > The only way out of the box is to make type int128 conform to the system's
      > alignment assumptions.  Fortunately, gcc supports that via its
      > __attribute__(aligned()) pragma; and since we don't currently support
      > int128 on non-gcc-workalike compilers, we shouldn't be losing any platform
      > support this way.
      
      > Although we could have just done pg_attribute_aligned(MAXIMUM_ALIGNOF) and
      > called it a day, I did a little bit of extra work to make the code more
      > portable than that: it will also support int128 on compilers without
      > __attribute__(aligned()), if the native alignment of their 128-bit-int
      > type is no more than that of int64.
      
      > Add a regression test case that exercises the one known instance of the
      > problem, in parallel aggregation over a bigint column.
      
      > This will need to be back-patched, along with the preparatory commit
      > 91aec93e.  But let's see what the buildfarm makes of it first.
      
      > Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org
      
      (cherry picked from commit 75180499)
      d74dd56a
    • J
      Rearrange c.h to create a "compiler characteristics" section. · 60a08bc2
      Jesse Zhang committed
      This cherry-picks 91aec93e. We had to be extra careful to preserve
      still-in-use macros UnusedArg and STATIC_IF_INLINE and friends.
      
      > Generalize section 1 to handle stuff that is principally about the
      > compiler (not libraries), such as attributes, and collect stuff there
      > that had been dropped into various other parts of c.h.  Also, push
      > all the gettext macros into section 8, so that section 0 is really
      > just inclusions rather than inclusions and random other stuff.
      
      > The primary goal here is to get pg_attribute_aligned() defined before
      > section 3, so that we can use it with int128.  But this seems like good
      > cleanup anyway.
      
      > This patch just moves macro definitions around, and shouldn't result
      > in any changes in generated code.  But I'll push it out separately
      > to see if the buildfarm agrees.
      
      > Discussion: https://postgr.es/m/20171110185747.31519.28038@wrigleys.postgresql.org
      
      (cherry picked from commit 91aec93e)
      60a08bc2
    • D
      Update GDD to not assign global transaction ids · e24ddd70
      David Kimura committed
      Currently GDD sets DistributedTransactionContext to
      DTX_CONTEXT_QD_DISTRIBUTED_CAPABLE and as a result allocates a distributed
      transaction id. It creates an entry in ProcGlobal->allTmGxact with state
      DTX_STATE_ACTIVE_NOT_DISTRIBUTED. The effect of this is that any query
      taking a snapshot will see this transaction as in progress. Since the GDD
      transaction is short-lived this is not an issue in general, but in CI it
      causes flaky behavior for some of the vacuum tests. The flaky behavior
      shows up as unvacuumed tables where the vacuum snapshot was taken while
      the GDD transaction was running, forcing vacuum to lower its oldest XMIN.
      The current behavior of GDD consuming a distributed transaction id (every
      2 minutes by default) is also wasteful.
      
      Currently GDD also sends a snapshot to the QEs, but this isn't required
      and is wasteful as well.
      
      With this change, GDD keeps DistributedTransactionContext as
      DTX_CONTEXT_LOCAL_ONLY and avoids dispatching snapshots to QEs.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      e24ddd70
    • A
      Gpexpand use gp_add_segment to register primaries. · e2c699c8
      Ashwin Agrawal committed
      Currently, the dbid is used in the tablespace path, so creating a segment
      requires the dbid. To get the dbid, the segment needs to be added to the
      catalog first, but adding a segment to the catalog before creating it
      causes issues. Hence, modify gpexpand so that the database does not
      generate the dbid; instead, pass in the dbid generated upfront while
      registering the segment in the catalog. This way the dbid used while
      creating the segment is the same as the dbid in the catalog.
      Reviewed-by: Jimmy Yih <jyih@pivotal.io>
      e2c699c8