1. 08 October 2019 (6 commits)
    • Publish server builds from compile tasks · b679ac6f
      Bradford D. Boyle authored
      Server release candidate artifacts are not published until after an
      extensive set of tests have passed in the CI pipeline. These tests
      include ICW and all the CLI test suites. It is not unusual for the
      time between a commit being pushed and a release candidate being
      published to be several hours, potentially slowing down the feedback
      cycle for component teams.
      
      This commit adds a "published" output to the compilation tasks. The new
      build artifact is stored in an immutable GCS bucket with the version in
      the filename. This makes it trivial for other pipelines to safely
      consume the latest build.
      
      This artifact **has not** passed any sort of testing (e.g., ICW) and
      should only be used in development pipelines that need
      near-instantaneous feedback on commits going into GPDB.
      
      For the server-build artifact, `((rc-build-type-gcs))` will resolve to
      `.debug` for the with-asserts pipeline and `''` (i.e., the empty string)
      for the without-asserts pipeline. The two types of server artifacts that are
      "published" are:
      
      1. server-build
      2. server-rc
      
      server-build is the output of the compilation task and has had no
      testing; server-rc is a release candidate of the server component.
      Authored-by: Bradford D. Boyle <bboyle@pivotal.io>
      (cherry picked from commit 94a8ffc9)
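
      A rough sketch of how the `((rc-build-type-gcs))` interpolation described
      above plays out when naming a published artifact. The helper function, the
      `.tar.gz` suffix, and the example version string are assumptions made for
      illustration; the real object layout is defined by the pipeline
      configuration, not by this commit message.

      ```cpp
      // Illustrative only: builds a versioned, build-type-suffixed artifact name
      // of the kind a downstream pipeline could consume from the immutable GCS
      // bucket. The naming pattern below is an assumption, not the pipeline's
      // actual scheme.
      #include <iostream>
      #include <string>

      std::string server_build_object(const std::string &version, bool with_asserts)
      {
          // ((rc-build-type-gcs)) resolves to ".debug" for the with-asserts
          // pipeline and to the empty string for the without-asserts pipeline.
          const std::string build_type = with_asserts ? ".debug" : "";
          return "server-build" + build_type + "-" + version + ".tar.gz";
      }

      int main()
      {
          std::cout << server_build_object("6.0.0", true) << "\n";   // server-build.debug-6.0.0.tar.gz
          std::cout << server_build_object("6.0.0", false) << "\n";  // server-build-6.0.0.tar.gz
          return 0;
      }
      ```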
    • docs - add guc optimizer_use_gpdb_allocators (#8767) · 3e01c88b
      Mel Kiyama authored
      * docs - add guc optimizer_use_gpdb_allocators
      
      * docs - add information about improved GPDB query mem. mgmt.
    • Add pg_upgrade integration test of partitioned heap tables · c41c9f42
      David Kimura authored
      Tests that a user can upgrade a partitioned heap table from 5X to 6X and
      validates that the data exists on the upgraded cluster.
      Co-authored-by: Adam Berlin <aberlin@pivotal.io>
    • Remove VACUUM FREEZE in pg_upgrade at set_frozenxids · a5e35ac6
      David Kimura authored
      The issue arises when the new cluster has a higher next xid than the
      old cluster. In that case, the new cluster moves the next xid backwards
      in `copy_xact_xlog_xid()`. This is problematic because subsequent tuple
      inserts are not frozen, but will have a lower xmin than the relfrozenxid
      defined on the relation. That violates the definition of relfrozenxid,
      and VACUUM FREEZE will choke.
      
      This was identified by the upgrade integration tests, which create a
      fresh GPDB5 cluster and a fresh GPDB6 cluster. In this scenario,
      immediately after initdb, GPDB6 has a higher next xid than GPDB5.
      
      Given that template0 is not connectable, it should not need a vacuum.
      The solution is to remove the VACUUM FREEZE on template0, since it does
      not work in all scenarios. Additionally, this removes a difference from
      upstream Postgres.
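
      A toy numeric walk-through of the invariant described above, using
      invented xid values and ignoring 32-bit xid wraparound (real code
      compares xids with TransactionIdPrecedes); it only illustrates why
      resetting the next xid backwards after a VACUUM FREEZE leaves new tuples
      with an xmin below relfrozenxid.

      ```cpp
      // Simplified illustration only: the real pg_upgrade code paths
      // (set_frozenxids, copy_xact_xlog_xid) are not modeled, and the values
      // are invented.
      #include <cstdint>
      #include <cstdio>

      using TransactionId = std::uint32_t;

      int main()
      {
          TransactionId old_cluster_next_xid = 700;   // old (GPDB5) cluster is "behind"
          TransactionId new_cluster_next_xid = 900;   // fresh GPDB6 cluster consumed more xids after initdb

          // VACUUM FREEZE on template0 stamps relfrozenxid with the new cluster's current xid.
          TransactionId relfrozenxid = new_cluster_next_xid;

          // copy_xact_xlog_xid() then resets the next xid to the old cluster's (lower) value.
          TransactionId next_xid = old_cluster_next_xid;

          // A subsequent (unfrozen) tuple insert gets an xmin below relfrozenxid,
          // violating the rule that relfrozenxid precedes every unfrozen xmin.
          TransactionId xmin = next_xid + 1;

          std::printf("xmin=%u relfrozenxid=%u -> invariant %s\n",
                      (unsigned) xmin, (unsigned) relfrozenxid,
                      xmin < relfrozenxid ? "violated" : "holds");
          return 0;
      }
      ```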
    • Set locus correctly on Append node, if there are General locus children. · bc7295ec
      Heikki Linnakangas authored
      I found the logic to decide the target locus hard to understand, so I
      rewrote it in a table-driven approach. I hope it's not just me.
      
      Fixes github issue https://github.com/greenplum-db/gpdb/issues/8711
      Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
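
      A minimal sketch of the table-driven style this commit describes, not the
      code it actually adds: the locus names are GPDB concepts, but the enum,
      the lookup table, and the specific combination rules below are
      illustrative assumptions.

      ```cpp
      // Illustrative sketch of a table-driven locus decision for an Append node.
      // The enum values and combination rules are assumptions for illustration,
      // not the rules implemented in the commit.
      #include <cstdio>
      #include <vector>

      enum Locus { GENERAL = 0, SINGLE_QE = 1, PARTITIONED = 2, NUM_LOCI = 3 };

      // combine[a][b]: resulting locus when the accumulated locus is `a`
      // and the next child has locus `b` (symmetric in this toy table).
      static const Locus combine[NUM_LOCI][NUM_LOCI] = {
          /* GENERAL     */ { GENERAL,     SINGLE_QE,   PARTITIONED },
          /* SINGLE_QE   */ { SINGLE_QE,   SINGLE_QE,   PARTITIONED },
          /* PARTITIONED */ { PARTITIONED, PARTITIONED, PARTITIONED },
      };

      static Locus append_locus(const std::vector<Locus> &children)
      {
          Locus result = GENERAL;              // an Append with only General children stays General
          for (Locus child : children)
              result = combine[result][child]; // one table lookup per child instead of nested ifs
          return result;
      }

      int main()
      {
          const char *names[] = { "General", "SingleQE", "Partitioned" };
          std::printf("%s\n", names[append_locus({GENERAL, PARTITIONED})]);  // Partitioned
          std::printf("%s\n", names[append_locus({GENERAL, GENERAL})]);      // General
          return 0;
      }
      ```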
    • docs - add gpcopy as an option to migrate data to GPDB 6 (#8751) · 2df0c411
      Chuck Litzell authored
      * Can use gpcopy to migrate data
      
      * Conditionalize gpcopy references
      
      * Edits for review comments
  2. 05 October 2019 (1 commit)
    • Bump ORCA version to 3.74.0, Introduce PallocMemoryPool for use in GPORCA (#8643) · 27e237ce
      Chris Hajas authored
      We introduce a new type of memory pool and memory pool manager:
      CMemoryPoolPalloc and CMemoryPoolPallocManager
      
      The motivation for this PR is to improve memory allocation/deallocation
      performance when using GPDB allocators. Additionally, we would like to
      use the GPDB memory allocators by default (change the default for
      optimizer_use_gpdb_allocators to on), to prevent ORCA from crashing when
      we run out of memory (OOM). However, with the current way of doing
      things, doing so would add around 10% performance overhead to ORCA.
      
      CMemoryPoolPallocManager overrides the default CMemoryPoolManager in
      ORCA, and creates a CMemoryPoolPalloc memory pool instead of a
      CMemoryPoolTracker. In CMemoryPoolPalloc, we now call MemoryContextAlloc
      and pfree instead of gp_malloc and gp_free, and we don’t do any memory
      accounting.
      
      So where does the performance improvement come from? Previously, we
      would (essentially) pass gp_malloc and gp_free to an underlying
      allocation structure (which has since been removed on the ORCA side),
      and we would also add headers and overhead to maintain a list of all
      of these allocations. When tearing down the memory pool, we would
      iterate through the list of allocations and explicitly free each one.
      So we ended up paying overhead on both the ORCA side AND the GPDB
      side, and the overhead on both sides was quite expensive!
      
      If you want to compare against the previous implementation, see the
      Allocate and Teardown functions in CMemoryPoolTracker.
      
      With this PR, we improve optimization time by ~15% on average, and by
      up to 30-40% on some memory-intensive queries.
      
      This PR does remove memory accounting in ORCA, which was only enabled
      when the optimizer_use_gpdb_allocators GUC was set. With
      `optimizer_use_gpdb_allocators` set, we still capture the memory used
      when optimizing a query in ORCA (through the memory contexts), just
      without the overhead of the memory accounting framework.
      
      Additionally, add a top-level ORCA memory context under which new
      contexts are created.
      
      The OptimizerMemoryContext is initialized in InitPostgres(). For each
      memory pool in ORCA, a new memory context is created in
      OptimizerMemoryContext.
      
      Bumps ORCA version to 3.74.0
      
      This is a re-commit of 1db9b27a, which didn't properly catch/rethrow
      exceptions in gpos_init.
      Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
      Co-authored-by: Chris Hajas <chajas@pivotal.io>
      (cherry picked from commit 99dfccc2)
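
      A self-contained sketch of the contrast the commit message draws, not the
      actual ORCA/GPDB code: `TrackingPool` mimics the old per-allocation
      bookkeeping, while `ArenaPool` stands in for a palloc memory context under
      OptimizerMemoryContext that hands out memory from large blocks and
      releases them wholesale at teardown. All class and function names are
      invented for illustration, and alignment and oversized requests are
      ignored.

      ```cpp
      #include <cstddef>
      #include <cstdio>
      #include <cstdlib>
      #include <vector>

      // Old scheme (simplified): every allocation gets a header and is linked
      // into a list, and Teardown() walks the list to free each one explicitly.
      class TrackingPool {
          struct Header { Header *next; };
          Header *head_ = nullptr;
      public:
          void *Allocate(std::size_t size) {
              Header *h = static_cast<Header *>(std::malloc(sizeof(Header) + size));
              h->next = head_;
              head_ = h;                      // bookkeeping cost paid on every allocation
              return h + 1;
          }
          void Teardown() {
              while (head_) {                 // explicit free of every single allocation
                  Header *next = head_->next;
                  std::free(head_);
                  head_ = next;
              }
          }
      };

      // New scheme (simplified): bump-allocate out of large blocks; Teardown()
      // frees whole blocks, so the pool keeps no per-allocation header, list,
      // or accounting of its own.
      class ArenaPool {
          static constexpr std::size_t kBlockSize = 1 << 20;   // 1 MiB blocks, arbitrary
          std::vector<char *> blocks_;
          std::size_t used_ = kBlockSize;
      public:
          void *Allocate(std::size_t size) {
              if (used_ + size > kBlockSize) {    // current block exhausted, grab a new one
                  blocks_.push_back(static_cast<char *>(std::malloc(kBlockSize)));
                  used_ = 0;
              }
              void *p = blocks_.back() + used_;
              used_ += size;
              return p;
          }
          void Teardown() {
              for (char *b : blocks_)
                  std::free(b);               // bulk release, independent of allocation count
              blocks_.clear();
              used_ = kBlockSize;
          }
      };

      int main()
      {
          TrackingPool tracked;
          ArenaPool arena;
          for (int i = 0; i < 1000; i++) {
              tracked.Allocate(64);
              arena.Allocate(64);
          }
          tracked.Teardown();
          arena.Teardown();
          std::printf("both pools torn down\n");
          return 0;
      }
      ```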
  3. 04 October 2019 (7 commits)
  4. 03 October 2019 (1 commit)
  5. 02 October 2019 (6 commits)
    • 607d003b
    • 9a90da8a
    • cb1ebf73
    • e229a311
    • Bump ORCA version to 3.74.0, Introduce PallocMemoryPool for use in GPORCA (#8643) · 1db9b27a
      Chris Hajas authored
      We introduce a new type of memory pool and memory pool manager:
      CMemoryPoolPalloc and CMemoryPoolPallocManager
      
      The motivation for this PR is to improve memory allocation/deallocation
      performance when using GPDB allocators. Additionally, we would like to
      use the GPDB memory allocators by default (change the default for
      optimizer_use_gpdb_allocators to on), to prevent ORCA from crashing when
      we run out of memory (OOM). However, with the current way of doing
      things, doing so would add around 10% performance overhead to ORCA.
      
      CMemoryPoolPallocManager overrides the default CMemoryPoolManager in
      ORCA, and creates a CMemoryPoolPalloc memory pool instead of a
      CMemoryPoolTracker. In CMemoryPoolPalloc, we now call MemoryContextAlloc
      and pfree instead of gp_malloc and gp_free, and we don’t do any memory
      accounting.
      
      So where does the performance improvement come from? Previously, we
      would (essentially) pass gp_malloc and gp_free to an underlying
      allocation structure (which has since been removed on the ORCA side),
      and we would also add headers and overhead to maintain a list of all
      of these allocations. When tearing down the memory pool, we would
      iterate through the list of allocations and explicitly free each one.
      So we ended up paying overhead on both the ORCA side AND the GPDB
      side, and the overhead on both sides was quite expensive!
      
      If you want to compare against the previous implementation, see the
      Allocate and Teardown functions in CMemoryPoolTracker.
      
      With this PR, we improve optimization time by ~15% on average, and by
      up to 30-40% on some memory-intensive queries.
      
      This PR does remove memory accounting in ORCA, which was only enabled
      when the optimizer_use_gpdb_allocators GUC was set. With
      `optimizer_use_gpdb_allocators` set, we still capture the memory used
      when optimizing a query in ORCA (through the memory contexts), just
      without the overhead of the memory accounting framework.
      
      Additionally, add a top-level ORCA memory context under which new
      contexts are created.
      
      The OptimizerMemoryContext is initialized in InitPostgres(). For each
      memory pool in ORCA, a new memory context is created in
      OptimizerMemoryContext.
      
      Bumps ORCA version to 3.74.0
      Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
      Co-authored-by: Chris Hajas <chajas@pivotal.io>
    • Remove vacuum_drop_phase_ao from parallel group · 38a177f5
      Adam Berlin authored
      This is an experiment: we want to see if the test becomes less flaky.
      
      (cherry picked from commit b99920ff)
  6. 01 October 2019 (7 commits)
  7. 28 September 2019 (2 commits)
  8. 27 September 2019 (10 commits)