1. 10 October 2019, 8 commits
  2. 09 October 2019, 6 commits
  3. 08 October 2019, 10 commits
    • Move a few GPDB-specific pg_dump C functions to separate file. · cf65a4ab
      Heikki Linnakangas committed
      The reason to do this now is that the dumputils mock test was failing
      to compile on the 9.6 merge branch, because of the Assert that was
      added to dumputils.c. I'm not sure why, but separating the GPDB
      functions into a separate file seems like a good idea anyway, so
      let's just do that.
      cf65a4ab
    • Change fix for Motion node's tdhasoids confusion, and add tests. · 6b53e2d7
      Heikki Linnakangas committed
      In master, we already got this fix as part of the 9.5 merge. Change it
      slightly, so that we don't use the subplan's tuple descriptor as is, but
      only copy the "tdhasoid" flag from it. It's in principle possible that
      some unimportant information, like attribute names or typmods, is not
      set up in the subplan's result type and should instead be computed from the
      Motion's target list. We're not seeing problems like that on master, but
      on the 9.6 merge branch we are.
      
      More importantly, this adds bespoke tests for this scenario. It arose
      in the 'rowsecurity' test, but it doesn't have anything to do with
      row-level security, so its appearance there was accidental.
      
      We didn't backport the fix to 6X_STABLE before, so do that now.
      
      Fixes https://github.com/greenplum-db/gpdb/issues/8765.
      Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
      6b53e2d7
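      To make the fix concrete, here is a minimal C sketch using a simplified stand-in struct (the real TupleDesc carries attribute arrays, typmods, and more): the Motion's descriptor is built from its own target list, and only the tdhasoid flag is copied over from the subplan.

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Simplified stand-in for a PostgreSQL tuple descriptor. */
      typedef struct {
          int  natts;
          bool tdhasoid;
      } TupleDescSketch;

      /* Instead of adopting the subplan's descriptor wholesale, keep the
       * Motion's own descriptor (computed from its target list) and copy
       * only the tdhasoid flag from the subplan. */
      static void fix_motion_tupdesc(TupleDescSketch *motion,
                                     const TupleDescSketch *subplan)
      {
          motion->tdhasoid = subplan->tdhasoid;  /* only this flag is taken */
      }

      int main(void)
      {
          TupleDescSketch subplan = { .natts = 2, .tdhasoid = true };
          TupleDescSketch motion  = { .natts = 2, .tdhasoid = false };
          fix_motion_tupdesc(&motion, &subplan);
          assert(motion.tdhasoid);
          return 0;
      }
      ```
      
      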
    • Publish server builds from compile tasks · 94a8ffc9
      Bradford D. Boyle committed
      Server release candidate artifacts are not published until after an
      extensive set of tests have passed in the CI pipeline. These tests
      include ICW and all the CLI test suites. It is not unusual for time
      between a commit being pushed to a release candidate being published to
      be several hours, potentially slowing down the feedback cycle for
      component teams.
      
      This commit adds a "published" output to the compilation tasks. The new
      build artifact is stored in an immutable GCS bucket with the version in
      the filename. This makes it trivial for other pipelines to safely
      consume the latest build.
      
      This artifact **has not** passed any sort of testing (e.g., ICW) and
      should only be used in development pipelines that need
      near-instantaneous feedback on commits going into GPDB.
      
      For the server-build artifact, `((rc-build-type-gcs))` will resolve to
      `.debug` for the with-asserts pipeline and `''` (i.e., the empty string)
      for the without-asserts pipeline. The two types of server artifacts that are
      "published" are:
      
      1. server-build
      2. server-rc
      
      server-build is the output of the compilation task and has had no
      testing; server-rc is a release candidate of the server component.
      Authored-by: Bradford D. Boyle <bboyle@pivotal.io>
      94a8ffc9
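      A hypothetical C sketch of the versioned-name scheme described above: the build-type suffix (`.debug` or empty) and the version are baked into the filename, so each upload to the immutable bucket is unique. The exact name format is an assumption for illustration, not the pipeline's actual template.

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <string.h>

      /* Compose an artifact filename from component, build type and version.
       * build_type is ".debug" for with-asserts builds, "" otherwise. */
      static void artifact_name(char *buf, size_t len, const char *component,
                                const char *version, const char *build_type)
      {
          snprintf(buf, len, "%s%s-%s.tar.gz", component, build_type, version);
      }

      int main(void)
      {
          char name[128];
          artifact_name(name, sizeof name, "server-build", "6.0.0-beta.1", ".debug");
          assert(strcmp(name, "server-build.debug-6.0.0-beta.1.tar.gz") == 0);
          artifact_name(name, sizeof name, "server-rc", "6.0.0-beta.1", "");
          assert(strcmp(name, "server-rc-6.0.0-beta.1.tar.gz") == 0);
          return 0;
      }
      ```

      Because the version is in the filename, consumers can always fetch "the latest" without racing a re-upload of the same object.
      
      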
    • Use different database for fsync, heap_checksum and walrep tests · 2cfd2b2c
      Ashwin Agrawal committed
      Previously, the fsync, heap_checksum and walrep tests used separate
      databases. But it seems that after the merge, when src/makefiles/pgxs.mk
      added --dbname to REGRESS_OPTS, all of these tests started using the
      `contrib_regression` database: even though each of these tests defined
      --dbname=<>, it was overridden by src/makefiles/pgxs.mk.
      
      Hence, set USE_MODULE_DB=1, which makes these tests use
      --dbname=$(CONTRIB_TESTDB_MODULE) instead of
      `contrib_regression`. This way they will again run in separate
      databases.
      2cfd2b2c
    • Add retry in test reader_waits_for_lock for lock checking · 74cb8dcc
      Ashwin Agrawal committed
      A command in the next session executes in parallel with the current
      session when "&" is used. Hence, if the session 1 command runs very
      slowly but the session 0 commands execute quickly, the test can be
      flaky. To avoid that, add retries when checking whether all processes
      for the session are blocked.
      74cb8dcc
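      The retry idea can be sketched in C as a generic poll-with-retries helper; the predicate here is a toy stand-in for the real check against pg_locks, and the attempt count and delay are illustrative values.

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <unistd.h>

      /* Poll a condition up to 'attempts' times, sleeping between polls,
       * so a slow session does not make the test flaky. */
      static bool retry_until(bool (*check)(void *), void *arg,
                              int attempts, unsigned delay_sec)
      {
          for (int i = 0; i < attempts; i++)
          {
              if (check(arg))
                  return true;
              sleep(delay_sec);
          }
          return false;
      }

      /* Toy condition: becomes true on the third poll. */
      static bool third_time(void *arg)
      {
          int *calls = arg;
          return ++(*calls) >= 3;
      }

      int main(void)
      {
          int calls = 0;
          assert(retry_until(third_time, &calls, 5, 0));
          assert(calls == 3);
          return 0;
      }
      ```
      
      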
    • docs - add guc optimizer_use_gpdb_allocators (#8767) · 7927eb4b
      Mel Kiyama committed
      * docs - add guc optimizer_use_gpdb_allocators
      
      * docs - add information about improved GPDB query mem. mgmt.
      7927eb4b
    • Revert "Enable optimizer_use_gpdb_allocators guc by default (#8771)" · e06fc220
      Chris Hajas committed
      This reverts commit b81d4501.
      
      The resource group tests use ORCA and need to be modified for this
      change.
      e06fc220
    • Enable optimizer_use_gpdb_allocators guc by default (#8771) · b81d4501
      Chris Hajas committed
      This GUC makes ORCA use the GPDB memory allocators. This allows for faster
      optimization due to less overhead, reduced memory during optimization
      due to smaller/fewer headers, and makes ORCA observe vmem limits instead
      of crashing.
      Authored-by: Chris Hajas <chajas@pivotal.io>
      b81d4501
    • Set locus correctly on Append node, if there are General locus children. · c6355348
      Heikki Linnakangas committed
      I found the logic that decides the target locus hard to understand, so I
      rewrote it in a table-driven approach. Hopefully the new version is
      clearer, and not just to me.
      
      Fixes github issue https://github.com/greenplum-db/gpdb/issues/8711
      Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
      c6355348
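      A table-driven locus decision might look like the following C sketch, under an assumed three-value locus enum (the real planner distinguishes more locus types and subtler rules); the combination table and its entries are illustrative only.

      ```c
      #include <assert.h>

      /* Assumed, simplified locus kinds for illustration. */
      typedef enum { LOCUS_GENERAL, LOCUS_SINGLEQE, LOCUS_PARTITIONED } Locus;

      /* combine[so_far][child]: locus of the Append given the locus decided
       * so far and the next child's locus. General is the identity. */
      static const Locus combine[3][3] = {
          /* GENERAL     */ { LOCUS_GENERAL,     LOCUS_SINGLEQE,    LOCUS_PARTITIONED },
          /* SINGLEQE    */ { LOCUS_SINGLEQE,    LOCUS_SINGLEQE,    LOCUS_PARTITIONED },
          /* PARTITIONED */ { LOCUS_PARTITIONED, LOCUS_PARTITIONED, LOCUS_PARTITIONED },
      };

      static Locus append_locus(const Locus *children, int n)
      {
          Locus result = LOCUS_GENERAL;
          for (int i = 0; i < n; i++)
              result = combine[result][children[i]];
          return result;
      }

      int main(void)
      {
          Locus kids[] = { LOCUS_GENERAL, LOCUS_PARTITIONED };
          assert(append_locus(kids, 2) == LOCUS_PARTITIONED);
          Locus kids2[] = { LOCUS_GENERAL, LOCUS_GENERAL };
          assert(append_locus(kids2, 2) == LOCUS_GENERAL);
          return 0;
      }
      ```

      The appeal of the table is that every pairwise combination is stated explicitly in one place, rather than scattered across nested conditionals.
      
      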
  4. 05 October 2019, 1 commit
    • Bump ORCA version to 3.74.0, Introduce PallocMemoryPool for use in GPORCA (#8643) · 99dfccc2
      Chris Hajas committed
      We introduce a new type of memory pool and memory pool manager:
      CMemoryPoolPalloc and CMemoryPoolPallocManager
      
      The motivation for this PR is to improve memory allocation/deallocation
      performance when using GPDB allocators. Additionally, we would like to
      use the GPDB memory allocators by default (change the default for
      optimizer_use_gpdb_allocators to on), to prevent ORCA from crashing when
      we run out of memory (OOM). However, with the current way of doing
      things, doing so would add around 10 % performance overhead to ORCA.
      
      CMemoryPoolPallocManager overrides the default CMemoryPoolManager in
      ORCA, and creates a CMemoryPoolPalloc memory pool instead of a
      CMemoryPoolTracker. In CMemoryPoolPalloc, we now call MemoryContextAlloc
      and pfree instead of gp_malloc and gp_free, and we don’t do any memory
      accounting.
      
      So where does the performance improvement come from? Previously, we
      would (essentially) pass in gp_malloc and gp_free to an underlying
      allocation structure (which has been removed on the ORCA side). However,
      we would add additional headers and overhead to maintain a list of all
      of these allocations. When tearing down the memory pool, we would
      iterate through the list of allocations and explicitly free each one. So
      we would end up doing overhead on the ORCA side, AND the GPDB side.
      However, the overhead on both sides was quite expensive!
      
      If you want to compare against the previous implementation, see the
      Allocate and Teardown functions in CMemoryPoolTracker.
      
      With this PR, we improve optimization time by ~15% on average and up to
      30-40% on some queries which are memory intensive.
      
      This PR does remove memory accounting in ORCA. This was only enabled
      when the optimizer_use_gpdb_allocators GUC was set. By setting
      `optimizer_use_gpdb_allocators`, we still capture the memory used when
      optimizing a query in ORCA, without the overhead of the memory
      accounting framework.
      
      Additionally, add a top-level ORCA memory context under which new
      contexts are created.
      
      The OptimizerMemoryContext is initialized in InitPostgres(). For each
      memory pool in ORCA, a new memory context is created in
      OptimizerMemoryContext.
      
      Bumps ORCA version to 3.74.0
      
      This is a re-commit of 339dedf0d2, which didn't properly catch/rethrow
      exceptions in gpos_init.
      Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
      Co-authored-by: Chris Hajas <chajas@pivotal.io>
      99dfccc2
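      The overhead contrast described above can be sketched in plain C, with malloc/free standing in for gp_malloc and MemoryContextAlloc: the tracker-style pool pays for a per-allocation header and a teardown walk over every allocation, while the palloc-style pool keeps no per-allocation state because the owning memory context frees in bulk. This is a simplified model, not ORCA's actual code.

      ```c
      #include <assert.h>
      #include <stdlib.h>

      /* Tracker-style pool: every allocation gets a header and is linked
       * into a list so Teardown can free them one by one. */
      typedef struct Header { struct Header *next; } Header;
      typedef struct { Header *head; } TrackedPool;

      static void *tracked_alloc(TrackedPool *p, size_t n)
      {
          Header *h = malloc(sizeof(Header) + n);  /* extra header overhead */
          h->next = p->head;
          p->head = h;
          return h + 1;                            /* usable memory after header */
      }

      static void tracked_teardown(TrackedPool *p)
      {
          while (p->head)                          /* walk and free each entry */
          {
              Header *h = p->head;
              p->head = h->next;
              free(h);
          }
      }

      /* Palloc-style: no header, no list; bookkeeping is the context's job. */
      static void *palloc_style_alloc(size_t n) { return malloc(n); }

      int main(void)
      {
          TrackedPool pool = { 0 };
          int *a = tracked_alloc(&pool, sizeof(int));
          *a = 42;
          tracked_teardown(&pool);

          int *b = palloc_style_alloc(sizeof(int));
          *b = 7;
          free(b);   /* in the sketch; a real context frees in bulk */
          return 0;
      }
      ```
      
      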
  5. 04 October 2019, 8 commits
  6. 03 October 2019, 2 commits
  7. 02 October 2019, 5 commits
    • Spell out if the gpcheckcat tests have PASSED or FAILED · 68453b69
      Georgios Kokolatos committed
      and follow the convention of explicitly using these keywords.
      Any failed tests, including those not reported, will add the 'FAILED'
      keyword to the summary.
      Reviewed-by: Adam Berlin <aberlin@pivotal.io>
      68453b69
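      The reporting convention boils down to something like this C sketch (gpcheckcat itself is a Python tool; this is only an illustration of the rule): any failed or unreported test forces FAILED, and only a fully clean run says PASSED.

      ```c
      #include <assert.h>
      #include <string.h>

      /* Summary verdict: FAILED if anything failed or went unreported. */
      static const char *verdict(int failed, int unreported)
      {
          return (failed > 0 || unreported > 0) ? "FAILED" : "PASSED";
      }

      int main(void)
      {
          assert(strcmp(verdict(0, 0), "PASSED") == 0);
          assert(strcmp(verdict(1, 0), "FAILED") == 0);
          assert(strcmp(verdict(0, 2), "FAILED") == 0);
          return 0;
      }
      ```
      
      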
    • Bump ORCA version to 3.74.0, Introduce PallocMemoryPool for use in GPORCA (#8643) · 0b00feb6
      Chris Hajas committed
      We introduce a new type of memory pool and memory pool manager:
      CMemoryPoolPalloc and CMemoryPoolPallocManager
      
      The motivation for this PR is to improve memory allocation/deallocation
      performance when using GPDB allocators. Additionally, we would like to
      use the GPDB memory allocators by default (change the default for
      optimizer_use_gpdb_allocators to on), to prevent ORCA from crashing when
      we run out of memory (OOM). However, with the current way of doing
      things, doing so would add around 10 % performance overhead to ORCA.
      
      CMemoryPoolPallocManager overrides the default CMemoryPoolManager in
      ORCA, and creates a CMemoryPoolPalloc memory pool instead of a
      CMemoryPoolTracker. In CMemoryPoolPalloc, we now call MemoryContextAlloc
      and pfree instead of gp_malloc and gp_free, and we don’t do any memory
      accounting.
      
      So where does the performance improvement come from? Previously, we
      would (essentially) pass in gp_malloc and gp_free to an underlying
      allocation structure (which has been removed on the ORCA side). However,
      we would add additional headers and overhead to maintain a list of all
      of these allocations. When tearing down the memory pool, we would
      iterate through the list of allocations and explicitly free each one. So
      we would end up doing overhead on the ORCA side, AND the GPDB side.
      However, the overhead on both sides was quite expensive!
      
      If you want to compare against the previous implementation, see the
      Allocate and Teardown functions in CMemoryPoolTracker.
      
      With this PR, we improve optimization time by ~15% on average and up to
      30-40% on some queries which are memory intensive.
      
      This PR does remove memory accounting in ORCA. This was only enabled
      when the optimizer_use_gpdb_allocators GUC was set. By setting
      `optimizer_use_gpdb_allocators`, we still capture the memory used when
      optimizing a query in ORCA, without the overhead of the memory
      accounting framework.
      
      Additionally, add a top-level ORCA memory context under which new
      contexts are created.
      
      The OptimizerMemoryContext is initialized in InitPostgres(). For each
      memory pool in ORCA, a new memory context is created in
      OptimizerMemoryContext.
      
      Bumps ORCA version to 3.74.0
      Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
      Co-authored-by: Chris Hajas <chajas@pivotal.io>
      0b00feb6
    • Applied Feedback · 055eba59
      Bhuvnesh Chaudhary committed
      055eba59
    • Allow COPY FROM to catalog tables only with allow_system_table_mods=on · aa310911
      Bhuvnesh Chaudhary committed
      Earlier, COPY <Catalogtable> FROM <file> was allowed irrespective of
      the value of allow_system_table_mods. This commit restricts such
      statements to when allow_system_table_mods is set to ON.
      
      Co-Authored-By: Ashwin Agrawal <aagrawal@pivotal.io>
      aa310911
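      The new guard reduces to a simple predicate, sketched here in C (the real check lives in the server's COPY code path; the function name is invented for illustration):

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* COPY FROM into a catalog table is only permitted when
       * allow_system_table_mods is on; ordinary tables are unaffected. */
      static bool copy_from_allowed(bool target_is_catalog,
                                    bool allow_system_table_mods)
      {
          return !target_is_catalog || allow_system_table_mods;
      }

      int main(void)
      {
          assert(copy_from_allowed(false, false));  /* ordinary table: ok */
          assert(!copy_from_allowed(true, false));  /* catalog, GUC off: rejected */
          assert(copy_from_allowed(true, true));    /* catalog, GUC on: allowed */
          return 0;
      }
      ```
      
      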
    • Fix pg_regress arg parsing error · 864b76dd
      Chen Mulong committed
      Due to an option index error, exclude-tests, ignore-plans, prehook and
      print-failure-diffs were not parsed from the command line correctly.
      864b76dd
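      This class of bug can be shown with a small getopt_long example: each long option's val must line up with its switch case, and an off-by-one in those indices silently maps later options to the wrong case. The option names come from the commit message; the val numbering and parse() helper are illustrative, not pg_regress's actual code.

      ```c
      #include <assert.h>
      #include <string.h>
      #include <getopt.h>

      static const struct option long_opts[] = {
          { "exclude-tests",       required_argument, NULL, 1 },
          { "ignore-plans",        no_argument,       NULL, 2 },
          { "prehook",             required_argument, NULL, 3 },
          { "print-failure-diffs", no_argument,       NULL, 4 },
          { NULL, 0, NULL, 0 },
      };

      static void parse(int argc, char **argv,
                        const char **excluded, int *ignore_plans)
      {
          int c;
          optind = 1;  /* reset getopt state so parse() is reusable */
          while ((c = getopt_long(argc, argv, "", long_opts, NULL)) != -1)
          {
              switch (c)  /* cases must match the val fields above */
              {
                  case 1: *excluded = optarg; break;
                  case 2: *ignore_plans = 1;  break;
                  case 3: /* prehook */       break;
                  case 4: /* print-failure-diffs */ break;
              }
          }
      }

      int main(void)
      {
          char *argv[] = { "pg_regress", "--exclude-tests=foo",
                           "--ignore-plans", NULL };
          const char *excluded = NULL;
          int ignore_plans = 0;
          parse(3, argv, &excluded, &ignore_plans);
          assert(excluded && strcmp(excluded, "foo") == 0);
          assert(ignore_plans == 1);
          return 0;
      }
      ```
      
      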