1. 21 Jul 2018, 7 commits
    • Fix CI failure and add more asserts. · bebce8b4
      Authored by Ashwin Agrawal
    • Exclude root and internal parent partitions from age calculation. · 44f97760
      Authored by Ashwin Agrawal
      Root and internal parent partitions do not contain any data, but because
      they currently carry a valid relfrozenxid, their age keeps growing. The
      problem this poses is that bringing the age down requires running vacuum
      on the root, which trickles down and vacuums each and every child
      partition as well. If only leaf partitions are being modified, they could
      instead be vacuumed in isolation to bring the age down, avoiding the
      overhead of vacuuming the full hierarchy.
      
      So, similar to AO, CO, external tables, etc., record relfrozenxid as 0
      (InvalidTransactionId) for root and parent partitions during table
      creation. This works because these tables never store any xids. It
      excludes them from the age calculation and hence eliminates the forced
      need to vacuum them. Ideally the same could be achieved by defining root
      and internal parent partitions as AO tables, but that is a lot more work
      and the DDLs would need modifications.
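      The effect on age calculation can be sketched in a few lines; this is an
      illustrative Python model, not GPDB source, and all names in it are made
      up:

```python
# Relations whose relfrozenxid is 0 (InvalidTransactionId) store no xids,
# so they are skipped when computing how "old" the database is.
INVALID_XID = 0

def xid_age(current_xid, relfrozenxid):
    # Age = transactions elapsed since the relation was last frozen.
    return current_xid - relfrozenxid

def database_age(current_xid, relfrozenxids):
    # relfrozenxids: mapping of relation name -> relfrozenxid.
    ages = [xid_age(current_xid, xid)
            for xid in relfrozenxids.values() if xid != INVALID_XID]
    return max(ages) if ages else 0
```

      With the root and parent partitions recorded as 0, only the leaf
      partitions contribute to the age, so vacuuming leaves in isolation is
      enough to bring it down.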
    • Remove guc gp_setwith_alter_storage. · b3d85b70
      Authored by Ashwin Agrawal
      The ALTER TABLE change-storage-type feature is hidden behind this GUC.
      The syntax ALTER TABLE <tablename> SET WITH (appendonly=true/false) is
      not documented either, since the feature does not seem fully ready for
      primetime. Hence remove the GUC and keep the feature disabled; at any
      time in the future the feature can be fully implemented and exposed.
      
      GitHub issue #5300 tracks fully enabling the feature.
    • Remove tests for gp_setwith_alter_storage. · f11dc049
      Authored by Ashwin Agrawal
      This feature is not ready for primetime yet, so there is no point in
      testing it by enabling the GUC. Hence remove the tests for now; whenever
      the feature is exposed in the future, add and enable tests for it.
      
      GitHub issue #5300 tracks fully enabling the feature.
    • Flush error state before making a retry attempt in DTM · 90e66f0c
      Authored by Asim R P
      Each DTM retry needs a clean error state, because the previous error has
      already been written to the csv log and is handled by the subsequent
      retry.
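      Why the flush matters can be modeled in a few lines of Python; this is a
      sketch of the failure mode, not GPDB source, and ERRORDATA_STACK_SIZE
      here is just a small illustrative constant:

```python
# PostgreSQL keeps caught errors on a fixed-size stack. If each retry
# leaves its error behind instead of flushing, the stack eventually
# overflows and the backend PANICs.
ERRORDATA_STACK_SIZE = 5

class ErrorStackOverflow(Exception):
    """Stands in for the PANIC on ERRORDATA_STACK_SIZE exceeded."""

def broadcast_with_retries(broadcast, max_retries, flush_between_retries):
    error_stack = []
    for attempt in range(max_retries):
        try:
            return broadcast(attempt)
        except Exception as err:
            error_stack.append(err)          # PG_CATCH() pushes error data
            if len(error_stack) > ERRORDATA_STACK_SIZE:
                raise ErrorStackOverflow
            if flush_between_retries:
                error_stack.clear()          # FlushErrorState() analogue
    raise RuntimeError("max retry count reached")
```

      Without the flush, persistent failures overflow the error stack long
      before the retry limit is reached; with it, the loop can safely run
      through all retries.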
    • Test case for DTM retry error handling · ab0f3296
      Authored by Asim R P
      The test hits the PG_CATCH() block in the DTM retry logic. It uncovers a
      bug in that part of the code which leads to a PANIC due to
      ERRORDATA_STACK_SIZE being exceeded.
      
      The upper limit on dtx_phase2_retry_count is increased to 15. That keeps
      the test simpler by avoiding a PANIC due to the max retry count being
      reached.
    • Let dispatcher raise error when a DTM broadcast fails · 49fc2332
      Authored by Asim R P
      There already is a PG_TRY() ... PG_CATCH() block to handle such errors in
      the DTM retry logic. This change makes it easier to test the error
      handling in the retry logic.
      
      The patch also fixes a bug in the dispatcher, which invoked the
      CopyErrorData interface incorrectly, without first switching to a memory
      context other than ErrorContext.
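      The memory-context bug can be illustrated with a toy model; this is not
      PostgreSQL code, just a sketch of why the copy must be allocated outside
      ErrorContext:

```python
# If CopyErrorData() runs while ErrorContext is the current memory
# context, the copy lives in ErrorContext itself and is wiped by the
# FlushErrorState() that follows. Memory contexts are modeled here as
# plain lists of allocations.
class MemoryContext:
    def __init__(self, name):
        self.name = name
        self.allocations = []

    def alloc(self, value):
        # Allocation is tracked so a reset can free it wholesale.
        self.allocations.append(value)
        return value

    def reset(self):
        self.allocations.clear()

def pg_catch(error_context, caller_context, switch_first):
    error = error_context.alloc({"message": "DTM broadcast failed"})
    current = caller_context if switch_first else error_context
    copy = current.alloc(dict(error))   # CopyErrorData() allocates here
    error_context.reset()               # FlushErrorState()
    # The copy survives only if it was allocated outside ErrorContext.
    return any(copy is a for a in
               caller_context.allocations + error_context.allocations)
```

      The dispatcher fix corresponds to the `switch_first` path: switch to the
      caller's context, copy the error, then flush.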
  2. 20 Jul 2018, 3 commits
    • Fix typo · cbee6d14
      Authored by Daniel Gustafsson
      Getting our own abbreviation right seems like a good thing.
    • Remove unnecessary pg_config.h includes · 79c6c87f
      Authored by Daniel Gustafsson
      pg_config.h is already included via c.h from postgres.h; individual files
      in the backend should not include it separately (and especially not for
      improved editor support, as some comments alluded to).
    • Unbreak compilation of SNMP support on macOS · 95382331
      Authored by Daniel Gustafsson
      Commit 3b3adb2b most likely broke compilation of the SNMP related code on
      macOS, as it required a feature test macro to enable the right bits in
      <sys/types.h>. There seems, however, to be little to no reason for the
      feature test macros to be defined here at all, so remove them and try to
      unbreak the build rather than resurrect them.
      
      Also enable SNMP support in the Travis CI compilation so we get a
      heads-up the next time it gets broken.
  3. 19 Jul 2018, 10 commits
    • Dump XLOG_HINT records with xlogdump · 813b8326
      Authored by Taylor Vesely
      Adds a facility to dump XLOG_HINT records with xlogdump. We backported
      XLOG_HINT xlog records from upstream, and this record type did not exist
      at the time xlogdump was originally created.
    • pg_upgrade: dump REINDEX instructions after upgrade from GPDB4 · 0f2f52e1
      Authored by Jacob Champion
      GPDB5 changed relation indexes on disk, so they are all invalidated when
      upgrading from 4. Rather than expecting the user to know what to do
      after that mass invalidation, write a script that performs a REINDEX
      DATABASE for every db_name we have, and point the user to it.
      Co-authored-by: Asim Praveen <apraveen@pivotal.io>
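      Generating such a script is straightforward; a hypothetical sketch (the
      real pg_upgrade output format may differ):

```python
# Emit a shell script that REINDEXes every database, as the commit
# describes. The script shape here is illustrative, not pg_upgrade's.
def reindex_script(db_names):
    lines = ["#!/bin/sh", "# Indexes from GPDB4 are invalid; rebuild them."]
    for db in db_names:
        # Quote the database name for the shell and for psql.
        lines.append('psql -d "%s" -c "REINDEX DATABASE \\"%s\\";"' % (db, db))
    return "\n".join(lines) + "\n"
```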
    • Ping utility can optionally survive DNS failure · 9a9adcbe
      Authored by Larry Hamel
      - Previously, DNS was queried within the `Ping` utility constructor, so a
        DNS failure would always raise an exception.
      - Now the DNS query happens in the standard `run()` method, so a DNS
        failure raises only optionally, depending on the `validateAfter`
        parameter.
      - `Command` is declared as a new-style class so that
        `super(Ping, self).run()` can be called.
      Co-authored-by: Larry Hamel <lhamel@pivotal.io>
      Co-authored-by: Jemish Patel <jpatel@pivotal.io>
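      The described change can be sketched as follows; the class and parameter
      names mirror the commit message, but the resolver injection is an
      illustrative device, not the gpMgmt implementation:

```python
import socket

class Ping(object):  # new-style class, so super() works
    def __init__(self, host, resolver=socket.gethostbyname):
        self.host = host        # no DNS query here anymore
        self.resolver = resolver
        self.error = None

    def run(self, validateAfter=False):
        try:
            return self.resolver(self.host)
        except socket.gaierror as err:
            self.error = err    # recorded; surfaced only on demand
            if validateAfter:
                raise
            return None
```

      With the lookup deferred to `run()`, a caller that passes
      `validateAfter=False` can construct and run the utility even when DNS is
      down.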
    • docs - update gpbackup API - add segment instance and update backup d… (#5285) · 7a3eee4b
      Authored by Mel Kiyama
      * docs - update gpbackup API - add segment instance and update backup
        directory information.
      
      Also update the API version to 0.3.0.
      
      This will be ported to 5X_STABLE.
      
      * docs - gpbackup API - review updates and fixes for scope information
      
      Also, cleanup edits.
      
      * docs - gpbackup API - more review updates and fixes to scope
        information.
    • Remove MyProc inDropTransaction flag · 7aef1fd0
      Authored by Jimmy Yih
      The MyProc inDropTransaction flag was used to make sure concurrent AO
      vacuums would not conflict with each other during the drop phase. Two
      concurrent AO vacuums on the same relation were possible back in 4.3 and
      5X, where the different AO vacuum phases (prepare, compaction, drop,
      cleanup) would interleave with each other, and having two AO vacuum drop
      phases concurrently on the same AO relation was dangerous. However, we
      no longer do that interleaving on 6.0devel. We now hold the
      ShareUpdateExclusiveLock through the entire AO vacuum, which renders the
      inDropTransaction flag useless.
    • Remove gp_create_index_concurrently from resgroup tests · 4ee33612
      Authored by Asim R P
      The GUC has already been removed from the source code; no tests should
      use it.
    • Disable concurrent index build entirely · e5b0e253
      Authored by Asim R P
      The patch removes the gp_create_index_concurrently GUC. There is no use
      in setting this GUC: concurrent index building is not supported in
      Greenplum, and Greenplum needs additional work for this feature to
      behave properly. Also undone is an incorrect (and dangerous) attempt to
      make this feature work in Greenplum. The change was dangerous because
      IndexStmt was dispatched to QEs outside of a distributed
      transaction/2PC, which is a recipe for inconsistency.
      
      GitHub issue #5293 is created to track the work needed for this feature.
    • Fix pxf unittest compilation · 05526c27
      Authored by Asim R P
      The compilation failed on OSX/clang with the error "implicit declaration
      of function 'MemoryContextInit' is invalid".
    • docs - add kafka connector xrefs (#5292) · 9e288ad3
      Authored by Lisa Owen
  4. 18 Jul 2018, 7 commits
  5. 17 Jul 2018, 7 commits
    • tuptoaster: gracefully handle changes to TOAST_MAX_CHUNK_SIZE · 28acba6e
      Authored by Jacob Champion
      GPDB 4.3 uses a slightly smaller TOAST_MAX_CHUNK_SIZE, and we claim to
      be able to upgrade those toast tables, but there's currently no support
      in tuptoaster:
      
      	ERROR:  unexpected chunk size 8138 (expected 8140) in chunk 0 of 4
      	for toast value 16486 in pg_toast_16479 (tuptoaster.c:1840)
      
      Add some tests to flush this out. This adds the
      gp_test_toast_max_chunk_size_override GUC, which allows a superuser to
      manually reduce the chunk size used by toast_save_datum().
      
      We can handle changes to TOAST_MAX_CHUNK_SIZE by taking a look at the
      size of the first chunk. For the full toast_fetch_datum(), we can
      optimistically assume that the max chunk size is TOAST_MAX_CHUNK_SIZE,
      and then adjust if that turns out to be false. For the random access in
      toast_fetch_datum_slice(), though, I haven't tried a similar
      optimization -- if we don't know the chunk size to begin with, it seems
      like there are too many corner cases to keep track of when jumping into
      the middle of a toast relation. So toast_fetch_datum_slice() will always
      open the first chunk, which may somewhat impact performance for some
      queries.
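      The optimistic chunk-size handling in toast_fetch_datum() can be
      sketched like this; a simplified Python model, with an illustrative
      chunk size rather than the real TOAST_MAX_CHUNK_SIZE:

```python
# Fetch a toast value when the writer's max chunk size may differ from
# ours: assume our own TOAST_MAX_CHUNK_SIZE, then adjust after seeing
# the size of the first chunk.
TOAST_MAX_CHUNK_SIZE = 1996  # illustrative; the real value varies by build

def toast_fetch_datum(chunks, total_size):
    """chunks: list of bytes objects as stored, in chunk_seq order."""
    # A multi-chunk value reveals the writer's chunk size in chunk 0.
    if len(chunks) > 1:
        actual_chunk_size = len(chunks[0])
    else:
        actual_chunk_size = TOAST_MAX_CHUNK_SIZE
    expected_chunks = -(-total_size // actual_chunk_size)  # ceil division
    if expected_chunks != len(chunks):
        raise ValueError("unexpected number of chunks")
    data = b"".join(chunks)
    if len(data) != total_size:
        raise ValueError("chunk sizes do not add up to the value size")
    return data
```

      Random access in toast_fetch_datum_slice() cannot adjust this lazily,
      which is why, per the commit, it always opens the first chunk.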
    • Remove seqserver process from walrep test. · f8943298
      Authored by Ashwin Agrawal
    • docs - minor edits to docs. · 0f340ded
      Authored by mkiyama
    • Fix unused qdinfo variable warnings. · 8e61ae80
      Authored by Ashwin Agrawal
    • Get sequence code closer to upstream. · e19b3a2c
      Authored by Ashwin Agrawal
      This patch attempts to get the sequence code as close to upstream as
      possible. There is still a mix of code from various upstream versions
      due to past cherry-picks for bug fixes, so it's hard to state exactly
      which upstream version it matches, but at least all the GPDB-specific
      modifications are minimized and unnecessary code movement has been
      removed.
    • Refactor of Sequence handling. · eae1ee3f
      Authored by David Kimura
      Handling sequences in an MPP setting is challenging. This patch
      refactors it, mainly to eliminate the shortcomings/pitfalls of the
      previous implementation. First, a glance at the issues with the old
      implementation:
      
        - required relfilenode == oid for all sequence relations
        - "denial of service" risk due to a dedicated but single process
          running on the QD to serve sequence values for all tables in all
          databases to all QEs
        - as a result, sequence tables were opened directly instead of going
          through the relcache
        - divergence from the upstream implementation
      
      Many solutions were considered (refer to the mailing list discussion)
      before settling on this one. The new implementation still leverages a
      centralized place, the QD, to generate sequence values. It now leverages
      the existing QD backend process connecting to the QEs for the query to
      serve the nextval request. As a result, the need for relfilenode == oid
      is eliminated: based on the oid, the QD process can now look up the
      relfilenode from the catalog and also leverage the relcache. No more
      direct opens by a single process across databases.
      
      For communication between QD and QE for the sequence nextval request, an
      async notify message is used (notify messages are not currently used in
      GPDB for anything else). The QD process sits idle while waiting for
      results from the QEs, and hence on seeing the nextval request it calls
      `nextval_internal()` and responds with the value.
      
      Since the need for a separate sequence server process went away, all of
      its code is removed.
      
      Discussion:
      https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/hni7lS9xH4c
      Co-authored-by: Asim R P <apraveen@pivotal.io>
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
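      The new flow can be modeled with two queues standing in for the QD–QE
      connection; everything here is illustrative, not GPDB source:

```python
import queue

class Sequence:
    def __init__(self, start=1):
        self.value = start

    def nextval_internal(self):  # name borrowed from the commit message
        v = self.value
        self.value += 1
        return v

def qd_wait_for_results(requests, responses, sequences):
    """QD loop: while idle waiting for query results, serve nextval
    requests that arrive as notify-style messages from the QEs."""
    while True:
        msg = requests.get()
        if msg == "done":        # QEs finished; stop serving
            return
        kind, seq_oid = msg      # e.g. ("nextval", 16384)
        responses.put(sequences[seq_oid].nextval_internal())
```

      Because the QD looks sequences up by oid in its own catalog/relcache,
      the old relfilenode == oid requirement and the cross-database direct
      opens both disappear.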
    • Clear all the notify messages in cdbconn_discardResults() · 334252c0
      Authored by Ashwin Agrawal
      Currently, notify messages are not used by QEs in GPDB except for
      sequence nextval messages. While cleaning a connection for reuse, it is
      best to remove all the notify messages as well: better safe than sorry,
      even if we later start using them for more things.
  6. 14 Jul 2018, 4 commits
  7. 13 Jul 2018, 2 commits
    • Fix resource group bypass test case (#5278) · 4fb21650
      Authored by Jialun
      - Remove async session before CREATE FUNCTION
      - Change comment format from -- to /* */
    • Fix resgroup bypass quota (#5262) · dc82ceea
      Authored by Jialun
      * Add resource group bypass memory limit.
      
      - A bypassed query allocates all its memory in the group / global shared
        memory, and is also enforced by the group's memory limit;
      - A bypassed query also has a memory limit of 10 chunks per process.
      
      * Add test cases for the resgroup bypass memory limit.
      
      * Provide ORCA answer file.
      
      * Adjust memory limit on QD.