1. 11 Mar 2019 (6 commits)
    • Gracefully error out on incorrect partition type add · 460789a0
      Committed by Daniel Gustafsson
      When altering a partitioned table and adding an incorrectly specified
      partition, an assertion was hit rather than gracefully erroring out.
      Make sure that the requested partition matches the underlying table
      definition before continuing down into the altering code. A testcase
      for this scenario is also added.
      
      Reported-by: Kalen Krempely in #6967
      Reviewed-by: Paul Guo <pguo@pivotal.io>
      Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
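      The guard described above amounts to comparing the requested partition's kind against the table's own partitioning before descending into the ALTER code. A minimal sketch of that idea, with hypothetical names (not the actual GPDB functions):

```python
class PartitionMismatchError(Exception):
    """Raised instead of hitting an assertion deep in the ALTER code."""

def validate_partition_add(table_part_kind, requested_part_kind):
    # Gracefully error out when the requested partition kind (e.g.
    # "range" vs "list") does not match the underlying table definition.
    if table_part_kind != requested_part_kind:
        raise PartitionMismatchError(
            f"table is partitioned by {table_part_kind}, "
            f"cannot add a {requested_part_kind} partition")
```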
    • Rename recursive CTE guc to remove _prototype · f6a1a60e
      Committed by Daniel Gustafsson
      In the currently released version, the GUC which enables recursive
      CTEs is called gp_recursive_cte_prototype; to reflect the current
      state of the code it is now renamed to gp_recursive_cte. By default
      the GUC is still off, but that might change before we ship the
      next release.
      
      The previous GUC name is still supported, but marked as deprecated,
      to make upgrades easier.
      Reviewed-by: Ivan Novick <inovick@pivotal.io>
      Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
    • Retire the reshuffle method for table data expansion (#7091) · 1c262c6e
      Committed by Ning Yu
      This method was introduced to improve data redistribution
      performance during gpexpand phase 2; however, per benchmark results
      the effect does not reach our expectations. For example, when
      expanding a table from 7 segments to 8 segments the reshuffle
      method is only 30% faster than the traditional CTAS method, and
      when expanding from 4 to 8 segments reshuffle is even 10% slower
      than CTAS. When there are indexes on the table the reshuffle
      performance can be worse, and an extra VACUUM is needed to actually
      free the disk space. According to our experiments, the bottleneck
      of the reshuffle method is the tuple deletion operation, which is
      much slower than the insertion operation used by CTAS.
      
      The reshuffle method does have some benefits: it requires less
      extra disk space and less network bandwidth (similar to the CTAS
      method with the new JCH reduce method, but less than CTAS + MOD).
      It can also be faster in some cases; however, as we cannot
      automatically determine when it is faster, it is not easy to
      benefit from it in practice.
      
      On the other hand, the reshuffle method is less tested and may
      have bugs in corner cases, so it is not production-ready yet.
      
      Given that, we decided to retire it entirely for now; we might add
      it back in the future if we can get rid of the slow deletion or
      find reliable ways to choose automatically between the reshuffle
      and CTAS methods.
      
      Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/8xknWag-SkI/5OsIhZWdDgAJ
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
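      The "JCH reduce method" mentioned above refers to jump consistent hashing. Its useful property is that when the bucket count grows from n to n+1, each key either keeps its bucket or moves to the new bucket n, so only about 1/(n+1) of rows relocate; a plain MOD mapping moves most rows. A sketch of the standard algorithm (an illustration, not GPDB's implementation):

```python
def jump_consistent_hash(key, num_buckets):
    # Jump consistent hash: maps a 64-bit key to a bucket such that
    # growing num_buckets by one only remaps ~1/num_buckets of the keys.
    b, j = -1, 0
    while j < num_buckets:
        b = j
        # 64-bit linear congruential step, masked to stay in range.
        key = (key * 2862933555777941757 + 1) & 0xFFFFFFFFFFFFFFFF
        j = int(float(b + 1) * (float(1 << 31) / float((key >> 33) + 1)))
    return b
```

      With this property, expanding from 7 to 8 segments relocates only the rows that hash to the new segment, which is why this mapping needs less network bandwidth than CTAS + MOD.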
    • Only get necessary relids for partition table in InitPlan · a71447db
      Committed by Zhenghua Lyu
      Previously, when initializing ResultRelations in InitPlan on the
      QD, the relids were always built as all the relation oids in a
      partition table (including the root and all its inheritors).
      Sometimes we do not need all the relids.
      
      A typical case is an AO partition table. When we directly insert
      into a specific child partition, the plan's ResultRelation only
      contains the child partition. If we still build relids as the root
      and all its inheritors, `assignPerRelSegno` might lock each aoseg
      file in AccessShare mode on the QEs, causing the confusion that an
      insert statement targeting only a child partition holds other
      partitions' locks.
      
      This commit changes the relids building logic as follows:
        - if the ResultRelations contain the root partition, then
          relids is the root and all its inheritors
        - otherwise, relids is built by mapping each ResultRelation
          to its relation oid
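      The rule above can be sketched as follows (hypothetical names; the real code works with Oids and partition metadata):

```python
def build_result_relids(result_rel_oids, root_oid, inheritor_oids):
    # If the ResultRelations include the root partition, lock the whole
    # hierarchy; otherwise only the partitions actually being written.
    if root_oid in result_rel_oids:
        return [root_oid] + list(inheritor_oids)
    return list(result_rel_oids)
```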
    • Align resource group ereports with style guide · 045162d5
      Committed by Daniel Gustafsson
      Make sure all ereports() start with a lowercase letter, and move
      longer explanations to errdetail/errhint. Also fix the expected
      error output to match.
      Reviewed-by: Adam Berlin <aberlin@pivotal.io>
      Reviewed-by: Jacob Champion <pchampion@pivotal.io>
    • Fix typo in gpconfig variable name · 338e40ba
      Committed by Daniel Gustafsson
      While harmless, knowing it's there is enough and I can't unsee it.
      Also reduce the scope of the variable as it has no outside users.
      Co-authored-by: Jacob Champion <pchampion@pivotal.io>
      Reviewed-by: Jacob Champion <pchampion@pivotal.io>
   2. 10 Mar 2019 (1 commit)
   3. 09 Mar 2019 (9 commits)
    • Revert recent changes to gpinitstandby and gprecoverseg · 659f0ee5
      Committed by Jacob Champion
      One of these changes appears to have possibly introduced a serious
      performance regression in the master pipeline. To avoid destabilizing
      work over the weekend, I'm reverting for now and we can investigate more
      fully next week.
      
      This reverts the following commits:
      "gprecoverseg: Show progress of pg_basebackup on each segment"
          1b38c6e8
      "Add gprecoverseg -s to show progress sequentially"
          9e89b5ad
      "gpinitstandby: guide the user on single-host systems"
          c9c3c351
      "gpinitstandby: rename -F to -S and document it"
          ba3eb5b4
    • docs - add pxf config info for s3 server-side encryption (#7070) · f35dcd60
      Committed by Lisa Owen
      * docs - add pxf config info for s3 server-side encryption
      
      * add xref to aws docs for setting bucket encryption
      
      * create bucket and key in same AZ, section title formatting
      
      * remove number from bucket name variable
      
      * s3 config - xref to hadoop s3a prop section
      
      * copy edits requested by david
      
      * add the bucket variable numbers back in
    • docs - CTE available with INSERT, UPDATE, DELETE (#7025) · 45d72f77
      Committed by Mel Kiyama
      * docs - CTE available with INSERT, UPDATE, DELETE
      
      -updated GUC
      -updated Admin Guide topic WITH Queries (Common Table Expressions)
      
      Updates to SELECT, INSERT, UPDATE, DELETE will be part of the PostgreSQL 9.2 merge.
      
      * docs - CTE updates from review comments
      
      * docs - CTE more updates from review comments
      
      * docs - CTE - updates from review comments
      
      * Experimental -> Beta wording
    • Docs postgresql 9.3 merge (#7084) · 63ac5d2a
      Committed by David Yozie
      * vacuumdb - add -t option support for multiple tables
      
      * reindexdb - add -t option support for multiple tables; misc edits
      
      * psql - misc additions, edits, and reformats
      
      * pg_dump - misc edits, additions, formatting
      
      * add * syntax for descendant tables
      
      * pg_dumpall - add -d, --dbname option
      
      * PREPARE - add note about search_path
      
      * TRUNCATE - add * option
      
      * SELECT - add LATERAL, NO KEY UPDATE, KEY SHARE, and related edits to locking clause
      
      * ALTER ROLE. Re-order syntax and descriptions.
      
      * ALTER RULE. Add new command and fix some conref issues
      
      * COPY. Add FREEZE option and edits.
      
      * CREATE INDEX. Minor synopsis update.
      
      * CREATE SCHEMA. Add IF NOT EXISTS forms of command
      
      * CREATE TABLE. Syntax changes. Add SET CONSTRAINTS command.
      
      * bgworker. Add bgworker topic to reference guide
      
      * CREATE TABLE AS. Add WITH/WITHOUT OIDS
      
      * Changes from review
      
      * remove some parens, add trigger note
      
      * remove extra whitespace in CREATE TABLE
      
      * CREATE TABLE AS - correct DISTRIBUTED BY syntax
      
      * Review comments on CREATE TABLE.
      
      * pg_dump - add parallelization note
      
      * CREATE TABLE - fix whitespace again?
      
      * Remove docs for LATERAL
      
      * Add LATERAL to unsupported feature list
    • gpinitstandby: rename -F to -S and document it · ba3eb5b4
      Committed by Jacob Champion
      After the WALrep changes, the previous -F (filespace) option was
      co-opted to be the new standby data directory option. This isn't a
      particularly obvious association.
      
      Change the option to -S. (-D would have been better, but that's already
      in use as a short alias for --debug.) Also document this option in the
      official gpinitstandby help.
    • gpinitstandby: guide the user on single-host systems · c9c3c351
      Committed by Jacob Champion
      When a standby is initialized on the same host as the original master,
      remind the user that the data directory and port need to be explicitly
      set.
    • Add gprecoverseg -s to show progress sequentially · 9e89b5ad
      Committed by Kalen Krempely
      When -s is present, show pg_basebackup progress sequentially
      instead of in place. This is useful when writing to a file, or if
      a tty does not support escape sequences. Defaults to showing the
      progress in place.
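      The difference between in-place and sequential display comes down to the line terminator: a carriage return rewrites the current tty line, while a newline appends a fresh line that survives in a log file. A rough sketch of the idea (not the actual gprecoverseg code):

```python
def format_progress(label, pct, sequential=False):
    # sequential=True appends one line per update (the -s behavior);
    # otherwise "\r" rewrites the same line in place on a tty.
    end = "\n" if sequential else "\r"
    return f"{label}: {pct:3d}%{end}"
```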
    • gprecoverseg: Show progress of pg_basebackup on each segment · 1b38c6e8
      Committed by Shoaib Lari
      The gprecoverseg utility runs pg_basebackup in parallel on all
      segments that are being recovered. In this commit, we log the
      progress of each pg_basebackup on its host and display it to the
      user of gprecoverseg. The progress files are deleted upon
      successful completion of gprecoverseg.
      
      Unit tests have also been added.
      Authored-by: Shoaib Lari <slari@pivotal.io>
      Co-authored-by: Mark Sliva <msliva@pivotal.io>
      Co-authored-by: Jacob Champion <pchampion@pivotal.io>
      Co-authored-by: Ed Espino <edespino@pivotal.io>
      Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
    • Docs: reword from 'experimental' to 'beta' (#7103) · 79d3bb7f
      Committed by David Yozie
      * reword from 'experimental' to 'beta'
      
      * Experimental -> Beta in markdown source
      
      * typo fix
      
      * Removing SuSE 11 details in Beta notes
   4. 08 Mar 2019 (6 commits)
   5. 07 Mar 2019 (4 commits)
   6. 06 Mar 2019 (5 commits)
    • Break up the ignore block in the with suite · f6842d41
      Committed by Daniel Gustafsson
      A large set of tests were wrapped in an ignore block in the with
      suite due to them not working properly in the past. Since most of
      these have been addressed, it's time to break up the block and
      ensure testing coverage.
      
      This removes as much of the ignore as possible, and updates the
      underlying returned data to match. This will create merge
      conflicts with upstream, but since we won't merge more code
      before cutting the next release it's better to have sane tests
      for the lifecycle of the next release, and we can always revert
      this on master as we start merging again.
      
      The trigger tests are left under ignore, even though they seem to
      work quite well, since atmsort cannot handle that output yet.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
    • Add support for Squelching a ModifyTable node · 996cbc68
      Committed by Daniel Gustafsson
      An inner ModifyTable node must run to completion even if the outer
      node can be satisfied with a squelched inner. Ensure the node runs
      to completion when asked to squelch, so as not to risk losing
      modifications.
      
      This adds a testcase from the upstream with test suite to the GPDB
      with_clause suite. The original test is under an ignore block, but
      even with lifting that the output is different due to state being
      set up by prior tests which happen to fail in GPDB.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
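      The semantics can be illustrated with Python generators: squelching a node that has side effects must drain it rather than discard it, or the remaining modifications are lost. A toy illustration under assumed names, not executor code:

```python
def modify_table(rows, sink):
    # Generator with side effects: each step applies one modification.
    for r in rows:
        sink.append(r)   # the "write" side effect
        yield r

def squelch(node):
    # Run the node to completion even though the consumer has stopped
    # pulling rows, so that no modification is lost.
    for _ in node:
        pass

table = []
node = modify_table([1, 2, 3], table)
next(node)      # the outer plan consumed only the first row...
squelch(node)   # ...but squelch drains the rest
```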
    • Mark get_dns_cached_address fault injector to fts probe only · 75d7446a
      Committed by Pengzhou Tang
      To test that the FTS probe can handle DNS errors, a fault named
      get_dns_cached_address was added; however, this fault might
      affect any connection that calls getCdbComponentInfo() and make
      the fts_errors test flaky. To fix this, mark the DNS fault as
      applying to the FTS probe only.
    • Fixed gppkg/deb package install for ubuntu (#7072) · c8c502c4
      Committed by Hao Wu
      1. Use fakeroot to install/uninstall deb package in gppkg
      2. Throw an exception when installing an existing gppkg/deb package
      3. Throw an exception when updating a non-existing gppkg/deb package
      Co-authored-by: Haozhou Wang <hawang@pivotal.io>
    • Stop vendoring libstdc++.so.*-gdb.py · 060f1b53
      Committed by Oliver Albertini
      Existing code was globbing for libstdc++.so.* and copying it to
      the gpdb lib directory; we now ignore the .py file since it
      should not be vendored.
      
      * also cleaned up bash, no need for trailing '/.' on copy. Thanks d#!
      
      [#164347190]
      Co-authored-by: Oliver Albertini <oalbertini@pivotal.io>
      Co-authored-by: Nandish Jayaram <njayaram@pivotal.io>
   7. 05 Mar 2019 (9 commits)
    • Add hooks function for diskquota extension · a9f594f7
      Committed by Hubert Zhang
      These hooks are used by the diskquota extension to detect
      Heap/AO/CO table changes. For Heap tables, smgr-related hooks are
      utilized the same way as in PostgreSQL. For AO/CO tables, we add
      some new hook positions, TruncateAOSegmentFile(),
      BufferedAppendWrite() and copy_append_only_data(), due to the
      lack of an abstract storage layer for AO/CO tables.
      Co-authored-by: Haozhou Wang <hawang@pivotal.io>
      Co-authored-by: Hao Wu <gfphoenix78@gmail.com>
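      The hooks presumably follow the usual PostgreSQL extension-hook pattern: a global function pointer that stays NULL until an extension registers a callback, checked at the call site. A rough Python analogue of that registration pattern, with hypothetical names:

```python
# Global hook slot, None until an extension registers a callback.
ao_truncate_hook = None

def truncate_ao_segment_file(relname, segno):
    # ... perform the truncation itself ...
    # Then let a registered observer (e.g. diskquota) see the change.
    if ao_truncate_hook is not None:
        ao_truncate_hook(relname, segno)

# An "extension" registering its callback:
calls = []
ao_truncate_hook = lambda rel, seg: calls.append((rel, seg))
truncate_ao_segment_file("sales_ao", 1)
```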
    • COPY a replicated table on a subset of segments should work · de99f760
      Committed by Pengzhou Tang
      Commit 4eb65a53 brought GPDB the ability to distribute tables on
      a subset of segments. That commit took care of replicated tables
      for SELECT/UPDATE/INSERT on a subset; however, COPY FROM was not
      handled.
      
      For COPY FROM on a replicated table, only one replica should be
      picked to provide data, and the QE whose segid matches
      gp_session_id % segment_size is chosen; for a table on a subset
      of segments, an invalid QE might therefore be chosen.
      
      To fix this, the real numsegments of the replicated table should
      be used instead of the current segment size. What's more, the
      dispatcher can now allocate gangs on a subset of segments, so the
      QD can directly allocate the picked gang and a QE no longer needs
      to care about whether it should provide data.
      
      For COPY TO on a replicated table, we should also allocate the
      correct QEs matching the numsegments of the replicated table.
      Co-authored-by: Gang Xiong <gxiong@pivotal.io>
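      The fix reduces to computing the providing replica modulo the table's own numsegments rather than the cluster-wide segment count, so the chosen QE always holds a replica. A minimal sketch with hypothetical names:

```python
def pick_copy_segment(gp_session_id, table_numsegments):
    # Use the replicated table's own numsegments so the picked QE is
    # always one that actually stores a replica of the table.
    return gp_session_id % table_numsegments
```

      With the old scheme (modulo the cluster-wide segment count), a table living on 3 of 8 segments could be asked to provide data from segment 5, which holds no replica.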
    • Fix assertion when creating unique index on tables created in utility mode · 38d40bcc
      Committed by Pengzhou Tang
      checkPolicyForUniqueIndex() checks whether the distribution key
      conflicts with a unique/primary key: for example, a unique index
      is not allowed on a random-distributed table but is allowed on a
      replicated-distributed table, and for a normally distributed
      table the set of columns being indexed must be a superset of the
      distribution key.
      
      What about an entry-distributed table (e.g. a table created in
      utility mode, which has no record in gp_distribution_policy, so
      GpPolicyFetch translates it to entry-distributed)? Such tables
      are localized in a single db, so adding a unique index should
      also be allowed.
      
      This was spotted by the assertion in checkPolicyForUniqueIndex()
      when checking the conflict for normally distributed tables.
      
      This fixes #5880
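      The policy check can be summarized as: random-distributed tables reject unique indexes, replicated and entry-distributed tables accept them, and hash-distributed tables require the indexed columns to cover the distribution key. A hedged sketch, not the actual checkPolicyForUniqueIndex() signature:

```python
def unique_index_allowed(policy_type, dist_key_cols, index_cols):
    if policy_type == "random":
        # No segment can enforce uniqueness across random distribution.
        return False
    if policy_type in ("replicated", "entry"):
        # Entry-distributed tables live on a single db, so uniqueness
        # can be enforced locally -- the case this commit fixes.
        return True
    # Hash-distributed: indexed columns must be a superset of the
    # distribution key so each key value maps to a single segment.
    return set(dist_key_cols) <= set(index_cols)
```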
    • Reuse the identifier when a QE is destroyed · 13c504fb
      Committed by Pengzhou Tang
      Each QE in a session is assigned a unique identifier. The QD
      dispatches a slice table to all QEs, and each slice in the slice
      table has a bitmapset of QE identifiers; a QE goes through all
      slices and decides which slice it belongs to by checking its
      identifier against the bitmapsets.
      
      The problem was that a QE identifier was never reused when the QE
      was destroyed, so identifiers grew without bound until the
      bitmapset became inefficient, and eventually insufficient to hold
      them.
      
      This commit fixes that by reusing QE identifiers to keep them in
      a reasonable range.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
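      Reuse of this kind is typically done with a free list: an identifier released when a QE is destroyed is handed out again before a new one is minted, keeping ids in a small range and the per-slice bitmapsets compact. A minimal allocator sketch (hypothetical, not the GPDB code):

```python
class QeIdAllocator:
    def __init__(self):
        self._next = 0   # next never-used identifier
        self._free = []  # identifiers released by destroyed QEs

    def acquire(self):
        # Prefer a recycled identifier so ids stay in a small range
        # and the per-slice bitmapsets stay compact.
        if self._free:
            return self._free.pop()
        qe_id = self._next
        self._next += 1
        return qe_id

    def release(self, qe_id):
        # Called when a QE is destroyed; its id becomes reusable.
        self._free.append(qe_id)
```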
    • Make Release job push gcs centos{6,7} artifacts · 4893e5e6
      Committed by Amil Khanzada
      - add a rename task to make the artifacts use the new naming convention
      - keep pushing to the s3 resources so that other pipelines using those
        published artifacts don't break (although those s3 resources are now
        deprecated).
      Co-authored-by: Sambitesh Dash <sdash@pivotal.io>
      Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
      Co-authored-by: Ben Christel <bchristel@pivotal.io>
    • Fix shared snapshot in-progress xids off-by-one (#7018) · 3db10d27
      Committed by David Kimura
      The issue is that we pointed the first snapshot's in-progress
      transaction array at the second block allocated for xips. As a
      result, every snapshot N pointed to snapshot N+1's block of
      allocated xips: the memory intended for snapshot 0 was never
      used, and, worse, the last snapshot's xip pointed to an address
      that was never allocated for xips at all.
      
      This bug manifests in interesting ways because it essentially
      corrupts a local snapshot passed from a writer to a reader
      whenever we are unfortunate enough to use the last indexed shared
      local snapshot's in-progress transaction array. When the snapshot
      is corrupt, we cannot guarantee the correct visibility of tuples
      in a table.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
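      The off-by-one can be pictured with plain offsets into one shared allocation: snapshot i's in-progress array must start at i * max_xids, not (i + 1) * max_xids, or the last snapshot's pointer falls past the end of the block. An illustrative sketch:

```python
def xip_start(snapshot_index, max_xids):
    # Correct offset of snapshot i's xip array within the shared block.
    return snapshot_index * max_xids

def buggy_xip_start(snapshot_index, max_xids):
    # The off-by-one this commit fixes: every snapshot pointed at the
    # NEXT snapshot's block, and the last one pointed past the end.
    return (snapshot_index + 1) * max_xids
```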
    • Warn not to run gpbackup/gprestore during expand. (#7062) · 1aeb61e9
      Committed by Chuck Litzell
      * Warn not to run gpbackup/gprestore during expand.
      
      * Updates from review
    • ssl: add 9.5 merge FIXME for SAN TAP tests · 38cb15a1
      Committed by Jacob Champion
      We don't have Subject Alternative Name support, and won't until 9.5.
      Make sure we uncomment these tests when we get there.
      Co-authored-by: David Krieger <dkrieger@pivotal.io>