1. 20 June 2017, 12 commits
    • Update gporca version to 2.33, which updates Join Cardinality Estimation for Text/bpchar/varchar/char columns · 357db2f3
      Committed by Venkatesh Raghavan
    • Reinstate RTEKind enum entries to the original order before 41c3b6 · a38e7b9e
      Committed by Haisheng Yuan
      Commit 41c3b6 changed the numbering of RTEKind (with the intent to
      converge with upstream ordering). RTEKind is included in the
      RangeTblEntry struct, which in turn is included in a parse tree. Parse
      trees are serialized (via `nodeToString`) in the catalog when we store
      view definitions. That means re-ordering an ostensibly internal enum
      will break catalog compatibility. Reverting the re-ordering of
      `RTEKind`.
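      Why re-ordering breaks compatibility can be seen in a minimal standalone
      sketch (hypothetical enum names, not the real GPDB definitions);
      serialized parse trees store the enum member's numeric value:

        /* Minimal standalone sketch: nodeToString() writes the enum's integer
         * value, so re-ordering the members changes what an already-stored
         * view definition deserializes to. */
        #include <stdio.h>

        typedef enum { RTE_RELATION_V1, RTE_SUBQUERY_V1, RTE_JOIN_V1 } RTEKindV1; /* 0, 1, 2 */
        typedef enum { RTE_SUBQUERY_V2, RTE_RELATION_V2, RTE_JOIN_V2 } RTEKindV2; /* re-ordered */

        int
        main(void)
        {
            int stored = RTE_SUBQUERY_V1;   /* the catalog stored the number 1 */

            if (stored == RTE_RELATION_V2)  /* after re-ordering, 1 means "relation" */
                printf("a stored subquery RTE now reads back as a relation RTE\n");
            return 0;
        }
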
    • febddac6
    • Comment extending and cleanup · c8894b15
      Committed by Daniel Gustafsson
    • Fix pg_dump to not emit invalid SQL for an empty operator class. · f8db7bb7
      Committed by Daniel Gustafsson
      This is a backport of the commit below from upstream to handle empty
      operator classes in pg_dump. The bug was first found in Greenplum but
      fixed upstream-first. A clean cherry-pick was not possible due to
      interim changes not yet in Greenplum.
      
        commit 0461b66e
        Author: Tom Lane <tgl@sss.pgh.pa.us>
        Date:   Fri May 26 12:51:05 2017 -0400
      
          Fix pg_dump to not emit invalid SQL for an empty operator class.
      
          If an operator class has no operators or functions, and doesn't need
          a STORAGE clause, we emitted "CREATE OPERATOR CLASS ... AS ;" which
          is syntactically invalid.  Fix by forcing a STORAGE clause to be
          emitted anyway in this case.
      
          (At some point we might consider changing the grammar to allow CREATE
          OPERATOR CLASS without an opclass_item_list.  But probably we'd want to
          omit the AS in that case, so that wouldn't fix this pg_dump issue anyway.)
      
          It's been like this all along, so back-patch to all supported branches.
      
          Daniel Gustafsson, tweaked by me to avoid a dangling-pointer bug
      
          Discussion: https://postgr.es/m/D9E5FC64-7A37-4F3D-B946-7E4FB468F88A@yesql.se
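      A minimal standalone sketch of the idea (a hypothetical helper, not
      pg_dump's actual dumpOpclass() code): when no operator or function items
      are present, force a STORAGE item so the generated statement stays
      syntactically valid.

        #include <stdio.h>
        #include <string.h>

        /* Hypothetical: build the item list for a CREATE OPERATOR CLASS statement. */
        static void
        build_opclass_items(char *buf, size_t buflen, int nitems,
                            const char **items, const char *storagetype)
        {
            buf[0] = '\0';
            for (int i = 0; i < nitems; i++)
            {
                if (i > 0)
                    strncat(buf, ",\n    ", buflen - strlen(buf) - 1);
                strncat(buf, items[i], buflen - strlen(buf) - 1);
            }

            /* An empty list would produce "CREATE OPERATOR CLASS ... AS ;",
             * which is invalid SQL, so force a STORAGE clause in that case. */
            if (nitems == 0)
            {
                strncat(buf, "STORAGE ", buflen - strlen(buf) - 1);
                strncat(buf, storagetype, buflen - strlen(buf) - 1);
            }
        }

        int
        main(void)
        {
            char items[256];

            build_opclass_items(items, sizeof(items), 0, NULL, "integer");
            printf("CREATE OPERATOR CLASS myclass FOR TYPE integer USING btree AS\n"
                   "    %s;\n", items);
            return 0;
        }
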
    • Fix bugs in pg_upgrade_support functions, for GPDB 5 -> GPDB 5 upgrade. · 96e953a7
      Committed by Heikki Linnakangas
      These patches were written by Daniel Gustafsson; I just squashed and
      rebased them.
      
      * Add missing 5.0 object procedure declarations
      
      Commit 7ec83119 implemented support for 5.0 objects in
      binary upgrade, but missed adding the procedure declarations to
      pg_upgrade due to a mismerge.
      
      * Fix opclass Oid preassignment function
      
      The function was erroneously pulling the wrong argument for the
      namespace Oid resulting in failed Oid lookups during synchronization.
      
      * Add support functions for pg_amop tuples
    • Extend debugging output in missing Oid assignment · 5c97bd91
      Committed by Daniel Gustafsson
      When a pre-assigned Oid can't be found, it's rather helpful to see
      the full searchkey that was used rather than just the objname since
      some keys lack an objname (attrdefs, for example). Having hacked this
      in multiple times I figured we might as well extend the elog() with
      the relevant information.
    • Remove docker credentials from pr_pipeline · 73e8a1f7
      Committed by Jim Doty
      The Docker dependencies for this pipeline are now public, so no authenticated access is needed.
    • Readers shouldn't check lock waitMask if writer holds the lock · 61623ce7
      Committed by Asim R P
      Otherwise there is a possibility of distributed deadlock.  One such deadlock is
      caused by ENTRY_DB_SINGLETON reader entering LockAcquire when QD writer of the
      same MPP session already holds the lock.  A backend from another MPP session is
      already waiting on the lock with a lockmode that conflicts with the reader's
      requested lockmode.  This results in waitMask conflict and the reader is
      enqueued in the wait queue.  But the QD writer is never going to release the
      lock because it's waiting for tuples from segments (QE writers/readers).  And
      the QE writers/readers are also waiting for the ENTRY_DB_SINGLETON reader,
      completing the cycle necessary for deadlock.
      
      The fix is to avoid checking waitMask conflicts for a reader if writer of the
      same MPP session already holds the lock.  In such a case the reader is granted
      the lock as long as it does not conflict with existing holders of the lock.
      
      Two isolation2 tests are added.  One simulates the above mentioned deadlock and
      fails if it occurs.  Another ensures that granting locks to readers without
      checking waitMask conflict does not starve existing waiters.
      
      cf. https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/OS1-ODIK0P4/ZIzayBbMBwAJ
      
      Signed-off-by: Xin Zhang <xzhang@pivotal.io>
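      The resulting grant decision looks roughly like this (a simplified
      sketch with illustrative names, not the actual LockAcquire() code):

        /* Simplified sketch; names are illustrative, not the real lock manager API. */
        typedef struct LockStateSketch
        {
            int holdMask;   /* lockmodes currently held (bitmask)              */
            int waitMask;   /* lockmodes requested by queued waiters (bitmask) */
        } LockStateSketch;

        int
        reader_can_be_granted(const LockStateSketch *lock, int reqmode,
                              const int *conflictTab,
                              int writer_of_same_session_holds_lock)
        {
            /* Never grant over a conflicting existing holder. */
            if (conflictTab[reqmode] & lock->holdMask)
                return 0;

            /*
             * A conflict with waitMask normally queues the requester too, so that
             * waiters are not starved.  But when the QD writer of the same MPP
             * session already holds the lock, queueing the reader behind an
             * unrelated waiter can complete a distributed deadlock cycle, so the
             * waitMask check is skipped in that case.
             */
            if (!writer_of_same_session_holds_lock &&
                (conflictTab[reqmode] & lock->waitMask))
                return 0;

            return 1;
        }
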
    • Remove building with codegen for non-oss build · b80fe58b
      Committed by Kavinder Dhaliwal
    • GPDB DOCS - PostGIS extension (#2618) · f7bfb3aa
      Committed by mkiyama
      * GPDB DOCS - PostGIS extension
      
      * GPDB DOCS - PostGIS updates from review comments. Add information about installing PostGIS Raster.
      
      * Edits from Chuck's comments.
    • 1e8df2bb
  2. 19 June 2017, 10 commits
  3. 17 June 2017, 6 commits
    • 28ad6af3
    • 54b9a281
    • Set a flag when window's child is exhausted. · 18c61de4
      Committed by Kavinder Dhaliwal
      There is an assert failure when Window's child is a HashJoin operator
      and Window receives a NULL tuple while filling its buffer. In this case
      HashJoin calls ExecEagerFreeHashJoin() since it is done returning
      tuples. However, once Window has returned all the tuples in its input
      buffer, it calls ExecProcNode() on HashJoin again. This causes an assert
      failure in HashJoin stating that ExecHashJoin() should not be called
      once HashJoin's hashtable has already been released.
      
      This commit fixes the issue by setting a flag in WindowState when
      Window encounters a NULL tuple while filling its buffer. The flag then
      guards any subsequent call to ExecProcNode() from fetchCurrentRow().
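      Schematically, the guard works along these lines (a sketch with
      invented struct fields and helper names, not the actual WindowState or
      executor code):

        #include <stddef.h>

        /* Sketch only; struct fields and function names are illustrative. */
        typedef struct WindowStateSketch
        {
            void *childNode;        /* e.g. the HashJoin feeding the Window node */
            int   childExhausted;   /* set once the child has returned NULL      */
        } WindowStateSketch;

        void *exec_child_node(void *node);   /* stand-in for ExecProcNode() */

        /* Called while filling the input buffer and from the row-fetch path. */
        void *
        fetch_from_child(WindowStateSketch *ws)
        {
            void *tuple;

            if (ws->childExhausted)
                return NULL;                 /* never call into the freed child again */

            tuple = exec_child_node(ws->childNode);
            if (tuple == NULL)
                ws->childExhausted = 1;      /* remember that the child is done */
            return tuple;
        }
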
    • Merge 8.4 CTE (sans recursive) · 41c3b698
      This brought in postgres/postgres@44d5be0 pretty much wholesale, except:
      
      1. We leave `WITH RECURSIVE` for a later commit. The code is brought in,
          but kept dormant by us bailing early at the parser whenever there is
          a recursive CTE.
      2. We use `ShareInputScan` instead of `CteScan`. ShareInputScan is
          basically the parallel-capable `CteScan`. (See `set_cte_pathlist`
          and `create_ctescan_plan`.)
      3. Consequently we do not put the sub-plan for the CTE in a
          pseudo-initplan: it is directly present in the main plan tree
          instead, hence we disable `SS_process_ctes` inside
          `subquery_planner`.
      4. Another corollary is that all the new operators (`CteScan`,
          `RecursiveUnion`, and `WorkTableScan`) are dead code right now. But
          they will come to life once we bring in the parallel implementation
          of `WITH RECURSIVE`.
      
      In general this commit reduces the divergence between Greenplum and
      upstream.
      
      User visible changes:
      The merge in the parser enables a corner case that was previously
      treated as an error: you can now specify fewer columns in your `WITH`
      clause than the actual projected columns in the body subquery of the
      `WITH`.
      
      Original commit message:
      
      > Implement SQL-standard WITH clauses, including WITH RECURSIVE.
      >
      > There are some unimplemented aspects: recursive queries must use UNION ALL
      > (should allow UNION too), and we don't have SEARCH or CYCLE clauses.
      > These might or might not get done for 8.4, but even without them it's a
      > pretty useful feature.
      >
      > There are also a couple of small loose ends and definitional quibbles,
      > which I'll send a memo about to pgsql-hackers shortly.  But let's land
      > the patch now so we can get on with other development.
      >
      > Yoshiyuki Asaba, with lots of help from Tatsuo Ishii and Tom Lane
      >
      
      (cherry picked from commit 44d5be0e)
    • Set pipelines using github deploy keys · 717124f1
      Committed by Jim Doty
      In an effort to give each secret less access, we are moving to GitHub
      deploy keys to pull changes from GitHub.
      
      For now GPDB and its submodules are public, so we should be able to
      pull without any keys.
      
      Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
      Signed-off-by: Jim Doty <jdoty@pivotal.io>
    • Coverity pipeline no longer needs git private key · 4473a3b0
      Committed by Jim Doty
      Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
  4. 14 June 2017, 5 commits
  5. 13 June 2017, 7 commits
    • Docs - remove log errors to table clause in best practices guide (#2603) · 21685ea8
      Committed by Chuck Litzell
      * Update text for deprecated LOG ERRORS <table>
      
      * Remove deprecation note for gpdb5.
    • Reorganize gpfdist and gpcloud makefile targets · 7c73edd2
      Committed by Adam Lee
      gpfdist and gpcloud were moved to the top level by the commits below,
      so their targets should be moved out of gpAux/Makefile.
      
          commit 6125ac85cae720f484d0d45042131f4b859779d2
          Author: Marbin Tan <mtan@pivotal.io>
          Date:   Fri Feb 5 11:23:06 2016 -0800
      
              Move gpfdist to gpdb core.
      
          commit 4e34d8bb
          Author: Adam Lee <ali@pivotal.io>
          Date:   Wed May 24 16:53:15 2017 +0800
      
              Add a build flag for gpcloud
    • Checksum protect filerep change tracking files. · a0cc23d4
      Committed by Ashwin Agrawal
      Change tracking files capture what changed while a mirror was down, so
      that the mirror can be brought back in sync incrementally. In some
      instances, mostly due to disk issues or disk-full situations, a
      partially written or otherwise corrupted change tracking log resulted
      in a rolling PANIC of the segment, and thereby an unavailable database
      due to a double fault. The only way out was manual intervention:
      removing the change tracking files and running a full resync.
      
      This commit instead adds checksum protection to automatically detect
      problems with change tracking files during recovery / incremental
      resync. If a checksum mismatch is detected, the segment is marked as
      ChangeTrackingDisabled as a preventive action and the database stays
      available. In addition, since the change tracking info no longer
      exists, only a full recovery is allowed to bring the mirror back in
      sync; any attempt at an incremental resync clearly communicates that a
      full resync has to be performed. This eliminates the need for manual
      intervention to get the database back to an available state if change
      tracking files get corrupted.
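      The verification step amounts to something like the following (a
      generic sketch assuming a per-block CRC-32 stored with the data; the
      real change tracking on-disk format and checksum placement are
      assumptions here):

        #include <stddef.h>
        #include <stdint.h>

        /* Generic sketch: a stored CRC protects each change tracking block. */
        typedef struct CTBlockSketch
        {
            uint32_t stored_crc;    /* checksum written when the block was flushed */
            uint32_t datalen;
            char     data[8192];
        } CTBlockSketch;

        uint32_t crc32_of(const void *buf, size_t len);   /* any CRC-32 implementation */

        /* Returns 1 if the block is intact, 0 if it must be treated as corrupt;
         * on corruption the segment is put into ChangeTrackingDisabled state and
         * only a full recovery (not an incremental resync) is allowed. */
        int
        ct_block_is_valid(const CTBlockSketch *blk)
        {
            if (blk->datalen > sizeof(blk->data))
                return 0;                       /* the length itself is implausible */
            return crc32_of(blk->data, blk->datalen) == blk->stored_crc;
        }
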
    • gpperfmon: Address compiler warning · c575e033
      Committed by Marbin Tan
      Removes an unused function that was left over from an earlier cleanup.
    • gpperfmon: Fix coverity issue · 53e9bbc6
      Committed by Marbin Tan
      CID 170478: Control flow issues (DEADCODE)
      
      The "goto bail" path was dead code. We already close the currently open
      file right before opening a new one, so the FILE pointer was always
      NULL when "goto bail" was reached.
    • gpperfmon: Fix null pointer issue by providing memory · 7da3265a
      Committed by Marbin Tan
      CID 170479: (Null pointer dereferenced in agg_put_qexec)
      
      We removed too much in c0c1897f.
      
      Coverity detected that we were trying to access a pointer that is NULL
      on this codepath. It would come up if the key/value pair didn't yet
      exist in the apr_hash. Ensure that we allocate memory before doing
      a memcpy.
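      The pattern involved looks roughly like this (a sketch against the APR
      hash API; the gpperfmon-specific struct and key handling are simplified
      and the helper name is made up):

        #include <string.h>
        #include <apr_pools.h>
        #include <apr_hash.h>
        #include <apr_strings.h>

        /* Sketch: store a value under a key, allocating backing memory for keys
         * that are not in the hash yet instead of memcpy'ing into the NULL
         * pointer returned by apr_hash_get(). */
        static void
        put_metric(apr_pool_t *pool, apr_hash_t *hash,
                   const char *key, const void *val, apr_size_t len)
        {
            void *slot = apr_hash_get(hash, key, APR_HASH_KEY_STRING);

            if (slot == NULL)
            {
                slot = apr_palloc(pool, len);   /* allocate before first use */
                apr_hash_set(hash, apr_pstrdup(pool, key), APR_HASH_KEY_STRING, slot);
            }
            memcpy(slot, val, len);
        }
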
    • 5074ff31