- 20 Jun 2017, 12 commits
-
-
Committed by Venkatesh Raghavan

Update gporca version to 2.33, which updates Join Cardinality Estimation for text/bpchar/varchar/char columns.
-
Committed by Haisheng Yuan
Commit 41c3b6 changed the numbering of RTEKind (with the intent to converge with upstream ordering). RTEKind is included in the RangeTblEntry struct, which in turn is included in a parse tree. Parse trees are serialized (via `nodeToString`) in the catalog when we store view definitions. That means re-ordering an ostensibly internal enum will break catalog compatibility. Reverting the re-ordering of `RTEKind`.
-
Committed by Heikki Linnakangas
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson

This is a backport of the commit below from upstream to handle empty operator classes in pg_dump. The bug was first found in Greenplum but applied as an upstream-first fix. Cherry-picking was not possible due to interim changes not yet in Greenplum.

commit 0461b66e
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Fri May 26 12:51:05 2017 -0400

    Fix pg_dump to not emit invalid SQL for an empty operator class.

    If an operator class has no operators or functions, and doesn't need a
    STORAGE clause, we emitted "CREATE OPERATOR CLASS ... AS ;" which is
    syntactically invalid. Fix by forcing a STORAGE clause to be emitted
    anyway in this case. (At some point we might consider changing the
    grammar to allow CREATE OPERATOR CLASS without an opclass_item_list.
    But probably we'd want to omit the AS in that case, so that wouldn't
    fix this pg_dump issue anyway.)

    It's been like this all along, so back-patch to all supported branches.

    Daniel Gustafsson, tweaked by me to avoid a dangling-pointer bug

    Discussion: https://postgr.es/m/D9E5FC64-7A37-4F3D-B946-7E4FB468F88A@yesql.se
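The fix described above can be sketched as follows. This is a minimal, hypothetical Python model of the generation logic (not pg_dump's actual C code; all names are illustrative): when an operator class has no operators or functions, force a STORAGE clause so the emitted DDL never ends in a bare `AS ;`.

```python
def dump_opclass(name, index_method, input_type, items, need_storage):
    """Emit CREATE OPERATOR CLASS DDL; items is a list of opclass_item strings."""
    clauses = list(items)
    # The fix: if there are no items at all, emit a STORAGE clause anyway,
    # because "CREATE OPERATOR CLASS ... AS ;" is not valid SQL.
    if need_storage or not clauses:
        clauses.append(f"STORAGE {input_type}")
    body = ",\n    ".join(clauses)
    return (f"CREATE OPERATOR CLASS {name}\n"
            f"    FOR TYPE {input_type} USING {index_method} AS\n"
            f"    {body};")
```

With an empty item list, the forced `STORAGE` clause keeps the statement parseable; when items are present and no storage type is needed, the output is unchanged.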
-
Committed by Heikki Linnakangas

These patches were written by Daniel Gustafsson; I just squashed and rebased them.

* Add missing 5.0 object procedure declarations. Commit 7ec83119 implemented support for 5.0 objects in binary upgrade, but missed adding the procedure declarations to pg_upgrade due to a mismerge.
* Fix opclass Oid preassignment function. The function was erroneously pulling the wrong argument for the namespace Oid, resulting in failed Oid lookups during synchronization.
* Add support functions for pg_amop tuples.
-
Committed by Daniel Gustafsson

When a pre-assigned Oid can't be found, it's rather helpful to see the full searchkey that was used rather than just the objname, since some keys lack an objname (attrdefs, for example). Having hacked this in multiple times, I figured we might as well extend the elog() with the relevant information.
-
Committed by Jim Doty

Docker dependencies for this pipeline are now public, thus no authorized access is needed.
-
Committed by Asim R P

Otherwise there is a possibility of distributed deadlock. One such deadlock is caused by an ENTRY_DB_SINGLETON reader entering LockAcquire when the QD writer of the same MPP session already holds the lock. A backend from another MPP session is already waiting on the lock with a lockmode that conflicts with the reader's requested lockmode. This results in a waitMask conflict and the reader is enqueued in the wait queue. But the QD writer is never going to release the lock because it's waiting for tuples from segments (QE writers/readers). And the QE writers/readers are also waiting for the ENTRY_DB_SINGLETON reader, completing the cycle necessary for deadlock.

The fix is to avoid checking waitMask conflicts for a reader if a writer of the same MPP session already holds the lock. In such a case the reader is granted the lock as long as it does not conflict with existing holders of the lock.

Two isolation2 tests are added. One simulates the above mentioned deadlock and fails if it occurs. Another ensures that granting locks to readers without checking waitMask conflicts does not starve existing waiters.

cf. https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/OS1-ODIK0P4/ZIzayBbMBwAJ

Signed-off-by: Xin Zhang <xzhang@pivotal.io>
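The grant rule described above can be modeled in a few lines. This is an illustrative Python sketch, not GPDB's actual LockAcquire implementation; the conflict table and session model are simplified assumptions:

```python
# Simplified two-mode conflict table: EXCLUSIVE conflicts with everything.
CONFLICTS = {("SHARE", "EXCLUSIVE"), ("EXCLUSIVE", "SHARE"),
             ("EXCLUSIVE", "EXCLUSIVE")}

def can_grant(req_mode, session, holders, waiters):
    """holders/waiters: lists of (session_id, lockmode) tuples.

    The fix: if a writer of the same MPP session already holds the lock,
    skip the waitMask (waiters) check and test only against holders.
    """
    same_session_writer_holds = any(s == session for s, _ in holders)
    if not same_session_writer_holds:
        # Normal path: the request must not conflict with holders or waiters.
        return not any((req_mode, m) in CONFLICTS
                       for _, m in holders + waiters)
    # Reader of a session whose writer holds the lock: ignore waiters.
    return not any((req_mode, m) in CONFLICTS for _, m in holders)
```

In the deadlock scenario above, the session-1 writer holds SHARE and a session-2 backend waits for EXCLUSIVE; the session-1 reader's SHARE request is now granted (it conflicts only with the waiter, not the holder), breaking the cycle.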
-
Committed by Kavinder Dhaliwal
-
Committed by mkiyama

* GPDB DOCS - PostGIS extension
* GPDB DOCS - PostGIS updates from review comments. Add information about installing PostGIS Raster.
* Edits from Chuck's comments.
-
Committed by mkiyama
-
- 19 Jun 2017, 10 commits
-
-
Committed by Ashwin Agrawal

CID 170750: Logically dead code. In ChangeTracking_GetLastChangeTrackingLogEndLoc() the code can never be reached because of a logical contradiction, hence it is removed.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Pengzhou Tang

The "%m%s" format style differs from upstream and broke the Coverity scan; since keeping it added little value, it is removed.
-
Committed by Pengzhou Tang

* CID 170493: Null pointer dereferences (REVERSE_INULL). No need to NULL-check transportStates.
* CID 170492: Resource leaks (RESOURCE_LEAK). addrs is not freed.
* CID 170481: Performance inefficiencies. Parameter fds of type "mpp_fd_set" should be passed by pointer.
-
Committed by Pengzhou Tang

This commit backports related modifications in the tinc repository to make the MPP interconnect test cases work on the latest tinc framework.
-
Committed by Kenan Yao
-
Committed by Kenan Yao
-
- 17 Jun 2017, 6 commits
-
-
Committed by Chuck Litzell
-
Committed by Jingyi Mei

Signed-off-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Kavinder Dhaliwal

There is an assert failure when Window's child is a HashJoin operator and Window receives a NULL tuple while filling its buffer. In this case HashJoin will call ExecEagerFreeHashJoin() since it is done returning tuples. However, Window, once it has returned all the tuples in its input buffer, will call ExecProcNode() on HashJoin. This causes an assert failure in HashJoin stating that ExecHashJoin() should not be called if HashJoin's hashtable has already been released. This commit fixes the issue by setting a flag in WindowState when Window encounters a NULL tuple while filling its buffer. This flag then guards any subsequent call to ExecProcNode() from fetchCurrentRow().
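The guard described above amounts to "remember that the child is exhausted and never call it again". This is an illustrative Python sketch of that pattern (the class and method names are made up, not the executor's actual API):

```python
class WindowState:
    """Toy model of the Window node's buffer-filling state."""

    def __init__(self, child_next):
        self.child_next = child_next  # e.g. HashJoin's ExecProcNode callback
        self.child_done = False       # the new flag added by the fix

    def fetch_from_child(self):
        if self.child_done:
            # Guard: the child may have eagerly freed its hash table,
            # so calling it again would trip the assert. Return NULL instead.
            return None
        tup = self.child_next()
        if tup is None:
            self.child_done = True    # remember the child is exhausted
        return tup
```

Without the flag, a second fetch after the NULL tuple would re-invoke the freed child; with it, every later fetch short-circuits to NULL.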
-
This brought in postgres/postgres@44d5be0 pretty much wholesale, except:

1. We leave `WITH RECURSIVE` for a later commit. The code is brought in, but kept dormant by us bailing early at the parser whenever there is a recursive CTE.
2. We use `ShareInputScan` in the stead of `CteScan`. ShareInputScan is basically the parallel-capable `CteScan`. (See `set_cte_pathlist` and `create_ctescan_plan`.)
3. Consequently we do not put the sub-plan for the CTE in a pseudo-initplan: it is directly present in the main plan tree instead, hence we disable `SS_process_ctes` inside `subquery_planner`.
4. Another corollary is that all new operators (`CteScan`, `RecursiveUnion`, and `WorkTableScan`) are dead code right now. But they will come to life once we bring in the parallel implementation of `WITH RECURSIVE`.

In general this commit reduces the divergence between Greenplum and upstream.

User visible changes: the merge in the parser enables a corner case previously treated as an error: you can now specify fewer columns in your `WITH` clause than the actual projected columns in the body subquery of the `WITH`.

Original commit message:

> Implement SQL-standard WITH clauses, including WITH RECURSIVE.
>
> There are some unimplemented aspects: recursive queries must use UNION ALL
> (should allow UNION too), and we don't have SEARCH or CYCLE clauses.
> These might or might not get done for 8.4, but even without them it's a
> pretty useful feature.
>
> There are also a couple of small loose ends and definitional quibbles,
> which I'll send a memo about to pgsql-hackers shortly. But let's land
> the patch now so we can get on with other development.
>
> Yoshiyuki Asaba, with lots of help from Tatsuo Ishii and Tom Lane

(cherry picked from commit 44d5be0e)
-
Committed by Jim Doty

In an effort to give each secret less access, we are moving to use git deploy keys to pull changes from GitHub. For now GPDB and its submodules are public, so we should be able to pull without any keys.

Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
Signed-off-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Jim Doty

Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
-
- 14 Jun 2017, 5 commits
-
-
Committed by David Yozie
-
Committed by David Yozie

* Rewording/conditionalizing more Pivotal-specific info
* Conditionalizing, editing, or removing more Pivotal-specific references and info
-
Committed by Shoaib Lari

This patch contains the changes to pipeline.yml to enable testing of the segment WAL replication feature on Concourse.

Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io>
-
Committed by Shoaib Lari

The generate_ao_xlog/generate_aoco_xlog test would fail if there was a page header or a continuation record in the XLOG sent by the sender, since the validation did not take them into consideration. This patch fixes the validation of the received XLOG by detecting page headers and continuation records and making sure that it skips to the correct record boundary.

Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io>
-
Committed by Andreas Scherbaum
-
- 13 Jun 2017, 7 commits
-
-
Committed by Chuck Litzell

* Update text for deprecated LOG ERRORS <table>
* Remove deprecation note for gpdb5.
-
Committed by Adam Lee

gpfdist and gpcloud were shifted to the top level by the commits below, so they should be moved out of gpAux/Makefile.

commit 6125ac85cae720f484d0d45042131f4b859779d2
Author: Marbin Tan <mtan@pivotal.io>
Date: Fri Feb 5 11:23:06 2016 -0800

    Move gpfdist to gpdb core.

commit 4e34d8bb
Author: Adam Lee <ali@pivotal.io>
Date: Wed May 24 16:53:15 2017 +0800

    Add a build flag for gpcloud
-
Committed by Ashwin Agrawal

Change tracking files are used to capture information on what changed while the mirror was down, to help incrementally bring it back into sync. In some instances, mostly due to disk issues or disk-full situations, if the change tracking log was partially written or got corrupted, the result was a rolling PANIC of the segment and thereby DB unavailability due to double fault. The only way out was manual intervention to remove the change tracking files and run a full resync.

So, instead, this commit adds checksum protection to automatically detect any problem with change tracking files during recovery / incremental resync. If a checksum mismatch is detected, it takes preventive action by marking the segment as in the ChangeTrackingDisabled state and keeps the DB available. It also explicitly enforces that only a full recovery is allowed to bring the mirror up in sync, as the change tracking info no longer exists; any attempt at incremental resync clearly communicates that a full resync has to be performed. This eliminates the need for manual intervention to get the DB back to an available state if change tracking files get corrupted.
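The protection scheme described above follows a standard pattern: store a checksum alongside each block of data, and on read, recompute and compare. This is a hedged Python sketch of that pattern (CRC32 and a 4-byte big-endian prefix are illustrative choices, not GPDB's actual on-disk format):

```python
import zlib

def write_block(payload: bytes) -> bytes:
    """Prefix the payload with its CRC32 so corruption is detectable later."""
    return zlib.crc32(payload).to_bytes(4, "big") + payload

def read_block(block: bytes):
    """Return (ok, payload).

    ok=False corresponds to the preventive action in the commit: treat the
    change tracking data as lost, disable change tracking, and require a
    full resync instead of PANICking on the corrupted file.
    """
    stored = int.from_bytes(block[:4], "big")
    payload = block[4:]
    return (zlib.crc32(payload) == stored, payload)
```

A partially written block (truncated or bit-flipped) fails the comparison, so the failure is detected at read time rather than surfacing as a crash deep in recovery.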
-
Committed by Marbin Tan

Removing an unused function that got left over from cleanup.
-
Committed by Marbin Tan

CID 170478: Control flow issues (DEADCODE). The "goto bail" branch was dead code: we close the currently opened file right before opening a new one, so the FILE pointer was always NULL when "goto bail" occurred.
-
Committed by Marbin Tan

CID 170479: Null pointer dereferenced in agg_put_qexec. We removed too much in c0c1897f; Coverity detected that we were trying to access a pointer that is NULL on this codepath. It would come up if the key-value pair didn't yet exist in the apr_hash. Ensure that we allocate memory before doing a memcpy.
-
Committed by mkiyama
-