- 21 Jun 2017, 11 commits
-
-
Committed by Asim R P
ExclusiveLock should be acquired in place of RowExclusiveLock for DMLs on user tables. If RowExclusiveLock is acquired, we may have a local deadlock on QD when concurrent UPDATE statements are executed from within UDFs. The problem is described in more detail here: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/PC0Z_zw-YaY/tsCmPdADBQAJ
-
Committed by Shoaib Lari
The storage/access_methods/appendonly_eof/test_ao_eof.py test achieves its concurrency by inheriting from the SQLConcurrencyTestCase class defined in /tincrepo/mpp/models/sql_concurrency_tc.py. The run_test() method of this class is called concurrently from multiple processes. To give each concurrent invocation a unique output file name, the test appends a timestamp suffix to the file name. Even though the microsecond part of the timestamp is used, two concurrent runs can still get the same value from `datetime.datetime.now()`, and hence identical output file names. The fix is to append the pid to the timestamp as well, which makes the name unique per process.
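The fix can be sketched as follows (a minimal illustration; the real TINC helper and file layout differ):

```python
import datetime
import os

def unique_output_filename(base="test_ao_eof"):
    # A microsecond timestamp alone can collide when run_test() is
    # invoked concurrently from multiple processes; appending the pid
    # makes the file name unique per process.
    stamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S%f")
    return "%s_%s_%d.out" % (base, stamp, os.getpid())

print(unique_output_filename())
```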
-
Committed by Larry Hamel
The gpperfmon drop-partition SQL statement was syntactically incorrect, so the partition_age gpperfmon feature was not working. We were using the rows in the partitionrangestart column of pg_partition to drop specific partitions. The value of partitionrangestart is reported as, for example, '2017-02-01 00:00:00'::timestamp(0) without time zone, and the query failed with the error "Not a constant expression". Use only the first part of partitionrangestart to make our ALTER ... DROP query work.
- Added a behave test to confirm that it now works

Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
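A rough sketch of the extraction described above (the function name is hypothetical; the real fix lives in the gpperfmon SQL generation):

```python
def partition_start_constant(partitionrangestart):
    # pg_partition reports partitionrangestart as, e.g.:
    #   '2017-02-01 00:00:00'::timestamp(0) without time zone
    # Using the whole expression in ALTER TABLE ... DROP PARTITION
    # fails with "Not a constant expression"; keep only the quoted
    # constant in front of the '::' cast.
    return partitionrangestart.split("::", 1)[0]

expr = "'2017-02-01 00:00:00'::timestamp(0) without time zone"
print(partition_start_constant(expr))
```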
-
Committed by ppggff
For now, ORCA does not support indexes with expressions or constraints, so it is OK to remap the logical index info with the scan's relid by calling IndexScan_MapLogicalIndexInfo(). Later, when that changes, we may need to remap them with the hard-coded 1 as in gp_get_physical_index_relid().
-
Committed by Marbin Tan
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
Committed by Marbin Tan
Ensure that gppkg install and uninstall work for any gppkg file name.

Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
Committed by Tushar Dadlani
Fix `gppkg -r <gppkg>`: when uninstalling an installed gppkg with a non-standard name, for example sample.gppkg, it previously threw the error: "Please specify the correct <name>-<version>".

Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
Fix the listing command `gppkg --query --all`. When a gppkg with a non-standard name was installed, `--query --all` previously raised an error from a failed substring manipulation on the package name.
- Add unit test

Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
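A hedged sketch of the defensive name parsing (the helper name and exact behavior are assumptions, not the actual gppkg code):

```python
def split_name_version(gppkg_filename):
    # Standard packages are named <name>-<version>.gppkg; a
    # non-standard name such as sample.gppkg has no '-<version>'
    # part, which is what tripped up the old substring logic.
    base = gppkg_filename
    if base.endswith(".gppkg"):
        base = base[:-len(".gppkg")]
    if "-" in base:
        name, _, version = base.partition("-")
        return name, version
    return base, None

print(split_name_version("pljava-1.4.0.gppkg"))
print(split_name_version("sample.gppkg"))
```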
-
Committed by Tushar Dadlani
- Also, add sample.gppkg as a fixture for tests

Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Nadeem Ghani
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
Solaris is no longer supported as a GPDB server platform, so all of the Solaris-specific code is dead code. Removed the #if defined(sun) blocks.

Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
- 20 Jun 2017, 16 commits
-
-
Committed by Jimmy Yih
Currently, our binary swap test reports a diff when pg_dump output changes. This is unintended. To prevent it, we re-source the latest greenplum_path.sh after starting up Greenplum with the old greenplum_path.sh. This lets us run the latest pg_dumpall against a cluster running on the old binaries. If the tests or SQL in the binary swap test become more complex, this workaround may need to be replaced.
-
Committed by Abhijit Subramanya
The test used to validate that the tmlock is not held after completing the DTM recovery. The root cause for not releasing the lock was that in case of an error during recovery `elog_demote(WARNING)` was called which would demote the error to a warning. This would cause the abort processing code to not get executed and hence the lock would not be released. Adding a simple assert in the code once DTM recovery is complete is sufficient to make sure that the lock is released.
-
Committed by Lisa Owen
-
Fixing a slight inconsistency introduced in 357db2f3.
-
Committed by Venkatesh Raghavan
Update the gporca version to 2.33, which updates join cardinality estimation for text/bpchar/varchar/char columns.
-
Committed by Haisheng Yuan
Commit 41c3b6 changed the numbering of RTEKind (with the intent to converge with upstream ordering). RTEKind is included in the RangeTblEntry struct, which in turn is included in a parse tree. Parse trees are serialized (via `nodeToString`) in the catalog when we store view definitions. That means re-ordering an ostensibly internal enum will break catalog compatibility. Reverting the re-ordering of `RTEKind`.
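Why re-ordering a serialized enum breaks catalog compatibility can be shown with a toy model (the numeric values here are illustrative, not the actual RTEKind ones):

```python
# Values as they stood when existing view definitions were serialized.
OLD_RTEKIND = {"RTE_RELATION": 0, "RTE_SUBQUERY": 1, "RTE_JOIN": 2}
# A hypothetical re-ordering of the same enum.
NEW_RTEKIND = {"RTE_RELATION": 0, "RTE_JOIN": 1, "RTE_SUBQUERY": 2}

# A view stored long ago recorded the integer for RTE_SUBQUERY.
stored = OLD_RTEKIND["RTE_SUBQUERY"]

# Decoding that stored integer with the new numbering yields the
# wrong kind, silently corrupting the parse tree read back from
# the catalog.
decode = {v: k for k, v in NEW_RTEKIND.items()}
print(decode[stored])
```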
-
Committed by Heikki Linnakangas
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
This is a backport of the below commit from upstream to handle empty operator classes in pg_dump. The bug was first found in Greenplum but applied as an upstream-first fix. Cherry-picking was not possible due to interim changes not yet in Greenplum.

commit 0461b66e
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Fri May 26 12:51:05 2017 -0400

    Fix pg_dump to not emit invalid SQL for an empty operator class.

    If an operator class has no operators or functions, and doesn't need
    a STORAGE clause, we emitted "CREATE OPERATOR CLASS ... AS ;" which
    is syntactically invalid. Fix by forcing a STORAGE clause to be
    emitted anyway in this case.

    (At some point we might consider changing the grammar to allow
    CREATE OPERATOR CLASS without an opclass_item_list. But probably
    we'd want to omit the AS in that case, so that wouldn't fix this
    pg_dump issue anyway.)

    It's been like this all along, so back-patch to all supported branches.

    Daniel Gustafsson, tweaked by me to avoid a dangling-pointer bug

    Discussion: https://postgr.es/m/D9E5FC64-7A37-4F3D-B946-7E4FB468F88A@yesql.se
-
Committed by Heikki Linnakangas
These patches were written by Daniel Gustafsson; I just squashed and rebased them.

* Add missing 5.0 object procedure declarations. Commit 7ec83119 implemented support for 5.0 objects in binary upgrade, but missed adding the procedure declarations to pg_upgrade due to a mismerge.
* Fix the opclass Oid preassignment function. The function was erroneously pulling the wrong argument for the namespace Oid, resulting in failed Oid lookups during synchronization.
* Add support functions for pg_amop tuples.
-
Committed by Daniel Gustafsson
When a pre-assigned Oid can't be found, it's rather helpful to see the full searchkey that was used rather than just the objname since some keys lack objname (attrdef's for example). Having hacked this in multiple times I figured we might as well extend the elog() with the relevant information.
-
Committed by Jim Doty
The Docker dependencies for this pipeline are now public, so no authorized access is needed.
-
Committed by Asim R P
Otherwise there is a possibility of distributed deadlock. One such deadlock is caused by an ENTRY_DB_SINGLETON reader entering LockAcquire when the QD writer of the same MPP session already holds the lock. A backend from another MPP session is already waiting on the lock with a lockmode that conflicts with the reader's requested lockmode. This results in a waitMask conflict, and the reader is enqueued in the wait queue. But the QD writer is never going to release the lock, because it is waiting for tuples from the segments (QE writers/readers). And the QE writers/readers are in turn waiting for the ENTRY_DB_SINGLETON reader, completing the cycle necessary for deadlock. The fix is to skip the waitMask conflict check for a reader if the writer of the same MPP session already holds the lock. In such a case the reader is granted the lock as long as it does not conflict with the existing holders of the lock. Two isolation2 tests are added. One simulates the above-mentioned deadlock and fails if it occurs. The other ensures that granting locks to readers without checking waitMask conflicts does not starve existing waiters.

cf. https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/OS1-ODIK0P4/ZIzayBbMBwAJ

Signed-off-by: Xin Zhang <xzhang@pivotal.io>
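The grant rule can be modeled in miniature (a toy model: the lockmode numbers, conflict table, and function name are illustrative, not the actual lock manager code):

```python
ACCESS_SHARE, ROW_EXCLUSIVE, ACCESS_EXCLUSIVE = 1, 3, 8
# Toy conflict table: bitmask of lockmodes that conflict with a request.
CONFLICTS = {ACCESS_SHARE: 1 << ACCESS_EXCLUSIVE}

def reader_granted(session, holders, wait_mask):
    # holders: (session, mode) pairs currently holding the lock.
    # wait_mask: bitmask of modes requested by queued waiters.
    held_mask = 0
    for _, mode in holders:
        held_mask |= 1 << mode
    if held_mask & CONFLICTS[ACCESS_SHARE]:
        return False  # conflicts with an actual holder: must wait
    if any(s == session for s, _ in holders):
        # Writer of the same MPP session holds the lock: skip the
        # waitMask check, otherwise the cycle described above forms.
        return True
    return not (wait_mask & CONFLICTS[ACCESS_SHARE])

# QD writer of session 1 holds ROW_EXCLUSIVE; another session waits
# with ACCESS_EXCLUSIVE. The session-1 reader is still granted, while
# a reader from a different session must queue behind the waiter.
print(reader_granted(1, [(1, ROW_EXCLUSIVE)], 1 << ACCESS_EXCLUSIVE))
print(reader_granted(2, [(1, ROW_EXCLUSIVE)], 1 << ACCESS_EXCLUSIVE))
```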
-
Committed by Kavinder Dhaliwal
-
Committed by mkiyama
* GPDB DOCS - PostGIS extension
* GPDB DOCS - PostGIS updates from review comments. Add information about installing PostGIS Raster.
* Edits from Chuck's comments.
-
Committed by mkiyama
-
- 19 Jun 2017, 10 commits
-
-
Committed by Ashwin Agrawal
CID 170750: Logically dead code. In ChangeTracking_GetLastChangeTrackingLogEndLoc(), the code can never be reached because of a logical contradiction, hence it is removed.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Pengzhou Tang
The "%m%s" format style differs from upstream and broke the Coverity scan; keeping it added little value, so it is removed.
-
Committed by Pengzhou Tang
* CID 170493: Null pointer dereferences (REVERSE_INULL): no need to do a NULL check for transportStates
* CID 170492: Resource leaks (RESOURCE_LEAK): addrs is not freed
* CID 170481: Performance inefficiencies: pass the parameter fds of type "mpp_fd_set" by pointer
-
Committed by Pengzhou Tang
This commit backports the related modifications in the tinc repository to make the MPP interconnect test cases work on the latest tinc framework.
-
Committed by Kenan Yao
-
Committed by Kenan Yao
-
- 17 Jun 2017, 3 commits
-
-
Committed by Chuck Litzell
-
Committed by Jingyi Mei
Signed-off-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Kavinder Dhaliwal
There is an assert failure when Window's child is a HashJoin operator and Window receives a NULL tuple while filling its buffer. In this case HashJoin calls ExecEagerFreeHashJoin(), since it is done returning tuples. However, once Window has returned all the tuples in its input buffer, it calls ExecProcNode() on HashJoin. This causes an assert failure in HashJoin stating that ExecHashJoin() should not be called if HashJoin's hash table has already been released. This commit fixes the issue by setting a flag in WindowState when Window encounters a NULL tuple while filling its buffer. This flag then guards any subsequent call to ExecProcNode() from fetchCurrentRow().
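The guard can be sketched in miniature (Python stand-ins for the executor nodes; the names mirror, but are not, the real C structures):

```python
class HashJoinMock:
    """Stands in for a child that frees its state after returning NULL."""
    def __init__(self, rows):
        self.rows = list(rows)
        self.freed = False

    def next(self):
        # Models the assert that fired before the fix.
        assert not self.freed, "ExecHashJoin called after hash table freed"
        if self.rows:
            return self.rows.pop(0)
        self.freed = True  # ExecEagerFreeHashJoin() released the hash table
        return None

class WindowStateMock:
    def __init__(self, child):
        self.child = child
        self.buffer = []
        self.input_exhausted = False  # the flag added by the fix

    def fill_buffer(self):
        while True:
            row = self.child.next()
            if row is None:
                self.input_exhausted = True  # never call the child again
                return
            self.buffer.append(row)

    def fetch_current_row(self):
        if self.buffer:
            return self.buffer.pop(0)
        if self.input_exhausted:
            return None  # guarded: skips the fatal call into the child
        return self.child.next()

w = WindowStateMock(HashJoinMock([10, 20]))
w.fill_buffer()
print(w.fetch_current_row(), w.fetch_current_row(), w.fetch_current_row())
```

Without the `input_exhausted` guard, the third fetch would call back into the freed child and trip the assertion.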
-