- 02 Jun 2017, 1 commit
-
-
Committed by David Yozie
This makes it easier to generate HTML docs for greenplum.org/doc
-
- 01 Jun 2017, 14 commits
-
-
Committed by Ashwin Agrawal
Fix several Coverity findings:
- appendonlywriter.c: CID 129803, identical code for different branches. In AtEOXact_AppendOnly_StateTransition the condition is redundant: the same code is executed regardless of the condition.
- cdbfilerep.c: CID 129285, buffer not null terminated. In FileRep_GetUnknownIdentifier the string buffer may lack a null terminator if the source string's length equals the buffer size. Replaced the call with strlcpy.
- cdbmirroredflatfile.c: CID 130180, uninitialized pointer read. In MirrorFlatFile, an uninitialized pointer (or its target) was read for mirroredOpen; initialize it.
- postmaster.c: CID 129834, argument cannot be negative. In checkIODataDirectory a negative value could be passed as the `fd` argument to `write`, which expects a non-negative descriptor. Fix by calling it in the else branch only when fd is not negative.
- faultinject.c: CID 149132, uninitialized scalar variable. In gp_fault_inject_impl, hdr.is_segv_msg was used uninitialized; initialize it.
- persistentutil.c: CID 129278, buffer not null terminated. In gp_persistent_relation_node_check the string buffer `fdata->databaseDirName` may lack a null terminator if the source string's length equals the buffer size. Use the safer alternative strlcpy.
- pg_resetxlog.c: CID 130064, resource leak. In WriteEmptyXLOG the file handle `fp` was leaked; call fclose on it.
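The two "buffer not null terminated" fixes above hinge on strlcpy's contract of always NUL-terminating the destination, unlike strncpy. A minimal sketch of the two copy semantics, modeled in Python for illustration (the helper names are hypothetical; GPDB's actual strlcpy is a C function in its port library):

```python
def strncpy_model(dst: bytearray, src: bytes, siz: int) -> None:
    """Model of C strncpy: copies at most siz bytes, pads with NULs,
    but does NOT terminate when len(src) >= siz."""
    for i in range(siz):
        dst[i] = src[i] if i < len(src) else 0

def strlcpy_model(dst: bytearray, src: bytes, siz: int) -> int:
    """Model of strlcpy: copies at most siz-1 bytes, always
    NUL-terminates, and returns len(src) so truncation is detectable."""
    if siz > 0:
        n = min(len(src), siz - 1)
        dst[:n] = src[:n]
        dst[n] = 0
    return len(src)

buf = bytearray(8)
strncpy_model(buf, b"12345678", len(buf))
assert 0 not in buf                 # no terminator: the Coverity finding

ret = strlcpy_model(buf, b"12345678", len(buf))
assert buf[7] == 0                  # always terminated
assert bytes(buf[:7]) == b"1234567" # truncated, but a valid C string
assert ret > len(buf) - 1           # caller can detect the truncation
```

The key property is the exact case Coverity flagged: a source whose length equals the buffer size leaves strncpy's destination with no terminator, while strlcpy truncates to size-1 and terminates.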
-
Committed by Daniel Gustafsson
Refactor addDistributedBy() to replace overly clever coding with readable code. Also remove variable initializations to let the compiler help us catch errors, and clean up some whitespace issues.
-
Committed by Bhuvnesh Chaudhary
Before parallelization of nodes in cdbparallelize, if any SubPlan nodes in the plan refer to the same plan_id, the parallelization step breaks, because it must process each node exactly once. This patch fixes the issue by generating a new subplan node in glob subplans and updating the plan_id of the SubPlan to refer to the newly created node.
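The fix described above can be sketched as follows; this is a simplified, hypothetical model in Python (the real code works on C PlannerGlobal and SubPlan nodes): when a second SubPlan node is found referring to an already-seen plan_id, a copy of the referenced plan is appended to the global subplan list and the node is repointed at the copy, so each referenced plan is visited exactly once.

```python
import copy

def assign_unique_plan_ids(subplan_nodes, glob_subplans):
    """For every SubPlan node after the first that shares a plan_id,
    clone the referenced plan into glob_subplans and repoint the node,
    so the parallelization pass processes each plan exactly once."""
    seen = set()
    for node in subplan_nodes:
        pid = node["plan_id"]
        if pid in seen:
            glob_subplans.append(copy.deepcopy(glob_subplans[pid]))
            node["plan_id"] = len(glob_subplans) - 1
        else:
            seen.add(pid)

# Two SubPlan nodes initially share plan_id 0.
glob_subplans = [{"op": "SeqScan"}]
nodes = [{"plan_id": 0}, {"plan_id": 0}]
assign_unique_plan_ids(nodes, glob_subplans)

assert [n["plan_id"] for n in nodes] == [0, 1]  # now unique
assert len(glob_subplans) == 2                  # cloned plan appended
```

After the pass, no two SubPlan nodes share a plan_id, which is the invariant the parallelization step asserts.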
-
Committed by Bhuvnesh Chaudhary
The condition containing subplans will be duplicated as the partition selection key in the PartitionSelector node. It is not OK to duplicate the expression if it contains SubPlans, because the code that adds motion nodes to a subplan gets confused when multiple SubPlans refer to the same subplan ID.
-
Committed by Bhuvnesh Chaudhary
Skip duplicating subplan clauses, as multiple SubPlan nodes referring to the same plan_id break cdbparallelize. cdbparallelize expects to apply a motion to a node only once, so if two SubPlan nodes refer to the same plan_id the assertion fails. Signed-off-by: Jemish Patel <jpatel@pivotal.io>
-
Committed by Ashwin Agrawal
Before this commit, the snapshot stored the distributed in-progress transactions (populated during snapshot creation) and their corresponding localXids (found later during tuple visibility checks and used as a cache for the reverse mapping) in a single, tightly coupled data structure, DistributedSnapshotMapEntry. Storing the information this way posed a couple of problems: 1] Only one localXid could be cached per distributedXid. With sub-transactions, the same distribXid can be associated with multiple localXids, but since only one could be cached, the distributed_log had to be consulted for the other localXids associated with the distributedXid. 2] During a tuple visibility check, the code always had to loop over the full distributed in-progress array first just to check whether a cached localXid could be used to avoid the reverse mapping. Now the distributed in-progress array and the localXid cache are decoupled. This allows storing multiple localXids per distributedXid, scanning the localXid cache only when the tuple's xid is relevant to it, and scanning only as many elements as were actually cached, instead of always scanning the full distributed in-progress array even when nothing was cached. Along the way, the relevant code was refactored a bit to simplify it further.
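The before/after caching shapes described above can be sketched like this (a hypothetical Python model; the real structures are C arrays inside the distributed snapshot):

```python
from collections import defaultdict

# Before: one tightly coupled entry per distributed xid -- caching a
# second localXid for the same distributedXid overwrites the first.
coupled_cache = {}
coupled_cache[100] = 7           # distributedXid 100 -> localXid 7
coupled_cache[100] = 8           # sub-transaction localXid evicts it
assert coupled_cache[100] == 8   # localXid 7 now needs distributed_log

# After: the localXid cache is decoupled from the in-progress array
# and can hold multiple localXids per distributedXid; visibility
# checks scan only what was actually cached.
decoupled_cache = defaultdict(list)
decoupled_cache[100].append(7)
decoupled_cache[100].append(8)
assert decoupled_cache[100] == [7, 8]  # both sub-transaction xids kept
assert len(decoupled_cache) == 1       # scan size == entries cached
```

The point of the second shape is exactly the two problems the commit lists: multiple localXids survive per distributedXid, and the scan cost is bounded by what was cached rather than by the full in-progress array.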
-
Committed by Venkatesh Raghavan
-
Committed by mkiyama
* GPDB DOCS - new GUCs from 4.3.x and mdcache * Typo fixes * Updates based on review comments.
-
Committed by Andreas Scherbaum
* Remove the notion that the server configuration can be changed using pgAdminIII. The "gpconfig" tool should be used instead.
-
Committed by Andreas Scherbaum
* Add an explicit note that the data is stored in GPDB
-
Committed by dyozie
-
Committed by Todd Sedano
-
Committed by Daniel Gustafsson
PL/Perl support is not shipped on SLES, so skip including Perl, and components which depend on Perl, in the test set. By invoking the ./configure step in the test tree without --with-perl and --enable-mapreduce on SLES, the tests for those components should be avoided.
-
Committed by mkiyama
GPDB DOC - remove unused topic referencing gp_dump and gp_restore. These utilities were removed in GPDB 4.3.3.0.
-
- 31 May 2017, 9 commits
-
-
Committed by Jane Beckman
* Deprecate gpsupport * Remove gpsupport, add gpmt
-
Committed by Ekta Khanna
A few minor updates in error messages and comments. Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
-
Committed by Andreas Scherbaum
* Explain the "-M fast" option
-
Committed by Andreas Scherbaum
-
Committed by Dhanashree Kashid
This commit updates the `rangefuncs_cdb` tests to use `plpgsql` instead of `plperlu`, since `plperlu` is not supported on SLES. We do not need a separate answer file for ORCA because the results produced by the planner and ORCA are the same; hence this commit also removes the `rangefuncs_cdb_optimizer.out` file. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Andreas Scherbaum
* Replace "template1" with "postgres" database
-
Committed by Andreas Scherbaum
-
Committed by Karen Huddleston
* Move the ddboost cleanup step to be last, to ensure all files in the directory are removed from the Data Domain server
-
Committed by C.J. Jameson
Follow-on to #2495
-
- 30 May 2017, 1 commit
-
-
Committed by Michael Roth
* Updated SLES targets to remove perl and mapreduce * Removed unneeded --disable-mapreduce
-
- 27 May 2017, 4 commits
-
-
Committed by Larry Hamel
Add targets for gpnetbenchClient and gpnetbenchServer to the top-level Makefile so that gpcheckperf runs with the open source build. Remove the targets for gpnetbenchClient and gpnetbenchServer from the gpAux Makefile. Add an additional debugging print statement to gpcheckperf. Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Andreas Scherbaum
* Add "libpq" to the list of common APIs; libpq is probably the most common API for PostgreSQL.
-
Committed by Andreas Scherbaum
* Update documentation to reflect new test target: "make installcheck-good" -> "make installcheck-world"
-
Committed by Andreas Scherbaum
Provide an alternative answer for the gp_metadata test when GPORCA is not compiled in
-
- 26 May 2017, 3 commits
-
-
Committed by Ashwin Agrawal
Also, add a retry to check that the walsender is gone after walrcv_disconnect(), as it can take some time to detect the connection drop.
-
Committed by dyozie
-
Committed by mkiyama
-
- 25 May 2017, 8 commits
-
-
Committed by Jimmy Yih
For the subselect_gp test, we were removing the distribution policy of a table to see whether it would do a gather motion or not. Since it's technically a corrupted table, we should delete it after we're done with it. We also remove a quicklz reference that should not have been there. For the gppc test, it was using the regression database. This made our gpcheckcat call at the end of ICW relatively useless, since all our data would have been deleted when the gppc tests recreated the regression database. For the gpload test, some generated files were previously committed. We should be actively cautious of this and remove such files when we see them.
-
Committed by Venkatesh Raghavan
-
Committed by Venkatesh Raghavan
-
Committed by Venkatesh Raghavan
-
Committed by Daniel Gustafsson
PL/Perl is an optional component, and the main ICW should not use it, as it may not be present. Move the tests that seem useful to the plperl test suite instead, and remove the ones for which we have ample coverage elsewhere.
-
Committed by Daniel Gustafsson
The gpmapreduce application is an optional install included via the --enable-mapreduce configure option. The tests were, however, still in src/test/regress and unconditionally included in the ICW schedule, causing test failures when mapreduce wasn't configured. Move all gpmapreduce tests to co-locate them with the mapreduce code and run them only when configured. Also, add a dependency on Perl for gpmapreduce in autoconf, since it's a required component.
-
Committed by Chris Hajas
This is part of the effort to get all backup/restore tests using the same test suite. Since the Netbackup tests take significantly longer, we only run a subset of the regular test suite. We also tag scenarios to allow parallel runs on separate hosts in CI. This suite will take 1h 40m after parallelization, down from the current 2h 20m.
-
Committed by Bhuvnesh Chaudhary
- Before building the Index object (IMDIndex), we build LogicalIndexes by calling `gpdb::Plgidx(oidRel)`, in which a partitioned table is traversed and index information (such as logicalIndexOid, nColumns, indexKeys, indPred, indExprs, indIsUnique, partCons, defaultLevels) is captured.
- For indexes available on all the partitions, partCons and defaultLevels are NULL/empty.
- Later, in `CTranslatorRelcacheToDXL::PmdindexPartTable`, we use the derived LogicalIndexes information to build the Index object and populate the array holding the levels on which default partitions exist. But since defaultLevels is NIL in this case, pdrgpulDefaultLevels is set to empty, i.e. `default partitions on levels: {}`.
- This causes an issue when building the propagation expression: because of the wrong number of default partitions on levels, we mark the scan as partial and try to construct a test propagation expression instead of a const propagation expression.
- This patch fixes the issue by marking the default partitions on levels for the index equal to the default partitions on levels for the part relation, if the index exists on all the parts.
Signed-off-by: Jemish Patel <jpatel@pivotal.io>
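The fix in the last bullet can be sketched as follows; this is a hypothetical, much-simplified Python model of the relcache-translation decision (the real code lives in C++ inside the ORCA translator):

```python
def default_levels_for_index(index_covers_all_parts,
                             index_default_levels,
                             rel_default_levels):
    """If an index exists on every part of a partitioned table, inherit
    the relation's default-partition levels instead of the empty list
    captured for the index, so the scan is not wrongly marked partial."""
    if index_covers_all_parts:
        return list(rel_default_levels)
    return list(index_default_levels)

# Index on all parts: the empty captured list is replaced by the
# relation's default levels, yielding a const propagation expression.
assert default_levels_for_index(True, [], [0]) == [0]
# Index not on all parts: keep the levels captured for the index.
assert default_levels_for_index(False, [1], [0]) == [1]
```

The first case is exactly the bug described above: an empty defaultLevels for an index that covers every part used to make the planner believe there were no default partitions on any level.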
-