- 09 Dec, 2016: 3 commits
-
-
Committed by Shreedhar Hardikar
Instead of calling ereport(ERROR), take advantage of the debug assistance already built into GPDB's Assert() macro. Signed-off-by: Nikos Armenatzoglou <nikos.armenatzoglou@gmail.com>
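A minimal sketch of the idea, using standard C `assert()` rather than GPDB's actual Assert() machinery; the function and its names are hypothetical. A condition that can only be false because of a programming bug is asserted, so a debug build fails loudly with file/line information, while a release build (compiled with `-DNDEBUG`) pays no cost:

```c
#include <assert.h>

/* Hypothetical example: an out-of-range index here is a caller bug,
 * not a runtime condition, so assert it instead of reporting an error.
 * Debug builds abort with file/line info; -DNDEBUG builds skip the check. */
static int
lookup_slot(const int *slots, int nslots, int idx)
{
    /* Before: if (idx < 0 || idx >= nslots) report_error("bad index"); */
    assert(idx >= 0 && idx < nslots);
    return slots[idx];
}
```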
-
Committed by C.J. Jameson
Signed-off-by: David Sharp <dsharp@pivotal.io>
-
-
- 08 Dec, 2016: 19 commits
-
-
Committed by Heikki Linnakangas
By trial and error, these seem not to be required.
-
Committed by Daniel Gustafsson
This patch started out as a small refactor of executable discovery due to the function having multiple leaks, but turned into a revamp of the way the command lines are constructed (since that's where the executable paths were used). The changes introduced in this diff are:
- Build the command lines in PQExpBuffers instead of fixed-size buffers believed to be large enough
- Move the command-line functions from cdb_dump_util into cdb_restore_agent, as that's the only callsite; there is no need to link that code into other binaries. Also refactor the functions significantly, as there were many functions doing essentially the same thing.
- Remove the code to find executables and just use find_other_exec(), since all binaries are in the same folder
- Change shellEscape() to not always reset the PQExpBuffer; also use the same code as in src/backend/cdb/cdbbackup.c, which handles quoting as well
- Remove various small functions that only returned a static string, and variables holding a static string
- Use MAXPGPATH instead of a locally defined macro for the same thing
- Remove the highly uninteresting unit tests for string concatenation
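The first bullet's motivation can be sketched as follows. This is an illustrative analog of libpq's PQExpBuffer, not the actual cdb_restore_agent code; all names are made up. The point is that an append into a growable buffer can never overflow, unlike `strcat` into a fixed array "believed to be large enough":

```c
#include <stdlib.h>
#include <string.h>

/* Minimal growable string buffer in the spirit of PQExpBuffer.
 * Appends reallocate on demand, so no command line can overflow it. */
typedef struct
{
    char   *data;
    size_t  len;
    size_t  cap;
} ExpBuffer;

static void
buf_init(ExpBuffer *b)
{
    b->cap = 64;
    b->len = 0;
    b->data = malloc(b->cap);
    b->data[0] = '\0';
}

static void
buf_append(ExpBuffer *b, const char *s)
{
    size_t n = strlen(s);

    /* Double the capacity until the new string (plus NUL) fits. */
    while (b->len + n + 1 > b->cap)
    {
        b->cap *= 2;
        b->data = realloc(b->data, b->cap);
    }
    memcpy(b->data + b->len, s, n + 1);
    b->len += n;
}
```

A command line is then assembled piece by piece with `buf_append`, and the buffer is freed once the command has been run.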
-
Committed by Daniel Gustafsson
The code is commented out without any commented-out callsites, so remove it. This is what the VCS history is for.
-
Committed by Tom Lane
rather than trying to implement the equivalent logic by hand. The motivation for the original coding appears to have been to check with the effective uid's permissions, not the real uid's; but there is no longer any difference, because we don't run the postmaster setuid (indeed, main.c enforces that they're the same). Using access() means we will get it right in situations the original coding failed to handle, such as ACL-based permissions. Besides, it's a lot shorter, cleaner, and more thread-safe. Per bug #5275 from James Bellinger. GPDB: Back-ported to Greenplum in order to provide thread-safe discovery of executables.
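A sketch of the `access()`-based check; the helper name is made up, and the real change lives in the backend's executable-discovery code. `access(2)` asks the kernel directly whether the file may be executed, so ACLs and other non-traditional permission schemes are handled without decoding mode bits by hand:

```c
#include <unistd.h>

/* Illustrative helper: defer the "can I execute this?" question to the
 * kernel via access(2) instead of stat()ing the file and inspecting
 * st_mode ourselves.  Returns 1 if executable, 0 otherwise. */
static int
is_executable(const char *path)
{
    return access(path, X_OK) == 0;
}
```

Note that `access()` checks against the real uid, which matches the postmaster's situation since real and effective uids are enforced to be equal.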
-
Committed by Adam Lee
To match behavior changes of this commit: commit 291f737a, Author: Andreas Scherbaum <ascherbaum@pivotal.io>, Date: Sat May 7 00:08:39 2016 +0200, "Change the default value of standard_conforming_strings to on. This change should be publicized to driver maintainers at once and release-noted as an incompatibility with previous releases." (cherry picked from commit 0839f312) Changes to doc/src/sgml/{config,syntax}.sgml were ignored as they are not present in gpdb. Backslash '\' is no longer recognized as an escape by default. gpload.py used quote("\\") as the escape string, which translates into two backslashes; if standard_conforming_strings is on, the scanner recognizes that as two literal backslashes rather than a single one. We change the escape string to a single backslash to avoid this problem, which fixes the gpload-related CCLE cases. Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
-
Committed by xiong-gang
If the ack packet in doSendStopMessageUDPIFC() is lost, the QE will keep sending status packets, and the QD will ack them in handleDataPacket(). We need to make sure that 'extraSeq' is equal to 'seq' in the ack packet, so that the QE can update the capacity; otherwise the QE will hang forever.
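A simplified model of the handshake condition described above, not the actual UDPIFC code; the struct and names here are illustrative. The QE must keep retransmitting until an ack arrives whose `extraSeq` matches the `seq` it sent; a stale `extraSeq` must not terminate the loop, or the QE never learns the updated capacity:

```c
#include <stdbool.h>

/* Illustrative model of an interconnect ack packet. */
typedef struct
{
    int seq;        /* sequence number being acknowledged */
    int extraSeq;   /* extra sequence info carried by the ack */
} AckPacket;

/* The stop-message loop may only terminate when the ack's extraSeq
 * matches the sequence number the QE sent; otherwise, retransmit. */
static bool
ack_completes_stop(const AckPacket *ack, int sent_seq)
{
    return ack->extraSeq == sent_seq;
}
```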
-
Committed by Ashwin Agrawal
-
Committed by Pengzhou Tang
-
Committed by Ashwin Agrawal
It seems the bugbuster tests used to bounce the database for the perfmon tests, which are not run now, so stop spending time on that.
-
Committed by Nikos Armenatzoglou
-
Committed by Larry Hamel
- set the MASTER_DATA_DIRECTORY env var during each test
- port of 7cfb44a from the tinc repo
-
Committed by Larry Hamel
- gptransfer validation fails with an unhelpful error message when the source map file is empty.
- Moves the host map validation earlier, to catch this problem before we run other steps that assume a nonempty file.
-
Committed by Chumki Roy
-
Committed by Chumki Roy
- Check that multiple source partitions that have the same destination table are all from the same parent partition
- Check whether destination table(s) are non-partition tables
-
Committed by Karen Huddleston
- Added a partition-transfer-non-partition-target flag
- Added behave + unit tests around this functionality
- Fixed an error with the final rowcount validation (the standard assumption that destination tables are empty does not hold in this case)
- Whitespace + code cleanup
-
Committed by Larry Hamel
Use the superclass Exception.__init__ to store the message in the standard Pythonic way, in addition to any local message-storing scheme.
-
Committed by Asim R P
This reverts commit 506032ac. VACUUM FULL on AO/CO is fixed to not give up during the drop phase, so the tests need not worry about segfiles in the AWAITING_DROP state. Note that the tests should NOT be run concurrently with any other tests that use serializable transactions. VACUUM FULL on AO/CO continues to avoid deleting segfiles in the AWAITING_DROP state in the presence of concurrent serializable transactions that may need the segfiles.
-
Committed by Asim R P
VACUUM FULL acquires an AccessExclusive lock upfront, so there is no possibility of deadlock due to a concurrent lazy vacuum that is also performing the drop phase. Upon migration to ICG, the UAO catalog tests started running concurrently and this bug was uncovered.
-
Committed by C.J. Jameson
This raises the question of the other additions to the README from the commit: https://github.com/greenplum-db/gpdb/commit/2101323239f7b01c13d7d663682d78261b40006c ...things were added where the README asked that things not be added.
-
- 07 Dec, 2016: 7 commits
-
-
Committed by Kenan Yao
Reset the mppIsWriter field of PGPROC to false in the hook function of the GUC gp_session_role if the new value is GP_ROLE_UTILITY (#1421). When someone is connecting to a segment in utility mode, and there happens to exist a connection from the QD to this segment with the same session id, a reader QE on this segment may treat the utility backend as its corresponding writer QE. This may lead to confusion on the shared snapshot, and the reader QE of an extended query may report an error saying it cannot find the snapshot temp file. We avoid this by excluding utility backends from the writer list. Signed-off-by: Pengzhou Tang <ptang@pivotal.io>
-
Committed by Kenan Yao
-
Committed by Asim R P
The fix removes the assumption that no other partition table exists before the test starts.
-
Committed by Asim R P
Some of the TINC tests under the "limit_removal" heading were testing subtransaction functionality, but not limit removal in particular. Move them to distributed_transactions.sql. The "limit" in "limit removal" refers to the in-progress subtransactions array SharedSnapshotSlot->subxids: if the limit of MaxGpSavePoints subtransactions is reached, the array is spilled over to disk. More elaborate tests for this functionality exist in TINC (mpp/gpdb/tests/storage/sub_transaction_limit_removal).
-
Committed by Asim R P
The tests were counting tuples from an AO segfile in the AWAITING_DROP state. This file should be excluded when counting visible tuples/eof, because an AWAITING_DROP segfile is only used by select transactions that run concurrently with a VACUUM transaction on the same AO table.
-
Committed by foyzur
Previously we converted the accounts array to a tree before logging those accounts via a tree walk. This is unnecessary for simple logging into pg_log. Moreover, such a tree conversion is not feasible when a peer process hits OOM: allocating memory merely for logging would force the current process to hit OOM as well. Therefore, we log the memory usage without allocating any further memory.
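A sketch of the allocation-free logging idea; the function and field names are illustrative, not the actual memory-accounting API. Instead of building a heap-allocated tree and walking it, each account line is formatted into a fixed-size stack buffer and written out directly:

```c
#include <stdio.h>

/* Illustrative: emit one memory-account line without touching the heap.
 * The line is formatted into stack storage with snprintf(), which also
 * truncates safely if the line would exceed the buffer.
 * Returns the number of characters the full line would need. */
static int
log_account(FILE *out, const char *name, long bytes)
{
    char line[128];     /* stack storage: no malloc, safe under OOM */
    int  n = snprintf(line, sizeof(line), "account %s: %ld bytes\n",
                      name, bytes);

    fputs(line, out);
    return n;
}
```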
-
- 06 Dec, 2016: 9 commits
-
-
Committed by Daniel Gustafsson
In 88f0623e a cache of recently used relfilenode Oids was added, and the cache is interrogated in the GetNewRelFileNode() logic. If the first Oid in the loop is found in the cache, however, the loop will only retry if the compiler happens to have initialized the loop control variable to true. Explicitly initialize it to true, to handle this case for compilers that initialize it to false, such as clang.
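A sketch of the bug pattern and its fix; all names here are hypothetical, not the actual GetNewRelFileNode() code. An uninitialized automatic variable has an indeterminate value, so whether the loop body ever runs on the first collision depends on whatever happens to be on the stack. Initializing the flag explicitly makes the retry unconditional:

```c
#include <stdbool.h>

/* Illustrative: pick a candidate Oid not present in a small cache of
 * recently used Oids.  The control flag MUST be explicitly initialized;
 * "bool collides;" left uninitialized is the bug being fixed. */
static int
new_oid_skipping_cache(const int *cache, int ncache, int candidate)
{
    bool collides = true;   /* explicit init: always check at least once */

    while (collides)
    {
        collides = false;
        for (int i = 0; i < ncache; i++)
        {
            if (cache[i] == candidate)
            {
                collides = true;
                candidate++;    /* collision: try the next Oid */
                break;
            }
        }
    }
    return candidate;
}
```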
-
Committed by Kenan Yao
Signed-off-by: Pengzhou Tang <ptang@pivotal.io>
-
Committed by Adam Lee
`installcheck-good` stopped being a top-level make target after greenplum-db/gpdb@644b2a9 [ci skip]
-
Committed by Peifeng Qiu
-
Committed by Ashwin Agrawal
The test panics and needs to perform crash recovery on the segments. If a lot of DDL operations were performed by previous tests within the checkpoint interval, a lot of xlog recovery needs to happen for this test. Hence, add a checkpoint upfront to eliminate any extra load for this test.
-
Committed by Chris Hajas
When performing a dump, the postdata file contains DROP statements that will report an error if the relation does not exist. This is misleading, since the restore will say that errors occurred during the restore even though the DB is still in the expected state. For most cases we could simply use the `DROP ... IF EXISTS` syntax; however, the `DROP CONSTRAINT IF EXISTS` syntax isn't available until Postgres 9.0, so we create a temporary procedure to perform this action.
- Replaces DROP statements with DROP IF EXISTS for the supported postdata reltypes
- Temporarily adds and uses a function that mimics the behavior of DROP CONSTRAINT IF EXISTS, since this feature is not available in Postgres 8.2
Authors: Chris Hajas, Larry Hamel, Stephen Wu
-
Committed by Xin Zhang
When we have a correlated subquery with EXISTS and a CTE, the planner produces a wrong plan:
```
pivotal=# explain with t as (select 1) select * from foo where exists (select * from bar where foo.a = 'a' and foo.b = 'b');
                                             QUERY PLAN
----------------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice3; segments: 3)  (cost=0.01..1522.03 rows=5055210000 width=16)
   ->  Nested Loop  (cost=0.01..1522.03 rows=1685070000 width=16)
         ->  Broadcast Motion 1:3  (slice2; segments: 1)  (cost=0.01..0.03 rows=1 width=0)
               ->  Limit  (cost=0.01..0.01 rows=1 width=0)
                     ->  Gather Motion 3:1  (slice1; segments: 3)  (cost=0.01..0.03 rows=1 width=0)
                           ->  Limit  (cost=0.01..0.01 rows=1 width=0)
                                 ->  Result  (cost=0.01..811.00 rows=23700 width=0)
                                       One-Time Filter: $0 = 'a'::bpchar AND $1 = 'b'::bpchar
                                       ->  Seq Scan on bar  (cost=0.01..811.00 rows=23700 width=0)
         ->  Seq Scan on foo  (cost=0.00..811.00 rows=23700 width=16)
 Optimizer status: legacy query optimizer
(11 rows)
```
This fails during execution because $0 is referenced across slices.

Root cause: the planner produces a plan with `$0` (a `param`) but without a `subplan`. The `param` is created by `replace_outer_var()` when the planner detects a query referring to relations from its outer/parent query. Such a `var` is created when removing the `sublink` in the `convert_EXISTS_to_join()` function. In that function, when handling the `EXISTS` query, we convert the EXISTS sublink to a subquery RTE and expect it to get pulled up later by `pull_up_subquery()`. However, the subquery cannot be pulled up by `pull_up_subquery()`, since it is not a simple subquery (`is_simple_subquery()` returns false because of the CTE in this case). The `sublink` has already been removed, so the planner cannot fall back to producing a `subplan` (which would be a valid option); the `var` is left behind as an outer reference, then converted to a `param`, which blows up during query execution.

There is a mismatch between the conditions in `convert_EXISTS_to_join()` and `pull_up_subquery()` about whether this subquery can be pulled up. The fix is to reuse the `is_simple_subquery()` check in `convert_EXISTS_to_join()`, so that it is consistent with `pull_up_subquery()` on whether the subquery can be pulled up or not. The correct plan after the fix is:
```
pivotal=# explain with t as (select 1) select * from foo where exists (select * from bar where foo.a = 'a' and foo.b = 'b');
                                                      QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice2; segments: 3)  (cost=0.00..1977.50 rows=35550 width=16)
   ->  Seq Scan on foo  (cost=0.00..1977.50 rows=11850 width=16)
         Filter: (subplan)
         SubPlan 1
           ->  Result  (cost=0.01..811.00 rows=23700 width=16)
                 One-Time Filter: $0 = 'a'::bpchar AND $1 = 'b'::bpchar
                 ->  Result  (cost=882.11..1593.11 rows=23700 width=16)
                       ->  Materialize  (cost=882.11..1593.11 rows=23700 width=16)
                             ->  Broadcast Motion 3:3  (slice1; segments: 3)  (cost=0.01..811.00 rows=23700 width=16)
                                   ->  Seq Scan on bar  (cost=0.01..811.00 rows=23700 width=16)
 Optimizer status: legacy query optimizer
(11 rows)
```
Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
-
Committed by Heikki Linnakangas
* Sync plperl with PostgreSQL REL9_1_STABLE. We had mostly backported plperl from 9.1, but not up to the tip of the stable branch, and there were some cosmetic and other differences that were not really necessary. It is simpler if the GPDB version is as identical as possible to a particular upstream version, not a mishmash of different versions. Copy over everything from upstream REL9_1_STABLE, and tweak things to make it compile and pass the regression tests. I started hacking on this because the regression tests were not passing. Now they pass.
* Add an "installcheck-world" makefile target, remove other top-level targets. The master plan is that we will have straightforward "world" makefile targets, like newer versions of PostgreSQL have, that build, install, and test everything in the source tree. For example, "make world" builds everything and the kitchen sink, and "make installcheck-world" runs every test suite against an existing installation. This patch is the first step in that direction. It creates a top-level "installcheck-world" target that runs some of the test suites we have. It does not run everything, because the tests for some components are either broken or require additional setup to run. The point of this first step is to establish the precedent, then clean up and add everything else we care about to this new target. For example, this does not run the gphdfs tests, gpmapreduce tests, or orafce tests yet. Also, this only adds the "installcheck-world" target, not "world", "install-world", nor "check-world". Those are TODOs. Remove other, defunct, top-level targets to reduce confusion. Move the unittest-check top-level target from gpAux/Makefile to the top-level Makefile, to give it more visibility. Adjust the Concourse pipeline scripts to run the new "installcheck-world" target, replacing the separate "installcheck-good" and "installcheck-bugbuster" tasks. In the future, whenever a new test suite is added, it can be added to the installcheck-world target, and the pipeline will pick it up immediately, without having to modify the yaml files. An immediate benefit of this is that we now run the isolationtester test suite, and the PL tests, as part of the pipeline.
* Fixed test cases
-
- 05 Dec, 2016: 2 commits
-
-
Committed by Dave Cramer
...to the OS; if you had versions built using vagrant that were not compatible with your OS, this would fail. Throttle the vagrants to 50%. Build gpdb on debian as the vagrant user; the root user will fail to change the uid/gid on the NFS mount.
-
Committed by Adam Lee
Signed-off-by: Haozhou Wang <hawang@pivotal.io> Signed-off-by: Yuan Zhao <yuzhao@pivotal.io> Signed-off-by: Adam Lee <ali@pivotal.io>
-