- 26 Apr 2018, 7 commits
-
-
Committed by krait007
* Add the 'static' keyword to definitions in some files under src/backend/cdb
* Update the code style of the mutate_plan_fields and mutate_join_fields function definitions
-
Committed by Ashwin Agrawal
Decent coverage for 2PC already exists in isolation2, and that coverage will be enhanced further. The current pg_twophase tests in TINC were written specifically for filerep. Tests that extensive, creating every type of object, are not required for walrep, since walrep relies on the crash recovery logic for replication. Rather than modify these tests to keep them working for walrep, it is better to delete them and write fresh isolation2 tests as required for the interaction between walrep and 2PC.
-
Committed by David Kimura
The flakiness was due to concurrent VACUUMs. If another parallel drop transaction (on any relation) is active, the drop is skipped. This avoids an upgrade deadlock, as the other drop transaction might be on the same relation. We added test coverage for the scenario of skipping the drop phase during a concurrent vacuum, and for crash recovery with a file in the drop-pending state.
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Mel Kiyama
* docs: gpbackup/gprestore S3 plugin
  - add the gpbackup/gprestore --plugin-config option
  - add S3 plugin information
  - other minor fixes: add index as an object; support table data and metadata for --jobs > 1
  PR for 5X_STABLE; will be ported to MAIN
* docs: review updates for the gpbackup/gprestore S3 plugin
  - moved S3 links to the Notes section
  - renamed "S3 plugin" to "S3 storage plugin"
  - removed draft comments
* docs: gpbackup S3 plugin - change the binary plugin name to gpbackup_s3_plugin
* docs: S3 plugin - fix typo
-
Committed by Lisa Owen
-
Committed by Chuck Litzell
-
Committed by Lisa Owen
* docs - SQL reference page updates for the resgroup memory_auditor
* edits from engineering review
* some of the edits requested by David
* use plural where appropriate
-
- 25 Apr 2018, 6 commits
-
-
Committed by Bhuvnesh Chaudhary
Fix qual_is_pushdown_safe_set_operation to correctly resolve the qual's vars and identify whether there are any window references in the top level of the set operation's left or right subqueries. Before commit b8002a9, instead of starting with the RTE of the level where the qual is attached, we started scanning the RTEs of the subqueries of the left and right args of the setop to identify the qual. Because of this the varno didn't match the corresponding RTE, so the quals couldn't be resolved to a winref and were incorrectly pushed down. This caused the planner to return an error during execution.
-
Committed by Ashwin Agrawal
Remove the extra WARNING that only gets generated on macOS.
-
Committed by David Kimura
Commit 90ce5138 changed the logic for calculating the grace period. Previously the variable marked_pid_zero_at_time was set to zero every time the PID was set to a valid value. With the changed logic the grace period was calculated even when the PID was not zero (walsender state = startup). This resulted in FailedAssertion(!(walsnd->marked_pid_zero_at_time)). Now, instead of resetting marked_pid_zero_at_time when the PID is set to a valid value, we reset it whenever the walsender state changes to catchup or streaming.
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
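The reset rule described above can be sketched as follows. This is a hypothetical Python rendering for illustration only (the real code is C inside the walsender/FTS machinery); the class and method names merely mirror the commit message.

```python
# Hypothetical sketch of the walsender grace-period bookkeeping: clear the
# marker only when the walsender transitions to catchup or streaming,
# not whenever the PID becomes valid.

STARTUP, CATCHUP, STREAMING = "startup", "catchup", "streaming"

class WalSenderSlot:
    def __init__(self):
        self.pid = 0
        self.state = STARTUP
        # Timestamp recorded when the walsender PID went away; the FTS
        # grace period is measured from this point.
        self.marked_pid_zero_at_time = 1000  # some prior timestamp

    def set_pid(self, pid):
        # Old (buggy) behavior also did: self.marked_pid_zero_at_time = 0
        # here, even though the state could still be 'startup'.
        self.pid = pid

    def set_state(self, state):
        self.state = state
        # New behavior: reset the marker only once the walsender is
        # actually replicating.
        if state in (CATCHUP, STREAMING):
            self.marked_pid_zero_at_time = 0

slot = WalSenderSlot()
slot.set_pid(4242)
# PID is valid but state is still 'startup': the marker survives, so the
# grace period can still be computed without tripping the assertion.
print(slot.marked_pid_zero_at_time)  # -> 1000
slot.set_state(CATCHUP)
print(slot.marked_pid_zero_at_time)  # -> 0
```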
-
Committed by David Kimura
When the pid is not 0 and the state is WALSNDSTATE_STARTUP, the mirror gets reported as down without consulting the grace period. Hence, refactor the logic to check the grace period any time before replying to FTS that the mirror is down. This was one of the cases reported in CI, possibly causing the walrep_1 job to be flaky.
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
Fix qual_is_pushdown_safe_set_operation to correctly resolve the qual's vars and identify whether there are any window references in the top level of the set operation's left or right subqueries. Before commit b8002a9, instead of starting with the RTE of the level where the qual is attached, we started scanning the RTEs of the subqueries of the left and right args of the setop to identify the qual. Because of this the varno didn't match the corresponding RTE, so the quals couldn't be resolved to a winref and were incorrectly pushed down. This caused the planner to return an error during execution.
-
- 24 Apr 2018, 7 commits
-
-
Committed by Ning Yu
- resgroup: dump the cgroup memory limit with a _granted suffix
- dump cgroup memory usage as `used`
This makes the dump message consistent between 'vmtracker' and 'cgroup' resgroups.
-
Committed by Adam Lee
This makes no sense, and will fail while creating the schema, whose name starts with the reserved prefix "pg". The current testing framework cannot hold an extra connection for testing this.
-
Committed by David Yozie
Adding a reference topic for maintenance_work_mem, which is mentioned elsewhere in the docs and in recent PostgreSQL 8.4 merges (#4422)
-
Committed by Venkatesh Raghavan
Three-stage aggregation is an optimization to parallelize DISTINCT aggregates. Except for trivial cases (e.g. there is only one Aggref, or every Aggref has exactly the same aggfilter), it seems impossible to apply this class of optimizations to a query with SELECT aggfn(DISTINCT a) FILTER (WHERE ...).
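The parallelization being disabled here can be illustrated with a toy model (hypothetical Python, not the planner's actual data structures): deduplicate the DISTINCT argument per segment, merge the partial sets, then aggregate once at the top. A per-Aggref FILTER would have to be applied before the first stage, so with different filters per Aggref a single shared dedup stream can no longer serve all aggregates.

```python
# Toy three-stage model of count(DISTINCT a) across segments.

segments = [[1, 2, 2, 3], [2, 3, 3, 4], [4, 4, 5]]

# Stage 1: per-segment dedup of the DISTINCT argument.
partials = [set(rows) for rows in segments]

# Stage 2: merge the deduplicated partial sets.
merged = set().union(*partials)

# Stage 3: final aggregate on the merged distinct values.
count_distinct = len(merged)
print(count_distinct)  # -> 5
```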
-
Committed by Ashwin Agrawal
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Ashwin Agrawal
In concurrent scenarios, globalxmin can be lower than the distributed oldest xmin. This can happen because ProcArrayLock is released before calling DistributedLog_AdvanceOldestXmin() in GetSnapshotData(). Earlier this case raised an ERROR. Now it is treated as safe to return the distributed oldest xmin even if it is higher than the current process's globalXmin, as some other process has already consulted the distributed log to bump the distributed oldestXmin. The condition which must hold is that this process's globalXmin remains lower than or equal to TransactionXmin.
Co-authored-by: David Kimura <dkimura@pivotal.io>
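The relaxed check can be sketched like this (hypothetical Python with invented function names; the real logic lives in C in the distributed log code): accept a distributed oldest xmin that overtook our globalXmin, as long as the stated invariant on TransactionXmin holds.

```python
# Sketch of the relaxed xmin check described above.

def advance_oldest_xmin(global_xmin, transaction_xmin, distributed_oldest_xmin):
    # Invariant from the commit message: this process's globalXmin must
    # remain lower than or equal to TransactionXmin.
    assert global_xmin <= transaction_xmin
    if distributed_oldest_xmin > global_xmin:
        # Previously an ERROR. Now: another process already consulted the
        # distributed log and bumped the value, so it is safe to use it.
        return distributed_oldest_xmin
    return global_xmin

print(advance_oldest_xmin(100, 120, 110))  # -> 110 (overtaken: accepted)
print(advance_oldest_xmin(100, 120, 90))   # -> 100 (normal case)
```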
-
Committed by Xin Zhang
Only output verbose Debian build logging upon failure. [#156761964]
Co-authored-by: Larry Hamel <lhamel@pivotal.io>
Co-authored-by: Xin Zhang <xzhang@pivotal.io>
-
- 23 Apr 2018, 1 commit
-
-
Committed by Peifeng Qiu
-
- 21 Apr 2018, 1 commit
-
-
Committed by mkiyama
See PR 4619.
-
- 19 Apr 2018, 10 commits
-
-
Committed by David Kimura
The dispatcher uses DISPATCH_WAIT_TIMEOUT_MSEC (current value 2000) as its poll timeout. It used to wait for 30 poll timeouts before checking segment status, and then initiated an FTS probe before doing the check. As a result it took about a minute for a query to fail in case of segment failures. This commit updates the dispatcher to check segment status on every poll timeout. It also leverages the FTS version to decide whether to check segments: it avoids performing an FTS probe itself and instead relies on FTS being called at regular intervals and providing cached results. With this change the test time for twophase_tolerance_with_mirror_promotion was cut by about 2 minutes.
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
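The loop structure described above can be sketched as follows. This is a hypothetical Python skeleton (the actual dispatcher is C); `poll_once`, `check_segments`, and `get_fts_version` are invented stand-ins for the real calls.

```python
# Sketch: consult segment status on every poll timeout, but only re-check
# segments when the cached FTS version has moved, instead of forcing a
# fresh FTS probe from the dispatcher itself.

DISPATCH_WAIT_TIMEOUT_MSEC = 2000

def dispatch_wait(poll_once, check_segments, get_fts_version):
    last_fts_version = get_fts_version()
    while True:
        ready = poll_once(DISPATCH_WAIT_TIMEOUT_MSEC)
        if ready:
            return "dispatched"
        # New behavior: every timeout is an opportunity to notice failure...
        version = get_fts_version()
        if version != last_fts_version:
            # ...but only when FTS (running on its own cadence) has
            # published a newer cached result.
            last_fts_version = version
            if not check_segments():
                return "segment down"

# Fake callables: two timeouts, FTS version bumps on the second one.
events = iter([False, False, False])
versions = iter([1, 1, 2])
result = dispatch_wait(lambda ms: next(events),
                       lambda: False,
                       lambda: next(versions))
print(result)  # -> segment down
```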
-
Committed by David Kimura
The goal of these tests is to validate that existing transactions on the primary are not hung when the mirror gets promoted. To validate this, mirror promotion is triggered at the following two-phase commit points:
1) if the transaction hasn't prepared, it should be aborted
2) if the transaction is already prepared, it should complete the commit on the mirror
3) if the transaction has committed on the primary but not been acknowledged to the master, it should complete the commit on the mirror
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Ashwin Agrawal
Sometimes tests need to use this fault to suspend and then stop the database. If this fault is inside HOLD_INTERRUPTS (which is where it was before this change), CHECK_FOR_INTERRUPTS doesn't do anything. Hence stopping the segment takes a very long time, as kill -9 needs to be enforced by gpstop after the timeout.
Co-authored-by: David Kimura <dkimura@pivotal.io>
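Why a fault inside a holdoff section cannot be cancelled can be modeled with a toy version of the interrupt machinery (hypothetical Python; the real HOLD_INTERRUPTS/CHECK_FOR_INTERRUPTS are C macros in PostgreSQL):

```python
# Toy model: CHECK_FOR_INTERRUPTS is a no-op while the holdoff counter is
# non-zero, so a fault suspended inside HOLD_INTERRUPTS ignores the stop
# request and gpstop has to fall back to kill -9 after its timeout.

interrupt_holdoff_count = 0
interrupt_pending = True   # e.g. a stop/cancel request has arrived
serviced = []

def hold_interrupts():
    global interrupt_holdoff_count
    interrupt_holdoff_count += 1

def resume_interrupts():
    global interrupt_holdoff_count
    interrupt_holdoff_count -= 1

def check_for_interrupts():
    if interrupt_pending and interrupt_holdoff_count == 0:
        serviced.append("cancelled")

hold_interrupts()
check_for_interrupts()   # swallowed: holdoff is active
resume_interrupts()
check_for_interrupts()   # fault moved outside the section: serviced now
print(len(serviced))     # -> 1
```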
-
Committed by Ning Yu
In ResGroupDropFinish() an uninitialized memory address can be accessed for a couple of reasons:
1. the group pointer is not initialized on segments;
2. the hash table node pointed to by group is recycled in removeGroup().
This invalid access can cause a crash on segments. Also move some global vars to resgroup.c; they were put in resgroup-ops-linux.c, which is only compiled and linked on Linux, so on other OSes like macOS the vars could not be found.
-
Committed by Ning Yu
binary_swap_gpdb was an input of all resgroup jobs, but as it's not `get` by the resgroup SLES job, an error was triggered: missing inputs: binary_swap_gpdb. Fixed by marking it as optional.
-
Committed by Jialun
User queries can use global shared memory across resource groups when it is available (same as PR 4843, just with the test cases made stable) (#4866)
1) Global shared memory will be used if the query has run out of the group shared memory.
2) The limit of memory_spill_ratio changes to [0, INT_MAX]; because global shared memory can be allocated, a 100% limit no longer makes sense.
3) Use atomic compare-and-swap operations instead of a lock to get high performance.
4) Modify the test cases according to the new rules.
-
Committed by Ning Yu
The cgroup mount point is detected during startup, but due to a bug the mount point was used even if cgroup is not mounted at all. Fixed by correcting the checking logic.
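A minimal sketch of what correct detection looks like, assuming a cgroup-v1 style mount table (hypothetical Python; the function name and parsing details are illustrative, not gpdb's actual code): return a mount point only when a cgroup entry with the memory controller actually exists.

```python
# Sketch: detect the cgroup memory controller mount point from a
# /proc/mounts-style text. The fix amounts to: no matching mount entry
# means no mount point, rather than silently using a stale default path.

def detect_cgroup_memory_mount(proc_mounts_text):
    for line in proc_mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue
        device, mountpoint, fstype, options = fields[:4]
        if fstype == "cgroup" and "memory" in options.split(","):
            return mountpoint
    return None

mounts = """\
proc /proc proc rw,nosuid 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,memory 0 0
"""
print(detect_cgroup_memory_mount(mounts))  # -> /sys/fs/cgroup/memory
print(detect_cgroup_memory_mount("proc /proc proc rw 0 0"))  # -> None
```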
-
Committed by Ning Yu
Memory auditor is a new feature introduced to allow external components (e.g. pl/container) to be managed by resource groups. This feature requires a new gpdb dir to be created in the cgroup memory controller; however on the 5X branch, unless users created this new dir manually, the upgrade from a previous version would fail. In this commit we provide backward compatibility by checking the release version:
- on the 6X and master branches the memory auditor feature is always enabled, so the new gpdb dir is mandatory;
- on the 5X branch the memory auditor feature can be enabled only if the new gpdb dir is created with proper permissions; when it's disabled, `CREATE RESOURCE GROUP WITH (memory_auditor='cgroup')` will fail with guidance on how to enable it.
Binary swap tests are also provided to verify backward compatibility in future releases. As cgroup needs to be configured to enable resgroup, we split the resgroup binary swap tests into two parts:
- resqueue-mode-only tests, which can be triggered in the icw_gporca_centos6 pipeline job after the ICW tests; these have no requirements on cgroup;
- complete resqueue & resgroup mode tests, which can be triggered in the mpp_resource_group_centos{6,7} pipeline jobs after the resgroup tests; these need cgroup to be properly configured.
-
Committed by Bhuvnesh Chaudhary
This commit includes the recently added stainherit column in the minirepro output.
-
Committed by Omer Arap
A recent postgres merge introduced a new column, `stainherit`, in the `pg_statistic` table. When `gpsd` dumps pg_statistic, not only is a column missing, but all the other columns are shifted and saved at the wrong column index. This causes a failure when reloading the gpsd output. This commit fixes the issue and adds the missing column.
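The failure mode is easy to reproduce in miniature (hypothetical Python with a trimmed-down column list; the real tables have many more columns): dumping values positionally against a stale column list makes every value after the insertion point land one slot to the left.

```python
# Toy illustration of the column-shift bug and the fix of emitting every
# column of the current catalog.

new_columns = ["starelid", "staattnum", "stainherit", "stanullfrac"]
old_columns = ["starelid", "staattnum", "stanullfrac"]  # missing stainherit

row = {"starelid": 16384, "staattnum": 1, "stainherit": False,
       "stanullfrac": 0.25}

# Buggy: dump positionally against the stale column list -> shift.
shifted = dict(zip(new_columns, [row[c] for c in old_columns]))
print(shifted["stainherit"])  # -> 0.25 (wrong: this is stanullfrac's value)

# Fixed: include the new column so positions line up again.
fixed = {c: row[c] for c in new_columns}
print(fixed["stainherit"])    # -> False
```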
-
- 18 Apr 2018, 1 commit
-
-
Committed by Tom Lane
We are developing on a machine with flex 2.6.4 and we couldn't build. This was fixed in 72b1e3a2 upstream (9.6 at the time) in 2016, but wasn't backported to 9.1 or older. Original commit message:
> Now that we know about the %top{} trick, we can revert to building flex
> lexers as separate .o files. This is worth doing for a couple of reasons
> besides sheer cleanliness. We can narrow the scope of the -Wno-error flag
> that's forced on scan.c. Also, since these grammar and lexer files are
> so large, splitting them into separate build targets should have some
> advantages in build speed, particularly in parallel or ccache'd builds.
>
> We have quite a few other .l files that could be changed likewise, but the
> above arguments don't apply to them, so the benefit of fixing them seems
> pretty minimal. Leave the rest for some other day.
(cherry picked from commit 72b1e3a2)
Fixes greenplum-db/gpdb#4863
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
Co-authored-by: Asim R P <apraveen@pivotal.io>
-
- 17 Apr 2018, 7 commits
-
-
Committed by Jialun
1) Global shared memory will be used if the query has run out of the group shared memory.
2) The limit of memory_spill_ratio changes to [0, INT_MAX]; because global shared memory can be allocated, a 100% limit no longer makes sense.
3) Use atomic compare-and-swap operations instead of a lock to get high performance.
4) Modify the test cases according to the new rules.
-
Committed by Pengzhou Tang
The 'distribution_policy' test does constraint checks on randomly-distributed tables; however, attrnums = null in gp_distribution_policy is no longer sufficient to identify a randomly distributed table after we introduced the new replicated distribution policy, so add more filters to make things right.
-
Committed by David Kimura
Large objects are currently not supported in Greenplum. Rather than deceive the user with a non-functional large object API, we disable them for now. We disable the large object tests in the privileges regress test by using ignore blocks instead of commenting them out or deleting them, to reduce merge conflicts in future postgres merges.
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Sambitesh Dash
-
Committed by Mel Kiyama
-
Committed by Mel Kiyama
* docs: Add GUC verify_gpfdists_cert
  - added the GUC definition to the list of GUCs
  - added links to the GUC from appropriate topics
  PR for 5X_STABLE; will be ported to MAIN
* docs: verify_gpfdists_cert GUC updates
  - add SSL exceptions that are ignored
  - other minor edits
* docs: GUC verify_gpfdists_cert - fix typos