- 08 May 2018, 8 commits
-
-
Committed by Tingfang Bao
Because the behavior of pg_dump differs between GPDB 4 and GPDB 5, the check_sequence function only works on GPDB 5.
-
Committed by Adam Lee
ExecBRInsertTriggers() uses the per-tuple memory context, which might be reset and pfree()'d, causing a SEGV.
-
Committed by Tingfang Bao
Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Tingfang Bao
When gptransfer is run with the --full option, it should ensure that the specified database does not exist in the destination cluster. Since this isn't guaranteed, we decided to remove this case. Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Lisa Owen
* docs - low memory query config for RQs and RGs
* add missing topic ending tag
-
Committed by Lisa Owen
* docs - add content for resgroup global shared memory
* some edits requested by david
* memory_spill_ratio max back to 100
* incorporate edits requested from simon
-
Committed by Jacob Champion
Instead of making the top-level Makefile run `make install` as part of its test run, make installation a prerequisite for installcheck, just like isolation2. This will simplify the recursion logic.
-
Committed by Lisa Owen
* docs - updates to resgroup "using" page for memory_auditor
* reorg, other suggestions from david
* edits from david
* 2 more edits from david
* too many Greenplum Databases in the sentence
-
- 07 May 2018, 3 commits
-
-
Committed by Tingfang Bao
* Create the sequence that belongs to a table.
* Correct the sequence's next value.
We check the dependent sequences before transferring the table data. If the table contains sequences, gptransfer will pg_dump all the sequence metadata along with the data and import it into the destination cluster. Signed-off-by: Ming LI <mli@apache.org>
-
Committed by Adam Lee
These tables had values inserted without correct distribution, by turning off the GUC gp_enable_segment_copy_checking and using `COPY FROM ON SEGMENT`. Alter them to randomly distributed after testing so they don't break subsequent tests against the databases left by ICW.
-
- 05 May 2018, 1 commit
-
-
Committed by Bhuvnesh Chaudhary
This commit adds the automation required to test given gporca and gpdb branches via a pipeline. This will enable ORCA developers to use this pipeline for their testing rather than forking their own.
-
- 04 May 2018, 3 commits
-
-
Committed by Zhenghua Lyu
Coverity Scan Report:
>>> CID 185437: Incorrect expression (ASSERT_SIDE_EFFECT)
>>> Argument "connected" of Assert() has a side effect because the variable is volatile. The containing function might work differently in a non-debug build.
To fix this, we can simply remove the Assert: if the process reached this point, it must not have thrown any exceptions.
-
Committed by Xin Zhang
Sometimes these are not set. Other times, they can be set incorrectly by the user or a malicious actor. Co-authored-by: Xin Zhang <xzhang@pivotal.io> Co-authored-by: David Sharp <dsharp@pivotal.io>
-
Committed by Shoaib Lari
In addition, we clarified the steps that call gpexpand a second time. It is now clearer that the intention of that step is to cause the database to redistribute the tables. Co-authored-by: Shoaib Lari <slari@pivotal.io> Co-authored-by: Jim Doty <jdoty@pivotal.io>
-
- 03 May 2018, 5 commits
-
-
Committed by Mel Kiyama
* DOCS: clarify GPORCA index support - only B-tree and bitmap indexes are supported. GPORCA ignores indexes created with unsupported indexing methods.
* docs: GPORCA index support review update - move notes to index type parameter.
-
Committed by Zhenghua Lyu
To prevent distributed deadlock, Greenplum DB holds an exclusive table lock for UPDATE and DELETE commands, so concurrent updates to the same table are effectively disabled. We add a backend process that performs global deadlock detection so that we no longer lock the whole table for UPDATE/DELETE, which improves Greenplum DB's concurrency.
The core idea of the algorithm is to divide locks into two types:
- Persistent: the lock can only be released after the transaction is over (abort/commit)
- Non-persistent: all other cases
This PR's implementation adds a persistent flag in the LOCK, set by the following rules:
- Xid locks are always persistent
- Tuple locks are never persistent
- Relation locks are persistent if the relation has been closed with the NoLock parameter, otherwise not
- Other types of locks are not persistent
For more details, please refer to the code and README. There are several known issues to pay attention to:
- This implementation only considers locks that can be shown in the pg_locks view.
- This implementation does not support AO tables; we keep upgrading the locks for AO tables.
- This implementation does not take network waits into account, so we cannot detect the deadlock of GitHub issue #2837.
- SELECT FOR UPDATE still locks the whole table.
Co-authored-by: Zhenghua Lyu <zlv@pivotal.io> Co-authored-by: Ning Yu <nyu@pivotal.io>
-
Committed by Paul Guo
This makes the diff format align with that of the pg_regression tests. Personally I think the pg_regression format is more human-readable.
-
Committed by Ashwin Agrawal
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Ashwin Agrawal
FinishPreparedTransaction() and CommitTransaction() were modified for filerep and persistent tables. That need no longer exists, so revert these functions back to the upstream code, which also helps concurrency.
-
- 02 May 2018, 4 commits
-
-
Committed by Heikki Linnakangas
I'm not sure why it's been disabled. It's not very hard to make it work, so let's do it. It might not be a very common query type, but if you happen to have a query where it helps, it helps a lot. This adds a GUC, gp_enable_minmax_optimization, to enable/disable the optimization. There's no such GUC upstream, but we need at least a flag in PlannerConfig for it, so that we can disable the optimization for correlated subqueries along with some other optimizer tricks. It seems best to also have a GUC for it, for consistency with the other flags in PlannerConfig.
-
Committed by Ashwin Agrawal
In case the promote message is sent twice while the mirror is already undergoing promotion, isRoleMirror can be returned as false. On the mirror, the promote message is ignored in this case, so on the master, FTS shouldn't check what isRoleMirror returns.
-
Committed by Ashwin Agrawal
Previously, when the primary was in crash recovery, the FTS probe failed and hence the primary was marked down. This change provides a recovery progress metric so that FTS can detect progress: we added the last replayed LSN to the error message to determine recovery progress. This allows FTS to distinguish between recovery in progress and a recovery hang or rolling panics. Only when FTS detects that recovery is not making progress does it mark the primary down. For testing, a new fault injector is added to allow simulation of a recovery hang and of recovery in progress. FYI, this reverts the reverted commit 7b7219a4. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io> Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Marbin Tan
In addition to migrating the TINC tests, notable changes were made:
+ Made gpexpand tests for a cluster with mirrors work
+ Migrated tests for duration and end time as behave tests
+ Added gpexpand behave tests to the pipeline template
+ Cleaned up some dead code in environment.py
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io> Co-authored-by: Jim Doty <jdoty@pivotal.io> Co-authored-by: Kevin Yeap <kyeap@pivotal.io> Co-authored-by: Nadeem Ghani <nghani@pivotal.io> Co-authored-by: Shoaib Lari <slari@pivotal.io>
-
- 01 May 2018, 7 commits
-
-
Committed by Heikki Linnakangas
While we're at it, remove the mock tests on SessionState_ShmemSize. IMHO they were always a bit silly: they almost literally duplicated the computation in the SessionState_ShmemSize() function itself, and now even more so.
-
Committed by Bhuvnesh Chaudhary
For text, varchar, char, and bpchar, ORCA does not collect the MCV and histogram information, so the calculation of NDVRemain and FreqRemain must be updated to account for it. For such columns, NDVRemain is the stadistinct value available in pg_statistic, and FreqRemain is everything except the NULL frequency. Earlier, NDVRemain and FreqRemain for such columns would yield 0, resulting in poor cardinality estimation and suboptimal plans. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Lisa Owen
* docs - add content for backup storage plugin api
* some edits requested by david
* comments from mel; remove diagram placeholder
* address some requested edits from karen
* gpbackup creates the local directory
* include restore more
* remove bold styling from command
* clarify streaming backup
* address some review comments
* add backup/restore plugin api cmds to subnav
* remove the on-page toc entries for cmds
* wrap up the edits from karen
-
Committed by Ashwin Agrawal
This reverts commit 1b07e77a.
-
Committed by Ashwin Agrawal
Previously, when the primary was in crash recovery, the FTS probe failed and hence the primary was marked down. This change provides a recovery progress metric so that FTS can detect progress: we added the last replayed LSN to the error message to determine recovery progress. This allows FTS to distinguish between recovery in progress and a recovery hang or rolling panics. Only when FTS detects that recovery is not making progress does it mark the primary down. For testing, a new fault injector is added to allow simulation of a recovery hang and of recovery in progress. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io> Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by David Kimura
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Kris Macoskey
We test planner and orca on each platform; this one was missing. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
- 27 April 2018, 3 commits
-
-
Committed by Ning Yu
The resgroup dummy backend currently generates a warning in the Probe() call; this is not expected, as this function is designed to be called whether or not resgroup is enabled. Removed the warning message from the dummy Probe(). Also updated the warning messages in the dummy backend. It used to generate warning messages like the following: "cpu rate limitation for resource group is unsupported on this system". This message was originally introduced when resgroup supported only cpu rate limitation, but now that more capabilities are supported, the message should be updated.
-
Committed by Dhanashree Kashid
Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Dhanashree Kashid
It is common to have large IN/NOT IN lists in user queries, hence 25 is too low a bound. After running several experiments, 100 turned out to be a good threshold value for this GUC. Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
- 26 April 2018, 6 commits
-
-
Committed by krait007
* Add the 'static' keyword to definitions in some files under src/backend/cdb
* Update the code style of the mutate_plan_fields and mutate_join_fields function definitions
-
Committed by Ashwin Agrawal
Decent coverage exists for 2PC in isolation2, and coverage will be enhanced. The current tests in TINC for pg_twophase were written specifically for filerep. Such extensive tests, creating all types of objects, are not required for walrep, given that it relies on crash recovery logic for replication. Rather than modifying these tests to keep working under walrep, it is better to delete them and write fresh ones as required for walrep and 2PC interaction in isolation2.
-
Committed by David Kimura
The flakiness was due to concurrent VACUUMs. If there is another parallel drop transaction (on any relation) active, the drop is skipped; this avoids an upgrade deadlock, as the other drop transaction might be on the same relation. We added additional test coverage for the scenario of skipping the drop phase during a concurrent vacuum, and for crash recovery with a file in drop-pending state. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Mel Kiyama
* docs: gpbackup/gprestore S3 plugin
  - add gpbackup/gprestore --plugin-config option
  - add S3 plugin information
  - other minor fixes: add index as object, support table data and metadata for --jobs > 1
  PR for 5X_STABLE; will be ported to MAIN
* docs: review updates for gpbackup/gprestore S3 plugin
  - moved S3 links to Notes section
  - changed name from S3 plugin to S3 storage plugin
  - removed draft comments
* docs: gpbackup s3 plugin - change binary plugin name to gpbackup_s3_plugin
* docs: s3 plugin - fix typo
-
Committed by Lisa Owen
-
Committed by Chuck Litzell
-