- 20 Feb 2019, 7 commits
-
-
Committed by Richard Guo
All callers of simplify_function() use the same parameters as upstream, so this FIXME appears to be out of date; retire it. Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Sambitesh Dash
This reverts commit 203a0892.
-
Committed by Sambitesh Dash
This reverts commit 8758e2bf.
-
Committed by Ben Christel
This commit moves all of the intermediate concourse resources (which are used only internally in the pipeline) to GCP buckets instead of AWS S3 storage. Co-authored-by: Ben Christel <bchristel@pivotal.io> Co-authored-by: Amil Khanzada <akhanzada@pivotal.io> Co-authored-by: Sambitesh Dash <sdash@pivotal.io> Co-authored-by: Oliver Albertini <oalbertini@pivotal.io>
-
Committed by Amil Khanzada
The ((pipeline-name)) variable is needed to support the naming conventions used for new GCS objects, which namespace artifacts by pipeline. Co-authored-by: Ben Christel <bchristel@pivotal.io> Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by Ashwin Agrawal
To keep the crash_recovery_dtm test from hanging, since this scenario doesn't expect the master to PANIC, increase dtx_phase2_retry_count to a higher number, which should allow phase 2 processing to complete and avoid a master PANIC.
-
Committed by Ashwin Agrawal
This commit increases the retry count and adds a small delay between retries for 2PC. Commit-Prepared or Abort-Prepared (phase 2) of 2PC performs retries if the first attempt fails to complete the transaction. By default only 2 retries were performed, with zero delay between them; once retries are exhausted the master PANICs and has to continue retrying. Most of the time, phase 2 fails on the first attempt because a segment is undergoing recovery or failover happens on the mirror. In such instances, just 2 retries attempted within milliseconds defeat the purpose of retrying. Hence, raise the default number of retries to 10, and add a 100 msec delay between each retry to provide a reasonable opportunity to succeed. This should help avoid master PANICs caused by failing to complete phase 2. I gave it a lot of thought but couldn't think of any downsides to increasing the number of retries. Also, the maximum value allowed for the GUC was only 15, which seems too restrictive; mainly in tests, a higher number of retries helps avoid flakiness and master panics. So, change the maximum allowed for the `dtx_phase2_retry_count` GUC to INT_MAX. We don't practically expect it to be set to any value higher than some thousands, but we don't have to be so restrictive about the maximum.
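The change described above amounts to a bounded retry loop with a fixed pause between attempts. A minimal sketch in Python (the function name and the PANIC placeholder are illustrative; the real logic lives in the server's C code):

```python
import time

DTX_PHASE2_RETRY_COUNT = 10  # new default (previously 2)
RETRY_DELAY_SECONDS = 0.1    # new 100 msec pause between attempts (previously none)

def retry_phase2(attempt_fn, retries=DTX_PHASE2_RETRY_COUNT, delay=RETRY_DELAY_SECONDS):
    """Retry Commit-Prepared / Abort-Prepared until it succeeds or retries run out."""
    for attempt in range(1, retries + 1):
        if attempt_fn():
            return True           # phase 2 completed
        if attempt < retries:
            time.sleep(delay)     # give a recovering segment a chance to come back
    return False                  # in the server, this is where the master would PANIC
```

With the old defaults (2 attempts, no delay), a segment still in recovery almost always exhausts the loop; with 10 attempts spaced 100 msec apart, transient failovers have roughly a second to resolve.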
-
- 19 Feb 2019, 2 commits
-
-
Committed by Daniel Gustafsson
We do welcome draft PRs, especially now that GitHub has enabled them as a feature when creating PRs. Add a note on this in the main README, and also fix the text regarding which branch to open a PR against, as we now have a supported back-branch. Also wrap a long line in the same section to make the source markdown easier to read. Reviewed-by: Adam Berlin <aberlin@pivotal.io> Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Jialun
Previously gpexpand generated the template from the master by copying the master directory directly, which may be unsafe: although we lock the catalog, there may be other on-disk changes. So use pg_basebackup instead of a native copy.
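As a rough illustration, the switch means building the template from a live base backup rather than a file-level copy. A hypothetical Python sketch (the helper names and the exact pg_basebackup flags gpexpand uses are assumptions, not the actual implementation):

```python
import subprocess

def basebackup_command(master_host, master_port, template_dir):
    # Stream WAL alongside the data copy so the template is self-consistent
    # even if the master keeps writing while the copy is in progress.
    return [
        "pg_basebackup",
        "-h", master_host,
        "-p", str(master_port),
        "-D", template_dir,
        "-X", "stream",
    ]

def build_template(master_host, master_port, template_dir):
    # Unlike a raw directory copy, pg_basebackup takes a consistent online
    # backup through the replication protocol.
    subprocess.check_call(basebackup_command(master_host, master_port, template_dir))
```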
-
- 18 Feb 2019, 3 commits
-
-
Committed by Jinbao Chen
Because ordered-set aggs always have a nonempty aggorder, numPureOrderedAggs currently includes WITHIN GROUP aggs, which I think is reasonable. The reason we count numPureOrderedAggs in AggClauseCosts is that group aggregate is much more expensive than hash aggregate, and we usually use hash aggregate with DISTINCT and group aggregate with ORDER BY. With WITHIN GROUP we must also use group aggregate, so we need to count numPureOrderedAggs when the query contains a WITHIN GROUP agg.
-
Committed by Richard Guo
Previously we checked whether OldestXmin is valid in heap_page_prune_opt and exited early if not. This was mainly because, in the case of persistent tables, GPDB could call into here without having a local snapshot and thus no valid OldestXmin. Since we no longer support persistent tables, revert this check to an assertion to match upstream. Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io> Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Teng zhang
-
- 16 Feb 2019, 21 commits
-
-
Committed by Mel Kiyama
-
Committed by Mel Kiyama
* docs - support for special characters in schema/table names for the --include-table option.
* docs - support for special characters in schema/table names for the --include-table-file option.
* docs - remove misplaced word "support"
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Adam Berlin
-
Committed by Adam Berlin
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Adam Berlin
These only run if OpenSSL is enabled. Co-authored-by: Jacob Champion <pchampion@pivotal.io> Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Adam Berlin
To fix the SSL TAP tests, revert most of the implementation to upstream 9.6 (commit 41740b9ef, right before a major refactor to ServerSetup.pm). This removes the GPDB-specific implementations of the test node setup, which weren't really setting up a new node anyway; they were modifying whatever GPDB cluster they found in the environment. Since the SSL tests don't need a full cluster to run, we can stop carrying that diff. Notable remaining differences from 9.6 include:
- the commenting out of the wal_retrieve_retry_interval GUC, which is not yet supported in GPDB
- the addition of GPDB-specific options to `pg_ctl start` to create a standalone segment, and the use of utility mode to connect to it
- the use of note() instead of diag() in the test suite, for cosmetic reasons (we can probably remove that diff once we catch up to 9.6)
- the continued commenting-out of SAN tests, which we plan to reinstate in a future commit
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Jimmy Yih
When running gprecoverseg, gpinitstandby, or gpaddmirrors, we actually run gpconfigurenewsegment to execute pg_basebackup. Log the progress output of pg_basebackup to a temporary file for user and/or utility consumption. The file is located in ~/gpAdminLogs, or wherever the user specified with the -l flag used by most Greenplum Python utilities, and is removed after a successful run. The pg_basebackup output contains carriage returns, so users must handle them through their editor of choice. Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io> Co-authored-by: Shoaib Lari <slari@pivotal.io> Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
Committed by Jimmy Yih
The gpconfigurenewsegment logging would always go to the default location "~/gpAdminLogs/", while the callers of gpconfigurenewsegment could have been logging to a different location. Make this consistent. Co-authored-by: Mark Sliva <msliva@pivotal.io>
-
Committed by Mark Sliva
This follows up the addition of a start time during pg_basebackup. In the test, we gave the execSQL mock additional power so that queries to pg_stat_replication and pg_stat_activity can both be tested simultaneously. Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Mark Sliva
Since this is a warning, the stack trace is excessive. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Alexandra Wang
The mirror should PANIC immediately when recovery reaches a consistent state but there are existing invalid page entries in the invalid_page hash table. This makes sure the check is not delayed until the mirror is promoted, and helps catch missing-file problems on the mirror sooner. The mirror immediate-PANIC logic was introduced in upstream commit 1e616f63. Since no test exists for this upstream and the logic is used by AO tables as well, this patch adds a test to validate the stated behavior. Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Ashwin Agrawal
This is a follow-up commit to 69ebd66c, incorporating Asim's review feedback.
-
Committed by Ashwin Agrawal
This avoids marking the primary's status as down during this test while it restarts the primary.
-
Committed by Jacob Champion
gpstop is the only consumer of this one-off helper, so move it to gpstop. The recently added WorkerPool properties make this possible. Also remove the requirement for callers to keep track of how many commands have been added to the pool, similarly to what we did for wait_and_printdots(). Additionally, fix some unit test bugs, where the assertions on mocks weren't actually testing anything, by properly speccing the Mock objects themselves.
-
Committed by Jacob Champion
WorkerPool.completed now tells you how many commands are currently in the completed queue; .assigned tells you how many commands are either pending or completed. The latter property replaces the previous .num_assigned attribute, which was not correctly updated when the completed queue was emptied.
-
Committed by Jacob Champion
base.join_and_indicate_progress() waits for the pool to complete its work, printing indication dots to stdout once per second. If it takes less than a second for the pool to complete, we won't print anything (and we also won't hang for a second waiting for nothing to happen). The previous implementation required the caller to store a running tally of how many commands had been added to the pool; that requirement is now dropped. Unlike wait_and_printdots(), join_and_indicate_progress() *always* prints to file. Don't call it if you don't want to print; use WorkerPool.join() directly.
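The behavior described above can be sketched as a small polling loop, assuming join() returns True once the pool has finished (the class and parameter names here are placeholders, not the actual gppylib API):

```python
import sys

def join_and_indicate_progress(pool, out=sys.stdout, interval=1.0):
    # Wait up to `interval` seconds at a time; print one dot per expired
    # interval. If the pool finishes within the first interval, nothing is
    # printed -- and no caller-side tally of added commands is needed.
    while not pool.join(interval):
        out.write(".")
        out.flush()
```

The timeout-accepting join() is the primitive this builds on; callers who don't want any printing should call pool.join() directly.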
-
Committed by Jacob Champion
The current status reporting methods are difficult to test (they try to do a little too much, IMO). Introduce a simpler solution -- allow join() to accept a timeout. All status reporting can now be implemented using this primitive.
-
Committed by Jacob Champion
WorkerPool needs some help, and we need some test coverage before I can fix and refactor.
-
Committed by Adam Berlin
This reverts commit 848733b6.
-
- 15 Feb 2019, 7 commits
-
-
Committed by Ning Yu
A duration can be set on gpexpand phase 2, so it can quit before all the tables have been redistributed. There are behave tests to verify this; however, they should also check whether gpexpand quit on time.
-
Committed by Daniel Gustafsson
The `else` clause on the for loop is superfluous, as the loop doesn't contain any break statement. Removing it yields the same codepath but improves readability. This also removes an unused import (time) and fixes a set of typos. Reviewed-by: Jimmy Yih <jyih@pivotal.io>
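For context, Python runs a for loop's `else` block only when the loop completes without hitting `break`; with no `break` in the body, the `else` always runs and is dead weight. A small sketch (function names hypothetical):

```python
def with_else(items):
    out = []
    for item in items:
        out.append(item * 2)
    else:                     # no break above, so this ALWAYS executes
        out.append("done")
    return out

def without_else(items):
    out = []
    for item in items:
        out.append(item * 2)
    out.append("done")        # same codepath, one less level of indentation
    return out
```

The two functions behave identically on every input, which is why the `else` can simply be dropped.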
-
Committed by Paul Guo
Also refactor subquery_motionHazard_walker() to make it simpler.
-
Committed by Ning Yu
Unless cancelled with ctrl-c, the return code of the gpexpand redistribution phase is always 0; it does not indicate whether the redistribution succeeded. To get this information we need to check the status recorded in gpexpand.status.
-
Committed by Ning Yu
Most temp tables won't live for long, so there is no need to redistribute them. On the other hand, if they are recorded in gpexpand.status_detail and disappear before redistribution, an error is reported to the user, which just causes unnecessary panic.
-
Committed by Taylor Vesely
Pull from upstream Postgres to make DefineIndex recursively create partitioned indexes. Instead of creating an individual IndexStmt for every partition, create indexes by recursing on the partition children. This aligns index creation with upstream in preparation for adding INTERNAL_AUTO relationships between partition indexes.
* The QD will now choose the same name for partition indexes as Postgres.
* Update tests to reflect the partition index name changes.
* The changes to DefineIndex are mostly cherry-picked from Postgres commit 8b08f7d4.
* transformIndexStmt and its callers have been aligned with Postgres REL9_4_STABLE.
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Jesse Zhang
This reverts the following commits:
- commit 0ee987e64 - "Don't dispatch index creations too eagerly in ALTER TABLE."
- commit 28dd0152 - "Enable alter table column with index (#6286)"
The motivation of commit 0ee987e64 was to stop eager dispatch of index creation during ALTER TABLE and instead perform a single dispatch. Doing so prevents "index name already exists" errors when altering data types on indexed columns, such as:
ALTER TABLE foo ALTER COLUMN test TYPE integer;
ERROR: relation "foo_test_key" already exists
Unfortunately, without eager dispatch of index creation the QEs can choose a different name for a relation than was chosen on the QD. Eager dispatch was the only mechanism we had to ensure a deterministic and consistent index name between the QE and QD in some scenarios. In the absence of another mechanism we must revert this commit. This also rolls back commit 28dd0152, which enabled altering data types on indexed columns and required commit 0ee987e64. Co-authored-by: Kalen Krempely <kkrempely@pivotal.io> Co-authored-by: Taylor Vesely <tvesely@pivotal.io> Co-authored-by: David Krieger <dkrieger@pivotal.io>
-