- 08 Sep 2018 (2 commits)
Committed by Xin Zhang
Add behave test for FQDN_HBA flag support
Co-authored-by: Xin Zhang <xzhang@pivotal.io>
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
-
Committed by Goutam Tadi
Behave tests for gpinitsystem with FQDN
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
- 28 Aug 2018 (1 commit)
Committed by Chris Hajas
Authored-by: Chris Hajas <chajas@pivotal.io>
-
- 17 Aug 2018 (1 commit)
Committed by Nadeem Ghani
-
- 15 Aug 2018 (1 commit)
Committed by David Kimura
The purpose of this refactor is to align the GUC more closely with Postgres. It started as a suggestion in https://github.com/greenplum-db/gpdb/pull/4790. Some differences remain, particularly around when the GUC can be set: in GPDB anyone can set it at any time (PGC_USERSET), whereas in Postgres it can only be set at postmaster start (PGC_POSTMASTER). This difference was kept deliberately until we have more buy-in, as changing it has a larger impact on end users.
Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
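The practical difference between the two GUC contexts mentioned above can be sketched as follows. This is a simplified illustration: the context names come from PostgreSQL's GUC machinery, and the helper function is hypothetical, not GPDB source code.

```python
# Simplified illustration of the two PostgreSQL GUC contexts discussed above.
# PGC_USERSET allows any user to change the value at any time;
# PGC_POSTMASTER fixes the value at server start.
GUC_CONTEXTS = {
    "PGC_USERSET": "changeable by any user, at any time (SET ...)",
    "PGC_POSTMASTER": "fixed at server start; requires a restart to change",
}

def can_change_at_runtime(context: str) -> bool:
    """Return True if a GUC in this context can be changed without a restart."""
    return context == "PGC_USERSET"

# The GPDB GUC described above is PGC_USERSET; its Postgres counterpart
# is PGC_POSTMASTER:
print(can_change_at_runtime("PGC_USERSET"))     # True
print(can_change_at_runtime("PGC_POSTMASTER"))  # False
```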
-
- 26 Jul 2018 (1 commit)
Committed by Chris Hajas
This was added when we added the openbar gpcrondump tests in c9770456. Since those tests no longer exist, we can remove this as well.
Authored-by: Chris Hajas <chajas@pivotal.io>
-
- 21 Jun 2018 (1 commit)
Committed by Nadeem Ghani
- Add mirrors with and without a standby, and ensure that the host assignment is identical between the two.
- Add mirrors, then kill one, and ensure that gprecoverseg operates correctly on the newly added mirror.
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
- 14 Jun 2018 (1 commit)
Committed by Nadeem Ghani
- Add mirrors with and without a standby, and ensure that the host assignment is identical between the two.
- Add mirrors, then kill one, and ensure that gprecoverseg operates correctly on the newly added mirror.
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
- 23 May 2018 (1 commit)
Committed by Jim Doty
Authored-by: Jim Doty <jdoty@pivotal.io>
-
- 22 May 2018 (1 commit)
Committed by Jamie McAtamney
This test explicitly creates some tables and verifies that they can be read after gpaddmirrors has been run on the cluster. The existing TINC test that covers this will be deleted once all the new gpaddmirrors tests have been added. We have also added code to allow cluster creation on more than one segment host.
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
- 17 May 2018 (2 commits)
Committed by Jimmy Yih
These scenarios come from the walrep_1 TINC Makefile target. They have been designed to run in both singlenode and multinode environments. The tests have not been removed from TINC yet because some walrep_2 TINC tests depend on them.
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Nadeem Ghani
These scenarios come from the walrep_1 TINC Makefile target. They have been designed to run in both singlenode and multinode environments. The tests have not been removed from TINC yet because some walrep_2 TINC tests depend on them.
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
-
- 15 May 2018 (1 commit)
Committed by Tingfang Bao
Because pg_dump handles implicit sequences (serial column types) and explicit sequences differently across gpdb versions, we need to detect the sequences that are not included in the table dump SQL and create them first. Additionally, for every dependent sequence, setval() is called to restore the source value after the data is transferred.
Signed-off-by: Ming Li <mli@pivotal.io>
-
- 07 May 2018 (1 commit)
Committed by Tingfang Bao
* Create sequences that belong to a table.
* Correct each sequence's next value.
We check for dependent sequences before transferring the table data. If the table contains sequences, gptransfer pg_dumps all of the sequence metadata along with its data and imports it into the destination cluster.
Signed-off-by: Ming LI <mli@apache.org>
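The kind of logic these two sequence commits describe can be sketched roughly as below. This is an illustrative sketch, not gptransfer's actual code: the helper names are invented, and only the catalog query shape (sequences tied to table columns via pg_depend) and the setval() restore step come from the messages above.

```python
# Illustrative sketch: find sequences owned by a table's columns, and build
# the setval() calls that restore their values on the destination cluster
# after the table data has been transferred.

def owned_sequences_query(table: str) -> str:
    """SQL listing sequences owned by columns of `table`, via pg_depend
    ('a' = auto dependency, as created for serial columns)."""
    return (
        "SELECT seq.relname "
        "FROM pg_class seq "
        "JOIN pg_depend d ON d.objid = seq.oid "
        "JOIN pg_class tab ON d.refobjid = tab.oid "
        "WHERE seq.relkind = 'S' AND d.deptype = 'a' "
        f"AND tab.relname = '{table}'"
    )

def setval_statements(seq_values: dict) -> list:
    """One setval() per sequence, restoring the value read from the source."""
    return [f"SELECT setval('{name}', {val});" for name, val in seq_values.items()]

stmts = setval_statements({"t1_id_seq": 42})
print(stmts[0])  # SELECT setval('t1_id_seq', 42);
```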
-
- 04 May 2018 (1 commit)
Committed by Shoaib Lari
In addition, we clarified the steps that call gpexpand a second time. It is now clearer that the intention of that step is to make the database redistribute the tables.
Co-authored-by: Shoaib Lari <slari@pivotal.io>
Co-authored-by: Jim Doty <jdoty@pivotal.io>
-
- 02 May 2018 (1 commit)
Committed by Marbin Tan
In addition to migrating the TINC tests, some notable changes were made:
+ Made the gpexpand tests work for a cluster with mirrors
+ Migrated the duration and end-time tests as behave tests
+ Added the gpexpand behave tests to the pipeline template
+ Cleaned up some dead code in environment.py
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Jim Doty <jdoty@pivotal.io>
Co-authored-by: Kevin Yeap <kyeap@pivotal.io>
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>
-
- 29 Mar 2018 (1 commit)
Committed by Pengzhou Tang
* Support replicated tables in GPDB

Currently, tables in GPDB are distributed across all segments by hash or randomly. There is a requirement for a new table type, the replicated table, in which every segment holds a full copy of the table data.

To implement it, we added a new distribution policy named POLICYTYPE_REPLICATED to mark a replicated table, and a new locus type named CdbLocusType_SegmentGeneral to describe the distribution of a replicated table's tuples. CdbLocusType_SegmentGeneral implies that the data is generally available on all segments but not on the QD, so a plan node with this locus type can be flexibly planned to execute on either a single QE or all QEs. It is similar to CdbLocusType_General; the only difference is that a CdbLocusType_SegmentGeneral node cannot be executed on the QD. To guarantee this, we try our best to add a gather motion on top of a CdbLocusType_SegmentGeneral node when planning motion for a join, even when the other rel has a bottleneck locus type. Such a motion may be redundant if the single QE is ultimately not promoted to execute on the QD, so we detect that case and omit the redundant motion at the end of apply_motion(). We do not reuse CdbLocusType_Replicated, since it always implies a broadcast motion below it, and it is not easy to plan such a node as direct dispatch while avoiding duplicate data.

We do not yet support replicated tables with an inherit/partition-by clause; the main problem is that update/delete on multiple result relations does not work correctly yet. This can be fixed later.

* Allow spi_* to access replicated tables on the QE

Previously, GPDB did not allow a QE to access non-catalog tables because the data is incomplete. We can remove this limitation when only replicated tables are accessed. One issue is that the QE needs to know whether a table is replicated; previously, the QE did not maintain the gp_distribution_policy catalog, so we now pass the policy info to the QE for replicated tables.

* Change the schema of gp_distribution_policy to identify replicated tables

Previously, we used the magic number -128 in gp_distribution_policy to identify a replicated table, which was quite a hack. We now add a new column in gp_distribution_policy to identify replicated and partitioned tables. This commit also abandons the old scheme of using a 1-length NULL list and a 2-length NULL list to identify the DISTRIBUTED RANDOMLY and DISTRIBUTED FULLY clauses. Besides that, this commit refactors the code to make the decision-making around distribution policy clearer.

* Support COPY for replicated tables

* Disable the row-ctid unique path for replicated tables

Previously, GPDB used a special Unique path on rowid to handle queries like "x IN (subquery)". For example, for "select * from t1 where t1.c2 in (select c2 from t3)", the plan looks like:

    -> HashAggregate
         Group By: t1.ctid, t1.gp_segment_id
         -> Hash Join
              Hash Cond: t2.c2 = t1.c2
              -> Seq Scan on t2
              -> Hash
                   -> Seq Scan on t1

This plan is wrong if t1 is a replicated table, because ctid + gp_segment_id cannot identify a tuple: in a replicated table, one logical row may have different ctid and gp_segment_id values on different segments. So we disable such plans for replicated tables for now. This is not ideal, since the rowid-unique path may still be cheaper than a normal hash semi-join, so a FIXME is left for later optimization.

* ORCA-related fix

Reported and added by Bhuvnesh Chaudhary <bchaudhary@pivotal.io>: fall back to the legacy query optimizer for queries over replicated tables.

* Adapt pg_dump/gpcheckcat to replicated tables

gp_distribution_policy is no longer a master-only catalog, so it gets the same checks as other catalogs.

* Support gpexpand on replicated tables, and altering the distribution policy of a replicated table
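The core locus rule above can be condensed into a tiny sketch. The locus type names follow the commit; the function itself is a hypothetical illustration, not GPDB planner code.

```python
# Illustrative sketch of the locus rule described above: a General node may
# execute anywhere, QD included; a SegmentGeneral node (replicated table)
# has full data on every segment but not on the QD, so it must never be
# planned to execute on the QD.
GENERAL = "CdbLocusType_General"
SEGMENT_GENERAL = "CdbLocusType_SegmentGeneral"

def can_execute_on_qd(locus: str) -> bool:
    """Only General-locus nodes may run on the QD."""
    return locus == GENERAL

print(can_execute_on_qd(GENERAL))          # True
print(can_execute_on_qd(SEGMENT_GENERAL))  # False
```

This is the constraint that forces the planner to add a gather motion on top of a SegmentGeneral node when its join partner must run on the QD.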
-
- 15 Mar 2018 (1 commit)
Committed by Kevin Yeap
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Kevin Yeap <kyeap@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>
(cherry picked from commit ed48879627686b36336529d0b445073687ec68cd)
-
- 16 Feb 2018 (1 commit)
Committed by Jimmy Yih
After filerep, filespaces, and persistent tables were removed, we disabled the gprecoverseg Behave tests, which needed gprecoverseg to be refactored before they could run. gprecoverseg is now in a usable state, so it is time to fix these Behave tests and have them running again. Some refactoring, workarounds, and test removal were needed, detailed here.

Refactoring:
1. The tests used gpfaultinjector to cause a mirror or primary down situation. This is not really needed; simply killing the segments is good enough. Also, injecting a fault on a mirror no longer does anything, because the fault would not trigger while the mirror is in recovery mode.
2. The checksum test did not look correct, so we refactored it into what we think it was supposed to look like.

Workaround: Running incremental recovery after killing a primary to trigger failover will not work without pg_rewind. These calls have been changed to use full recovery, similar to how gprecoverseg rebalancing works right now. Once pg_rewind is introduced, they should be changed back to incremental recovery.

Test removal:
1. One scenario checked for failure when there were corrupted changetracking logs (if corrupted, a full recovery must be run). We delete this test since changetracking logs are no longer a thing. However, the scenario is very similar to our src/test/walrep missing_xlogs test. That might be good enough as a low-level replacement, but we may want to add a full end-to-end scenario back into these Behave tests.
2. One scenario checked that gprecoverseg would not recover segments with persistent-rebuild inconsistencies. Persistent tables no longer exist, so the scenario is fine to remove.

Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: David Kimura <dkimura@pivotal.io>
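The workaround above amounts to a simple decision rule, which can be sketched as a hypothetical test helper (the function is invented for illustration; the `-F` full-recovery and `-a` no-prompt flags are real gprecoverseg options):

```python
# Sketch of the recovery-mode decision described above: until pg_rewind is
# available, a failover caused by killing a primary must be repaired with a
# full recovery (gprecoverseg -F); incremental recovery is not enough.

def gprecoverseg_args(primary_failed_over: bool, have_pg_rewind: bool) -> list:
    """Pick the gprecoverseg invocation for the scenario under test."""
    if primary_failed_over and not have_pg_rewind:
        return ["gprecoverseg", "-a", "-F"]   # full recovery
    return ["gprecoverseg", "-a"]             # incremental recovery

print(gprecoverseg_args(True, False))   # ['gprecoverseg', '-a', '-F']
```

Once pg_rewind lands, the first branch goes away and incremental recovery works for the failover case as well.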
-
- 30 Jan 2018 (1 commit)
Committed by Shoaib Lari
- Change gpdeletesystem to delete tablespaces before the data directory
- Refactor SegmentStart.noWait to pg_ctrl_wait
- Create a PgBaseBackup class
- Revert the change to the default Mirror status
- Assorted typo fixes and bugfixes
Author: Shoaib Lari <slari@pivotal.io>
Author: Jim Doty <jdoty@pivotal.io>
Author: Nadeem Ghani <nghani@pivotal.io>
-
- 19 Jan 2018 (1 commit)
Committed by Chris Hajas
I used the command `behave test/behave/mgmt_utils --format=steps.usage --dry-run` to identify unused steps. Some of these were false positives or were extra given/when/then steps, which I did not remove.
Author: Chris Hajas <chajas@pivotal.io>
-
- 18 Jan 2018 (1 commit)
Committed by Daniel Gustafsson
Commit 5cc0ec50 changed gpperfmon_log_alert_level into a proper enum GUC, but failed to update the gpperfmon behave test.
-
- 17 Jan 2018 (1 commit)
Committed by Heikki Linnakangas
-
- 13 Jan 2018 (4 commits)
Committed by Jamie McAtamney
We will not be supporting these utilities in GPDB 6. References to gpcrondump and gpdbrestore in the gpdb-doc directory have been left intact, as the documentation will be updated to refer to gpbackup and gprestore in a separate commit.
Author: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Jamie McAtamney
The previous check in gptransfer for matching source and destination filespaces has been changed to a check for tablespaces, since filespaces no longer exist. gptransfer tests related to filespaces, and the corresponding files, have been removed or altered accordingly.
Author: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by C.J. Jameson
- Use the new fact that data directories are in gp_segment_configuration
- Fix a few things with the gpperfmon behave tests (mostly for macOS):
  - the change to mgmt_utils.py does the config file manipulation natively in Python
  - the change to gp_bash_functions.sh uses ASCII ' characters so that Python string comparison is happier
Author: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Heikki Linnakangas
-
- 21 Dec 2017 (2 commits)
Committed by Nadeem Ghani
There was a step that was defined but never used, and a helper method only called from that step. This commit removes both.
Author: Nadeem Ghani <nghani@pivotal.io>
Author: Shoaib Lari <slari@pivotal.io>
-
Committed by Nadeem Ghani
The gptransfer test used a step that looked for a logfile with a date in its name. If that logfile existed at 11:59 PM on one day and the test looked for it at 12:00 AM the next day, it "wouldn't be there". Refactor the test so that assertions about the typical gpAdminLogs directory are as banal as possible. Also see https://github.com/greenplum-db/gpdb/pull/4072/commits/84bf83d5013f891547dc21576d04a281cfd2faf7.
Author: Nadeem Ghani <nghani@pivotal.io>
Author: Shoaib Lari <slari@pivotal.io>
-
- 16 Dec 2017 (1 commit)
Committed by Marbin Tan
This is simply a setup/cleanup step for the behave tests, so be accommodating in trying to get it to work. Scope: affects gpcheckcat.feature and backups.feature; these tests already have some timing affordances, and this just adds a bit more backstop.
Author: Marbin Tan <mtan@pivotal.io>
Author: C.J. Jameson <cjameson@pivotal.io>
-
- 08 Dec 2017 (1 commit)
Committed by C.J. Jameson
These two tests (gpcheckcat and gptransfer) used a step that looked for a logfile with a date in its name. If that logfile existed at 11:59 PM on one day and the test looked for it at 12:00 AM the next day, it "wouldn't be there":

    Exception: Log "/home/gpadmin/gpAdminLogs/gpcheckcat_20171122.log" was not created

Refactor the tests so that assertions about the typical gpAdminLogs directory are as banal as possible; emphasize the gptransfer tests of the user option to specify a log directory.
Author: C.J. Jameson <cjameson@pivotal.io>
Author: Shoaib Lari <slari@pivotal.io>
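The midnight race described above boils down to baking "today's" date into the expected filename. A sketch of the fragile pattern and a more robust variant (the helper names and log directory are illustrative, not the tests' actual code):

```python
# Sketch of the flaky log-lookup pattern described above, and a robust fix.
import glob
import os
from datetime import date

LOG_DIR = "/home/gpadmin/gpAdminLogs"   # typical location, per the message

def todays_log_path(prefix: str) -> str:
    """Fragile: bakes today's date into the expected filename, so a run
    that starts at 11:59 PM and asserts at 12:00 AM looks for the wrong day."""
    return os.path.join(LOG_DIR, f"{prefix}_{date.today():%Y%m%d}.log")

def latest_log_path(prefix: str):
    """Robust: take the newest matching log, whatever date is in its name."""
    logs = glob.glob(os.path.join(LOG_DIR, f"{prefix}_*.log"))
    return max(logs, key=os.path.getmtime) if logs else None
```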
-
- 16 Nov 2017 (1 commit)
Committed by Daniel Gustafsson
Trailing commas were intended to be removed from the partition string, but due to a typo the fixed string was saved into a new variable instead. However, Chris Hajas realized that the step was no longer in use, so it was removed rather than fixed.
-
- 19 Oct 2017 (1 commit)
Committed by Chris Hajas
Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>
-
- 06 Oct 2017 (1 commit)
Committed by Chris Hajas
Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>
-
- 12 Sep 2017 (1 commit)
Committed by Marbin Tan
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
- 31 Aug 2017 (1 commit)
Committed by Larry Hamel
Previously, during gpinitsystem, the standby was instantiated in the middle of setting up the master. This ordering caused problems because initializing the standby could cause an early exit when an error occurred; as a result, the gp_toolkit and DCA GUCs were not set properly. Instead, initialize the standby after the master is finished.

Previously, the exit code of gpinitsystem was always non-zero. Now it is non-zero only in an error or warning case. The issue was due to SCAN_LOG interpreting an empty string as a line count of one; fixed by switching to a word count.

Initializing a standby can no longer cause gpinitsystem to exit early. Added extra logging/output about standby master status, and tell the user at the end of gpinitsystem if gpinitstandby failed.

Signed-off-by: Marbin Tan <mtan@pivotal.io>
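The SCAN_LOG bug above mirrors a classic shell counting pitfall: `echo "" | wc -l` prints 1 because echo appends a newline, while `wc -w` correctly prints 0. A Python rendering of the two behaviors:

```python
# Why counting lines misreads an empty string as one warning entry, while
# counting words does not (the Python analogue of `wc -l` vs `wc -w` on
# echoed output, which always ends with a newline).
def line_count(text: str) -> int:
    # echo appends a trailing newline, so even an empty warning string
    # appears to contain one "line"
    return (text + "\n").count("\n")

def word_count(text: str) -> int:
    return len(text.split())

print(line_count(""))  # 1 -- looks like a warning even when there is none
print(word_count(""))  # 0 -- correctly empty
```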
-
- 23 Aug 2017 (1 commit)
Committed by Shoaib Lari
The data_checksums GUC setting should be the same as the master's. The existing gpinitstandby test is modified to run on a single host.
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
- 18 Aug 2017 (3 commits)
Committed by Larry Hamel
- Validate consistent checksum settings: make sure the checksum settings for all segments are the same as the master's
- Add a logging proxy so that the log file can have different contents than stdout
- Do the heap checksum validation only when starting up all segments
- Add the option --skip-heap-checksum-validation: if this option is provided to gpstart, the cluster will start up without checking for a matching "data_checksums" GUC between master and segments
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
Signed-off-by: Marbin Tan <mtan@pivotal.io>
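The validation described above reduces to comparing each segment's data_checksums value against the master's. A minimal sketch, with an invented helper name and data shape (the real check runs inside gpstart):

```python
# Minimal sketch of the heap-checksum validation: every segment's
# data_checksums setting must match the master's, unless the
# --skip-heap-checksum-validation flag was given.

def validate_heap_checksums(master: str, segments: dict, skip: bool = False) -> list:
    """Return the segments whose data_checksums setting differs from the
    master's; an empty list means the cluster may start."""
    if skip:
        return []
    return sorted(seg for seg, value in segments.items() if value != master)

mismatched = validate_heap_checksums("on", {"seg0": "on", "seg1": "off"})
print(mismatched)  # ['seg1'] -- gpstart would refuse to start this cluster
```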
-
Committed by Nadeem Ghani
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Nadeem Ghani
Update the log message to display the parameters gpconfig was called with, whether or not the GUC was changed successfully.
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: Shoaib Lari <slari@pivotal.io>
-