- 12 Sep 2017, 1 commit

Committed by Marbin Tan
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: Shoaib Lari <slari@pivotal.io>

- 07 Sep 2017, 1 commit

Committed by Chris Hajas
This column has a different name between master and previous GPDB versions.
Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>

- 05 Sep 2017, 1 commit

Committed by Heikki Linnakangas

- 31 Aug 2017, 1 commit

Committed by Larry Hamel
Previously, during gpinitsystem, the standby was instantiated in the middle of setting up the master. This ordering caused problems because initializing the standby could exit early when an error occurred, leaving the gp_toolkit and DCA GUCs unset. Instead, initialize the standby after the master is finished.

Previously, the exit code of gpinitsystem was always non-zero; now it is non-zero only in an error or warning case. The issue was that SCAN_LOG interpreted an empty string as a line count of one; fixed by counting words instead.

Initializing a standby can no longer cause gpinitsystem to exit early. Added extra logging/output about standby master status, and tell the user at the end of gpinitsystem if gpinitstandby failed.

Signed-off-by: Marbin Tan <mtan@pivotal.io>

- 26 Aug 2017, 1 commit

Committed by Larry Hamel
As part of the validation phase of gprecoverseg, before proceeding, validate that the data_checksums GUC is set the same on the master and the segments. The validation is done by comparing pg_control file contents, and fails fast if the settings differ. If no segments are able to report their settings, gprecoverseg fails. (This failure to report would be unexpected, since there is already a check that at least one segment is alive before the validation phase.)
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>

- 23 Aug 2017, 1 commit

Committed by Shoaib Lari
The data_checksums GUC setting should be the same as the master's. The existing test for gpinitstandby is modified to run on a single host.
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>

- 18 Aug 2017, 3 commits

Committed by Larry Hamel
- Validate consistent checksum settings: make sure the checksum settings for all segments are the same as the master's.
- Add a logging proxy to allow the log file to have different contents than stdout.
- Do the heap checksum validation only when starting up all segments.
- Add option --skip-heap-checksum-validation: if this option is provided to gpstart, the cluster will start up without checking for a matching data_checksums GUC between master and segments.
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
Signed-off-by: Marbin Tan <mtan@pivotal.io>

Committed by Nadeem Ghani
Signed-off-by: Larry Hamel <lhamel@pivotal.io>

Committed by Nadeem Ghani
Update the log message to display the parameters gpconfig was called with, whether or not the GUC was changed successfully.
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: Shoaib Lari <slari@pivotal.io>
Signed-off-by: Larry Hamel <lhamel@pivotal.io>

- 15 Aug 2017, 1 commit

Committed by Larry Hamel
Follow-up commit for f936c4f3, which added quotes around gpconfig values.
Signed-off-by: Marbin Tan <mtan@pivotal.io>

- 10 Aug 2017, 1 commit

Committed by Karen Huddleston
Signed-off-by: Chris Hajas <chajas@pivotal.io>

- 09 Aug 2017, 3 commits

Committed by Shoaib Lari
gpinitsystem did not check for HEAP_CHECKSUM in the cluster configuration file given with the -c switch. This commit accepts the HEAP_CHECKSUM setting and additionally exports it to an output configuration file when one is specified with the -O switch. It also adds behave tests for the above, and for reading the input configuration file with the -I switch.
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: Xin Zhang <xzhang@pivotal.io>

Committed by Nadeem Ghani
Bug fix for a scenario with multiple analyzedb processes running concurrently: the resulting report files were incorrect and/or overwritten. This commit adds a lock (a file semaphore) for synchronization between analyzedb processes. Each process acquires an exclusive lock per database, reads the most recent report files (possibly written by concurrently running analyzedb processes), and incorporates that latest information into its own report.
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
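The per-database file semaphore described above can be sketched with `fcntl.flock`. This is a minimal illustration, not analyzedb's actual implementation; the lock-file path and helper name are assumptions.

```python
import fcntl
import os
from contextlib import contextmanager

# Hedged sketch of a per-database exclusive file lock, as described above.
# While the lock is held, a process can safely read the latest report files
# (possibly written by a concurrent analyzedb) and rewrite its own report.

@contextmanager
def report_lock(dbname, lockdir="/tmp"):
    path = os.path.join(lockdir, "analyzedb_%s.lock" % dbname)
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # blocks until the exclusive lock is acquired
        yield                            # critical section: merge and rewrite reports
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```

Using `flock` on a dedicated lock file (rather than the report file itself) lets the report be replaced atomically while the lock stays valid.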

Committed by Larry Hamel
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
Signed-off-by: Larry Hamel <lhamel@pivotal.io>

- 04 Aug 2017, 1 commit

Committed by C.J. Jameson
Refactor similar usage to share code with the gpperfmon behave tests.
Signed-off-by: Xin Zhang <xzhang@pivotal.io>

- 01 Aug 2017, 2 commits

Committed by Marbin Tan
Add a test for gppkg --migrate: gppkgs installed on the original master should be installed on the new master and all segments.
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>

Committed by Nadeem Ghani
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: Larry Hamel <lhamel@pivotal.io>

- 28 Jul 2017, 2 commits

Committed by Shoaib Lari
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
Signed-off-by: Larry Hamel <lhamel@pivotal.io>

Committed by Larry Hamel
Add a behave test to verify that the checksum configuration is preserved after a segment is recovered using gprecoverseg.
Signed-off-by: Shoaib Lari <slari@pivotal.io>

- 27 Jul 2017, 1 commit

Committed by Karen Huddleston
This file contains a list of schema-qualified table names in the backup set. It is not used in the restore process; it is there solely to allow users to determine which tables were dumped in that backup set.
Signed-off-by: Jamie McAtamney <jmcatamney@pivotal.io>
Signed-off-by: Chris Hajas <chajas@pivotal.io>

- 21 Jul 2017, 1 commit

Committed by Chris Hajas
The changes in commit bdafd0ce should not be tested against the backup43/restore5 test, since this code is not yet present in 4.3. Additionally, query the 'template1' database when verifying that roles exist.

- 20 Jul 2017, 3 commits

Committed by Karen Huddleston
gpdbrestore with `-G only` no longer requires the database to be created. Previously, gpdbrestore -G required the database to already exist, or the -e flag. The `-G only` option now restores only globals, and the test is changed to reflect this.

Committed by Tom Meyer
Signed-off-by: Chris Hajas <chajas@pivotal.io>

Committed by Chris Hajas
This is an internal utility called by gpdbrestore and should not have specific tests (except for testing with valgrind).

- 19 Jul 2017, 3 commits

Committed by Nadeem Ghani
If multiple segment hosts fail during a "sync", we used to report only the first issue. Fix: accumulate all the failures before reporting to the user. Also add a unit test.
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
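The accumulate-then-report pattern described above can be sketched as follows. This is an illustration of the idea, not the actual utility code; all names are assumptions.

```python
# Hedged sketch: instead of raising on the first failed host, collect every
# per-host failure during the sync and report them all at once.

def sync_hosts(hosts, sync_one):
    """sync_one(host) performs the sync for one host and raises on failure.
    Raises a single RuntimeError summarizing ALL failures, if any occurred."""
    failures = []
    for host in hosts:
        try:
            sync_one(host)
        except Exception as exc:          # accumulate rather than fail fast
            failures.append((host, str(exc)))
    if failures:
        detail = "; ".join("%s: %s" % f for f in failures)
        raise RuntimeError("sync failed on %d host(s): %s" % (len(failures), detail))
```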

Committed by Marbin Tan
This commit introduces BEHAVE_FLAGS as a new parameter in Concourse. This helps us run specific tests within a specific scenario; it used to be all or nothing. Now we can separate multi-host testing from single-host testing. Add tests for gppkg --clean for multi-host:
- gppkg --clean should install to the segment host with no gppkg
- gppkg --clean should remove on all segment hosts when the gppkg does not exist on the master

Committed by Marbin Tan
Ensure that gppkg installs the RPM on all hosts -- this is just a backfill testing addition.

- 11 Jul 2017, 2 commits

Committed by Marbin Tan
Create a more extensive workload for the SQL to make it last longer. The previous SQL completed too quickly, so by the time the actual pid read happened, the pid no longer existed, causing the result to be 0.

Committed by Nadeem Ghani
Work around a problem discovered by a client that noticed intermittent errors from gpssh when some nodes became very CPU-bound. In particular, we override the way the ssh command prompt is validated on a remote machine within gpssh. The vendored module 'pexpect' tries to match 2 successive prompts from an interactive bash shell. However, if the target host is slow from CPU or network loading, these prompts may return late. In that case, the override retries several times, extending the timeout from the default 1 second to up to 125 times that duration. Experimentally, these added retries seem to tolerate about 1 second of delay, testing with a 'tc' command that slows network traffic artificially. The number of retries can be configured.
- Add unit tests to verify the happy path of ssh-ing to localhost
- Add a module for gpssh, for overriding pexpect (pxxssh)
- Add a readme describing the testing technique of using 'tc' to delay the network
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
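The escalating-timeout retry described above can be sketched generically. The real change lives inside gpssh's pexpect override; this standalone function only illustrates the backoff shape (1s growing to 125s with the defaults, i.e. a factor of 5 over 4 attempts, which is one plausible reading of "up to 125 times" the 1-second default).

```python
# Hedged sketch of retry-with-escalating-timeout for prompt matching.
# match_prompt(timeout) stands in for the pexpect expect() call and returns
# True when the shell prompt was matched within `timeout` seconds.

def expect_with_retries(match_prompt, base_timeout=1.0, retries=4, factor=5):
    """Try match_prompt with a growing timeout: base, base*factor, ...
    (1s, 5s, 25s, 125s with the defaults). Returns the 0-based attempt
    index that succeeded; raises TimeoutError if all attempts fail."""
    timeout = base_timeout
    for attempt in range(retries):
        if match_prompt(timeout):
            return attempt
        timeout *= factor
    raise TimeoutError("prompt not matched after %d attempts" % retries)
```

The number of retries is a parameter, mirroring the configurable retry count mentioned in the commit.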

- 01 Jul 2017, 2 commits

Committed by Marbin Tan
There are times when the gpperfmon_log_alert_history scenario fails because there is no data in the log alert history table. This might be due to us copying an empty csv file: gpperfmon writes to the log alert file on a cadence, so we might be copying a file that has not been written to yet -- possibly empty. Make sure that we have something to copy before proceeding to the next step.
Signed-off-by: Marbin Tan <mtan@pivotal.io>

Committed by Tushar Dadlani
Any kind of intensive work should show up in gpperfmon as both CPU skew and row skew, so put them together, as they can be tested at the same time.
Signed-off-by: Marbin Tan <mtan@pivotal.io>

- 30 Jun 2017, 1 commit

Committed by Jamie McAtamney
This is the final part of the backup and restore TINC-to-behave migration. The MFR suite tests managed file replication for Data Domain.
Signed-off-by: Chris Hajas <chajas@pivotal.io>

- 29 Jun 2017, 1 commit

Committed by Marbin Tan
The behave test is intermittently failing because the substep tries to install gpperfmon even though gpperfmon is already installed. Figuring out which part is actually failing is hard with the current setup, so decouple the checks to determine which part is failing.
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>

- 24 Jun 2017, 1 commit

Committed by Jimmy Yih
In this behave test, we delete some entries in pg_depend and in some related catalog tables to simulate corruption around pg_depend. The gpcheckcat tool should then flag these.

- 21 Jun 2017, 3 commits

Committed by Larry Hamel
The gpperfmon drop-partition SQL statement was syntactically incorrect, so the partition_age gpperfmon feature was not working. We were using the rows in the partitionrangestart column from pg_partition to drop specific partitions. The value from partitionrangestart is reported as, for example, '2017-02-01 00:00:00'::timestamp(0) without time zone, and the query was reporting the error "Not a constant expression". Use only the first part of partitionrangestart to make our ALTER ... DROP query work.
- Added a behave test to confirm that it is now working
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
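Taking "only the first part" of the partitionrangestart value, as described above, amounts to stripping the `::timestamp(0) without time zone` cast and keeping the quoted literal. A minimal sketch (the helper name is an assumption, not the actual gpperfmon code):

```python
# Hedged sketch: keep only the quoted literal before the '::' cast so the
# value is a plain constant usable in an ALTER TABLE ... DROP PARTITION query.

def partition_start_literal(rangestart):
    """'2017-02-01 00:00:00'::timestamp(0) without time zone
       -> '2017-02-01 00:00:00'  (quotes kept)"""
    return rangestart.split("::", 1)[0]
```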

Committed by Marbin Tan
Ensure that we can install and uninstall gppkg with any gppkg filename.
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>

Committed by Tushar Dadlani
- Also, add sample.gppkg as a fixture for tests
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: Marbin Tan <mtan@pivotal.io>

- 10 Jun 2017, 1 commit

Committed by Karen Huddleston
This is part of the effort to unify our backup/restore tests into a single suite.
- Adds infrastructure to set up DDBoost on the client and clean up the server after completion
- Adds tests for DDBoost-specific options
- Adds test coverage from the TINC suite that was not included in behave
Signed-off-by: Chris Hajas <chajas@pivotal.io>

- 09 Jun 2017, 2 commits

Committed by Daniel Gustafsson
Since Command creates a short-lived SSH session, we observe the PID of a throw-away remote process and assume that the PID is unused and available on the remote in the near future: the pid is no longer associated with a running process and won't be recycled for long enough that the tests can finish. Looking the pid up ahead of time would introduce the risk of a time-of-check-time-of-use race, since the pid might have been allocated by the operating system by the time the test uses it.
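The technique above can be sketched locally: run a short-lived shell, capture its own PID, and use that value after the process has exited. This uses a local `bash` instead of the SSH-backed Command machinery so the sketch stays self-contained; the helper name is an assumption.

```python
import subprocess

# Hedged sketch: a throw-away process reports its PID ($$) and exits
# immediately, so the returned PID refers to no running process but is
# unlikely to be recycled by the OS before a test finishes using it.

def throwaway_pid():
    out = subprocess.check_output(["bash", "-c", "echo $$"])
    return int(out.strip())
```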

Committed by Larry Hamel
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>