- 18 Dec 2017, 1 commit

Committed by Lav Jain
- 16 Dec 2017, 7 commits

Committed by Marbin Tan
Ensure that we're triggering the `gpfaultinjector`. There are cases where, even though we have the `gpfaultinjector` set up, the transaction still does not block properly. By creating a database, we ensure that all segments get contacted, and FTS will detect the issue that we created with gpfaultinjector. (cherry picked from commit acaccc6e)
Committed by Mike Roth
Committed by Lav Jain
Committed by Lav Jain
* Cleanup makefiles for GPHDFS
* Fix HADOOP_TARGET_VERSION
* Change gphdfs_target_version tokens to hadoop, cdh, hdp, mpr
Committed by Michael Roth
* Switching the chown to a more overlay-friendly chmod on directories
- Initial work is to remove the chown from the gpadmin user setup and replace it with a chmod a+w on the directories. This is sufficient for ICW, TINC and behave to run in most cases.
- Gpload2 needs the datafile to be owned by gpadmin.
- Change to gpcloud, as it was chowning the full directory.
- PXF tests needed to be able to write to the pxf_automation_src directory. Updated tests to set directories world-writable instead of recursively chowning. Singlenode needs to be owned by gpadmin.

TODO: Change gpload2 to no longer need the datafile to be owned by gpadmin
TODO: Clean up singlenode ownership for the PXF test
- 14 Dec 2017, 7 commits

Committed by Tingfang Bao
This is to make gptransfer able to transfer only the schema of databases or tables, like "--schema-only -d foo" or "--schema-only -t bar.public.t1". It could actually do that before, but forgot to set the success flag. Signed-off-by: Adam Lee <ali@pivotal.io> (cherry picked from commit d5852e91)
Committed by Peifeng Qiu
pg_query is the underlying workhorse for db.query in Python. For INSERT queries, it returns a string containing the number of rows successfully inserted. PQcmdTuples() parses a PGresult returned by PQexec(); if it is an insert-count result, it returns a pointer to the count. However, this pointer points into the internal buffer of the PGresult, so it must not be used after PQclear(), even though most of the time its content remains accessible and unchanged. PyString_FromString() makes a copy of the string, so moving PQclear() to after PyString_FromString() is safe. This fixes the problem that gpload sometimes gets an unprintable insert count.
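The copy-before-clear ordering can be illustrated with a small Python analogue (a hypothetical `Result` class standing in for a libpq PGresult; this is not the actual PyGreSQL C code):

```python
class Result:
    """Toy stand-in for a libpq PGresult: cmd_tuples() hands out a view
    into an internal buffer that clear() invalidates."""
    def __init__(self, inserted):
        self._buf = bytearray(str(inserted).encode())

    def cmd_tuples(self):
        # Like PQcmdTuples(): a view into the result's own storage,
        # not an independent copy.
        return memoryview(self._buf)

    def clear(self):
        # Like PQclear(): the internal buffer is no longer valid.
        self._buf[:] = b"\x00" * len(self._buf)

def insert_count(result):
    view = result.cmd_tuples()
    count = bytes(view).decode()   # copy first (PyString_FromString)...
    result.clear()                 # ...then clearing is safe
    return count

print(insert_count(Result(42)))  # -> 42
```

Swapping the last two lines of `insert_count` reproduces the bug: the copy would then read the already-cleared buffer.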
Committed by Bhuvnesh Chaudhary
Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
Committed by Shreedhar Hardikar
Committed by Shreedhar Hardikar
The default value of Gp_role is GP_ROLE_DISPATCH, which means auxiliary processes inherit this value. FileRep does the same, but also executes queries using SPI on the segment. This means Gp_role == GP_ROLE_DISPATCH is not a sufficient check for the master QD, so bring back the check on GpIdentity. Author: Asim R P <apraveen@pivotal.io> Author: Shreedhar Hardikar <shardikar@pivotal.io>
Committed by Shreedhar Hardikar
We don't want to use the optimizer for planning queries in SQL, PL/pgSQL, etc. functions when that is done on the segments. ORCA excels at complex queries, most of which will access distributed tables. We can't run such queries from the segment slices anyway, because they require dispatching a query within another, which is not allowed in GPDB. Note that this restriction also applies to non-QD master slices. Furthermore, ORCA doesn't currently support pl/* statements (relevant when they are planned on the segments). For these reasons, restrict using ORCA to the master QD processes only. Also revert commit d79a2c7f ("Fix pipeline failures caused by 0dfd0ebc.") and separate out the gporca fault injector tests into the newly added gporca_faults.sql, so that the rest can run in a parallel group. Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
Committed by Lisa Owen
- 13 Dec 2017, 3 commits

Committed by Chuck Litzell
Committed by Mel Kiyama
Missed in an earlier update.
Committed by Lisa Owen
* docs - updates for gphdfs jar file changes
* updates include:
  - add note that the default value gphd-1.1 is not supported
  - remove references to Pivotal and Greenplum HD
- 12 Dec 2017, 11 commits

Committed by Haisheng Yuan
Committed by Jialun
Update grep keywords to filter out unrelated programs.
Committed by C.J. Jameson
These two tests (gpcheckcat and gptransfer) used a step that looked for a logfile with a date in the name. If that logfile existed at 11:59PM on the day before, and the test looked for it at 12:00AM on the next day, it "wouldn't be there":

`Exception: Log "/home/gpadmin/gpAdminLogs/gpcheckcat_20171122.log" was not created`

Refactor the tests so that assertions about using the typical gpAdminLogs directory are as banal as possible; emphasize the gptransfer tests of the user option to specify a log directory. Author: C.J. Jameson <cjameson@pivotal.io> Author: Shoaib Lari <slari@pivotal.io> (cherry picked from commit 1de55903)
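The midnight race can be sketched as follows (a minimal illustration, assuming a hypothetical tool that names its log after the current date; `log_path` is an invented helper): a test that recomputes "today" when asserting can miss a log written moments before midnight.

```python
import datetime
import os
import tempfile

def log_path(log_dir, tool, on_date):
    # e.g. /home/gpadmin/gpAdminLogs/gpcheckcat_20171122.log
    return os.path.join(log_dir, "%s_%s.log" % (tool, on_date.strftime("%Y%m%d")))

log_dir = tempfile.mkdtemp()

# The tool writes its log at 11:59PM on Nov 22...
wrote = datetime.date(2017, 11, 22)
open(log_path(log_dir, "gpcheckcat", wrote), "w").close()

# ...but the test recomputes the date at 12:00AM on Nov 23, so the
# expected filename no longer matches what was written.
checked = datetime.date(2017, 11, 23)
assert not os.path.exists(log_path(log_dir, "gpcheckcat", checked))

# Capturing the expected path once, up front (or asserting against a
# user-specified log directory) avoids depending on the wall clock
# at check time.
assert os.path.exists(log_path(log_dir, "gpcheckcat", wrote))
```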
Committed by C.J. Jameson
Author: C.J. Jameson <cjameson@pivotal.io> Author: Shoaib Lari <slari@pivotal.io> (cherry picked from commit 5f6c036e)
Committed by Shoaib Lari
Run a distributed query across all segments to force FTS to detect and mark all downed segments. Author: Nadeem Ghani <nghani@pivotal.io> Author: Marbin Tan <mtan@pivotal.io> Author: Shoaib Lari <slari@pivotal.io> Author: C.J. Jameson <cjameson@pivotal.io> (cherry picked from commit 8d2a56a4)
Committed by C.J. Jameson
If we did stop all primaries on that host, the cluster would be down anyway. Best to just do a full-cluster gpstop, then bring it all back up together. (cherry picked from commit 4f96c774)
Committed by C.J. Jameson
The underlying pylib code identifies the master and standby by content id. `gpstop --host localhost` will fail differently: it will simply not find the host in the set of hostnames (unless that's how you configured things at first). (cherry picked from commit 2f1d9d56)
Committed by Shoaib Lari
For interaction with `-r`: since we don't stop the master with --host, restart would fail anyway, so we don't allow it from the get-go. For interaction with `-m`: if someone is using `--host` and thinking they want to stop the master but not the segments on a particular host, they should just do a full gpstop and then bring everything back up. If someone is using `-m` and thinking they need to specify the host for the `-m` flag, they don't need to -- the tool infers it from the system and shell state. Author: C.J. Jameson <cjameson@pivotal.io> Author: Shoaib Lari <slari@pivotal.io> Author: Marbin Tan <mtan@pivotal.io> (cherry picked from commit 70f15158)
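The resulting option rules can be sketched as a small validation step (illustrative only; the function name and error wording are invented, and gpstop's real option handling differs):

```python
def validate_gpstop_options(host=None, restart=False, master_only=False):
    """Reject the option combinations that gpstop refuses.

    --host stops only the segments on one host, so it cannot be combined
    with -r (a restart needs the master stopped first) or -m (master-only
    mode infers the master host itself).
    """
    if host and restart:
        raise ValueError("--host cannot be combined with -r: the master is "
                         "not stopped, so a restart would fail")
    if host and master_only:
        raise ValueError("--host cannot be combined with -m: to stop only "
                         "the master, use -m alone; to stop everything, "
                         "run a full gpstop")

validate_gpstop_options(host="sdw1")                 # allowed
try:
    validate_gpstop_options(host="sdw1", restart=True)
except ValueError as e:
    print("rejected:", e)
```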
Committed by C.J. Jameson
Signed-off-by: Marbin Tan <mtan@pivotal.io> (cherry picked from commit 990b7518)
Committed by Marbin Tan
Add a flag `--host` which stops all segments on the specified host. An easy way to take down a set of segments without having to ssh in and kill processes. Refuse to stop a specific host if any primary isn't synced. Signed-off-by: Nadeem Ghani <nghani@pivotal.io> Signed-off-by: C.J. Jameson <cjameson@pivotal.io> (cherry picked from commit fa2ef5e7)
Committed by Nadeem Ghani
~~This class was handling the hasMirrors field incorrectly. Now we're correctly setting the hasMirrors flag.~~ This change broke the gpaddmirrors test, so fixed it. (cherry-picked from 5dfb9814) Edited this upon backporting to 5X_STABLE to use Fault Strategy to determine if the cluster has mirrors. This should now provide a fault strategy attribute on `gpArray`s that weren't initialized with initFromCatalog()
- 10 Dec 2017, 2 commits

Committed by Heikki Linnakangas
gpcheckcat now knows how to drop orphaned temp toast schemas, in addition to temp schemas. Fix the expected output, since the test now reports dropping 2 orphaned schemas (the temp schema and the temp toast schema) instead of 1.
Committed by Heikki Linnakangas
And allow DROP SCHEMA on temp toast schemas, like we allow dropping temp schemas. These are fixups for commit 2619b329, in reaction to a test failure in the "gpcheckcat should drop leaked schemas" scenario in the MU_gpcheckcat test suite.

The test opens a session, creates a temp table in it, and then kills the server. That leaks the temp schema in the QD node, because the backend dies abruptly, but the QE backends exit cleanly when the QD-QE connection is lost, and they do remove the temp schema. That creates an inconsistency between the QD and QE nodes. gpcheckcat knows about that problem, and removes any orphaned temp schemas; that's what the test tests.

However, gpcheckcat didn't know about temp toast schemas. Until commit 2619b329, the temp toast schemas were always leaked, whether the backend exited cleanly or not, so there was no inconsistency between the QD and QE nodes. After that commit, the temp toast schema behaves the same as the temp schema, and the test started failing. The fix is straightforward: teach gpcheckcat to clean up temp toast schemas, just like it cleans up temp schemas. Backpatch to 5X_STABLE, like commit 2619b329.
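The name matching behind the cleanup can be sketched like this (a sketch only, assuming the `pg_temp_<sessionid>` / `pg_temp_toast_<sessionid>` naming described in these commits; `orphaned_temp_schemas` is an invented helper, not gpcheckcat's actual code):

```python
import re

# Matches both temp schemas (pg_temp_<sessionid>) and, after this fix,
# temp toast schemas (pg_temp_toast_<sessionid>).
TEMP_SCHEMA_RE = re.compile(r"^pg_temp_(toast_)?\d+$")

def orphaned_temp_schemas(schema_names, active_session_ids):
    """Return schemas that look like temp or temp toast schemas belonging
    to sessions that no longer exist."""
    orphans = []
    for name in schema_names:
        if not TEMP_SCHEMA_RE.match(name):
            continue
        session_id = int(name.rsplit("_", 1)[1])
        if session_id not in active_session_ids:
            orphans.append(name)
    return orphans

schemas = ["public", "pg_temp_3", "pg_temp_toast_3", "pg_temp_7"]
print(orphaned_temp_schemas(schemas, active_session_ids={7}))
# -> ['pg_temp_3', 'pg_temp_toast_3']
```

Before the fix, the pattern effectively covered only `pg_temp_<sessionid>`, so orphaned toast schemas were silently skipped.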
- 09 Dec 2017, 6 commits

Committed by Ashwin Agrawal
The scenario intended to be tested is: if the primary (walsender) exits, the walreceiver must exit as well. Then the walreceiver should come up, try to reconnect, and fail again until the primary is brought back. Once the primary is up, it should be able to connect. The way it is coded today is extremely hacky: it simulates fault injection by creating the file `wal_rcv_test` and providing an option to suspend at a given point. Then signal.SIGUSR2 is sent to notify the standby to inject the fault. There exists no mechanism to check whether the fault was hit or not, so the test tends to be a little unreliable at times, since we don't know if the fault was hit, and hence large sleeps are used in this test. Plus, the file wal_rcv.pid is created in code just for testing purposes, to validate some behaviors. Hence, removing this flaky, time-consuming test. (cherry picked from commit ff9c80cc)
Committed by Ashwin Agrawal
Seems this was missed while back-porting wal replication; due to this, the latch was never signaled to the startup process. Instead, replay currently always happened due to the latch timeout of 5 seconds, and never due to the walreceiver signaling wal arrival. Author: Xin Zhang <xzhang@pivotal.io> Author: Ashwin Agrawal <aagrawal@pivotal.io> (cherry picked from commit 42a7dbe7)
Committed by Heikki Linnakangas
When a backend exits normally, the "pg_temp_<sessionid>" schema is dropped. In GPDB 5, with the 8.3 merge, there is now a "pg_temp_toast_<sessionid>" schema in addition to the temp schema, but it was not dropped. As a result, you would end up with a lot of unused pg_temp_toast_* schemas. To fix, also drop the temp toast schema at backend exit. We will still leak temp schemas, and temp toast schemas, if a backend exits abnormally, or if the server crashes. That's not a new issue, but we should probably do something about that in the future, too. Fixes github issue #4061. Backport to 5x_STABLE, where the toast temp namespaces were introduced.
Committed by Faisal Ali
Currently, when a backup doesn't exist, gpdbrestore throws an error:

```
raise ExceptionNoStackTraceNeeded("No dump file on %s in %s" % (seg.getSegmentHostName(), seg.getSegmentDataDirectory()))
```

There is an issue with the error message: when the user provides their own directory, e.g. via the "-u" option, and the dump file doesn't exist during the run, the directory shown is still the default directory, which is misleading. For example:

```
[gpadmin@gpdb 20171201]$ gpdbrestore -t 20171201142523 -e -u /home/gpadmin/testbackup/
[.....]
Continue with Greenplum restore Yy|Nn (default=N):
> y
20171201:19:35:43:020121 gpdbrestore:gpdb:gpadmin-[ERROR]:-gpdbrestore error: No dump file on gpdb in /data/primary/gp_4.3.18.0_201712011453470
```

This fix provides the correct location where it couldn't find the dump.
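The idea of the fix can be sketched as follows (hypothetical helper names; the real gpdbrestore code differs): report the directory that was actually searched, preferring the `-u` path when one was given.

```python
def dump_dir_for_report(user_dir, segment_data_dir):
    # Prefer the user-provided -u directory; fall back to the segment's
    # default data directory otherwise.
    return user_dir if user_dir else segment_data_dir

def no_dump_file_error(hostname, user_dir, segment_data_dir):
    where = dump_dir_for_report(user_dir, segment_data_dir)
    return "No dump file on %s in %s" % (hostname, where)

# With -u, the message now names the directory that was really searched.
print(no_dump_file_error("gpdb", "/home/gpadmin/testbackup/", "/data/primary/gpseg0"))
# -> No dump file on gpdb in /home/gpadmin/testbackup/

# Without -u, the default segment data directory is still reported.
print(no_dump_file_error("gpdb", None, "/data/primary/gpseg0"))
# -> No dump file on gpdb in /data/primary/gpseg0
```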
Committed by Shoaib Lari
Squash of 2 commits:

gpexpand: Simplify test for parallelism. We only need to check for parallelism until the number of tables redistributed reaches the specified number of tables to be redistributed simultaneously. Author: Marbin Tan <mtan@pivotal.io> Author: Shoaib Lari <slari@pivotal.io>

gpexpand: test parallelism more loosely to reduce flakes. The parallelism tests were too stringent, wanting to observe maximum parallelism in order to go green. An example failure we saw was as follows, when 3 threads had been "In Progress" together, but not all four (because one thread finished really quickly): `Worker GpExpandTests.check_number_of_parallel_tables_expanded_case_1 failed execution: AssertionError: The specified value was never reached.` Now, we simply assert that some parallelism is observed at some point (2 or more "In Progress" at a time). If a test run "flakes" such that there never were two "In Progress" at a time, that would be indistinguishable from serial execution, so the test would still fail. Author: C.J. Jameson <cjameson@pivotal.io> Author: Shoaib Lari <slari@pivotal.io>
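The loosened assertion can be sketched like this (illustrative only; the real behave test samples gpexpand's redistribution status, and the function and status names here are assumptions): given periodic snapshots of per-table status, require only that two or more tables were "In Progress" at the same time.

```python
def some_parallelism_observed(snapshots, threshold=2):
    """snapshots: list of dicts mapping table name -> status string.
    True if any snapshot shows >= threshold tables 'In Progress' at once."""
    return any(
        sum(1 for status in snap.values() if status == "In Progress") >= threshold
        for snap in snapshots
    )

# 3 of 4 tables overlapped; one finished quickly. The old check for
# maximum parallelism (all 4 at once) would flake here; the loosened
# check (>= 2 at once) passes.
snapshots = [
    {"t1": "In Progress", "t2": "Pending",     "t3": "Pending",     "t4": "Completed"},
    {"t1": "In Progress", "t2": "In Progress", "t3": "In Progress", "t4": "Completed"},
    {"t1": "Completed",   "t2": "Completed",   "t3": "In Progress", "t4": "Completed"},
]
assert some_parallelism_observed(snapshots)
```

A purely serial run, with at most one table "In Progress" per snapshot, still fails the check, so the test keeps distinguishing parallel from serial execution.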
Committed by Goutam Tadi
- Should append to both psql 'select version()' and 'gpssh --version'.
- 08 Dec 2017, 2 commits

Committed by Chuck Litzell
Committed by Omer Arap
- 07 Dec 2017, 1 commit