- 21 Dec 2017, 3 commits
-
Committed by Lisa Owen

* docs - add note about MaxStartups to relevant utility cmds
* uses ...
-
Committed by Shreedhar Hardikar
-
Committed by Shreedhar Hardikar
As pointed out by Heikki, maintaining another variable to match one in the database system will be error-prone and cumbersome, especially while merging with upstream. This commit initializes ORCA with a pointer to a GPDB function that returns true when QueryCancelPending or ProcDiePending is set. This way we no longer have to micro-manage setting and re-setting some internal ORCA variable, or touch signal handlers. This commit also reverts commit 0dfd0ebc "Support optimization interrupts in ORCA" and reuses tests already pushed by 916f460f and 0dfd0ebc.
-
- 20 Dec 2017, 2 commits
- 19 Dec 2017, 7 commits
-
Committed by Sambitesh Dash

These output files were leftover from the Perforce repo, and they should never have been checked in. Normally this wouldn't have been an issue, except for commit-history cleanliness: we would just silently overwrite the output files with actual output. But what if in CI those "output files" have different permissions? It turns out we would silently leave them alone, and two steps down the road we have a diff failure. This commit removes, at long last, those output files. The fix is forward-ported from an older, closed-source version of Greenplum, where we first spotted this oversight. Strangely this is not causing any test failures on master or 5, but it should still be ported, if only for cleanliness' sake.

Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
(cherry picked from commit 20d6b178)
-
Committed by David Yozie

* Doc edits for the gptransfer --schema-only change
* Change header title; add xref
* --d -> -d
* Remove extraneous comma
* Change -d behavior to match -t; make sentences parallel
-
Committed by Lisa Owen

* docs - costing differences between gporca/planner and resource queue (RQ) limits
* Mention fallback
* RQs do not align/differentiate costs between planners
-
Committed by Mel Kiyama

PR for 5X_STABLE. Will be ported to MAIN.
-
Committed by David Sharp

Author: Amil Khanzada <akhanzada@pivotal.io>
Author: David Sharp <dsharp@pivotal.io>
(cherry picked from commit 35ae9aee)
-
Committed by Amil Khanzada

- As part of determining the resource group a transaction should be assigned to, AssignResGroupOnMaster() calls GetResGroupIdForRole(), which queries a syscache on the catalog table pg_authid, which maps users to resource groups.
- Prior to this commit, AssignResGroupOnMaster() was querying pg_authid near the top of StartTransaction(), before the per-transaction memory context was set up. This required GetResGroupIdForRole() to run ResourceOwnerCreate() to avoid segfaulting GPDB, and also led to many potential issues:
  * unknown behavior if a relcache invalidation event happens on pg_authid's syscache
  * possibly stale pg_authid entries, as access is done with SnapshotNow and an out-of-date RecentGlobalXmin
  * memory leaks due to no memory context
  * an uphill battle, as newer versions of PostgreSQL remove SnapshotNow and assume catalog lookups only happen while a transaction is open

Signed-off-by: David Sharp <dsharp@pivotal.io>
Signed-off-by: Amil Khanzada <akhanzada@pivotal.io>
(cherry picked from commit 9ea766d9)
-
Committed by Marbin Tan

This is simply a setup/cleanup step for the behave tests, so be accommodating to try to get it to work. Scope: affects gpcheckcat.feature and backups.feature; these tests already have some timing affordances, and this just adds a bit more backstop.

Author: Marbin Tan <mtan@pivotal.io>
Author: C.J. Jameson <cjameson@pivotal.io>
-
- 18 Dec 2017, 4 commits
-
Committed by Chuck Litzell

* Enhance hardening docs on trust and ident
* Format source. No content changes.
-
Committed by Jinbao Chen
-
Committed by Lav Jain

* Clean up makefiles for GPHDFS
* Fix HADOOP_TARGET_VERSION
* Change gphdfs_target_version tokens to hadoop, cdh, hdp, mpr
-
Committed by Lav Jain
-
- 16 Dec 2017, 7 commits
-
Committed by Marbin Tan

Ensure that we're actually triggering the `gpfaultinjector`. There are cases where, even with the `gpfaultinjector` set up, the transaction still does not block properly. By creating a database, we ensure that all segments get contacted, and FTS will detect the issue we created with gpfaultinjector.

(cherry picked from commit acaccc6e)
-
Committed by Mike Roth
-
Committed by Lav Jain
-
Committed by Lav Jain

* Clean up makefiles for GPHDFS
* Fix HADOOP_TARGET_VERSION
* Change gphdfs_target_version tokens to hadoop, cdh, hdp, mpr
-
Committed by Michael Roth

* Switching the chown to a more overlay-friendly chmod on directories
- The initial work is to remove the chown from the gpadmin user setup and replace it with a chmod a+w on the directories. This is sufficient for ICW, TINC, and behave to run in most cases.
- gpload2 needs the datafile to be owned by gpadmin.
- Change to gpcloud, as it was chowning the full directory.
- PXF tests needed to be able to write to the pxf_automation_src directory. Updated tests to set directories world-writable instead of recursively chowning. Singlenode needs to be owned by gpadmin.

TODO: Change gpload2 to no longer need the datafile to be owned by gpadmin
TODO: Clean up singlenode ownership for the PXF test
-
- 14 Dec 2017, 7 commits
-
Committed by Tingfang Bao

This makes gptransfer able to transfer only the schema of databases or tables, as in "--schema-only -d foo" or "--schema-only -t bar.public.t1". It could actually do that before, but forgot to set the success flag.

Signed-off-by: Adam Lee <ali@pivotal.io>
(cherry picked from commit d5852e91)
-
Committed by Peifeng Qiu

The pg_query function is the underlying workhorse for db.query in Python. For INSERT queries, it returns a string containing the number of rows successfully inserted. PQcmdTuples() parses a PGresult returned by PQexec; if it is an insert-count result, it returns a pointer to the count. However, this pointer points into the internal buffer of the PGresult, so it shouldn't be used after PQclear(), even though most of the time its content remains accessible and unchanged. PyString_FromString makes a copy of the string, so moving PQclear() to after PyString_FromString() is safe. This fixes the problem where gpload sometimes gets an unprintable insert count.
-
Committed by Bhuvnesh Chaudhary

Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by Shreedhar Hardikar
-
Committed by Shreedhar Hardikar

The default value of Gp_role is GP_ROLE_DISPATCH, which means auxiliary processes inherit this value. FileRep does the same, but also executes queries using SPI on the segments. This means Gp_role == GP_ROLE_DISPATCH is not a sufficient check for the master QD, so bring back the check on GpIdentity.

Author: Asim R P <apraveen@pivotal.io>
Author: Shreedhar Hardikar <shardikar@pivotal.io>
-
Committed by Shreedhar Hardikar

We don't want to use the optimizer for planning queries in SQL, PL/pgSQL, etc. functions when that is done on the segments. ORCA excels at complex queries, most of which access distributed tables. We can't run such queries from the segment slices anyway, because they would require dispatching a query within another query, which is not allowed in GPDB. Note that this restriction also applies to non-QD master slices. Furthermore, ORCA doesn't currently support pl/* statements (relevant when they are planned on the segments). For these reasons, restrict ORCA to the master QD processes only. Also revert commit d79a2c7f ("Fix pipeline failures caused by 0dfd0ebc.") and separate out the gporca fault-injector tests into the newly added gporca_faults.sql, so that the rest can run in a parallel group.

Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Lisa Owen
-
- 13 Dec 2017, 3 commits
-
Committed by Chuck Litzell
-
Committed by Mel Kiyama

Missed in an earlier update.
-
Committed by Lisa Owen

* docs - updates for gphdfs jar file changes
* Updates include:
  - add a note that the default value gphd-1.1 is not supported
  - remove references to Pivotal and Greenplum HD
-
- 12 Dec 2017, 7 commits
-
Committed by Haisheng Yuan
-
Committed by Jialun

Update grep keywords to filter out unrelated programs.
-
Committed by C.J. Jameson

These two tests (gpcheckcat and gptransfer) used a step that looked for a logfile with a date in the name. If that logfile existed at 11:59 PM one day and the test looked for it at 12:00 AM the next day, it "wouldn't be there":

`Exception: Log "/home/gpadmin/gpAdminLogs/gpcheckcat_20171122.log" was not created`

Refactor the tests so that assertions about using the typical gpAdminLogs directory are as banal as possible; emphasize the gptransfer tests of the user option to specify a log directory.

Author: C.J. Jameson <cjameson@pivotal.io>
Author: Shoaib Lari <slari@pivotal.io>
(cherry picked from commit 1de55903)
-
Committed by C.J. Jameson

Author: C.J. Jameson <cjameson@pivotal.io>
Author: Shoaib Lari <slari@pivotal.io>
(cherry picked from commit 5f6c036e)
-
Committed by Shoaib Lari

Run a distributed query across all segments to force FTS to detect and mark all downed segments.

Author: Nadeem Ghani <nghani@pivotal.io>
Author: Marbin Tan <mtan@pivotal.io>
Author: Shoaib Lari <slari@pivotal.io>
Author: C.J. Jameson <cjameson@pivotal.io>
(cherry picked from commit 8d2a56a4)
-
Committed by C.J. Jameson

If we did stop all primaries on that host, the cluster would be down anyway. Best to just do a full-cluster gpstop, then bring it all back up together.

(cherry picked from commit 4f96c774)
-
Committed by C.J. Jameson

The underlying pylib code identifies the master and standby by content id. `gpstop --host localhost` will fail differently: it will simply not find the host in the set of hostnames (unless that's how you configured things at first).

(cherry picked from commit 2f1d9d56)
-