- 07 Dec 2017, 4 commits
-
-
By Shoaib Lari
For interaction with `-r`: since `--host` does not stop the master, a restart would fail anyway, so we refuse the combination from the start.

For interaction with `-m`: if someone is using `--host` and wants to stop the master but not the segments on a particular host, they should just do a full gpstop and then bring everything back up. If someone is using `-m` and thinks they need to specify the host for it, they don't: the tool infers the master host from the system and shell state.

Author: C.J. Jameson <cjameson@pivotal.io>
Author: Shoaib Lari <slari@pivotal.io>
Author: Marbin Tan <mtan@pivotal.io>
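The option-conflict rules above can be sketched in Python. This is an illustrative sketch, not gpstop's actual implementation: the flag names mirror gpstop's, but `build_parser` and `validate` are hypothetical helpers.

```python
# Hypothetical sketch of the gpstop option-conflict rules described above.
import argparse

def build_parser():
    p = argparse.ArgumentParser(prog="gpstop")
    p.add_argument("--host", help="stop all segments on this host only")
    p.add_argument("-r", dest="restart", action="store_true",
                   help="restart after stopping")
    p.add_argument("-m", dest="master_only", action="store_true",
                   help="stop only the master")
    return p

def validate(opts):
    # --host never stops the master, so a restart could not succeed: refuse it.
    if opts.host and opts.restart:
        raise SystemExit("--host cannot be combined with -r")
    # -m already infers the master host; --host is redundant (and confusing) there.
    if opts.host and opts.master_only:
        raise SystemExit("--host cannot be combined with -m")
    return opts

opts = validate(build_parser().parse_args(["--host", "sdw1"]))
print(opts.host)  # → sdw1
```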
-
By C.J. Jameson
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
By Marbin Tan
Add a `--host` flag that stops all segments on the specified host: an easy way to take down a set of segments without having to ssh in and kill processes. Refuse to stop a specific host if any primary on it is not synchronized.

Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
By Nadeem Ghani
This class was handling the hasMirrors field incorrectly; now the flag is set correctly. The change broke the gpaddmirrors test, which is fixed here as well.
-
- 06 Dec 2017, 10 commits
-
-
By Daniel Gustafsson
The check for whether the remote server is a Greenplum instance was using the current version as an optimization, but with the current strategy of aggressively merging, that will become a problem. Remove the version check and go by the output of version() instead.
-
By yanchaozhong
-
By yanchaozhong
-
By xiong-gang
-
By David Yozie
* Add -compression-level
* Add -single-data-file option
* Add gprestore -include-schema and -include-table-file options
* Remove statement about using -compression-level and -no-compression together
* Remove restriction about using both -include-schema and -redirect together
* Remove restriction about using both -include-table-file and -globals together
* Add note re: default compression level
-
By David Sharp
e.g.:

    if (big_condition_a &&
        big_condition_b)

not:

    if (big_condition_a
        && big_condition_b)
-
By Ben Christel
- This is not intended to replace pgindent, but it can help get close to Postgres style during development.
- It does not completely match the Postgres style, so please update it as you use it.

Signed-off-by: Amil Khanzada <akhanzada@pivotal.io>
-
By Chuck Litzell
* docs: optimizer_join_order GUC
* Implement suggested edits
* Note interactions with other params; correct an inaccurate statement
* Clarify that this is a GPORCA GUC
-
By Mel Kiyama
* docs: PostGIS - add GDAL raster driver information
* docs: PostGIS GDAL - fix typos
-
By Daniel Gustafsson
-
- 05 Dec 2017, 10 commits
-
-
By Heikki Linnakangas
Instead of throwing a NOTICE on every object/role combination, track whether anything at all was revoked, and only issue one NOTICE for the whole command, if nothing was revoked. This reduces the noise if the REVOKE lists multiple objects and/or roles. This refactoring makes it easier to carry this diff vs. upstream, as we merge the column-level permissions feature from upstream. Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/Ttn_UJb4Otg/LS1cFrDiAwAJ
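The actual change lives in the server's C code; the pattern it describes — track whether any revoke took effect across the whole command and emit at most one notice — can be sketched in Python. Names here (`revoke`, the grants dict) are hypothetical and purely illustrative.

```python
# Illustrative sketch of the single-NOTICE pattern described above:
# instead of a notice per object/role pair, remember whether anything
# was revoked and warn once, at the end, only if nothing was.
def revoke(grants, objects, roles):
    notices = []
    anything_revoked = False
    for obj in objects:
        for role in roles:
            # pop() returns None when this pair held no grant to revoke
            if grants.pop((obj, role), None) is not None:
                anything_revoked = True
    if not anything_revoked:
        notices.append("NOTICE: no privileges could be revoked")
    return notices

grants = {("t1", "alice"): "SELECT"}
# Three of the four pairs revoke nothing, but since one pair succeeded,
# no notice is issued at all.
print(revoke(grants, ["t1", "t2"], ["alice", "bob"]))  # → []
```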
-
By Daniel Gustafsson
Commit 3fe43b8a introduced a lock upgrade in the plan revalidation for UDFs. This makes the lock acquired in RevalidateCachedPlanWithParams() match CdbTryOpenRelation() more closely, in order to avoid distributed deadlock for UPDATE/DELETE DMLs. It does however also upgrade the lock for INSERT, which is overly aggressive. Fix by only upgrading the lock for the two specified DML commands. Also includes an isolationtest test that causes distributed deadlock without this patch. This solves reported cases of deadlock introduced around INSERTs in UDFs.
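The fix boils down to making the lock mode depend on the DML command. A minimal Python sketch of that decision, under the assumption that the two lock-mode names follow Postgres conventions (`lock_mode_for` itself is hypothetical):

```python
# Hedged sketch of the fix described above: upgrade the lock mode only for
# UPDATE and DELETE, leaving INSERT at the weaker lock.
ROW_EXCLUSIVE = "RowExclusiveLock"
EXCLUSIVE = "ExclusiveLock"

def lock_mode_for(command):
    # UPDATE/DELETE need the stronger lock to match CdbTryOpenRelation()
    # and avoid distributed deadlock; INSERT does not, so upgrading it
    # was overly aggressive.
    if command in ("UPDATE", "DELETE"):
        return EXCLUSIVE
    return ROW_EXCLUSIVE

print(lock_mode_for("INSERT"))  # → RowExclusiveLock
```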
-
By Daniel Gustafsson
The simplejson library was partially imported but no longer used, so remove it. Suds seems to have been vendored more intact, but it also appears unused, so remove it as well.
-
By Karen Huddleston
This was accidentally removed in the commit that changed debug_sleep.
-
By Jesse Zhang
-
By Venkatesh Raghavan
While porting the test from tinc, we added a schema for each test. During refactoring we forgot to use the schema name and the correct table name in the test query.
-
By David Yozie
-
By PA Toolsmiths
-
By Divya Bhargov
The failed cluster will now remain running for some time and can be accessed.

Signed-off-by: Ed Espino <eespino@pivotal.io>
-
By Mel Kiyama
* docs: PL/Container - add information about disk quotas. The information is added to the Notes section. Also:
  - edited some existing information
  - fixed the descriptions of plcontainer_refresh_config and plcontainer_show_config to be views, not functions
* docs: plcontainer - fix typo
* docs: PL/Container - clarified when the base device size is displayed by docker info
-
- 04 Dec 2017, 4 commits
-
-
By Daniel Gustafsson
-
By Adam Lee
ServiceUnavailable and RequestTimeout are errors that need to be retried; this commit treats them as S3ConnectionError. Errors returned by S3 other than 500, 503, and RequestTimeout are still treated as S3LogicError.
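The classification above is straightforward to express in code. This is a Python sketch, not the extension's actual (C++) implementation; the exception class names mirror the commit, while `classify` is a hypothetical helper. Note that ServiceUnavailable is S3's HTTP 503 error.

```python
# Illustrative version of the S3 error classification described above:
# 500, 503 (ServiceUnavailable), and RequestTimeout are retryable
# connection errors; everything else is a logic error.
class S3ConnectionError(Exception):
    pass

class S3LogicError(Exception):
    pass

def classify(http_status, error_code):
    # RequestTimeout arrives with a 400 status but still warrants a retry.
    if http_status in (500, 503) or error_code == "RequestTimeout":
        return S3ConnectionError
    return S3LogicError

print(classify(503, "ServiceUnavailable").__name__)  # → S3ConnectionError
```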
-
By Shreedhar Hardikar
Move the gporca regression test out of the parallel group so that the gp_fault_injector functionality works correctly. Also, as it turns out, ORCA is sometimes used to run PL/pgSQL queries even when the optimizer GUC is set to off. So when gporca sets up the gp_fault_injector, it can get activated later on in the parallel group that the qp_functions_in_from test is part of. Therefore, reset the fault in gporca just in case.
-
By Richard Guo
-
- 02 Dec 2017, 12 commits
-
-
By Heikki Linnakangas
These were added back in 2009 to work around DNS issues on Mac OS X. The comment there says it shouldn't really be needed, and should be removed once we gain confidence that it's not needed. I'm feeling confident now; there are no hacks like this in the upstream, and I don't recall any reports of issues like this.
-
By Heikki Linnakangas
It was used by the old nb_classify() aggregate function. It was removed in commit fae97ae7, but this type was left over.
-
By Heikki Linnakangas
These tests were failing because commit 724f9d27 changed the error message. Instead of just memorizing the new expected output, remove these tests altogether; we have similar tests in the main regression suite, and that ought to be enough.
-
By Shreedhar Hardikar
-
By Shreedhar Hardikar
To support that, this commit adds two new ORCA APIs:

- SignalInterruptGPOPT(), which notifies ORCA that an abort is requested (must be called from the signal handler)
- ResetInterruptsGPOPT(), which resets ORCA's state to before the interruption, so that the next query can run normally (needs to be called only on the QD)

Also check for interrupts right after ORCA returns.
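The two APIs above follow a common pattern: the signal handler only sets a flag, and the long-running computation polls it. A minimal Python sketch of that pattern — the GPOPT function names come from the commit, but this class and its method names are hypothetical:

```python
# Sketch of the set-flag-in-handler / poll-in-loop interrupt pattern
# that the two ORCA APIs above implement.
class InterruptState:
    def __init__(self):
        self.pending = False

    def signal_interrupt(self):
        # ~ SignalInterruptGPOPT(): only flips a flag, so it is safe to
        # call from a signal handler.
        self.pending = True

    def reset_interrupts(self):
        # ~ ResetInterruptsGPOPT(): restore pre-interruption state so the
        # next query runs normally (called only on the QD).
        self.pending = False

    def check(self):
        # Polled periodically by the optimizer, and once more right after
        # the optimizer returns.
        if self.pending:
            raise KeyboardInterrupt("query canceled")
```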
-
By Shreedhar Hardikar
GPDB uses "runaway query termination" to kill memory-intensive sessions when total memory usage goes beyond the "red zone" limit. The "red zone detector" identifies the session consuming the most memory. If that session is also currently active (i.e. not in ReadCommand()), it is selected as the "primary" runaway session; otherwise the session consuming the next most memory is selected as the "secondary" runaway session, since an idle session cannot clean itself up. Once selected, the session attempts cleanup by calling elog(ERROR). However, under certain conditions, e.g. critical sections, the cleanup must be skipped.

If the primary runaway session is idle and another session is marked as secondary, we should not terminate the secondary session if it is executing an administrative command (e.g. a database restart). This is handled by skipping cleanup for secondary runaway sessions executing as superuser. We also want to avoid cancelling a session outside of an active transaction, since it will not be able to free up any more resources.

This commit refactors the logic in RunawayCleaner_StartCleanup() to only cancel the query under the conditions described above. Furthermore, it makes sure that superuser() is called only when executing a transaction; otherwise it can lead to a PANIC if it needs to access the catalog.

Also update the runaway cleaner unit tests to include both primary and secondary runaway scenarios, and add a unit test for runaway cleanup when called outside of a transaction.
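The decision rules described above condense to a small predicate. This is a Python sketch with hypothetical parameter names, not the C function from RunawayCleaner_StartCleanup(); it only captures the two skip conditions the commit adds:

```python
# Condensed sketch of the runaway-cleanup decision described above.
def should_start_cleanup(is_primary, is_superuser, in_transaction):
    if not in_transaction:
        # Outside an active transaction the session cannot free more
        # resources, and calling superuser() here could PANIC.
        return False
    if not is_primary and is_superuser:
        # A secondary runaway session running as superuser may be doing
        # administrative work (e.g. a database restart): leave it alone.
        return False
    return True
```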
-
By Shoaib Lari
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
By Dhanashree
This was missed in commit 407b2880.
-
By sambitesh
-
By Ivan Leskin
Add a new compression option for append-optimized tables, "zstd". It is generally faster than zlib or quicklz, and compresses better; or at least it can be faster or compress better, if not both at the same time, by adjusting the compression level. A major advantage of Zstandard is its wide tuning range for choosing the trade-off between compression speed and ratio.

Update documentation to mention "zstd" alongside "zlib" and "quicklz". More could be done; all the examples still use zlib or quicklz, for example, and I think we want to emphasize Zstandard more in the docs over those other options, but this is the bare minimum to keep the docs factually correct.

Using the new option requires building the server with the libzstd library. A new --with-zstd option is added for that. The default is to build without libzstd for now, but we should probably change the default to on after we have had a chance to update all the buildfarm machines to have libzstd.

Patch by Ivan Leskin, Dmitriy Pavlov, Anton Chevychalov. Test case, docs changes, and some minor editorialization by Heikki Linnakangas.
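The speed-versus-ratio knob described above is the same kind of compression-level parameter that zlib exposes. Zstandard is not in the Python standard library, so as an analogy only, here is the equivalent trade-off demonstrated with zlib (the data and level choices are arbitrary illustrations, not GPDB defaults):

```python
# zlib analogy for the compression-level trade-off described above:
# a lower level is faster but produces larger output, a higher level
# is slower but produces smaller output. Zstandard exposes the same
# kind of knob over a wider tuning range.
import zlib

data = b"greenplum append-optimized table row " * 1000

fast = zlib.compress(data, 1)   # level 1: favor speed
best = zlib.compress(data, 9)   # level 9: favor compression ratio

# Both round-trip losslessly; only size and CPU cost differ.
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
print(len(fast), len(best))
```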
-
By Chris Hajas
Author: Chris Hajas <chajas@pivotal.io> Author: Karen Huddleston <khuddleston@pivotal.io>
-
By Heikki Linnakangas
These OIDs, in the range 3012-3048, are about to be used for different things in the upstream, in the commits we're just about to merge from PostgreSQL 8.4. Renumber the GPDB-added objects out of the way:

- 'pg_bitmapindex' pg_namespace entry: 3012 -> 7012
- 'bitmap' pg_am entry: 3013 -> 7013
- all bitmap index opfamilies: 30XX -> 70XX
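The renumbering above keeps the low digits and moves each OID from the 30XX range to 70XX, i.e. it adds 4000. A tiny sketch of that mapping (`renumber` is a hypothetical helper, not part of the catalog tooling):

```python
# The 30XX -> 70XX renumbering described above: add 4000, keeping the
# low digits, so 3012 -> 7012, 3013 -> 7013, and so on.
def renumber(oid):
    if not 3000 <= oid < 3100:
        raise ValueError(f"{oid} is not in the 30XX range")
    return oid + 4000

print(renumber(3012), renumber(3013))  # → 7012 7013
```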
-