- 02 Jun 2018, 1 commit

Committed by Taylor Vesely
The return value of tzparse() has changed as of commit b749790a, but the corresponding tzparse() call in score_timezone() was never updated. As a result, under certain circumstances the server could crash with SIGSEGV while running identify_system_timezone(). Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
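The failure mode can be sketched in a language-neutral way: a function whose return convention changed from "parsed object" to "boolean success flag plus out-parameter", with one call site never updated. All names and shapes below are illustrative, not the actual GPDB/IANA code.

```python
class TimezoneState:
    """Stand-in for the parsed timezone state (illustrative only)."""
    def __init__(self, name):
        self.name = name

def tzparse_new(spec, out):
    # New convention: fill an out-parameter, return a success flag.
    if not spec:
        return False
    out.append(TimezoneState(spec))
    return True

def stale_caller(spec):
    # A caller still written against the old convention treats the
    # boolean result as the state object -- the analogue of the stale
    # tzparse() call in score_timezone() dereferencing a bad pointer.
    state = tzparse_new(spec, [])
    return state.name  # AttributeError here stands in for the SIGSEGV
```

In C the equivalent mistake dereferences garbage instead of raising a clean exception, which is why the symptom was a segfault rather than an error message.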

- 04 May 2018, 1 commit

Committed by Mel Kiyama
Only B-tree and bitmap indexes are supported. GPORCA ignores indexes created with unsupported indexing methods.

- 03 May 2018, 1 commit

Committed by Karen Huddleston
This was supposed to be supported with the addition of the content id to filenames, but it was not working. Added a feature to the gpddboost --readFile option to accept a regular expression as the filename and search the data domain directory for a file that matches. If a file exists with the matching content id but a different dbid, we will find it and be able to restore. Co-authored-by: Chris Hajas <chajas@pivotal.io> Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>

- 02 May 2018, 1 commit

Committed by Bhuvnesh Chaudhary
Function within_agg_make_baseplan() invokes choose_deduplicate(), which creates a flattened rte on the assumption that it will be generated later in query_planner, and releases it before returning. However, within_agg_make_baseplan() already takes care of releasing the memory during deduplicate optimization, and it crashes while trying to release it again. Also, before this function is invoked, query_planner has already generated the flattened rte (it is not going to change), so we need not generate it again in choose_deduplicate(). In GPDB4, choose_deduplicate() does not have the code removed in this commit, and the issue reported on 5X does not exist in GPDB4. In GPDB master, this portion of the code has changed significantly due to the merge from upstream, and neither choose_deduplicate() nor the issue exists. Adding tests to validate the change.

- 01 May 2018, 3 commits

Committed by Bhuvnesh Chaudhary
For text, varchar, char and bpchar, ORCA does not collect MCV and histogram information, so the calculation of NDVRemain and FreqRemain must be updated to account for it. For such columns, NDVRemain is the stadistinct value available in pg_statistic, and FreqRemain is everything except the NULL frequency. Earlier, NDVRemain and FreqRemain for such columns would be 0, resulting in poor cardinality estimation and suboptimal plans. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io> (cherry picked from commit 4a5c58a5)
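A minimal sketch of the described computation (the function and argument names are assumptions for illustration, not ORCA's actual API): with no MCVs or histogram for a text-like column, all distinct values become NDVRemain and all non-NULL frequency becomes FreqRemain.

```python
def remaining_stats(stadistinct, null_freq):
    """For columns where ORCA collects no MCVs/histogram (text, varchar,
    char, bpchar), everything except NULLs counts as 'remaining'."""
    ndv_remain = stadistinct        # all distincts, from pg_statistic
    freq_remain = 1.0 - null_freq   # everything except the NULL frequency
    return ndv_remain, freq_remain
```

With the old behavior both values were 0, i.e. the estimator believed the column had no values outside the (empty) MCV list.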

Committed by Lisa Owen
* docs - add content for backup storage plugin api
* some edits requested by david
* comments from mel; remove diagram placeholder
* address some requested edits from karen
* gpbackup creates the local directory
* include restore more
* remove bold styling from command
* clarify streaming backup
* address some review comments
* add backup/restore plugin api cmds to subnav
* remove the on-page toc entries for cmds
* wrap up the edits from karen

Committed by Kris Macoskey
We test planner and orca on each platform; this one was missing. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>

- 27 Apr 2018, 9 commits

Committed by Ning Yu
The resgroup dummy backend currently generates a warning in the Probe() call; this is not expected, as that function is designed to be called whether or not resgroup is enabled. Removed the warning message from the dummy Probe(). Also updated the warning messages in the dummy backend. It used to generate warning messages like this: "cpu rate limitation for resource group is unsupported on this system". That message was originally introduced when resgroup supported only cpu rate limitation, but now that there are more supported capabilities the message should be updated.

Committed by Ning Yu
Dump the new resgroup capability memory_auditor in pg_dumpall.

Committed by Tom Lane
DST law changes in Brazil, Sao Tome and Principe. Historical corrections for Bolivia, Japan, and South Sudan. The "US/Pacific-New" zone has been removed (it was only a link to America/Los_Angeles anyway).

Committed by Daniel Gustafsson
This backports the commit below, which moved from the raw timezone source files to the newly introduced compact format.

    commit 097b24cea68ac167a82bb617eb1844c8be4eaf24
    Author: Tom Lane <tgl@sss.pgh.pa.us>
    Date:   Sat Nov 25 15:30:11 2017 -0500

    Replace raw timezone source data with IANA's new compact format.

    Traditionally IANA has distributed their timezone data in pure source
    form, replete with extensive historical comments. As of release 2017c,
    they've added a compact single-file format that omits comments and
    abbreviates command keywords. This form is way shorter than the pure
    source, even before considering its allegedly better compressibility.
    Hence, let's distribute the data in that form rather than pure source.

    I'm pushing this now, rather than at the next timezone database update,
    so that it's easy to confirm that this data file produces compiled zic
    output that's identical to what we were getting before.

    Discussion: https://postgr.es/m/1915.1511210334@sss.pgh.pa.us

Backported from master.

Committed by Taylor Vesely

Committed by Taylor Vesely
These FIXME messages aren't relevant to the 5X_STABLE branch, only for master. Remove them.

Committed by Taylor Vesely
Commit b749790a backports changes from Postgres 10.1 to parse the most recent version of the IANA timezone library format, but these changes also included some backward-incompatible changes in behavior. Specifically, pg_timezone_pre_initialize() and identify_system_timezone() were removed, resulting in the timezone defaulting to 'GMT' on database startup even if the TimeZone GUC was explicitly set in postgresql.conf. This commit restores the previous behavior without reverting the IANA timezone library changes.

Committed by Dhanashree Kashid
Signed-off-by: Sambitesh Dash <sdash@pivotal.io>

Committed by Dhanashree Kashid
It is common to have a large IN/NOT IN list in user queries, so 25 is too low a bound. After running several experiments, 100 turned out to be a good threshold value for this GUC. Signed-off-by: Sambitesh Dash <sdash@pivotal.io> (cherry picked from commit e46039dc)
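Illustratively, the GUC acts as a cutoff on IN-list length; the constant name and the exact decision it gates below are assumptions for the sketch, not the actual optimizer code.

```python
ARRAY_EXPANSION_THRESHOLD = 100  # raised from the old default of 25

def treat_as_array_comparison(in_list):
    # Beyond the threshold, the optimizer stops expanding the IN-list
    # into individual disjuncts and handles it as one array comparison,
    # keeping planning time bounded for very long lists.
    return len(in_list) > ARRAY_EXPANSION_THRESHOLD
```

With the old bound of 25, even moderately sized hand-written IN-lists crossed the threshold and lost per-value treatment.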

- 26 Apr 2018, 7 commits

Committed by Mel Kiyama
* docs: gpbackup/gprestore S3 plugin
  - add gpbackup/gprestore --plugin-config option
  - add S3 plugin information
  - other minor fixes: add index as object, support table data and metadata for --jobs > 1
  PR for 5X_STABLE. Will be ported to MAIN.
* docs: review updates for gpbackup/gprestore S3 plugin
  - moved S3 links to Notes section
  - changed name from S3 plugin to S3 storage plugin
  - removed draft comments
* docs: gpbackup s3 plugin - change binary plugin name to gpbackup_s3_plugin
* docs: s3 plugin - fix typo

Committed by Lisa Owen

Committed by Chuck Litzell

Committed by Pengzhou Tang
This commit is a modified version of 8286b61f76014d1f3d58176ec3f12bb2dc9a9dcc; the only difference is that we put new entries at the end of the cacheinfo array, so extensions like madlib and plcontainer that link against syscache.h still see the same cache identifier values.
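The compatibility concern here is the classic append-only rule for arrays and enums that define numeric identifiers: extensions compiled against the old syscache.h bake the IDs into their binaries, so new entries must be appended, never inserted. A toy sketch (names and values are illustrative, not the real syscache IDs):

```python
import enum

class SysCacheId(enum.IntEnum):
    # Entries that existed before keep their positions...
    AGGFNOID = 0
    AMNAME = 1
    AMOID = 2
    # ...and new entries are appended at the end, so a binary built
    # against the old header still resolves the old IDs correctly.
    RESGROUPOID = 3
```

Inserting RESGROUPOID in the middle instead would shift every later value, which is exactly what broke the madlib tests in the original commit.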

Committed by Pengzhou Tang
This reverts commit 8286b61f76014d1f3d58176ec3f12bb2dc9a9dcc, which broke madlib test cases unexpectedly.

Committed by Pengzhou Tang
This commit makes access to pg_resgroup a little bit faster.

Committed by Lisa Owen
* docs - sql ref page updates for resgroup memory_auditor
* edits from engineering review
* some of the edits requested by david
* use plural where appropriate

- 25 Apr 2018, 1 commit

Committed by Bhuvnesh Chaudhary
Fix qual_is_pushdown_safe_set_operation() to correctly resolve the qual vars and identify whether there are any window references in the top level of the set operation's left or right subqueries. Before commit b8002a9, instead of starting with the rte of the level where the qual is attached, we started scanning the rte of the subqueries of the left and right args of the setop to identify the qual. Because of this the varno didn't match the corresponding RTE, so the quals couldn't be resolved to a winref and were incorrectly pushed down. This caused the planner to return an error during execution.

- 24 Apr 2018, 3 commits

Committed by Ning Yu
- resgroup: dump cgroup memory limit with _granted suffix
- dump cgroup memory usage as `used`

This is to make the dump message consistent between 'vmtracker' and 'cgroup' resgroups.

Committed by Jialun
1) Global shared memory will be used if the query has run out of the group shared memory.
2) Use atomic compare-and-swap operations instead of a lock for higher performance.
3) Modify the test cases according to the new rules.
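The lock-free reservation against the global pool can be sketched as a standard compare-and-swap retry loop. This is a toy model (class and function names are assumptions); the real code would use hardware atomic instructions in C, not Python.

```python
class AtomicInt:
    """Toy stand-in for a shared-memory word updated via hardware CAS."""
    def __init__(self, value):
        self.value = value

    def compare_and_swap(self, expected, new):
        # Atomically: if the value is still `expected`, store `new`
        # and report success; otherwise another backend won the race.
        if self.value == expected:
            self.value = new
            return True
        return False

def reserve_from_global_pool(pool, want):
    # Retry loop: re-read, check headroom, attempt CAS. A concurrent
    # update between the read and the CAS just forces one more pass,
    # so no lock is ever held.
    while True:
        old = pool.value
        if old < want:
            return False  # global shared memory exhausted
        if pool.compare_and_swap(old, old - want):
            return True
```

Compared with a spinlock around the counter, the CAS loop never blocks other backends; contention only costs extra iterations.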

- 23 Apr 2018, 3 commits

Committed by Pengzhou Tang
This reverts commit c59e3c8f, which broke madlib test cases unexpectedly.

Committed by Pengzhou Tang
This commit makes access to pg_resgroup a little bit faster.

Committed by Peifeng Qiu

- 21 Apr 2018, 1 commit

Committed by mkiyama
see PR 4619

- 19 Apr 2018, 4 commits

Committed by Ning Yu
In ResGroupDropFinish() an uninitialized memory address can be accessed for a couple of reasons: 1. the group pointer is not initialized on segments; 2. the hash table node pointed to by group is recycled in removeGroup(). This invalid access can cause a crash on segments. Also move some global vars to resgroup.c. They were in resgroup-ops-linux.c, which is only compiled and linked on Linux, so on other OSes such as macOS the vars could not be found.

Committed by Ning Yu
binary_swap_gpdb was an input of all resgroup jobs, but as it is not `get` by the resgroup sles job, an error was triggered: missing inputs: binary_swap_gpdb. Fixed by marking it as optional.

Committed by Ning Yu
Bring back the resgroup memory auditor feature:
- 4354d336
- 8ede074c
- 140d4d2e

Memory auditor is a new feature introduced to allow external components (e.g. pl/container) to be managed by resource groups. This feature requires a new gpdb dir to be created in the cgroup memory controller; however, on the 5X branch, unless users created this new dir manually, the upgrade from a previous version would fail. In this commit we provide backward compatibility by checking the release version:
- on the 6X and master branches the memory auditor feature is always enabled, so the new gpdb dir is mandatory;
- on the 5X branch the memory auditor feature can be enabled only if the new gpdb dir is created with proper permissions; when it is disabled, `CREATE RESOURCE GROUP WITH (memory_auditor='cgroup')` will fail with guidance on how to enable it.

Binary swap tests are also provided to verify backward compatibility in future releases. As cgroup needs to be configured to enable resgroup, we split the resgroup binary swap tests into two parts:
- resqueue-mode-only tests, which can be triggered in the icw_gporca_centos6 pipeline job after the ICW tests; these have no requirements on cgroup;
- complete resqueue & resgroup mode tests, which can be triggered in the mpp_resource_group_centos{6,7} pipeline jobs after the resgroup tests; these need cgroup to be properly configured.
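The 5X-branch gating could look roughly like this check (the cgroup mount path, the directory name, and the function name are all assumptions for illustration):

```python
import os

def memory_auditor_available(cgroup_memory_root):
    # On 5X the feature is enabled only when the gpdb directory exists
    # in the cgroup memory controller with usable permissions; on 6X
    # and master its absence is a hard error instead.
    gpdb_dir = os.path.join(cgroup_memory_root, "gpdb")
    return os.path.isdir(gpdb_dir) and os.access(gpdb_dir, os.W_OK)
```

When such a check returns False, `CREATE RESOURCE GROUP ... WITH (memory_auditor='cgroup')` is rejected with a hint on how to create the directory, rather than failing the whole upgrade.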

Committed by Ning Yu
* resgroup: make cgroup memsw.limit_in_bytes optional.
* resgroup: retry proc migration for rmdir to succeed.
* resgroup: add delay in a testcase.
* resgroup: use correct log level in cgroup ops.

- 18 Apr 2018, 1 commit

Committed by dyozie

- 17 Apr 2018, 3 commits

Committed by Mel Kiyama
backport of https://github.com/greenplum-db/gpdb/pull/4854

Committed by Sambitesh Dash

Committed by Mel Kiyama
* docs: Add guc verify_gpfdists_cert
  - added guc definition to list of gucs
  - added link to guc from appropriate topics
  PR for 5X_STABLE. Will be ported to MAIN.
* docs: verify_gpfdists_cert guc updates
  - add SSL exceptions that are ignored
  - other minor edits
* docs: guc verify_gpfdists_cert - fix typos

- 16 Apr 2018, 1 commit

Committed by Pengzhou Tang
Executing a query plan containing a large number of slices may slow down the entire Greenplum cluster: each "n-gang" slice corresponds to a separate process per segment. An example of such a query is a UNION ALL atop several complex views. To prevent this situation, add a GUC gp_max_slices and refuse to execute plans whose number of slices exceeds that limit. Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
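The guard described amounts to a pre-execution check of the plan's slice count against the GUC. A sketch of that logic (the error wording and treating 0 as "unlimited" are assumptions, not the exact GPDB behavior):

```python
gp_max_slices = 100  # example setting; 0 would mean no limit

def check_slice_limit(num_slices):
    # Refuse to run plans whose slice count exceeds the limit, instead
    # of letting them spawn one process per segment for every slice.
    if gp_max_slices > 0 and num_slices > gp_max_slices:
        raise RuntimeError(
            "query plan has %d slices, exceeding gp_max_slices (%d)"
            % (num_slices, gp_max_slices))
```

Failing fast at dispatch time is much cheaper than letting a pathological UNION ALL fan out into thousands of backend processes across the cluster.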