- 22 Dec 2017, 11 commits
-
-
Committed by sambitesh

This query was added to "percentile" in commit b877cd43, as outer references were allowed after we backported ordered-set aggregates. But the query used an ARRAY sublink, which semantically does not guarantee ordering, causing sporadic failures in tests. This commit tweaks the test query so that it has a deterministic ordering in the output array. We considered just adding an ORDER BY to the subquery, but ultimately we chose to use `array_agg` with an `ORDER BY`, because subquery order is not preserved per the SQL standard. Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Lisa Owen

* docs - correct the example using --full and -d options
* avoid using --full and --schema-only together, do not prohibit
* clean up schema copy bullet point
-
Committed by Ashwin Agrawal
Mostly re-used init-mirrors for this functionality. Author: Ashwin Agrawal <aagrawal@pivotal.io> Author: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Heikki Linnakangas
This seems to have been similar to the fault injector code we have, but not used by anything as far as I can see.
-
Committed by Alexandra Wang

Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
-
Committed by yanchaozhong
-
Committed by yanchaozhong

* Update gpinitsystem

Edit commit message: Marbin Tan <mtan@pivotal.io>

Change the symbols to actual words to better convey the meaning:
Primary segment # -> Number of primary segments
Mirror segment # -> Number of mirror segments
-
Committed by yanchaozhong
-
Committed by dyozie
-
Committed by Chris Hajas
Author: Chris Hajas <chajas@pivotal.io>
-
Committed by Jacob Champion
The portals_updatable test makes use of the public.bar table, which the partition_pruning test occasionally dropped. To fix, don't fall back to the public schema in the partition_pruning search path. Put the temporary functions in the partition_pruning schema as well for good measure. Author: Asim R P <apraveen@pivotal.io> Author: Jacob Champion <pchampion@pivotal.io>
-
- 21 Dec 2017, 17 commits
-
-
Committed by Heikki Linnakangas
The locale information is not used for anything, so no need to collect and pass it around.
-
Committed by Kris Macoskey

Installing packages on every execution of a test suffers from any upstream flakiness. Therefore, installation of generic packages is being moved to the underlying OS, in this case the AMI used for the CCP job. Rather than outright removing the package installation, it is a much better pattern to replace installation with a validation of the assumptions made about the packages installed on the underlying OS the test will run within.

The call `yum --cacheonly list installed [List of Packages]` does a number of things:
1. For the given list of packages, the command returns 0 if all are installed, and 1 if any are not.
2. The `--cacheonly` flag prevents the call from issuing an upstream repository metadata refresh. This is not a requirement, but it is an easy optimization that avoids upstream flakiness even further.

Note: `--cacheonly` assumes that the repository metadata cache has been refreshed at least once; if not, the flag will cause the command to fail. We assume this has been done at least once on the underlying OS in order to install the packages in the first place.

Signed-off-by: Alexandra Wang <lewang@pivotal.io> Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
-
Committed by C.J. Jameson

Fix gpexpand the same way as commit 4b439bc9: Port numbers used by GPDB should be below the kernel's ephemeral port range. The ephemeral port range is given by the net.ipv4.ip_local_port_range kernel parameter, which is set to 32768 --> 60999. If GPDB uses port numbers in this range, an FTS probe request may not get a response, resulting in FTS incorrectly marking a primary down. We change the example configuration files to lower the port numbers to a proper range. Author: Marbin Tan <mtan@pivotal.io> Author: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Lisa Owen

* docs - review zstd doc additions, misc related edits
* modify ddl storage example 1 to use zstd
-
Committed by Bhuvnesh Chaudhary
-
Committed by Shivram Mani
-
Committed by Lisa Owen

* docs - add note about MaxStartups to relevant utility cmds
* uses ...
-
Committed by Heikki Linnakangas

Maintaining these lists, in particular the list of fault injector identifiers, or "fault points", is a bit tedious, because the names and enum values of the identifiers were kept in separate lists, in two different files, which needed to be kept in sync. To ease that pain, merge the two lists into a single file, with the identifier name and ID for each entry on the same line. Use C macro magic to generate the enum and the array of strings from the single header file, similar to how the keyword list in src/include/parser/kwlist.h works.

Remove the list of fault injection points from the --help message of clsInjectFault.py. It's just too easy to forget to update that. We could write some perl magic or something to create that list from the authoritative list in the header file at compilation time, but it hardly seems worth the effort, as this is just a developer tool. Advise the user to look directly into the header file instead.
-
Committed by Nadeem Ghani

There was a step defined but never used, and a helper method only called from that step. This commit removes both. Author: Nadeem Ghani <nghani@pivotal.io> Author: Shoaib Lari <slari@pivotal.io>
-
Committed by Nadeem Ghani

The gptransfer test used a step that looked for a logfile with a date in the name. If that logfile existed at 11:59PM the day before, and the test looked for it at 12:00AM the next day, it "wouldn't be there". Refactor the test so that assertions about using the typical gpAdminLogs directory are as banal as possible. Also see https://github.com/greenplum-db/gpdb/pull/4072/commits/84bf83d5013f891547dc21576d04a281cfd2faf7. Author: Nadeem Ghani <nghani@pivotal.io> Author: Shoaib Lari <slari@pivotal.io>
-
Committed by sambitesh
-
Committed by Haisheng Yuan
-
Committed by Haisheng Yuan
-
Committed by Haisheng Yuan
This reverts commit 4fac169fb1204de54a05ac14fba1a5e4d9f82c08.
-
Committed by Haisheng Yuan
-
Committed by Haisheng Yuan

We had some customers report that the planner generates a plan using a seqscan instead of a bitmapscan, and the execution time of the seqscan is 5x slower than the bitmapscan, even though the statistics were up to date and quite accurate. Bitmap table scan uses a formula to interpolate between random_page_cost and seq_page_cost to determine the cost per page. But the default value of random_page_cost is 100x the value of seq_page_cost. With the original cost formula, random_page_cost predominates in the final cost result; even though the formula is declared to be non-linear, it is still more or less linear, which can't reflect the real cost per page when a majority of pages are fetched. Therefore, the cost formula is updated to a truly non-linear function that reflects both random_page_cost and seq_page_cost for different percentages of pages fetched.

For example, with the default values random_page_cost = 100 and seq_page_cost = 1, if 80% of pages are fetched, the cost per page under the old formula is 11.45, roughly 10x a seqscan's per-page cost, because the cost is dominated by random_page_cost. With the new formula, the cost per page is 1.63, which reflects the real cost better, in my opinion. [#151934601]
-
Committed by Heikki Linnakangas

On my laptop, with gcc -O, these comparisons didn't work as intended, and as a result the FTS probe messages were not processed, and gp_segment_configuration always claimed the mirrors were down, even though they were running OK.
-
- 20 Dec 2017, 6 commits
-
-
Committed by Shreedhar Hardikar
-
Committed by Shreedhar Hardikar

As pointed out by Heikki, maintaining another variable to mirror one in the database system would be error-prone and cumbersome, especially while merging with upstream. This commit initializes ORCA with a pointer to a GPDB function that returns true when QueryCancelPending or ProcDiePending is set. This way we no longer have to micro-manage setting and resetting some internal ORCA variable, or touch signal handlers. This commit also reverts commit 0dfd0ebc "Support optimization interrupts in ORCA" and reuses tests already pushed by 916f460f and 0dfd0ebc.
-
Committed by Nadeem Ghani
This reverts commit 16c4b480.
-
Committed by Lisa Owen

* docs - persist resource group configuration
* recreate, not persist, the cgroup hierarchies
* configure service start cmds with preferred auto-start tool
-
Committed by C.J. Jameson

Fix gpexpand the same way as commit 4b439bc9: Port numbers used by GPDB should be below the kernel's ephemeral port range. The ephemeral port range is given by the net.ipv4.ip_local_port_range kernel parameter, which is set to 32768 --> 60999. If GPDB uses port numbers in this range, an FTS probe request may not get a response, resulting in FTS incorrectly marking a primary down. We change the example configuration files to lower the port numbers to a proper range. Author: Marbin Tan <mtan@pivotal.io> Author: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Ed Espino

To facilitate pipeline maintenance, a Python utility `gen_pipeline.py` is used to generate the production pipeline (gpdb_master-generated.yml). It can also be used to build custom pipelines for developer use.

Pipeline name change: For the time being, the pipeline will continue to be checked in. Its name has changed from "pipeline.yml" to "gpdb_master-generated.yml". We will revisit the request to not check in the generated pipeline file, as we will need to rework the validate_pipeline job, which requires its existence.

Please see the accompanying README.md file for additional details.

Here are the pipeline jobs which have been renamed to allow job reuse:

from:
- cs_filerep_e2e_full_primary
- cs_filerep_e2e_incr_primary
- cs_filerep_e2e_full_mirror
- cs_filerep_e2e_incr_mirror
- cs_pg_twophase_switch
- regression_tests_pxf_cdh_rpm
- regression_tests_pxf_cdh_tar
- regression_tests_pxf_hdp_rpm
- regression_tests_pxf_hdp_tar

to:
- cs_filerep_end_to_end_full_primary
- cs_filerep_end_to_end_incr_primary
- cs_filerep_end_to_end_full_mirror
- cs_filerep_end_to_end_incr_mirror
- cs_switch_01_12
- regression_tests_pxf_HDP_rpm
- regression_tests_pxf_HDP_tar
- regression_tests_pxf_CDH_rpm
- regression_tests_pxf_CDH_tar

Production CCP docker image: changed from "toolsmiths/ccp" to "pivotaldata/ccp".

Remove compile_gpdb_windows_cl from the gate_compile_end dependency.

Acknowledgment: The idea for the utility originated from the Apache HAWQ project. It was originally authored by Ivan Weng (@wengyanqing).
-
- 19 Dec 2017, 4 commits
-
-
Committed by Sambitesh Dash

They were leftover from the Perforce repo, and they should never have been checked in. Now, this normally wouldn't have been an issue, except for commit history cleanliness: we would just silently overwrite the output files with actual output. But what if in CI those "output files" have different permissions? Turns out we would silently leave them alone. Two steps down the road, we have a diff failure ... This commit removes -- at long last -- those output files. This fix is forward-ported from an older, closed-source version of Greenplum, where we first spotted this oversight. Strangely, this is not causing any test failures on master or 5 ... But it should still be ported, if only for cleanliness' sake. Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by David Yozie

* Doc edits for gptransfer --schema-only change
* Change header title; add xref
* --d -> -d
* remove extraneous comma
* changing -d behavior to match -t; making sentences parallel
-
Committed by Mel Kiyama

PR for 5X_STABLE. Will be ported to MAIN.
-
Committed by Lisa Owen

* docs - costing diffs between gporca/planner and RQ limits
* mention fallback
* RQs do not align/differentiate costs between planners
-
- 18 Dec 2017, 2 commits
-
-
Committed by Chuck Litzell

* Enhance hardening docs on trust and ident
* Format source. No content changes.
-
Committed by Lav Jain
-