- 10 Oct 2017, 8 commits
-
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
-
Committed by Jimmy Yih
The gp_replica_check tool is for 6.0+ development, testing replication consistency between pairs of primary and mirror segments. Currently the tool is only supported on a single-node cluster. The tool creates a new extension called gp_replica_check and then invokes the extension on all the primary segments in utility mode. The extension does an md5 checksum validation on each block of the relation and reports a mismatch for the first inconsistent block. Each block read from disk is passed through the internal masking functions so that false mismatches, such as header or hint-bit differences, are not reported. The user can specify which relation types and databases to validate; the default is all. Authors: Abhijit Subramanya and Jimmy Yih
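The per-block comparison described above can be sketched roughly as follows. This is a minimal illustration, not the extension's actual code: the header length, masking rule, and function names here are hypothetical, while the real extension uses per-relation-type masking functions.

```python
import hashlib

BLOCK_SIZE = 8192  # PostgreSQL heap page size

def mask_block(block: bytes) -> bytes:
    # Hypothetical mask: zero out the page header area so fields that
    # may legitimately differ between primary and mirror (e.g. LSN or
    # hint bits) do not produce false mismatches.
    header_len = 24
    return b"\x00" * header_len + block[header_len:]

def first_mismatch(primary: bytes, mirror: bytes):
    """Return the index of the first block whose masked md5 differs, or None."""
    for i in range(0, max(len(primary), len(mirror)), BLOCK_SIZE):
        p = hashlib.md5(mask_block(primary[i:i + BLOCK_SIZE])).hexdigest()
        m = hashlib.md5(mask_block(mirror[i:i + BLOCK_SIZE])).hexdigest()
        if p != m:
            return i // BLOCK_SIZE
    return None
```

Reporting only the first inconsistent block keeps the output short while still pinpointing where the relation files diverge.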
-
Committed by Zhenghua Lyu
The current generate_series costs more memory than before, so we decrease the series size to make the test pass.
-
Committed by Bhuvnesh Chaudhary
If a target list entry is found under a relabelnode, the newly created var node should be nested inside the relabelnode if the vartype of the var node differs from the resulttype of the relabelnode. Otherwise, the cast information is lost and the executor complains about a type mismatch:

```sql
CREATE TABLE t1 (a varchar, b character varying) DISTRIBUTED RANDOMLY;
SELECT array_agg(f) FROM (SELECT b::text as f FROM t1 GROUP BY b) q;
ERROR:  attribute 1 has wrong type (execQual.c:763)  (seg0 slice2 127.0.0.1:25432 pid=7064)
DETAIL:  Table has type character varying, but query expects text.
```
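The wrapping rule can be sketched as follows, with made-up classes standing in for the planner's Var and RelabelType nodes (this is illustrative only, not the planner's actual data structures):

```python
from dataclasses import dataclass

@dataclass
class Var:
    vartype: str

@dataclass
class RelabelType:
    arg: object
    resulttype: str

def replace_under_relabel(relabel: RelabelType, new_var: Var):
    # If the Var's type already matches the relabel's result type, the
    # cast is redundant and the bare Var suffices. Otherwise keep the
    # RelabelType wrapper so the cast (e.g. varchar -> text) is not
    # lost and the executor sees the type it expects.
    if new_var.vartype == relabel.resulttype:
        return new_var
    return RelabelType(arg=new_var, resulttype=relabel.resulttype)
```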
-
- 09 Oct 2017, 1 commit
-
-
Committed by Richard Guo
Previously there was a restriction on the GUC 'max_resource_groups' that it could not be larger than 'max_connections'. This restriction could cause GPDB to fail to start if the two GUCs were not set properly. We decided to decouple these two GUCs and set a hard limit of 100 for 'max_resource_groups'.
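After decoupling, the check amounts to a fixed upper bound rather than a cross-GUC comparison. A hypothetical sketch (the lower bound and function name here are assumptions, not the actual GUC machinery):

```python
MAX_RESOURCE_GROUPS_LIMIT = 100  # hard limit introduced by this commit

def valid_max_resource_groups(value, max_connections=None):
    # Before: value also had to be <= max_connections, so a misconfigured
    # pair of GUCs could prevent the server from starting.
    # After: only the fixed hard limit applies; max_connections is ignored.
    return 0 < value <= MAX_RESOURCE_GROUPS_LIMIT
```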
-
- 07 Oct 2017, 6 commits
-
-
Committed by Shivram Mani
-
Committed by Kavinder Dhaliwal
This commit introduces the gp_recursive_cte test file with a variety of tests focused on exercising references to a recursive CTE in a subquery.
-
Committed by David Yozie
-
Committed by David Yozie
* adding revised, conditionalized install guide in prep for OSS rpm/deb offering
* adding install guide map to main ditamap
-
Committed by Ashwin Agrawal
Without the EXECUTE keyword here, EXPLAIN EXECUTE output is not formatted as an explain plan, which causes failures.
-
Committed by Lav Jain
-
- 06 Oct 2017, 10 commits
-
-
Committed by Dhanashree Kashid
Brings in ORCA changes to support partition elimination with IDFs in the partition predicate. Also updates tests.
-
Committed by Jimmy Yih
In Greenplum 5.0, OID and relfilenode were decoupled. Each QD and QE segment was given its own relfilenode counter (similar to the nextOid counter) which users can manually set (with extreme caution!) using the pg_resetxlog tool. The pg_resetxlog --help output did not display the flag to set the relfilenode counter, which was missed in that patch. Now it properly displays as an available option. Reference: https://github.com/greenplum-db/gpdb/commit/1fd11387d2b9f250c14f0ccb893c0956b1bf1487 Thanks to Marbin Tan for reporting this issue.
-
Committed by Lisa Owen
-
Committed by Mel Kiyama
* docs: gpcrondump/gpdbrestore - include languages with schema-level operations.
  - Add information for both backup and restore.
  - Mention that external items such as shared libraries are not included.
* docs: update based on review comments.
-
Committed by Jamie McAtamney
When including or excluding schemas in gpcrondump, functions may be dumped that require a procedural language to be created upon restore. However, languages are not part of any schema, so they are not already included in such a filter. This commit changes the behavior of gp_dump to always dump procedural languages when including or excluding schemas. It still will not dump them when including or excluding tables. Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>
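The filtering rule the commit describes, which dumps procedural languages for schema-level filters but not table-level ones, can be sketched as follows (the filter-kind labels here are hypothetical, not gp_dump's actual option names):

```python
def should_dump_languages(filter_kind):
    """Decide whether a dump with the given filter should emit
    procedural languages.

    Languages live outside any schema, so schema-level include/exclude
    filters would otherwise silently drop them; table-level filters
    still skip them.
    """
    return filter_kind in ("include-schema", "exclude-schema")
```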
-
Committed by David Sharp
Includes instructions on how to get an updated GCC on CentOS 6, which otherwise has GCC 4.4. These instructions will need to be expanded for devs working on ORCA. README.CentOS.bash automates dependency installation. Recommends sourcing devtoolset-6 in the CentOS dev setup instructions. Signed-off-by: Nikhil Kak <nkak@pivotal.io> Signed-off-by: Amil Khanzada <akhanzada@pivotal.io> Signed-off-by: Ben Christel <bchristel@pivotal.io>
-
-
Committed by Jane Beckman
* Reorganize pgAdmin4 topics into one file
* Move note for better placement.
* Remove files merged into main topic
-
Committed by Alexander Denissov
-
Committed by Chris Hajas
Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>
-
- 05 Oct 2017, 5 commits
-
-
Committed by Asim R P
Use m4.large instances with SSD, as the default t2.medium instances slow the jobs down five-fold compared to Pulse. With m4.large, the slowdown is less than two-fold.
-
Committed by Jimmy Yih
These use i3.xlarge because the tests are IO intensive. Authors: Jimmy Yih and Taylor Vesely Signed-off-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Divya Bhargov
-
Committed by Divya Bhargov
-
Committed by Jim Doty
Build artifacts filled up the disk, so for now we will clean up no matter what. The AIX test appears to have issues if there are concurrent pipelines running against it. The resource pool protects against this failure case. Signed-off-by: Kris Macoskey <kmacoskey@pivotal.io>
-
- 04 Oct 2017, 2 commits
-
-
Committed by Heikki Linnakangas
Upstream commit 220db7cc replaced all the idiomatic DirectFunctionCalls of textin/textout with new handy macros. Do the same for all the remaining such calls in GPDB-specific code.
-
Committed by Venkatesh Raghavan
-
- 03 Oct 2017, 3 commits
-
-
Committed by C.J. Jameson
This is good for Ubuntu, CentOS, and SLES -- we shouldn't have needed to change it. Signed-off-by: Goutam Tadi <gtadi@pivotal.io>
-
Committed by Jamie McAtamney
Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>
-
Committed by C.J. Jameson
This is a temporary affordance to make the file-rename transition smoother. Signed-off-by: Goutam Tadi <gtadi@pivotal.io>
-
- 30 Sep 2017, 5 commits
-
-
Committed by Ning Yu
A slot may be reused on a QE while it is still in use by another QE, which can cause a runtime error. This issue has existed for a long time in theory, but the failure rate was amplified by the recent commit 8005f559. To work around the issue we change the free slot pool from a stack into a queue: slots are allocated from the head and freed to the tail, so a slot will not be reused immediately after being freed. In the worst case all the in-use slots are not freed in time on the QEs, so we also doubled the slot pool size to have enough free slots for new queries even in such a worst case. This is not a fix in any sense, just a quick workaround; we will keep working on this race condition issue. Once a root fix is ready, this workaround will be replaced.
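The stack-to-queue change can be sketched as follows. This is a simplified illustration of the allocation order, not the resource-group code itself:

```python
from collections import deque

class SlotPool:
    """Free slots are taken from the head and returned to the tail,
    so a just-freed slot is reused as late as possible."""

    def __init__(self, nslots):
        self.free = deque(range(nslots))

    def alloc(self):
        return self.free.popleft()  # oldest free slot first (queue order)

    def release(self, slot):
        self.free.append(slot)      # freed slots go to the tail
```

With a stack (push and pop at the same end), `release(s)` followed by `alloc()` would hand back `s` immediately, which is exactly the reuse window the workaround tries to widen.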
-
Committed by Goutam Tadi
- Add paramiko files to PYLIB_DIR [#151180202] Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by C.J. Jameson
- Open-source GPDB compile - No ORCA yet -- getting there soon [#151180202] Signed-off-by: Goutam Tadi <gtadi@pivotal.io>
-
Committed by Alex Diachenko
-
Committed by Haisheng Yuan
Don't transform a large array Const into an ArrayExpr for Orca (#3406). If the number of elements in the array Const is greater than optimizer_array_expansion_threshold, return the original Const unmodified. Otherwise, expansion causes a severe performance issue in the Orca optimizer for arrays with a very large number of elements, e.g. 50K. Fixes issue #3355 [#151340976]
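The threshold check amounts to the following. `optimizer_array_expansion_threshold` is the GUC named in the commit; the function, return shape, and default value below are illustrative assumptions:

```python
def maybe_expand_array_const(elements, threshold=100):
    # Mirror of the commit's rule: only expand a Const array into an
    # explicit element list when it is small enough. Very large arrays
    # (e.g. 50K elements) are left as an opaque Const to avoid a severe
    # Orca optimizer slowdown.
    if len(elements) > threshold:
        return ("Const", elements)          # unmodified
    return ("ArrayExpr", list(elements))    # expanded form
```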
-