- 17 Jan 2018, 3 commits

Committed by Mel Kiyama
* docs: gpbackup new options --include-table, --exclude-table
  - Add new options
  - Update Admin Guide

  PR for 5X_STABLE. Will be ported to MAIN.
* docs: gpbackup - fix typos
Committed by Mel Kiyama
PR for 5X_STABLE. Will be ported to MAIN.
Committed by Mel Kiyama
* docs: plcontainer logging and attribute updates
  - attribute names have changed for logging and network
  - the GUC log_min_messages controls the log level
  - add note about controlling container lifetime

  PR for 5X_STABLE. Will be ported to MAIN.
* docs: update for PL/Container logging
* docs: plcontainer - updates based on review comments
* docs: plcontainer - review updates from dev. tracker stories
  - Add limitations: PL/Container is not supported when GPDB is run in Docker; multi-dimensional arrays are not supported
  - Add ID max length of 63 bytes
  - Clarify terminology
  - Update example output
  - Fix edits and typos
* docs: plcontainer - fix runtime-add example
* docs: pl/container does not support PL/R multi-dim. arrays
* docs: plcontainer - In Docker Images section, remove references to Pivotal
- 16 Jan 2018, 1 commit

Committed by Amos Bird
- 13 Jan 2018, 1 commit

Committed by Lisa Owen
- 12 Jan 2018, 6 commits

Committed by Heikki Linnakangas
To avoid being confused by a user-created function called "sum". Fixes GitHub issue #4185.
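The kind of shadowing this guards against can be sketched in plain SQL (the function below is hypothetical, not from the commit): a user-defined function named `sum` earlier in the search path can shadow the built-in aggregate, while schema-qualifying the call pins it to the system one.

```sql
-- Hypothetical illustration: a user-defined "sum" in the search path.
CREATE FUNCTION public.sum(text) RETURNS text
    AS $$ SELECT 'not the aggregate' $$ LANGUAGE sql;

-- An unqualified call may now resolve to the user's function;
-- qualifying with pg_catalog always picks the built-in aggregate.
SELECT pg_catalog.sum(x) FROM generate_series(1, 3) AS t(x);  -- 6
```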
Committed by Lav Jain

Committed by Karen Huddleston
Author: Karen Huddleston <khuddleston@pivotal.io>
Author: Chris Hajas <chajas@pivotal.io>

Committed by Jimmy Yih
Some of our OS container images have been updated to have conan 1.0.0. This new version changes the conan install command a bit.
Committed by Shreedhar Hardikar
This commit brings in ORCA changes that ensure that a Materialize node is not added under a Filter when its child contains outer references. Otherwise, the subplan is not rescanned (because it is under a Material), producing wrong results. A rescan is necessary because it evaluates the subplan for each of the outer-referenced values. For example:

```
SELECT * FROM A, B
WHERE EXISTS (
  SELECT * FROM E
  WHERE E.j = A.j
    AND B.i NOT IN (SELECT E.i FROM E WHERE E.i != 10));
```

For the above query ORCA produces a plan with two nested subplans:

```
Result
  Filter: (SubPlan 2)
  -> Gather Motion 3:1
     -> Nested Loop
          Join Filter: true
          -> Broadcast Motion 3:3
             -> Table Scan on a
          -> Table Scan on b
  SubPlan 2
    -> Result
         Filter: public.c.j = $0
         -> Materialize
            -> Result
                 Filter: (SubPlan 1)
                 -> Materialize
                    -> Gather Motion 3:1
                       -> Table Scan on c
                 SubPlan 1
                   -> Materialize
                      -> Gather Motion 3:1
                         -> Table Scan on c
                              Filter: i <> 10
```

The Materialize node (on top of the Filter with SubPlan 1) has cdb_strict = true. The cdb_strict semantics dictate that when the Materialize is rescanned, instead of destroying its tuplestore, it resets the accessor pointer to the beginning and the subtree is NOT rescanned. So the entries from the first scan are returned for all future calls; i.e., the results depend on the first row output by the cross join. This causes wrong and non-deterministic results.

Also, this commit reinstates this test in qp_correlated_query.sql. It also fixes another wrong result caused by the same issue. Note that the changes in rangefuncs_optimizer.out are because ORCA now no longer falls back for those queries. Instead it produces a plan which is executed on the master (rather than on the segments, as was done by the planner), which changes the error messages.

Also bump ORCA version to 2.53.8.

Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
Committed by dyozie
- 11 Jan 2018, 1 commit

Committed by Lisa Owen
- 10 Jan 2018, 2 commits

Committed by Shivram Mani

Committed by kaknikhil
We are now using regexp instead of versioned_file for the madlib_gppkg resource so that we don't have to change the yml files after each release. Closes #4243
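The shape of that change can be sketched as a Concourse resource snippet (resource type, bucket name, and file pattern below are placeholders, not the project's actual values):

```yaml
resources:
- name: madlib_gppkg
  type: gcs-resource          # hypothetical resource type name
  source:
    bucket: example-madlib-artifacts
    json_key: ((gcs-key))
    # before: versioned_file: madlib/madlib-1.12-gp5.gppkg  (edited every release)
    # after: a regexp that matches any release; the newest version wins
    regexp: madlib/madlib-(.*)-gp5\.gppkg
```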
- 09 Jan 2018, 3 commits

Committed by Lav Jain
* Have a separate docker file for gpadmin user
* Add indent package for centos6
Committed by Sambitesh Dash
Author: Sambitesh Dash <sdash@pivotal.io>
Author: Abhijit Subramanya <asubramanya@pivotal.io>
Committed by Sambitesh Dash
Instead of assuming that casts are always binary coercible (and hence that we could get away with just dropping them), translate casts in ORCA plans into either a RelabelType or a FuncExpr.

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
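The distinction can be sketched in plain SQL (examples assumed from standard PostgreSQL cast behavior, not taken from the commit): a binary-coercible cast such as varchar to text becomes a RelabelType, a pure label change with no work at runtime, while a cast like int4 to float8 must call a cast function and therefore becomes a FuncExpr.

```sql
-- Binary-coercible: varchar and text share a representation,
-- so the planner emits a RelabelType node.
SELECT 'abc'::varchar::text;

-- Not binary-coercible: int4 -> float8 goes through a cast
-- function, so the planner emits a FuncExpr node.
SELECT 42::int4::float8;
```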
- 06 Jan 2018, 4 commits

Committed by Mel Kiyama
PR for 5X_STABLE. Will be ported to MAIN.
Committed by Lisa Owen

Committed by Mel Kiyama
* docs: change GUC default: optimizer_join_arity_for_associativity_commutativity=18
* docs: GUC - removed ZSTD (zstandard) from gp_default_storage_options (added by mistake)
Committed by Todd Sedano
The correct order is as follows:
1. modify ~/.bash_profile
2. ssh to localhost
3. make create-demo-cluster

See https://github.com/greenplum-db/gpdb/issues/3704 for more detail (cherry picked from commit c322279a)
- 05 Jan 2018, 3 commits

Committed by Jesse Zhang
Partition tables hard-code the operator '=' lookup to namespace 'pg_catalog', which means that in this test we had to put our user-defined operator into that special system namespace. This works fine until we try to pg_dump the resulting database: pg_catalog is not dumped by default. That led to an incomplete dump that will fail to restore. This commit reinstates the dropping of the schema at the end of the `gporca` test to get the pipelines back to green (broken after c7ab6924). Backpatch to 5X_STABLE. (cherry picked from commit db1ecd3c)
Committed by Divya Bhargov
Since the docs folder is ignored, it is possible for gpdb_src to get to a detached HEAD state if there was a docs commit after a given commit. We don't really need the branch to check out the commit in question.

Signed-off-by: Ed Espino <edespino@pivotal.io>
Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
Committed by Jesse Zhang
The `gporca` regression test suite uses a schema but doesn't really switch `search_path` to the schema that's meant to encapsulate most of the objects it uses. This has led to multiple instances where we:
1. either used a table from another namespace by accident; or
2. leaked objects into the public namespace that other tests in turn accidentally depended on.

As we were about to add a few user-defined types and casts to the test suite, we want to (at last) ensure that all future additions are scoped to the namespace.

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
(cherry picked from commit c7ab6924)
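The scoping pattern described above can be sketched as follows (the table name is hypothetical; only the `gporca` schema name comes from the commit message):

```sql
-- Scope all test objects to the suite's schema instead of public.
CREATE SCHEMA gporca;
SET search_path TO gporca;

CREATE TABLE t_example (a int);   -- lands in gporca, not public
-- ... test body ...

-- Cleanup at the end of the suite: drop the schema and everything in it.
RESET search_path;
DROP SCHEMA gporca CASCADE;
```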
- 04 Jan 2018, 4 commits

Committed by Mel Kiyama
* docs: pl/container updates - plcontainer command, configuration file
  - removed experimental warning
  - replaced plcontainer command with new options
  - updated configuration file: changed element names, added setting element
  - updated default shared volume

  PR for 5X_STABLE. Will be ported to MAIN.
* docs: pl/container - updates based on review comments
* docs: pl/container - more updates based on additional review comments
* docs: pl/container - minor edit
* docs: pl/container - another minor fix
Committed by Lisa Owen
* docs - add info about Greenplum Database release versioning
* misc edits requested by david
* update subnav titles
* clarify that deprecated features are removed only in a major release
Committed by Mel Kiyama
* docs: gpstop new option --host
* docs: gpstop - update/clarify --host description based on review comments
* docs: gpstop --host - updates based on review comments
* docs: gpstop - added information on restoring segments after using --host
* docs: gpstop --host - corrected name of the utility to recover segments: gprecoverseg
Committed by Ed Espino
We are intentionally excluding support for:
- OS
- Jitter removal
- 03 Jan 2018, 3 commits

Committed by Bhuvnesh Chaudhary
Corresponding gporca PR: greenplum-db/gporca#287
Committed by Haisheng Yuan

Committed by Ekta Khanna
Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
(cherry picked from commit e06ed157)
- 30 Dec 2017, 2 commits
- 29 Dec 2017, 2 commits
Committed by Nadeem Ghani
The gp_bloat_expected_pages.btdexppages column is numeric, but it was passed to the function gp_bloat_diag() as integer in the definition of the view of the same name, gp_bloat_diag. This caused integer overflow errors when the number of expected pages exceeded the max integer limit for columns with very large widths. This changes the function signature and call to use numeric for the btdexppages parameter.

Adds a simple test to mimic the customer issue. The test output file src/test/regress/expected/gp_toolkit.out was modified to conform to the 5X_STABLE query output.

Author: Nadeem Ghani <nghani@pivotal.io>
Author: Shoaib Lari <slari@pivotal.io>
(cherry picked from commit 152783e1)
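A minimal sketch of the underlying overflow (standard PostgreSQL cast behavior, not the commit's actual test): a page count that fits comfortably in numeric fails once it is forced through an integer parameter.

```sql
-- Keeping the value numeric is fine:
SELECT 3000000000::numeric;

-- Forcing it into integer, as the old view definition effectively did,
-- fails with "integer out of range":
SELECT 3000000000::numeric::int;
```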
Committed by Mel Kiyama
- 28 Dec 2017, 5 commits

Committed by Adam Lee
There are two places where the QD keeps trying to get data, ignores SIGINT, and does not send a signal to the QEs. If the program on a segment has no input/output, the COPY command hangs. To fix it, this commit:

1. lets the QD wait for connections to become readable before PQgetResult(), and cancels queries if it gets interrupt signals while waiting
2. sets DF_CANCEL_ON_ERROR when dispatching in cdbcopy.c
3. completes COPY error handling

    -- prepare
    create table test(t text);
    copy test from program 'yes|head -n 655360';
    -- could be canceled
    copy test from program 'sleep 100 && yes test';
    copy test from program 'sleep 100 && yes test<SEGID>' on segment;
    copy test from program 'yes test';
    copy test to '/dev/null';
    copy test to program 'sleep 100 && yes test';
    copy test to program 'sleep 100 && yes test<SEGID>' on segment;
    -- should fail
    copy test from program 'yes test<SEGID>' on segment;
    copy test to program 'sleep 0.1 && cat > /dev/nulls';
    copy test to program 'sleep 0.1<SEGID> && cat > /dev/nulls' on segment;

(cherry picked from commit 25c70407dc038a2c56ccb37a3540c9af6a99e6e4)
Committed by Bhuvnesh Chaudhary
Test files are also updated in this commit, as we now do not generate a cross-join alternative if an input join was present. A cross join contains CScalarConst(1) as the join condition. Suppose the input expression is as below, with a cross join at the top level between CLogicalInnerJoin and CLogicalGet "t3":

    +--CLogicalInnerJoin
       |--CLogicalInnerJoin
       |  |--CLogicalGet "t1"
       |  |--CLogicalGet "t2"
       |  +--CScalarCmp (=)
       |     |--CScalarIdent "a" (0)
       |     +--CScalarIdent "b" (9)
       |--CLogicalGet "t3"
       +--CScalarConst (1)

For the above expression, the (lower) predicate generated for the cross join between t1 and t3 will be CScalarConst (1). Only in such cases, do not generate the alternative with the lower join as a cross join, for example:

    +--CLogicalInnerJoin
       |--CLogicalInnerJoin
       |  |--CLogicalGet "t1"
       |  |--CLogicalGet "t3"
       |  +--CScalarConst (1)
       |--CLogicalGet "t2"
       +--CScalarCmp (=)
          |--CScalarIdent "a" (0)
          +--CScalarIdent "b" (9)

Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
Committed by Xin Zhang
If the first insert into an AOCS table aborted, the visible blocks in the block directory should be greater than 1. By default, we initialize the `DatumStreamWriter` with `blockFirstRowNumber=1` for newly added columns. Hence, the first row numbers are not consistent between the visible blocks. This caused inconsistency between the base table scan and the scan using indexes through the block directory. This wrong-result issue only happens for the first invisible blocks. The current code (`aocs_addcol_endblock()` called in `ATAocsWriteNewColumns()`) already handles other gaps after the first visible blocks. The fix updates the `blockFirstRowNumber` with `expectedFRN`, and hence fixes the misalignment of visible blocks.

Author: Xin Zhang <xzhang@pivotal.io>
Author: Ashwin Agrawal <aagrawal@pivotal.io>
(cherry picked from commit 7e7ee7e2)
Committed by Jim Doty
Going forward, any pipeline with a terraform resource should specify a BUCKET_PATH of `clusters/`. As a static value it can be hardcoded in place of being an interpolated value from a secrets file. The key `tf-bucket-path` in any secrets yaml files is now deprecated and should be removed.

We are dropping the convention where we had teams place their clusters' tfstate files in a path. In the past this convention was necessary, but now that we are tagging the clusters with much richer metadata, it is no longer strictly necessary. Balanced against the need for keeping the bucket organized was the possibility of two clusters attempting to register their AWS public key with the same name. This collision came as a result of a mismatch in scope: the terraform resource that provisioned the clusters and selected names only looks in the given path for name collisions, but the names of AWS keys have to be unique for the account. By collapsing all of the clusters into an account-wide bucket, we will rely on the terraform resource to check for name conflicts going forward.

Addresses bug: https://www.pivotaltracker.com/story/show/153928527
Signed-off-by: Kris Macoskey <kmacoskey@pivotal.io>
Committed by Lav Jain