- 03 Jan 2018, 14 commits
-
-
Committed by Ekta Khanna
The new tests added as part of the postgres merge contain tests of window functions such as last_value and nth_value without specifying an ORDER BY clause in the window function. Because a Redistribute Motion gets added, the order is not deterministic without an explicit ORDER BY clause within the window function. This commit updates such tests for the relevant changes in ORCA (https://github.com/greenplum-db/gporca/commit/855ba856fdc59e88923523f1f8b2ead32ae32364).

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
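The non-determinism the commit describes can be sketched as follows; this is a hedged illustration with a hypothetical table `t`, not the actual test query:

```sql
-- Without ORDER BY in the window, the "last" row per partition is
-- whichever row happens to arrive last after the Redistribute Motion,
-- so the result can vary from run to run:
SELECT a, last_value(b) OVER (PARTITION BY a) FROM t;

-- An explicit ORDER BY (with a frame covering the whole partition)
-- makes the result deterministic:
SELECT a,
       last_value(b) OVER (PARTITION BY a ORDER BY b
                           ROWS BETWEEN UNBOUNDED PRECEDING
                                    AND UNBOUNDED FOLLOWING)
FROM t;
```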
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
- 02 Jan 2018, 9 commits
-
-
Committed by Heikki Linnakangas
The ao_ptotal() test function was broken a long time ago, in the 8.3 merge, by the removal of the implicit cast from text to integer. You just got an "operator does not exist: text > integer" error. However, the queries using the function were inside start/end_ignore blocks, so the error went unnoticed.

We have tests on tupcount elsewhere, in the uao_* tests, for example. Whether the table is partitioned or not doesn't seem very interesting. So just remove the test queries, rather than try to fix them. (I don't understand what the endianness issue mentioned in the comment might've been.)

I kept the test on COPY with REJECT LIMIT on a partitioned table. I'm not sure how interesting that is either, but it wasn't broken. While at it, I reduced the number of partitions used, though, to shave off a few milliseconds from the test.
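The kind of breakage described can be sketched like this; the table and column names are hypothetical, assuming a text column compared against an integer:

```sql
-- Since PostgreSQL 8.3 there is no implicit cast from text to integer,
-- so a comparison like this fails with
-- "operator does not exist: text > integer":
--   SELECT * FROM t WHERE textcol > 0;

-- The comparison has to cast explicitly instead:
SELECT * FROM t WHERE textcol::integer > 0;
```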
-
Committed by Heikki Linnakangas
Commit e314acb1 changed the 'count_operator' helper function to include the EXPLAIN, so that it doesn't need to be given in the argument query anymore. But many of the calls of count_operator were not changed, and still contained EXPLAIN in the query, and as a result, they failed with 'syntax error at or near "explain"'. These syntax errors were accidentally memorized in the expected output. Revert the expected output to what it was before, and remove the EXPLAIN from the queries instead.
-
Committed by Heikki Linnakangas
We have these exact same tests twice, with and without schema-qualifying the table name. That's hardly a meaningful difference, when testing the grammar of the SUBPARTITION TEMPLATE part. Remove the duplicated tests. (I'm not convinced it's useful to have even a single copy of these tests, but keep for now.)
-
Committed by Heikki Linnakangas
Both bfv_partition and partition_ddl had essentially the same test. Keep the copy in partition_ddl, and move the "alter table" commands that were only present in the bfv_partition copy there.
-
Committed by Heikki Linnakangas
AFAICS, this code isn't used for anything. It's a debugging utility, though, so maybe that's intentional. I think to use this, you're supposed to modify the source code at some place of interest, and add a debug_break() call there. However, I'm not aware of anyone using that. I just insert a sleep() or use a gdb breakpoint for that, when I'm debugging.
-
Committed by Heikki Linnakangas
These negative tests throw an error in the parse analysis phase already. Whether the target table is an AO or AOCO table is not interesting.
-
Committed by Heikki Linnakangas
* Remove unnecessary #includes, and the extern declaration for Test_print_direct_dispatch_info
* Remove unneeded debugging code to print the target list (you get that in a nicer format with EXPLAIN VERBOSE now)
* Remove duplicated assignment of the 'useprefix' variable in show_sort_keys
* Whitespace fixes
-
Committed by Heikki Linnakangas
* Move a few GPDB-added functions to the end of the file. They were confusing "git diff", making it show a large diff in InitPlan(), even though it's mostly unchanged.
* Inline InitializeResultRelations back into InitPlan(), like it is in the upstream.
* Add back upstream code to deal with RETURNING, but #ifdef'd out.
-
Committed by Heikki Linnakangas
-
- 30 Dec 2017, 1 commit
-
-
Committed by Bhuvnesh Chaudhary
-
- 29 Dec 2017, 1 commit
-
-
Committed by mkiyama
-
- 28 Dec 2017, 7 commits
-
-
Committed by Adam Lee
There are two places where the QD keeps trying to get data, ignores SIGINT, and does not send a signal to the QEs. If the program on the segment has no input/output, the COPY command hangs.

To fix it, this commit:
1. lets the QD wait until connections are readable before PQgetResult(), and cancels queries if it gets interrupt signals while waiting
2. sets DF_CANCEL_ON_ERROR when dispatching in cdbcopy.c
3. completes COPY error handling

    -- prepare
    create table test(t text);
    copy test from program 'yes|head -n 655360';
    -- could be canceled
    copy test from program 'sleep 100 && yes test';
    copy test from program 'sleep 100 && yes test<SEGID>' on segment;
    copy test from program 'yes test';
    copy test to '/dev/null';
    copy test to program 'sleep 100 && yes test';
    copy test to program 'sleep 100 && yes test<SEGID>' on segment;
    -- should fail
    copy test from program 'yes test<SEGID>' on segment;
    copy test to program 'sleep 0.1 && cat > /dev/nulls';
    copy test to program 'sleep 0.1<SEGID> && cat > /dev/nulls' on segment;
-
Committed by Nadeem Ghani
The gp_bloat_expected_pages.btdexppages column is numeric, but it was passed to the function gp_bloat_diag() as integer in the definition of the view of the same name, gp_bloat_diag. This caused integer overflow errors when the number of expected pages exceeded the max integer limit for columns with very large widths. This changes the function signature and call to use numeric for the btdexppages parameter. Also adds a simple test to mimic the customer issue.

Author: Nadeem Ghani <nghani@pivotal.io>
Author: Shoaib Lari <slari@pivotal.io>
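The overflow being fixed can be sketched in plain SQL; this is a hedged illustration of the cast failure, not the actual view definition:

```sql
-- An expected-pages value above 2^31 - 1 cannot be cast to integer:
--   SELECT 3000000000::numeric::integer;  -- ERROR: integer out of range

-- Keeping the value numeric end to end avoids the overflow:
SELECT 3000000000::numeric;
```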
-
Committed by Jim Doty
Going forward, any pipeline with a terraform resource should specify a BUCKET_PATH of `clusters/`. As a static value it can be hardcoded in place of being an interpolated value from a secrets file. The key `tf-bucket-path` in any secrets yaml files is now deprecated and should be removed.

We are dropping the convention where we had teams place their clusters' tfstate files in a path. In the past this convention was necessary, but now that we are tagging the clusters with much richer metadata, this convention is no longer strictly necessary.

Balanced against the need for keeping the bucket organized was the possibility of two clusters attempting to register their AWS public key with the same name. This collision came as a result of a mismatch in scope: the terraform resource that provisioned the clusters and selected names only looks in the given path for name collisions, but the names for AWS keys have to be unique for the account. By collapsing all of the clusters into an account-wide bucket, we will rely on the terraform resource to check for name conflicts going forward.

Addresses bug: https://www.pivotaltracker.com/story/show/153928527

Signed-off-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
Going forward, any pipeline with a terraform resource should specify a BUCKET_PATH of `clusters/`. As a static value it can be hardcoded in place of being an interpolated value from a secrets file. The key `tf-bucket-path` in any secrets yaml files is now deprecated and should be removed.

We are dropping the convention where we had teams place their clusters' tfstate files in a path. In the past this convention was necessary, but now that we are tagging the clusters with much richer metadata, this convention is no longer strictly necessary.

Balanced against the need for keeping the bucket organized was the possibility of two clusters attempting to register their AWS public key with the same name. This collision came as a result of a mismatch in scope: the terraform resource that provisioned the clusters and selected names only looks in the given path for name collisions, but the names for AWS keys have to be unique for the account. By collapsing all of the clusters into an account-wide bucket, we will rely on the terraform resource to check for name conflicts going forward.

Addresses bug: https://www.pivotaltracker.com/story/show/153928527

Signed-off-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
Test files are also updated in this commit, as we no longer generate a cross join alternative if an input join was present. A cross join contains CScalarConst(1) as the join condition.

If the input expression is as below, with a cross join at the top level between CLogicalInnerJoin and CLogicalGet "t3":

    +--CLogicalInnerJoin
       |--CLogicalInnerJoin
       |  |--CLogicalGet "t1"
       |  |--CLogicalGet "t2"
       |  +--CScalarCmp (=)
       |     |--CScalarIdent "a" (0)
       |     +--CScalarIdent "b" (9)
       |--CLogicalGet "t3"
       +--CScalarConst (1)

then the (lower) predicate generated for the cross join between t1 and t3 will be CScalarConst (1). Only in such cases, do not generate the alternative with the lower join as a cross join, for example:

    +--CLogicalInnerJoin
       |--CLogicalInnerJoin
       |  |--CLogicalGet "t1"
       |  |--CLogicalGet "t3"
       |  +--CScalarConst (1)
       |--CLogicalGet "t2"
       +--CScalarCmp (=)
          |--CScalarIdent "a" (0)
          +--CScalarIdent "b" (9)

Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
Committed by Xin Zhang
If the first insert into an AOCS table aborted, the visible blocks in the block directory should start at a row number greater than 1. By default, we initialize the `DatumStreamWriter` with `blockFirstRowNumber=1` for newly added columns. Hence, the first row numbers are not consistent between the visible blocks. This caused inconsistency between the base table scan and the scan using indexes through the block directory.

This wrong-result issue only happened with the first invisible blocks. The current code (`aocs_addcol_endblock()` called in `ATAocsWriteNewColumns()`) already handles other gaps after the first visible blocks.

The fix updates the `blockFirstRowNumber` with `expectedFRN`, and hence fixes the misalignment of visible blocks.

Author: Xin Zhang <xzhang@pivotal.io>
Author: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Lav Jain
-
- 27 Dec 2017, 2 commits
-
-
Committed by Jialun
- Fix gpload.py: add a schema prefix to every external table name when EXTERNAL.SCHEMA is set
- Add new test cases
-
Committed by Haisheng Yuan
See discussion at: http://www.postgresql-archive.org/Bitmap-table-scan-cost-per-page-formula-td5997588.html

This reverts commit 982bcd4e.
-
- 22 Dec 2017, 6 commits
-
-
Committed by dyozie
-
Committed by sambitesh
This query was added to "percentile" in commit b877cd43, as outer references were allowed after we backported ordered-set aggregates. But the query was using an ARRAY sublink, which semantically does not guarantee ordering. This caused sporadic failures in tests. This commit tweaks the test query so that it has a deterministic ordering in the output array. We considered just adding an ORDER BY to the subquery, but ultimately we chose to use `array_agg` with an `ORDER BY`, because subquery order is not preserved per the SQL standard.

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
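The two approaches the commit weighs can be sketched as follows; this is a hedged illustration with a hypothetical table `t`, not the actual test query:

```sql
-- Per the SQL standard, the element order produced by an ARRAY
-- sublink is not guaranteed, even if the subquery has an ORDER BY:
SELECT ARRAY(SELECT x FROM t ORDER BY x);

-- array_agg with an ORDER BY inside the aggregate call makes the
-- element order deterministic:
SELECT array_agg(x ORDER BY x) FROM t;
```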
-
Committed by Lisa Owen
* docs - correct the example using the --full and -d options
* advise avoiding --full and --schema-only together, but do not prohibit it
* clean up the schema copy bullet point
-
Committed by Ashwin Agrawal
Mostly re-used init-mirrors for this functionality.

Author: Ashwin Agrawal <aagrawal@pivotal.io>
Author: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Heikki Linnakangas
This seems to have been similar to the fault injector code we have, but not used by anything as far as I can see.
-
Committed by Alexandra Wang
Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
-