- 04 Jan 2018, 3 commits
-
-
Committed by Mel Kiyama
* docs: gpstop new option --host
* docs: gpstop - update/clarify --host description based on review comments.
* docs: gpstop --host. updates based on review comments.
* docs: gpstop - added information on restoring segments after using --host.
* docs: gpstop --host. corrected name of utility to recover segments: gprecoverseg.
-
Committed by Jacob Champion
Several \d variants don't put a row count at the end of their tables, which means that atmsort doesn't stop sorting output until it finds a row count somewhere later. Some tests are having their diff output horribly mangled because of this, which makes debugging difficult.

When we see a \d command, try to apply more heuristics for finding the end of a table. In addition to ending at a row count, end table sorting whenever we find a line that doesn't have the same number of column separators as the table header. If we don't have a table header, still attempt to end table sorting at a blank line.

extprotocol's test output order must be fixed as a result. Put the "External options" line where psql actually prints it, after "Format options".
-
Committed by David Sharp
https://github.com/hyperic/sigar is not under active development. https://github.com/boundary/sigar is a fork that has been somewhat updated. In particular, it has a fix to allow it to compile with GCC 5 and 6.
-
- 03 Jan 2018, 21 commits
-
-
Committed by Shreedhar Hardikar
* Fix 4 (out of 64) windowerr tests that use row_number() and are therefore non-deterministic.
* Fix the remaining 58 (out of 64) tests. There was a difference in the results between planner and optimizer due to different row_number values being assigned; row_number() is inherently non-deterministic in GPDB. For example, for the following query:

  select row_number() over (partition by b) from foo;

  Let's say that foo was not distributed by b. In this case, to compute the WindowAgg, we would first have to redistribute the table on b (or gather all the tuples on the master). Thus, for rows having the same b value, the row_number assigned depends on the order in which they are received by the WindowAgg, which is non-deterministic. In the qp_olap_windowerr.sql tests, we mitigate this by forcing an order on the ord column, which is unique in this context, making it easier to compare test results.
* Remove FIXME comment and enable optimizer_trace_fallback.

Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
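The mitigation described above can be sketched as follows (a hedged sketch reusing the hypothetical foo table from the example; ord is assumed to be unique within each partition):

```sql
-- Without an ORDER BY inside the window, the numbering depends on the
-- non-deterministic order in which tuples reach the WindowAgg after
-- the Redistribute Motion:
select row_number() over (partition by b) from foo;

-- Ordering on a column that is unique within the partition makes the
-- assignment, and thus the test output, deterministic:
select row_number() over (partition by b order by ord) from foo;
```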
-
Committed by Ed Espino
o Move the validation (subprocess call) of the pipeline release jobs into the tool which generates the production pipeline. The corresponding validate_pipeline.yml task and job are removed.
o Output production fly commands for both the "gpdb_master" and "gpdb_master_without_asserts" pipelines. This will help engineers update both production pipelines.
o Fix the icw sles jobs in the development pipelines.
o Update README.md with usage examples.
o Remove validate_pipeline from pr_pipeline.yml, as this validation is moving to gen_pipeline.py.
-
Committed by Haisheng Yuan
-
Committed by Ekta Khanna
Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
Committed by Daniel Gustafsson
Passing -K to demo_cluster.sh creates gpdemo without enabling data checksums, which are otherwise the default. Disabling checksums facilitates testing of upgrades from a non-checksum cluster to a checksummed one (and vice versa).
-
Committed by Daniel Gustafsson
When upgrading the cluster, allow the addition or removal of data checksums on the data pages as they are copied across from the old cluster. This allows adding checksums without requiring a full dump/restore cycle. The test_gpdb.sh script is extended to support adding/removing checksums. In the process, also extend it to remove the gpdemo config files generated when setting up the new cluster, and add them to gitignore as well to keep git status happy during testing. This also fixes a previously incorrect check for the checksum version and aligns variable names better with upstream.
-
Committed by Goutam Tadi
-
Committed by Ekta Khanna
The new tests added as part of the postgres merge contain tests of window functions such as last_value and nth_value without an ORDER BY clause in the window function. Because a Redistribute Motion gets added, the order is not deterministic without an explicit ORDER BY clause within the window function. This commit updates such tests for the relevant changes in ORCA (https://github.com/greenplum-db/gporca/commit/855ba856fdc59e88923523f1f8b2ead32ae32364).

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
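A hedged sketch of the kind of non-determinism described (the table t, column x, and partition column g are hypothetical, not from the actual tests):

```sql
-- Non-deterministic: without ORDER BY, last_value depends on the
-- arrival order of tuples after the Redistribute Motion:
select last_value(x) over (partition by g) from t;

-- Deterministic: an explicit ORDER BY, with a full frame, pins the result:
select last_value(x) over (partition by g order by x
                           rows between unbounded preceding
                                    and unbounded following) from t;
```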
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
Committed by Ed Espino
-
- 02 Jan 2018, 9 commits
-
-
Committed by Heikki Linnakangas
The ao_ptotal() test function was broken a long time ago, in the 8.3 merge, by the removal of the implicit cast from text to integer. You just got an "operator does not exist: text > integer" error. However, the queries using the function were inside start/end_ignore blocks, so the errors went unnoticed.

We have tests on tupcount elsewhere, in the uao_* tests, for example. Whether the table is partitioned or not doesn't seem very interesting. So just remove the test queries, rather than try to fix them. (I don't understand what the endianness issue mentioned in the comment might have been.)

I kept the test on COPY with REJECT LIMIT on a partitioned table. I'm not sure how interesting that is either, but it wasn't broken. While at it, I reduced the number of partitions used, though, to shave off a few milliseconds from the test.
-
Committed by Heikki Linnakangas
Commit e314acb1 changed the 'count_operator' helper function to include the EXPLAIN, so that it doesn't need to be given in the argument query anymore. But many of the calls of count_operator were not changed, and still contained EXPLAIN in the query, and as a result, they failed with 'syntax error at or near "explain"'. These syntax errors were accidentally memorized in the expected output. Revert the expected output to what it was before, and remove the EXPLAIN from the queries instead.
-
Committed by Heikki Linnakangas
We have these exact same tests twice, with and without schema-qualifying the table name. That's hardly a meaningful difference when testing the grammar of the SUBPARTITION TEMPLATE part. Remove the duplicated tests. (I'm not convinced it's useful to have even a single copy of these tests, but keep them for now.)
-
Committed by Heikki Linnakangas
Both bfv_partition and partition_ddl had essentially the same test. Keep the copy in partition_ddl, and move the "alter table" commands that were only present in the bfv_partition copy there.
-
Committed by Heikki Linnakangas
AFAICS, this code isn't used for anything. It's a debugging utility, though, so maybe that's intentional. I think to use this, you're supposed to modify the source code at some place of interest, and add a debug_break() call there. However, I'm not aware of anyone using that. I just insert a sleep() or use a gdb breakpoint for that, when I'm debugging.
-
Committed by Heikki Linnakangas
These negative tests throw an error in the parse analysis phase already. Whether the target table is an AO or AOCO table is not interesting.
-
Committed by Heikki Linnakangas
* Remove unnecessary #includes, and extern declaration for Test_print_direct_dispatch_info
* Remove unneeded debugging code to print target list (you get that in a nicer format with EXPLAIN VERBOSE now)
* Remove duplicated assignment of 'useprefix' variable in show_sort_keys
* Whitespace fixes
-
Committed by Heikki Linnakangas
* Move a few GPDB-added functions to the end of the file. They were confusing "git diff", making it show a large diff in InitPlan(), even though it's mostly unchanged.
* Inline InitializeResultRelations back into InitPlan(), like it is in the upstream.
* Add back upstream code to deal with RETURNING, but #ifdef'd out.
-
Committed by Heikki Linnakangas
-
- 30 Dec 2017, 1 commit
-
-
Committed by Bhuvnesh Chaudhary
-
- 29 Dec 2017, 1 commit
-
-
Committed by mkiyama
-
- 28 Dec 2017, 5 commits
-
-
Committed by Adam Lee
There are two places where the QD keeps trying to get data, ignores SIGINT, and does not send a signal to the QEs. If the program on the segment has no input/output, the COPY command hangs. To fix it, this commit:

1. lets the QD wait for connections to become readable before PQgetResult(), and cancels queries if it gets interrupt signals while waiting
2. sets DF_CANCEL_ON_ERROR when dispatching in cdbcopy.c
3. completes COPY error handling

-- prepare
create table test(t text);
copy test from program 'yes|head -n 655360';

-- could be canceled
copy test from program 'sleep 100 && yes test';
copy test from program 'sleep 100 && yes test<SEGID>' on segment;
copy test from program 'yes test';
copy test to '/dev/null';
copy test to program 'sleep 100 && yes test';
copy test to program 'sleep 100 && yes test<SEGID>' on segment;

-- should fail
copy test from program 'yes test<SEGID>' on segment;
copy test to program 'sleep 0.1 && cat > /dev/nulls';
copy test to program 'sleep 0.1<SEGID> && cat > /dev/nulls' on segment;
-
Committed by Nadeem Ghani
The gp_bloat_expected_pages.btdexppages column is numeric, but it was passed to the function gp_bloat_diag() as integer in the definition of the view of the same name, gp_bloat_diag. This caused integer overflow errors when the number of expected pages exceeded the max integer limit, for columns with very large widths.

This changes the function signature and call to use numeric for the btdexppages parameter, and adds a simple test to mimic the customer issue.

Author: Nadeem Ghani <nghani@pivotal.io>
Author: Shoaib Lari <slari@pivotal.io>
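The overflow class of bug can be illustrated in isolation (a minimal sketch, not the actual gp_bloat_diag view definition):

```sql
-- Forcing a large numeric value through an integer parameter overflows:
select 3000000000::numeric::integer;   -- ERROR: integer out of range

-- Keeping the value as numeric avoids the overflow:
select 3000000000::numeric;
```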
-
Committed by Jim Doty
Going forward, any pipeline with a terraform resource should specify a BUCKET_PATH of `clusters/`. As a static value it can be hardcoded in place of being an interpolated value from a secrets file. The key `tf-bucket-path` in any secrets yaml files is now deprecated and should be removed.

We are dropping the convention where we had teams place their clusters' tfstate files in a path. In the past this convention was necessary, but now that we are tagging the clusters with much richer metadata, it is no longer strictly necessary.

Balanced against the need for keeping the bucket organized was the possibility of two clusters attempting to register their AWS public key with the same name. This collision came as a result of a mismatch in scope: the terraform resource that provisioned the clusters and selected names only looks in the given path for name collisions, but the names for AWS keys have to be unique for the whole account. By collapsing all of the clusters into an account-wide bucket, we will rely on the terraform resource to check for name conflicts going forward.

Addresses bug: https://www.pivotaltracker.com/story/show/153928527
Signed-off-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
Going forward, any pipeline with a terraform resource should specify a BUCKET_PATH of `clusters/`. As a static value it can be hardcoded in place of being an interpolated value from a secrets file. The key `tf-bucket-path` in any secrets yaml files is now deprecated and should be removed.

We are dropping the convention where we had teams place their clusters' tfstate files in a path. In the past this convention was necessary, but now that we are tagging the clusters with much richer metadata, it is no longer strictly necessary.

Balanced against the need for keeping the bucket organized was the possibility of two clusters attempting to register their AWS public key with the same name. This collision came as a result of a mismatch in scope: the terraform resource that provisioned the clusters and selected names only looks in the given path for name collisions, but the names for AWS keys have to be unique for the whole account. By collapsing all of the clusters into an account-wide bucket, we will rely on the terraform resource to check for name conflicts going forward.

Addresses bug: https://www.pivotaltracker.com/story/show/153928527
Signed-off-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
Test files are also updated in this commit, as we no longer generate the cross join alternative if an input join was present. A cross join contains CScalarConst(1) as the join condition. Consider an input expression as below, with the cross join at the top level between CLogicalInnerJoin and CLogicalGet "t3":

+--CLogicalInnerJoin
   |--CLogicalInnerJoin
   |  |--CLogicalGet "t1"
   |  |--CLogicalGet "t2"
   |  +--CScalarCmp (=)
   |     |--CScalarIdent "a" (0)
   |     +--CScalarIdent "b" (9)
   |--CLogicalGet "t3"
   +--CScalarConst (1)

For the above expression, the (lower) predicate generated for the cross join between t1 and t3 will be CScalarConst (1). Only in such cases, do not generate the alternative with the lower join as a cross join, for example:

+--CLogicalInnerJoin
   |--CLogicalInnerJoin
   |  |--CLogicalGet "t1"
   |  |--CLogicalGet "t3"
   |  +--CScalarConst (1)
   |--CLogicalGet "t2"
   +--CScalarCmp (=)
      |--CScalarIdent "a" (0)
      +--CScalarIdent "b" (9)

Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
-