- September 5, 2017 (1 commit)

Committed by Ning Yu
Simplify tuple serialization in Motion nodes.

There is a fast path for tuples that contain no toasted attributes, which writes the raw tuple almost as is. The slow path, however, is significantly more complicated, calling each attribute's binary send/receive functions (although there is a fast path for a few built-in datatypes). I don't see any need for calling I/O functions here: we can just write the raw Datum on the wire. If that works for tuples with no toasted attributes, it should work for all tuples, provided we detoast any toasted attributes first.

This makes the code a lot simpler, and also fixes a bug with data types that don't have binary send/receive routines. We used to call the regular (text) I/O functions in that case, but didn't handle the resulting cstring correctly.

Diagnosis and test case by Foyzur Rahman.

Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
Signed-off-by: Ning Yu <nyu@pivotal.io>

- August 31, 2017 (1 commit)

Committed by Heikki Linnakangas
Rather than appending to a StringInfo, return a string. The caller can append that to a StringInfo if he wants to. And instead of passing a prefix as an argument, the caller can prepend that too. Both callers passed the same format string, so just embed that in the function itself.

Don't append a trailing "; ". It's easier for the caller to append it, if preferred, than to remove it afterwards.

Also add a regression test for the 'gp_enable_fallback_plan' GUC. There were none before. The error message you get with that GUC disabled uses the gp_guc_list_show function.

- August 25, 2017 (1 commit)

Committed by Ashwin Agrawal
Signed-off-by: Xin Zhang <xzhang@pivotal.io>

- August 24, 2017 (2 commits)

Committed by Heikki Linnakangas
Commit 0be2e5b0 added a regression test to check that if a view contains an ORDER BY, that ORDER BY is obeyed when selecting from the view. However, it forgot to add the test to the schedule, so it was never run. Add it.

There is actually a problem with the test as written: gpdiff masks out differences in row order, so this test won't catch the problem it was originally written for. Nevertheless, it seems better to enable the test and run it than not to run it at all. But add a comment to point that out.

Committed by Heikki Linnakangas
In order to test this, modify the gpdiff tool to not suppress duplicate WARNINGs if the duplicates don't have a segment number attached to them. The bug was easy to see by setting work_mem, which emits a WARNING in GPDB, except that suppressing the duplicates hid the bug. Even though we have a few tests that set work_mem, add one that does so explicitly for the purpose of testing for this bug, with a comment explaining its purpose.

While we're at it, move the GPDB-specific test queries out of the 'guc' test into a new 'guc_gp' test, to keep the upstream 'guc' test pristine.

Fixes github issue #3008, reported by @guofengrichard.
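For context, a minimal sketch of the kind of statement involved; the exact WARNING text is an assumption, not quoted from the commit:

```sql
-- Setting work_mem in GPDB emits a deprecation-style WARNING. Before this
-- fix, gpdiff collapsed duplicate WARNINGs that carried no segment number,
-- so a message that should have appeared once per process was folded away.
SET work_mem = '10MB';
```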

- August 21, 2017 (1 commit)

Committed by Daniel Gustafsson
The way ORCA was tied into the planner, running a planner_hook was not supported in the intended way. This commit moves ORCA into standard_planner() instead of planner() and leaves the hook for extensions to make use of, with or without ORCA. Since the intention with the optimizer GUC is to replace the planner in postgres while keeping the planning process, this allows planner extensions to cooperate with that.

In order to reduce the Greenplum footprint in upstream postgres source files for future merges, the ORCA functions are moved to their own file. Also adds a memaccounting class for planner hooks, since they otherwise ran in the planner scope, as well as a test for using planner_hooks.

- August 18, 2017 (1 commit)

Committed by Heikki Linnakangas
I repurposed the existing psql_gpdb_du test for this. It was very small; I think we can put tests for other \d commands in the same file. In passing, fix a typo in a comment there, and move the expected output from output/ to expected/, because it doesn't need any string replacements.

- August 9, 2017 (1 commit)

Committed by Heikki Linnakangas
* Turn it into an extension, for easier installation.
* Add a simpler variant of the gp_inject_fault function, with fewer options. This is applicable to almost all the calls in the regression suite, so it's nice to make them less verbose.
* Change the dbid argument from smallint to int4, for convenience, so that you don't need a cast when calling the function.
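A hedged sketch of what calls could look like after this change; the fault name and the segment filter are illustrative assumptions, not taken from the commit:

```sql
-- Installation is now a one-liner thanks to the extension packaging.
CREATE EXTENSION gp_inject_fault;

-- Simpler variant: fault name, action, and target dbid (int4, no cast).
SELECT gp_inject_fault('checkpoint', 'skip', dbid)
  FROM gp_segment_configuration
 WHERE role = 'p' AND content = 0;
```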

- July 27, 2017 (3 commits)

Committed by Asim R P

Committed by Asim R P
The function gp_inject_fault() was defined in a test-specific contrib module (src/test/dtm). It is moved to a dedicated contrib module, gp_inject_fault, so all tests can now make use of it. Two pg_regress tests (dispatch and cursor) are modified to demonstrate the usage.

The function is modified so that it can inject a fault in any segment, specified by dbid. No more invoking the gpfaultinjector python script from SQL files.

The new module is integrated into the top-level build so that it is included in make and make install.

Committed by Jesse Zhang
SharedInputScan (a.k.a. "Shared Scan" in EXPLAIN) is the operator through which Greenplum implements Common Table Expression execution. It executes in two modes: writer (a.k.a. producer) and reader (a.k.a. consumer). Writers execute the common table expression definition and materialize the output, and readers read the materialized output (potentially in parallel). Because of the parallel nature of Greenplum execution, slices containing Shared Scans need to synchronize among themselves to ensure that readers don't start until writers are finished writing. Specifically, a slice with readers depending on writers on a different slice will block during `ExecutorRun`, before even pulling the first tuple from the executor tree.

Greenplum's Hash Join implementation will skip executing its outer ("probe side") subtree if it detects an empty inner ("hash side"), and declare all motions in the skipped subtree as "stopped" (we call this "squelching"). That means we can potentially squelch a subtree that contains a shared scan writer, leaving cross-slice readers waiting forever. For example, with ORCA enabled, the following query:

```sql
CREATE TABLE foo (a int, b int);
CREATE TABLE bar (c int, d int);
CREATE TABLE jazz(e int, f int);

INSERT INTO bar VALUES (1, 1), (2, 2), (3, 3);
INSERT INTO jazz VALUES (2, 2), (3, 3);

ANALYZE foo;
ANALYZE bar;
ANALYZE jazz;

SET statement_timeout = '15s';

SELECT * FROM
  (
    WITH cte AS (SELECT * FROM foo)
    SELECT * FROM (SELECT * FROM cte UNION ALL SELECT * FROM cte) AS X
    JOIN bar ON b = c
  ) AS XY
  JOIN jazz ON c = e AND b = f;
```

leads to a plan that will expose this problem:

```
                                                 QUERY PLAN
------------------------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice2; segments: 3)  (cost=0.00..2155.00 rows=1 width=24)
   ->  Hash Join  (cost=0.00..2155.00 rows=1 width=24)
         Hash Cond: bar.c = jazz.e AND share0_ref2.b = jazz.f AND share0_ref2.b = jazz.e AND bar.c = jazz.f
         ->  Sequence  (cost=0.00..1724.00 rows=1 width=16)
               ->  Shared Scan (share slice:id 2:0)  (cost=0.00..431.00 rows=1 width=1)
                     ->  Materialize  (cost=0.00..431.00 rows=1 width=1)
                           ->  Table Scan on foo  (cost=0.00..431.00 rows=1 width=8)
               ->  Hash Join  (cost=0.00..1293.00 rows=1 width=16)
                     Hash Cond: share0_ref2.b = bar.c
                     ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..862.00 rows=1 width=8)
                           Hash Key: share0_ref2.b
                           ->  Append  (cost=0.00..862.00 rows=1 width=8)
                                 ->  Shared Scan (share slice:id 1:0)  (cost=0.00..431.00 rows=1 width=8)
                                 ->  Shared Scan (share slice:id 1:0)  (cost=0.00..431.00 rows=1 width=8)
                     ->  Hash  (cost=431.00..431.00 rows=1 width=8)
                           ->  Table Scan on bar  (cost=0.00..431.00 rows=1 width=8)
         ->  Hash  (cost=431.00..431.00 rows=1 width=8)
               ->  Table Scan on jazz  (cost=0.00..431.00 rows=1 width=8)
                     Filter: e = f
 Optimizer status: PQO version 2.39.1
(20 rows)
```

where processes executing slice1 on the segments that have an empty `jazz` will hang.

We fix this by ensuring we execute the Shared Scan writer even if it's in the subtree that we're squelching.

Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
Signed-off-by: Sambitesh Dash <sdash@pivotal.io>

- July 22, 2017 (2 commits)

Committed by Asim R P
The function gp_inject_fault() was defined in a test-specific contrib module (src/test/dtm). All tests can now make use of it. Two pg_regress tests (dispatch and cursor) are modified to demonstrate the usage. The function is also made capable of injecting a fault in any segment, specified by dbid. No more invoking the gpfaultinjector python script from SQL files.

- June 23, 2017 (1 commit)

Committed by foyzur
This PR adds an SQL test to verify that memory consumption of alien nodes drops to zero after we set the GUC execute_pruned_plan=on.

Signed-off-by: Foyzur Rahman <foyzur@gmail.com>
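A rough sketch of the shape such a test could take; the table name is a placeholder, and explain_memory_verbosity is assumed to be the relevant memory-accounting reporting GUC:

```sql
SET execute_pruned_plan = on;
SET explain_memory_verbosity = detail;  -- include per-node memory accounting

-- Alien (non-local) plan nodes on each QE should now report zero
-- memory consumption in the accounting output.
EXPLAIN ANALYZE SELECT count(*) FROM foo;  -- 'foo' is a hypothetical table
```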

- June 19, 2017 (1 commit)

Committed by Ashwin Agrawal

- June 5, 2017 (1 commit)

Committed by Daniel Gustafsson
At some point in the past, it seems the keyword list was merged from upstream 8.4 without merging the actual code; instead, keywords that weren't yet supported were commented out. The XML whitespace keywords for STRIP WHITESPACE and PRESERVE WHITESPACE were however not uncommented when XML support was merged, resulting in the below error when trying to restore the XML views in ICW after a dump:

```
db=# SELECT XMLPARSE(CONTENT '<abc>x</abc>'::text PRESERVE WHITESPACE) AS "xmlparse";
ERROR:  syntax error at or near "WHITESPACE"
LINE 1: ...CT XMLPARSE(CONTENT '<abc>x</abc>'::text PRESERVE WHITESPACE...
                                                             ^
```

Put the keywords back, and also remove commented-out keywords which we don't support but will get when merging 8.4, to reduce the diff with upstream. Also add a testcase for XML whitespace syntax parsing.
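Both restored options should now parse; a small sketch of what the added testcase presumably covers:

```sql
-- Accepted again after putting the keywords back:
SELECT XMLPARSE(CONTENT '<abc> x </abc>'::text PRESERVE WHITESPACE);
SELECT XMLPARSE(CONTENT '<abc> x </abc>'::text STRIP WHITESPACE);
```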

- June 2, 2017 (1 commit)

Committed by Xin Zhang
In addition to renaming the files, a new test is added to cover the scenario where readers need to use the subtransaction xid cache in the writer's PGPROC.

NOTE: Please use `git show -M10` to see the rename instead of a separate remove and add.

Signed-off-by: Asim R P <apraveen@pivotal.io>

- May 25, 2017 (2 commits)

Committed by Venkatesh Raghavan

Committed by Daniel Gustafsson
The gpmapreduce application is an optional install included via the --enable-mapreduce configure option. The tests were however still in src/test/regress and unconditionally included in the ICW schedule, thus causing test failures when mapreduce wasn't configured. Move all gpmapreduce tests to co-locate them with the mapreduce code and only test when configured. Also, add a dependency on Perl for gpmapreduce in autoconf since it's a required component.

- May 19, 2017 (1 commit)

Committed by Ning Yu
Cgroup is now required to enable resgroup on Linux, and cgroup itself requires privileged permission to set up and configure, so resgroup tests would fail or at least produce extra warnings in the ICW pipeline. We moved them to the installcheck-resgroup target, as there is a standalone privileged pipeline to run this target. The tests are also updated, as the psql output format differs between ICW and installcheck-resgroup.

- May 5, 2017 (1 commit)

Committed by Daniel Gustafsson
When splitting the Greenplum and PostgreSQL versioning in autoconf, the wrong variable was used when building the version string. Also add a test for it in ICW to catch it in case it happens again.
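A minimal sketch of such an ICW check, assuming the usual version() output format in GPDB:

```sql
-- The version string should carry the Greenplum version, not just
-- the PostgreSQL one.
SELECT version() LIKE '%Greenplum Database%' AS version_string_ok;
```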

- April 28, 2017 (1 commit)

Committed by Daniel Gustafsson
The workfile_manager unittest code was previously included in all builds; this is functionality which shouldn't be in prod installations. This moves all unit test code to src/test/regress and links it to the regress object so that it can be tested with the normal pg_regress process. Add a random test from the suite into ICW just to exercise it; none of these tests were ever executed in automated suites before, so it's not clear what we actually want to run.

- April 21, 2017 (1 commit)

Committed by Adam Lee
The 'sreh' test was disabled in the regression suite; it looks useful, so re-enable it. GPDB 5 removed the error table feature, so rewrite these cases to use gp_read_error_log().

Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
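A hedged sketch of the rewritten pattern (table name, file path, and reject limit are hypothetical): single-row error handling logs rejected rows, and gp_read_error_log() reads them back in place of the removed error tables:

```sql
CREATE TABLE sreh_demo (a int, b text);

-- Rows that fail to parse are logged instead of aborting the load.
COPY sreh_demo FROM '/tmp/sreh_demo.data'
    LOG ERRORS SEGMENT REJECT LIMIT 10 ROWS;

-- Inspect the rejected rows for this relation.
SELECT linenum, errmsg, rawdata FROM gp_read_error_log('sreh_demo');
```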

- April 19, 2017 (1 commit)

Committed by Jesse Zhang
The xpath test in general is fairly questionable:

- it largely overlaps in coverage with the xml test from upstream
- it contains a few negative assertions that depend on an error message that doesn't even come from our (Postgres) code: it's an implementation detail of libxml2. On CentOS, a few Red Hat patches to libxml2 always make those tests fail.

This commit removes the xpath test.

- April 18, 2017 (1 commit)

Committed by xiong-gang
1. Support num_running, num_queueing, num_queued, num_executed and total_queue_duration in gp_toolkit.gp_resgroup_status.
2. Reflect the status in pg_stat_activity when a transaction is queued in a resource group.

Signed-off-by: Richard Guo <riguo@pivotal.io>
Signed-off-by: Kenan Yao <kyao@pivotal.io>
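The new counters are then visible with a plain query against the view; a hedged example assembled from the column names in the message:

```sql
SELECT rsgname, num_running, num_queueing, num_queued,
       num_executed, total_queue_duration
  FROM gp_toolkit.gp_resgroup_status;
```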

- April 13, 2017 (1 commit)

Committed by foyzur
Renaming session_state_memory_entries to session_level_memory_consumption in the uninstall script for the session state view (#2208).

* Renaming session_state_memory_entries to session_level_memory_consumption in the uninstall script for the session state view.
* Adding an ICW test to check the installation/uninstallation, shape and basic content of session_level_memory_consumption.
* Changing the expected file to a dynamically generated one.
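A hedged sketch of the kind of content check the ICW test might do; the schema and column names here are assumptions based on the view's purpose:

```sql
-- Per-session, per-segment memory usage from the renamed view.
SELECT sess_id, segid, vmem_mb, is_runaway
  FROM session_state.session_level_memory_consumption
 ORDER BY vmem_mb DESC;
```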

- April 5, 2017 (1 commit)

Committed by Asim R P
The fix is to perform the same steps as a TRUNCATE command: set new relfiles and drop existing ones for the parent AO table as well as all its auxiliary tables.

This fixes issue #913. Thank you, Tao-Ma, for reporting the issue and proposing a fix as PR #960. This commit implements Tao-Ma's idea, but the implementation differs from the original proposal.

- April 3, 2017 (2 commits)

Committed by Daniel Gustafsson
This moves the memory_quota tests more or less unchanged to ICW. Changes include removing ignore sections and minor formatting as well as a rename to bb_memory_quota.

Committed by Daniel Gustafsson
This combines the various mpph tests in BugBuster into a single new ICW suite, bb_mpph. Most of the existing queries were moved over with a few pruned that were too uninteresting, or covered elsewhere. The BugBuster tests combined are: load_mpph, mpph_query, mpph_aopart, hashagg and opperf.

- March 30, 2017 (1 commit)

Committed by Jesse Zhang
Not to be confused with a TINC test of the same name. This test was brought into the main suite in b82c1e60 as an effort to increase visibility into all the tests that we cared about. We never had the bandwidth to look at the intent of this test, though. There are a plethora of problems with the BugBuster version of `schema_topology`:

1. It has very unclear and mixed intent. For example, it depends on gphdfs (which nobody outside Pivotal can build), but it just tests that we are able to revoke privileges on that protocol.
2. It creates and switches databases.
3. The vast number of cases in this test duplicates coverage from tests elsewhere in `installcheck-good` and TINC.

Burning this with fire.

- March 20, 2017 (1 commit)

Committed by Daniel Gustafsson
Most of the builtin analytical functions added in Greenplum have been deprecated in favour of the corresponding functionality in MADlib. The deprecation notice was committed in a61bf8b7; the code as well as the tests are removed here. The sum(array[]) function requires the matrix_add() backend code, and thus it remains.

This removes matrix_add(), matrix_multiply(), matrix_transpose(), pinv(), mregr_coef(), mregr_r2(), mregr_pvalues(), mregr_tstats(), nb_classify() and nb_probabilities().

- March 13, 2017 (1 commit)

Committed by Heikki Linnakangas
Some notable differences:

* For the last "appendonly_verify_block_checksums_co" test, don't use a compressed table. That seems fragile.
* In the "appendonly_verify_block_checksums_co" test, be more explicit about what is replaced. It used to scan backwards from the end of the file for the byte 'a', but we didn't explicitly include that byte in the test data. What actually gets replaced depends heavily on how integers are encoded. (And the table was compressed too, which made it even more nondeterministic.) In the rewritten test, we replace the string 'xyz', and we use a text field that contains that string as the table data.
* Don't restore the original table file after corrupting it. That seemed uninteresting to test. Presumably the table was OK before we corrupted it, so surely it's still OK after restoring it back. In theory, there could be a problem if the file's corrupt contents were cached somewhere, but we don't cache AO tables, and I'm not sure what we'd try to prove by testing that, anyway, because swapping the file while the system is active is surely not supported.
* The old script checked that the output when a corrupt table was SELECTed from contained the string "ERROR: Header checksum does not match". However, half of the tests actually printed a different error, "Block checksum does not match". It turns out that the way the old select_table function asserted the result of the grep command was wrong. It should've done "assert(int(result) > 0), ..." rather than just "assert(result > 0), ...". As written, it always passed, even if there was no ERROR in the output. The rewritten test does not have that bug.

- March 9, 2017 (1 commit)

Committed by Heikki Linnakangas
These tests are from the legacy cdbfast test suite, from the writable_ext_tbl/test_* files. They're not very exciting, and we already had tests for some variants of these commands, but looking at the way all these privileges are handled in the code, perhaps it's indeed best to have tests for many different combinations. The tests run in under a second, so it's not too much of a burden.

Compared to the original tests, I removed the SELECTs and INSERTs that tested that you can also read/write the external tables after successfully creating one. Those don't seem very useful; they all basically test that the owner of an external table can read/write it. By getting rid of those statements, these tests don't need a live gpfdist server to be running, which makes them a lot simpler and faster to run.

Also move the existing tests on these privileges from the 'external_table' test to the new file.

- March 8, 2017 (1 commit)

Committed by Kenan Yao
There are two default resource groups, default_group and admin_group. Roles are assigned to one of them depending on whether they are superusers, unless a group is explicitly specified. Only a superuser can be assigned to the admin_group.

```sql
-- create a role r1 and assign it to resgroup g1
CREATE ROLE r1 RESOURCE GROUP g1;
```

Signed-off-by: Ning Yu <nyu@pivotal.io>
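Beyond CREATE ROLE, an existing role can presumably be moved between groups with ALTER ROLE; a hedged sketch where the role names are illustrative:

```sql
-- Superusers may be placed in admin_group; others default to default_group.
CREATE ROLE admin1 SUPERUSER RESOURCE GROUP admin_group;
ALTER ROLE r1 RESOURCE GROUP default_group;
```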

- March 6, 2017 (1 commit)

Committed by Heikki Linnakangas
I had to jump through some hoops to run the same test queries with optimizer_cte_inlining both on and off, like the original tests did. The actual test queries are in qp_with_functional.sql, which is included by two launcher scripts that set the options. The launcher tests are put in the test schedule, instead of qp_with_functional.sql.

When ORCA is not enabled, the tests are run with gp_cte_sharing on and off. That's not quite the same as inlining, but it's similar in nature, in that it makes sense to run test queries with it enabled and disabled. There were some tests for that in the with_clause and qp_with_clause tests already, but I don't think some extra coverage will hurt us.

This is just a straightforward conversion; there might be overlapping tests between these new tests and the existing 'with_clause', 'qp_with_clause', and upstream 'with' tests. We can clean them up later; these new tests run in a few seconds on my laptop, so that's not urgent.

A few tests were tagged with "@skip" in the original tests. Test queries 58, 59, and 60 produced different results on different invocations, apparently depending on the order that a subquery returned rows (OPT-2497). I left them in TINC, as skipped tests, pending a decision on what to do about them. Queries 28 and 29 worked fine, so I'm not sure why they were tagged as "@skip OPT-3035" in the first place. I converted them over and enabled them like the other tests.
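A hedged sketch of the launcher-script pattern described above; the file names are assumptions:

```sql
-- qp_with_functional_inlining.sql (launcher): set the option, then
-- include the shared query file.
SET optimizer_cte_inlining = on;
\i sql/qp_with_functional.sql
```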

- March 3, 2017 (2 commits)

Committed by Heikki Linnakangas
In passing, also move existing GPDB-specific tests out of the 'alter_table' test. The 'alter_table' test is inherited from the upstream, so let's try to keep it unmodified.

Committed by Heikki Linnakangas

- March 2, 2017 (3 commits)

Committed by Heikki Linnakangas
Rename the test tables in 'partition' test, so that they don't clash with the test tables in 'partition1' test. Change a few validation queries to not get confused if there are unrelated tables with partitions in the database. With these changes, we can run 'partition' and 'partition1' in the same parallel group, which is a more logical grouping.

Committed by Heikki Linnakangas
All of these tests used the same test table, but it was dropped and re-created for each test. To speed things up, create it once, and wrap each test in a begin-rollback block.

The access plan of one of the tests varied depending on optimizer_segments, and it caused a difference in the ERROR message. The TINC tests were always run with 2 segments, but you got a different plan and message with 3 or more segments. Added a "SET optimizer_segments=2" to stabilize that, and a comment explaining the situation.
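The restructuring boils down to this pattern; a minimal sketch with a hypothetical table:

```sql
CREATE TABLE test_tbl (id int, val text);  -- created once, up front

BEGIN;
-- each individual test body runs here against test_tbl ...
ROLLBACK;  -- undo, leaving the shared table pristine for the next test
```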

Committed by Heikki Linnakangas