- 05 Apr 2017, 1 commit
-
Committed by Asim R P
The fix is to perform the same steps as a TRUNCATE command: set new relfilenodes and drop the existing ones for the parent AO table as well as all its auxiliary tables. This fixes issue #913. Thank you, Tao-Ma, for reporting the issue and proposing a fix as PR #960. This commit implements Tao-Ma's idea, but the implementation differs from the original proposal.
-
- 03 Apr 2017, 2 commits
-
Committed by Daniel Gustafsson
This moves the memory_quota tests more or less unchanged to ICW. Changes include removing ignore sections, minor formatting, and a rename to bb_memory_quota.
-
Committed by Daniel Gustafsson
This combines the various mpph tests in BugBuster into a single new ICW suite, bb_mpph. Most of the existing queries were moved over, with a few pruned that were too uninteresting or covered elsewhere. The BugBuster tests combined are: load_mpph, mpph_query, mpph_aopart, hashagg and opperf.
-
- 30 Mar 2017, 1 commit
-
Committed by Jesse Zhang
Not to be confused with a TINC test with the same name. This test was brought into the main suite in b82c1e60 as an effort to increase visibility into all the tests that we cared about. We never had the bandwidth to look at the intent of this test, though. There is a plethora of problems with the bugbuster version of `schema_topology`:
1. It has very unclear and mixed intent. For example, it depends on gphdfs (which nobody outside Pivotal can build), but it just tests that we are able to revoke privileges on that protocol.
2. It creates and switches databases.
3. The vast number of cases in this test duplicates coverage from tests elsewhere in `installcheck-good` and TINC.
Burning this with fire.
-
- 20 Mar 2017, 1 commit
-
Committed by Daniel Gustafsson
Most of the builtin analytical functions added in Greenplum have been deprecated in favour of the corresponding functionality in MADLib. The deprecation notice was committed in a61bf8b7; the code as well as the tests are removed here. The sum(array[]) function requires the matrix_add() backend code and thus it remains. This removes matrix_add(), matrix_multiply(), matrix_transpose(), pinv(), mregr_coef(), mregr_r2(), mregr_pvalues(), mregr_tstats(), nb_classify() and nb_probabilities().
-
- 13 Mar 2017, 1 commit
-
Committed by Heikki Linnakangas
Some notable differences:
* For the last "appendonly_verify_block_checksums_co" test, don't use a compressed table. That seems fragile.
* In the "appendonly_verify_block_checksums_co" test, be more explicit about what is replaced. It used to scan backwards from the end of the file for the byte 'a', but we didn't explicitly include that byte in the test data. What actually gets replaced depends heavily on how integers are encoded. (And the table was compressed too, which made it even more nondeterministic.) In the rewritten test, we replace the string 'xyz', and we use a text field that contains that string as the table data.
* Don't restore the original table file after corrupting it. That seemed uninteresting to test. Presumably the table was OK before we corrupted it, so surely it's still OK after restoring it back. In theory, there could be a problem if the file's corrupt contents were cached somewhere, but we don't cache AO tables, and I'm not sure what we'd try to prove by testing that, anyway, because swapping the file while the system is active is surely not supported.
* The old script checked that the output, when a corrupt table was SELECTed from, contained the string "ERROR: Header checksum does not match". However, half of the tests actually printed a different error, "Block checksum does not match". It turns out that the way the old select_table function asserted the result of the grep command was wrong. It should've done "assert(int(result) > 0), ..." rather than just "assert(result > 0), ...". As written, it always passed, even if there was no ERROR in the output. The rewritten test does not have that bug.
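The broken assertion described above is worth a closer look. A minimal sketch of the corrected pattern follows; the function and variable names here are hypothetical, not taken from the original test harness. The root cause is that the grep result came back as a *string*, and under Python 2 (which TINC ran on) any non-empty string compares greater than the integer 0, so the buggy `assert(result > 0)` could never fail; converting with `int()` first makes the check meaningful.

```python
# Sketch of the fixed assertion pattern (all names are hypothetical).

def count_error_lines(output, pattern):
    """Mimic `grep -c`: count lines in `output` containing `pattern`."""
    return sum(1 for line in output.splitlines() if pattern in line)

clean_log = "INFO: scan complete\nINFO: 100 rows\n"
corrupt_log = "ERROR: Block checksum does not match\n"

# grep-style tooling hands the count back as a string:
result = str(count_error_lines(corrupt_log, "checksum does not match"))
assert int(result) > 0       # corrupt table: the ERROR line was found

result = str(count_error_lines(clean_log, "checksum does not match"))
assert not int(result) > 0   # clean table: no ERROR line, assertion logic sound
```

Under Python 2, the unconverted form `result > 0` would have compared a string against an integer and always returned True; Python 3 at least raises a TypeError for that comparison, which would have surfaced the bug immediately.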
-
- 09 Mar 2017, 1 commit
-
Committed by Heikki Linnakangas
These tests are from the legacy cdbfast test suite, from the writable_ext_tbl/test_* files. They're not very exciting, and we already had tests for some variants of these commands, but looking at the way all these privileges are handled in the code, perhaps it's indeed best to have tests for many different combinations. The tests run in under a second, so it's not too much of a burden. Compared to the original tests, I removed the SELECTs and INSERTs that tested that you can also read/write the external tables after successfully creating one. Those don't seem very useful; they all basically test that the owner of an external table can read/write it. By getting rid of those statements, these tests don't need a live gpfdist server to be running, which makes them a lot simpler and faster to run. Also move the existing tests on these privileges from the 'external_table' test to the new file.
-
- 08 Mar 2017, 1 commit
-
Committed by Kenan Yao
There are two default resource groups, default_group and admin_group. Roles are assigned to one of them depending on whether they are superusers, unless a group is explicitly specified. Only superusers can be assigned to admin_group.

-- create a role r1 and assign it to resource group g1
CREATE ROLE r1 RESOURCE GROUP g1;

Signed-off-by: Ning Yu <nyu@pivotal.io>
-
- 06 Mar 2017, 1 commit
-
Committed by Heikki Linnakangas
I had to jump through some hoops to run the same test queries with optimizer_cte_inlining both on and off, like the original tests did. The actual test queries are in qp_with_functional.sql, which is included by two launcher scripts that set the options. The launcher tests are put in the test schedule, instead of qp_with_functional.sql. When ORCA is not enabled, the tests are run with gp_cte_sharing on and off. That's not quite the same as inlining, but it's similar in nature, in that it makes sense to run test queries with it enabled and disabled. There were some tests for that in the with_clause and qp_with_clause tests already, but I don't think some extra coverage will hurt us. This is just a straightforward conversion; there might be overlapping tests between these new tests and the existing 'with_clause', 'qp_with_clause', and upstream 'with' tests. We can clean them up later; these new tests run in a few seconds on my laptop, so that's not urgent. A few tests were tagged with "@skip" in the original tests. Test queries 58, 59, and 60 produced different results on different invocations, apparently depending on the order that a subquery returned rows (OPT-2497). I left them in TINC, as skipped tests, pending a decision on what to do about them. Queries 28 and 29 worked fine, so I'm not sure why they were tagged as "@skip OPT-3035" in the first place. I converted them over and enabled them like the other tests.
-
- 03 Mar 2017, 2 commits
-
Committed by Heikki Linnakangas
In passing, also move existing GPDB-specific tests out of the 'alter_table' test. The 'alter_table' test is inherited from upstream, so let's try to keep it unmodified.
-
Committed by Heikki Linnakangas
-
- 02 Mar 2017, 6 commits
-
Committed by Heikki Linnakangas
Rename the test tables in 'partition' test, so that they don't clash with the test tables in 'partition1' test. Change a few validation queries to not get confused if there are unrelated tables with partitions in the database. With these changes, we can run 'partition' and 'partition1' in the same parallel group, which is a more logical grouping.
-
Committed by Heikki Linnakangas
All of these tests used the same test table, but it was dropped and re-created for each test. To speed things up, create it once, and wrap each test in a begin-rollback block. The access plan of one of the tests varied depending on optimizer_segments, and it caused a difference in the ERROR message. The TINC tests were always run with 2 segments, but you got a different plan and message with 3 or more segments. Added a "SET optimizer_segments=2" to stabilize that, and a comment explaining the situation.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
I moved them to the 'partition_pruning_with_fn' test, but because there are no functions involved in these tests, I renamed the test to just 'partition_pruning'. There is possibly some overlap between these new tests and the existing ones in the same file, but I'll leave that for future test archaeologists to decide. These are cheap tests, in any case.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
* Move all the udf_insert_* tests to the main suite. * Move 'volatilefn_dml_int8' test to the main suite. * Remove all the other volatilefn_dml_* tests. There's no reason to believe that the data types used in the table would make a difference, so keeping one variant of this is enough.
-
- 01 Mar 2017, 1 commit
-
Committed by Heikki Linnakangas
I'm not sure how many of these tests are worth keeping at all, but as a first step, let's get them into the main test suite. The runtime of the parallel group with these tests is about 1 minute on my laptop. This is mostly a straightforward move, with mechanical changes removing the useless TINC tags. However, some tests were marked with the special "@executemode ORCA_PLANNER_DIFF" tag. The code in __init__.py ran those tests differently. Normally, you run a test and compare the output with the expected output file. But those ORCA_PLANNER_DIFF tests were run twice, with and without ORCA, and the results were compared against each other, instead of the expected output stored in the repository. The expected outputs of those tests were out of date; you get a different error message now than what was remembered in the expected output. Now those tests are run as normal pg_regress tests. I updated the expected output of those tests to match what you get nowadays.
-
- 27 Feb 2017, 1 commit
-
Committed by Heikki Linnakangas
I kept the tests using QuickLZ in TINC, as we don't have QuickLZ support in the open source version anymore. They ought to be moved to where we keep the code for reading legacy quicklz-compressed tables.
-
- 25 Feb 2017, 2 commits
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
Commit 92c36a45 moved these to the same parallel group, but that turned out to be a bad idea. With assertions enabled, that can trigger an assertion failure; see https://github.com/greenplum-db/gpdb/issues/1859. Also, the test queries in the 'resource_queue' test assume that there are no other resource queues active, and sometimes fail because the test_q queue created by the resource_queue_functions test is active at the same time. Those queries should be changed to not be sensitive to other resource queues, probably by qualifying them with "WHERE rsqname LIKE 'reg_%'", but for now, avoid the test failures by running them separately.
-
- 23 Feb 2017, 3 commits
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
These exact same tests were in both 'direct_dispatch', and in 'bfv_dd_types'. Oddly, one SELECT was missing from 'bfv_dd_types', though. Add the missing SELECT there, and remove all these type tests from 'direct_dispatch'. Now that there are no duplicate table names in these tests, also run 'direct_dispatch' in the same parallel group as the rest of the direct dispatch tests. That's a more logical grouping.
-
Committed by Heikki Linnakangas
The test disconnects, and immediately reconnects and checks that the temp table from the previous session has been removed. However, the old backend removes the tables only after the disconnection, so there's a race condition here: the new session might query pg_tables and still see the table, if the old session was not fast enough to remove it first. Add a two-second sleep, to make the race condition less likely to happen. This is not guaranteed to avoid the race altogether, but I don't see an easy way to wait for the specific event that the old backend has exited. A sleep in a test that doesn't run in parallel with other tests is a bit annoying, as it directly makes the whole regression suite slower. To compensate for that, move the test to run in parallel with the other nearby bfv_* tests. Those other tests don't use temp tables, so it still works. Also, modify the validation query in the test that checks that there are no temp tables to return whole rows, not just COUNT(*). That'll make analysis easier if this test fails.
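A fixed sleep narrows but never closes a race window. A generic poll-with-timeout loop, sketched below, is the usual alternative when there is *some* observable condition to wait on; the helper name and the toy condition are illustrative only, not what the test actually does (the commit notes there was no easy event to wait for in this case).

```python
import time

def wait_until(condition, timeout=10.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Returns True on success, False if the deadline passed. Uses a
    monotonic clock so wall-clock adjustments don't skew the deadline.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Toy example: wait for a flag that another "session" flips, instead of
# sleeping a fixed two seconds and hoping the race window has closed.
state = {"temp_table_gone": False}

def old_backend_exits():          # stands in for the old backend's cleanup
    state["temp_table_gone"] = True

old_backend_exits()
assert wait_until(lambda: state["temp_table_gone"], timeout=2.0)
```

The trade-off versus a bare sleep: the poll returns as soon as the condition holds (fast in the common case) and only pays the full timeout when something is genuinely wrong.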
-
- 22 Feb 2017, 2 commits
-
Committed by Heikki Linnakangas
The uao_dml and uaocs_dml tests were identical, except that there were a few additional queries in the uao_dml suite that were missing from the uaocs_dml suite, and various cosmetic differences in table names and such. Use the new "uao templating" mechanism in pg_regress to generate both cases from a single set of files. I didn't look for redundant or useless tests in these suites. At a quick glance, at least many of the select tests seem quite pointless, so it would be good to cull these in the future.
-
Committed by Heikki Linnakangas
This is a straightforward copy-paste, except for one thing: I changed all the quicklz-compressed tables to use zlib instead, so that these tests work without support for legacy quicklz compression. AFAICT, the compression isn't a very interesting aspect of these tests (I even considered making them uncompressed).
-
- 21 Feb 2017, 3 commits
-
Committed by Heikki Linnakangas
These tests test for an old bug, where vacuuming an empty AO relation would not advance its relfrozenxid. Doesn't seem very likely for that particular bug to reappear, but seems like a good idea to have some coverage for freezing and relfrozenxid advancement. In this rewritten form, these tests are fairly quick, too (1-2 seconds on my laptop).
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
We have some VACUUM FULL commands in other tests already, but these tests seem more comprehensive than the other tests.
-
- 19 Feb 2017, 1 commit
-
Committed by Heikki Linnakangas
Change the queries that check tuple counts on a particular segment, in utility mode, to not print out the exact tuple counts, but a crude classification of 0, 1, <5 or more tuples. That's less sensitive to how the tuples are distributed across segments. The locks_reindex test is moved to the regular regression suite. I rewrote it to use a more advanced "locktest" view, copied from the partition_locking test, which doesn't rely on utility mode. These changes should make the tests work with even larger clusters, but I've only tested with 1, 2, and 3 nodes.
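The bucketing idea described above can be made concrete with a small sketch. This is not the actual test code; the function name and the sample per-segment counts are made up for illustration. The point is that exact counts vary with how rows are distributed across segments, while the coarse buckets stay stable across cluster sizes.

```python
def classify_tuple_count(n):
    """Bucket an exact per-segment tuple count into the coarse classes
    the commit describes: 0, 1, fewer than 5, or 5 and above."""
    if n == 0:
        return "0"
    if n == 1:
        return "1"
    if n < 5:
        return "<5"
    return ">=5"

# Hypothetical per-segment counts for the same 7-row table on clusters
# of different sizes: exact numbers differ, classifications agree.
counts_2seg = [3, 4]
counts_3seg = [2, 3, 2]
assert [classify_tuple_count(c) for c in counts_2seg] == ["<5", "<5"]
assert [classify_tuple_count(c) for c in counts_3seg] == ["<5", "<5", "<5"]
```

In the test itself, the same effect would be achieved inside the validation SQL (e.g. with a CASE expression over the count), so the expected output no longer depends on the segment count.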
-
- 18 Feb 2017, 2 commits
-
Committed by Heikki Linnakangas
These tests need the usual "lineitem", "orders" and "suppliers" test tables, but use only a subset of the data that we have in the data/*.csv files. Therefore, split the csv files into two parts: the "small" set, used by these new tests, and the rest.
-
Committed by Heikki Linnakangas
-
- 17 Feb 2017, 2 commits
-
Committed by Heikki Linnakangas
The main ExplainAnalyzeTestCase tests in here were running standard TPC-H-like test queries, and checking that the EXPLAIN ANALYZE output contains a "Memory: " line for every plan node. That seems a bit excessive, because looking at how those lines are printed, there is no reason to believe that you might have a Memory line for some nodes but not others. I don't think we need to test for that. In order to still have minimal coverage for the explain_memory_verbosity GUC, add a small test case to the main regression suite for that. The mpp20785 test was testing for an old bug where you got an out of memory error with this query. However, the test was broken, because the test schema was nowhere to be found, so it simply resulted in a "schema not found" error. Perhaps we could put it back if we could find the original schema somewhere, but it's useless as it is.
-
This commit also resolves #1646. It ports only the `tuplesort_get_stats` function in tuplesort.c from the Postgres commit https://github.com/postgres/postgres/blob/9bd27b7c9e998087f390774bd0f43916813a2847/src/backend/utils/sort/tuplesort.c Link to the gpdb-dev discussion: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/V-zIshnNzyE
-
- 15 Feb 2017, 3 commits
-
Committed by Heikki Linnakangas
I'm not sure what exactly the purpose of each of these tests are, and they probably need more culling and cleanup, but for now, just move them out of bugbuster, so that we can get rid of bugbuster as a separate suite.
-
Committed by Heikki Linnakangas
I don't see any reason to expect that this would behave differently with an external table than a normal one. Move the test from bugbuster to the normal regression suite.
-
Committed by Heikki Linnakangas
This moves 'spi', 'spilltodisk', and 'wrkloadadmin' tests. The goal is to get rid of bugbuster as a separate suite altogether, and this commit is one step towards that goal.
-
- 14 Feb 2017, 1 commit
-
Committed by Haisheng Yuan
- The dpe.sql file was duplicated in bugbuster, so we remove that one.
- Result files were old, and hence updated.
Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
- 08 Feb 2017, 1 commit
-
Committed by Ashwin Agrawal
No reason was specified for running those tests serially; they were tried and seem to run fine in parallel. Also, dropped a few DROP TABLE statements from the gpdtm_plpgsql test.
-
- 03 Feb 2017, 1 commit
-
Committed by Heikki Linnakangas
There's no need to allocate the ExecWorkFile struct in TopMemoryContext. We don't rely on a callback to clean them up anymore; they will be closed by the normal ResourceOwner and MemoryContext mechanisms. Otherwise, the memory allocated for the struct is leaked on abort. In passing, make the error handling more robust. Also modify tuplestorenew.c to allocate everything in a MemoryContext, rather than using gp_malloc() directly. The ResourceOwner and MemoryContext mechanisms will now take care of cleanup on abort, instead of the custom end-of-xact callback. Remove the execWorkFile mock test. The only thing it tested was that ExecWorkFile was allocated in TopMemoryContext. That didn't seem very interesting to begin with, and the assumption is now false, anyway. We could've changed the test to check that it's allocated in CurrentMemoryContext instead, but I don't see the point. Per report in private from Karthikeyan Rajaraman. Test cases also by him, with some cleanup by me.
-