- 22 Feb 2017, 5 commits
-
-
Committed by Heikki Linnakangas
We have more comprehensive tests for gpcheckcat in gpMgmt/test/behave/mgmt_utils/gpcheckcat.feature, and in gpMgmt/bin/gppylib/test/unit/test_unit_gpcheckcat.py.
-
Committed by Peifeng Qiu
gp_dump_agent is used by management utils for backup/restore operations. Apply a patch similar to the pg_dump one to cdb_dump_agent. Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
Committed by Adam Lee
Due to changes to the external table "on master" feature. Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Adam Lee
Due to changes to the external table "on master" feature. Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Heikki Linnakangas
I typoed the INSERT command in this test, and even checked in the ERROR in the expected output. Fix. Also, make the test case a bit less fragile, by advancing the XID counter also after the INSERT+DELETEs in the second part of these tests. This ensures that the tables will look 'old' before the vacuum. Tweak the limits of the 'old' and 'young' XIDs, to make more sure that a just-created table is considered 'young', and that advancing the XID counter pushes the age to 'old' on all segments.
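The 'old' vs. 'young' distinction above hinges on age(relfrozenxid), which grows as the XID counter advances. A rough sketch (not the actual test; the table name is made up) of how that age can be inspected:

```sql
-- Sketch, not the real test: age(relfrozenxid) grows as the XID counter
-- advances, so burning XIDs after the INSERT+DELETEs pushes the table
-- across the 'old' threshold on every segment.
CREATE TABLE vacuum_age_test (a int);
INSERT INTO vacuum_age_test SELECT generate_series(1, 100);
DELETE FROM vacuum_age_test;

-- The table's age relative to the current transaction ID:
SELECT relname, age(relfrozenxid)
FROM pg_class
WHERE relname = 'vacuum_age_test';
```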
-
- 21 Feb 2017, 17 commits
-
-
Committed by Heikki Linnakangas
I'm not sure why this seemed to pass on my laptop without this file, but looking at the code, I'm pretty sure it's still needed for the test_uao_crash_update_with_ins_fault test. We'll see what the Concourse job thinks...
-
Committed by Heikki Linnakangas
These tests test for an old bug, where vacuuming an empty AO relation would not advance its relfrozenxid. Doesn't seem very likely for that particular bug to reappear, but seems like a good idea to have some coverage for freezing and relfrozenxid advancement. In this rewritten form, these tests are fairly quick, too (1-2 seconds on my laptop).
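A minimal sketch of the scenario the old bug covered, with a hypothetical table name — vacuuming an empty append-only table should still advance its relfrozenxid:

```sql
-- Sketch of the old-bug scenario: an empty append-only table is
-- vacuumed, and relfrozenxid should still advance afterwards.
CREATE TABLE ao_empty (a int) WITH (appendonly = true);

SELECT relfrozenxid FROM pg_class WHERE relname = 'ao_empty';
VACUUM ao_empty;
-- relfrozenxid should now be newer, even though no tuples existed.
SELECT relfrozenxid FROM pg_class WHERE relname = 'ao_empty';
```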
-
Committed by Heikki Linnakangas
Turns out that the remaining tests need both uao_crash_update2.sql and uaocs_crash_update2.sql. [me puts on brown paper bag]
-
Committed by Heikki Linnakangas
The remaining tests needed uaocs_crash_update2.sql, not uao_crash_update2.sql
-
Committed by Heikki Linnakangas
I removed these files in commit b39c1356, but it turns out that some of the remaining tests still used those files.
-
Committed by Heikki Linnakangas
The pattern in these tests was:
1. Create and populate a test append-only table.
2. Induce a fault with gpfaultinjector, to crash the server at the beginning of the appendonly_insert/update/delete function.
3. Try to INSERT, UPDATE or DELETE some rows, crashing the server.
Crashing the server at the beginning of the operation isn't very interesting, because we haven't done any on-disk modifications at that point yet. There is no reason to believe that there would be a problem at crash recovery at that point. In particular, after the crash, the state on disk will look identical in the INSERT, UPDATE and DELETE scenarios. Therefore, remove the tests.
-
Committed by Heikki Linnakangas
These tests set the appendonly_delete fault injection trigger, and then executed a TRUNCATE or ALTER TABLE DROP COLUMN on the table. However, neither command performs any DELETEs on the target table, and therefore doesn't even hit the fault. This is effectively the same as just testing TRUNCATE or ALTER TABLE DROP COLUMN on an AOCS table without any faults, and we have tests for that elsewhere already (e.g. src/test/regress/input/uao_dml/uao_dml.source and src/test/regress/sql/alter_table_aocs.sql).
-
Committed by xiong-gang
- Add 2 catalog tables for resource groups
- Add one column in pg_authid to record the resource group of a role
- Bump catalog version
- Hard-code the default resource group of roles temporarily
Signed-off-by: Kenan Yao <kyao@pivotal.io>
-
Committed by Heikki Linnakangas
ANALYZE used to be implemented with fairly complicated aggregate queries, to count the number of distinct values and gather many other statistics. Once upon a time, there was a bug in the planner, that it tried to use a hash aggregate even if the data type wasn't hashable. That case was triggered by the internal queries that ANALYZE runs. ANALYZE has since been rewritten, and no longer issues such queries, so this test case isn't very interesting in its old form anymore. But the underlying problem, that we mustn't use a hash aggregate on a datatype that isn't hashable, would still be good to test. So write a new, more targeted test for that, that doesn't involve ANALYZE.
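A sketch of the kind of targeted test described, with a placeholder type name — the point is that grouping on a type that has equality but no hash operator class must not pick HashAggregate:

```sql
-- Sketch only: "nohash_type" is a hypothetical stand-in for a datatype
-- that has an equality operator but no hash operator class; which
-- concrete type qualifies depends on the server version.
CREATE TABLE nohash_tab (k nohash_type, v int);

-- The planner must fall back to sort-based grouping here; the plan
-- must not contain a HashAggregate node.
EXPLAIN SELECT k, count(*) FROM nohash_tab GROUP BY k;
```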
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
Subpartition names are SQL identifiers, so you cannot use a plain integer as a name, unless you quote it. There isn't really anything special here, and never was. The original bug report was:
> I was trying to add subpartitions named 2001, 2002, 2003, etc. but was not
> allowed:
> ...
> However it was allowed when the name was changed from a number to a string
It was closed as "No fix needed", as that is expected behavior. I'm not sure why we bothered to add test cases for this originally, but I think we can just drop them now.
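For illustration (table name and partition bounds are made up), the quoting rule looks like this in Greenplum's partitioning syntax:

```sql
-- Sketch: a partition name is a SQL identifier, so a bare number is
-- rejected, but a quoted one is accepted.
CREATE TABLE sales (id int, year int)
DISTRIBUTED BY (id)
PARTITION BY RANGE (year)
(
  PARTITION "2001" START (2001) END (2002),  -- quoted: accepted
  PARTITION "2002" START (2002) END (2003)
);
-- PARTITION 2001 START (2001) END (2002) would fail with a syntax error.
```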
-
Committed by Heikki Linnakangas
The old test case was no longer testing what it was supposed to. The ancient bug reported as MPP-3115 was related to renaming a partition, but the behavior has changed since then, to not allow two partitions to have the same name. I couldn't see any tests that RENAME PARTITION enforces uniqueness, except for the special case that the new name is the same as the old, so I added a test for that to partition1.sql. Other than that, I think we can simply remove this test case as obsolete.
-
Committed by Heikki Linnakangas
Much of it was present in partition1 test already.
-
Committed by Heikki Linnakangas
We have some VACUUM FULL commands in other tests already, but these tests seem more comprehensive than the others.
-
Committed by Haozhou Wang
Due to changes to the external table "on master" feature.
-
Committed by Haozhou Wang
Due to changes to the external table "on master" feature.
-
Committed by Haozhou Wang
We will first support external table query execution on the master segment before we fully support task dispatch on the master segment.
1. Modify the grammar to enable the ON clause for external tables.
2. Modify the ExtTableEntry catalog to record the ON clause.
3. Update the legacy planner to support the ON clause for external tables.
4. Update pg_dump.
5. Update regress tests.
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
* Update catalog version for "Support ON MASTER clause for external table"
* Add "on master" regression tests for gpcloud
-
- 20 Feb 2017, 1 commit
-
- 19 Feb 2017, 7 commits
-
-
Committed by Heikki Linnakangas
ORCA acquires different locks. Same problem that was fixed in the main test suite by commit b3e13f6b.
-
Committed by Ashwin Agrawal
Answer files differ due to ORCA. It's more productive to have ICW running with ORCA on in the PR pipeline for early feedback. Instead of running with just ORCA on, it is better to have Codegen on as well for one ICW run. [ci skip]
-
Committed by Ashwin Agrawal
[ci skip]
-
Committed by Heikki Linnakangas
ORCA acquires locks on tables with these operations. Just memorize the expected output.
-
Committed by Heikki Linnakangas
To make max_concurrency tests pass on concourse.
-
Committed by Heikki Linnakangas
Change the queries that check tuple counts on a particular segment, in utility mode, to not print out the exact tuple counts, but a crude classification of 0, 1, <5 or more tuples. That's less sensitive to how the tuples are distributed across segments. The locks_reindex test is moved to the regular regression suite. I rewrote it to use a more advanced "locktest" view, copied from the partition_locking test, which doesn't rely on utility mode. These changes should make the tests work with even larger clusters, but I've only tested with 1, 2, and 3 nodes.
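The bucketing idea described above can be sketched with a simple CASE expression (table name hypothetical):

```sql
-- Sketch: report a coarse class instead of the exact per-segment tuple
-- count, so the expected output is stable regardless of how rows happen
-- to be distributed across segments.
SELECT CASE
         WHEN count(*) = 0 THEN '0'
         WHEN count(*) = 1 THEN '1'
         WHEN count(*) < 5 THEN '<5'
         ELSE '5 or more'
       END AS tuple_count_class
FROM some_test_table;  -- hypothetical table name
```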
-
Committed by Heikki Linnakangas
This new "isolation2" suite uses the same Python helper that TINC used to run these special isolation test cases.
-
- 18 Feb 2017, 10 commits
-
-
Committed by Heikki Linnakangas
These tests merely test that "gpactivatestandby --help" and "gpinitstandby --help" produce a particular output. That's not very interesting; --help seems highly unlikely to suddenly stop working. Also, memorizing the exact help output in a test adds a hurdle to keeping the --help text up-to-date: you will need to remember to update the test too, whenever you update the help text. What would be useful is a test to check that --help covers all the options and provides useful advice to a DBA. But that cannot be automated.
-
Committed by Karen Huddleston
* Some tests in Concourse are failing because they take just over 15 minutes to provision pulse machines. This time increase should help make those tests more consistent.
-
Commit d09cf2e6 introduced proc_exit(0) based on the check that the postmaster is not alive and the segment is at fault. The check for the postmaster being alive is incorrect when true is passed to `PostmasterIsAlive()` in the case of a resync worker, as resync workers are not direct children of the postmaster. Because of the incorrect check, proc_exit(0) is always called for a resync worker when the segment is in fault. Also, the locks are not released while exiting, which might cause other backends' `StartTransaction()` to hang due to the virtual xact lock. Fixes #1791
-
Committed by Heikki Linnakangas
I removed the last tests from that directory in commit 4d23ab19.
-
Committed by Shreedhar Hardikar
-
Committed by Heikki Linnakangas
The "CREATE TABLE ... AS SELECT ..." commands that were used to create the test tables choose a random distribution policy when ORCA is used, while without ORCA, the first column is chosen as the distribution key. I'm not sure why, and it seems inconsistent, but work around that by using CREATE TABLE + INSERT ... SELECT to create and populate the tables instead. With CTAS, all the rows in AO tables go to AO segfile 0, while with CREATE TABLE + a separate INSERT, they go to segfile 1, so update the expected output for that.
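The workaround can be sketched like this, with hypothetical table names:

```sql
-- Sketch of the workaround: instead of CTAS (whose distribution policy
-- differed under ORCA), create the table explicitly and populate it
-- with a separate INSERT ... SELECT. Table names are made up.
-- Before:
--   CREATE TABLE ao_t WITH (appendonly = true) AS SELECT * FROM src;
-- After:
CREATE TABLE ao_t (LIKE src) WITH (appendonly = true);
INSERT INTO ao_t SELECT * FROM src;
-- Note: rows now land in AO segfile 1 instead of 0, so expected output
-- that prints segfile numbers needs updating.
```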
-
Committed by C.J. Jameson
Signed-off-by: Chumki Roy <croy@pivotal.io>
-
Committed by Heikki Linnakangas
These tests need the usual "lineitem", "orders" and "suppliers" test tables, but use only a subset of the data that we have in the data/*.csv files. Therefore, split the csv files into two parts: the "small" set, used by these new tests, and the rest.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
I tried reverting the commit that fixed the bug, but I couldn't actually reproduce the original bug anymore. But this is a very quick query, so might as well keep it.
-