- 21 Feb 2017, 10 commits
-
Committed by xiong-gang
- Add two catalog tables for resource groups
- Add a column to pg_authid to record the resource group of a role
- Bump the catalog version
- Hard-code the default resource group of roles temporarily

Signed-off-by: Kenan Yao <kyao@pivotal.io>
-
Committed by Heikki Linnakangas
ANALYZE used to be implemented with fairly complicated aggregate queries, to count the number of distinct values and gather many other statistics. Once upon a time, there was a bug in the planner that made it try to use a hash aggregate even if the data type wasn't hashable. That case was triggered by the internal queries that ANALYZE runs. ANALYZE has since been rewritten, and no longer issues such queries, so this test case isn't very interesting in its old form anymore.

But the underlying problem, that we mustn't use a hash aggregate on a datatype that isn't hashable, would still be good to test. So write a new, more targeted test for that, one that doesn't involve ANALYZE.
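A minimal sketch of such a targeted test, assuming a column type with no hash operator class (the concrete type used here, money, is only illustrative; which types lack hash support depends on the server version):

    -- Small table whose column type is assumed to have no hash opclass.
    create temp table nohash_t (m money);
    insert into nohash_t values ('1.00'), ('2.00'), ('1.00');

    -- Make sorting look expensive, so the planner would pick a hash
    -- aggregate if it (incorrectly) considered the type hashable.
    set enable_sort = off;
    explain select distinct m from nohash_t;  -- must not show HashAggregate
    select distinct m from nohash_t;          -- must still return two rows
    reset enable_sort;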
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
Subpartition names are SQL identifiers, so you cannot use a plain integer as a name unless you quote it. There isn't really anything special here, and never was. The original bug report was:

> I was trying to add subpartitions named 2001, 2002, 2003, etc. but was not
> allowed:
> ...
> However it was allowed when the name was changed from a number to a string

It was closed as "No fix needed", as that is expected behavior. I'm not sure why we bothered to add test cases for this originally, but I think we can just drop them now.
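For illustration, a sketch of the expected behavior (table and column names are made up):

    create table sales (id int, year int)
    distributed by (id)
    partition by list (year)
    ( partition "2001" values (2001),   -- ok: a quoted identifier
      partition y2002  values (2002) ); -- ok: an ordinary identifier
    -- "partition 2001 values (2001)" would fail: a bare integer
    -- is not a valid SQL identifier.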
-
Committed by Heikki Linnakangas
The old test case was no longer testing what it was supposed to. The ancient bug reported as MPP-3115 was related to renaming a partition, but the behavior has changed since then, to not allow two partitions to have the same name. I couldn't see any tests checking that RENAME PARTITION enforces uniqueness, except for the special case that the new name is the same as the old, so I added a test for that to partition1.sql. Other than that, I think we can simply remove this test case as obsolete.
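The remaining special case could be exercised with something like this sketch (table and partition names are made up, and the exact RENAME PARTITION spelling is an assumption):

    -- Renaming a partition to the name it already has.
    alter table sales rename partition jan08 to jan08;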
-
Committed by Heikki Linnakangas
Much of it was already present in the partition1 test.
-
Committed by Heikki Linnakangas
We already have some VACUUM FULL commands in other tests, but these seem more comprehensive.
-
Committed by Haozhou Wang
Due to changes in the external table "on master" feature.
-
Committed by Haozhou Wang
Due to changes in the external table "on master" feature.
-
Committed by Haozhou Wang
We will first support external table query execution on the master segment, before we fully support task dispatch on the master segment.

1. Modify the grammar to enable the ON clause for external tables.
2. Modify the ExtTableEntry catalog to record the ON clause.
3. Update the legacy planner to support the ON clause for external tables.
4. Update pg_dump.
5. Update the regress tests.

Signed-off-by: Haozhou Wang <hawang@pivotal.io>
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>

* Update the catalog version for "Support ON MASTER clause for external table"
* Add "on master" regression tests for gpcloud
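For illustration, the new grammar might be exercised like this (the protocol, host, path, and columns are made up; only the ON MASTER clause is the point):

    CREATE EXTERNAL TABLE ext_on_master (id int, payload text)
    LOCATION ('file://mdw/tmp/data.txt') ON MASTER
    FORMAT 'TEXT';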
-
- 20 Feb 2017, 1 commit
-
- 19 Feb 2017, 7 commits
-
Committed by Heikki Linnakangas
ORCA acquires different locks. This is the same problem that was fixed in the main test suite by commit b3e13f6b.
-
Committed by Ashwin Agrawal
The answer files differ due to ORCA. It's more productive to have ICW running with ORCA on in the PR pipeline, for early feedback. And instead of running with just ORCA on, it's better to have Codegen on as well for that one ICW run. [ci skip]
-
Committed by Ashwin Agrawal
[ci skip]
-
Committed by Heikki Linnakangas
ORCA acquires locks on tables with these operations. Just memorize the expected output.
-
Committed by Heikki Linnakangas
To make the max_concurrency tests pass on Concourse.
-
Committed by Heikki Linnakangas
Change the queries that check tuple counts on a particular segment, in utility mode, to not print the exact tuple counts, but a crude classification instead: 0, 1, fewer than 5, or 5 and more tuples. That's less sensitive to how the tuples are distributed across segments.

The locks_reindex test is moved to the regular regression suite. I rewrote it to use a more advanced "locktest" view, copied from the partition_locking test, which doesn't rely on utility mode.

These changes should make the tests work with even larger clusters, but I've only tested with 1, 2, and 3 nodes.
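A sketch of the bucketing idea, assuming a test table named t (segments holding no rows simply don't show up in the result, so no 0 branch is needed here):

    select seg, case when cnt = 1 then '1'
                     when cnt < 5 then '< 5'
                     else '>= 5' end as tuple_class
    from (select gp_segment_id as seg, count(*) as cnt
          from t group by gp_segment_id) s;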
-
Committed by Heikki Linnakangas
This new "isolation2" suite uses the same Python helper that TINC used to run these special isolation test cases.
-
- 18 Feb 2017, 12 commits
-
Committed by Heikki Linnakangas
These tests merely check that "gpactivatestandby --help" and "gpinitstandby --help" produce a particular output. That's not very interesting; --help seems highly unlikely to suddenly stop working. Also, memorizing the exact help output in a test adds a hurdle to keeping the --help text up to date: you need to remember to update the test too, whenever you update the help text.

What would be useful is a test that checks that --help covers all the options and provides useful advice to a DBA. But that cannot be automated.
-
Committed by Karen Huddleston
Some tests in Concourse are failing because provisioning pulse machines takes just over 15 minutes. This time increase should help make those tests more consistent.
-
Commit d09cf2e6 introduced a proc_exit(0) based on a check that the postmaster is not alive and the segment is at fault. That check is incorrect when `PostmasterIsAlive()` is passed true in the case of a resync worker, because resync workers are not direct children of the postmaster. Due to the incorrect check, proc_exit(0) was always called for a resync worker when the segment is in fault. Also, locks are not released while exiting, which might cause other backends' `StartTransaction()` to hang on the virtual xact lock. Fixes #1791
-
Committed by Heikki Linnakangas
I removed the last tests from that directory in commit 4d23ab19.
-
Committed by Shreedhar Hardikar
-
Committed by Heikki Linnakangas
The "CREATE TABLE ... AS SELECT ..." commands that were used to create the test tables choose a random distribution policy when ORCA is used, while without ORCA, the first column is chosen as the distribution key. I'm not sure why, and it seems inconsistent, but work around that by using CREATE TABLE + INSERT ... SELECT to create and populate the tables instead.

With CTAS, all the rows in AO tables go to AO segfile 0, while with CREATE TABLE + a separate INSERT, they go to segfile 1, so update the expected output for that.
-
Committed by C.J. Jameson
Signed-off-by: Chumki Roy <croy@pivotal.io>
-
Committed by Heikki Linnakangas
These tests need the usual "lineitem", "orders" and "suppliers" test tables, but use only a subset of the data that we have in the data/*.csv files. Therefore, split the csv files into two parts: the "small" set, used by these new tests, and the rest.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
I tried reverting the commit that fixed the bug, but I couldn't actually reproduce the original bug anymore. But this is a very quick query, so we might as well keep it.
-
Committed by Heikki Linnakangas
-
Committed by David Yozie
* Adding a graphics directory for the adminguide
* Updating the source to use the local /graphics dir
* Adding missing graphics
* Adding a ditamap for the client docs; removing some extraneous landing pages; adding local graphics
-
- 17 Feb 2017, 10 commits
-
Committed by Heikki Linnakangas
I had to change some of the tests that used 4 test rows to have more (9), to make them pass on a 3-segment cluster. If you only insert 4 rows into an external table, some segments might not receive any rows, and reading from the same file later will fail because the file wasn't created. I also changed the queries that tested that rows are distributed evenly, to work with a different number of segments.

Hook up the new tests to installcheck-world.
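One way such an evenness check can avoid hard-coding per-segment counts is to look at the spread instead; a sketch, assuming a test table named ext_t and assuming gp_segment_id is selectable from the scan:

    -- Per-segment counts should differ by at most a small tolerance;
    -- the exact threshold would be up to the test.
    select max(cnt) - min(cnt) as spread
    from (select gp_segment_id, count(*) as cnt
          from ext_t group by gp_segment_id) s;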
-
Committed by Heikki Linnakangas
The explain plan with ORCA spells Seq Scan as Table Scan. Add alternative expected output for that.
-
Committed by Heikki Linnakangas
The main ExplainAnalyzeTestCase tests in here were running standard TPC-H-like test queries, and checking that the EXPLAIN ANALYZE output contains a "Memory: " line for every plan node. That seems a bit excessive: looking at how those lines are printed, there is no reason to believe that you might have a Memory line for some nodes but not others. I don't think we need to test for that. In order to still have minimal coverage for the explain_memory_verbosity GUC, add a small test case for it to the main regression suite.

The mpp20785 test was testing for an old bug where this query produced an out of memory error. However, the test was broken, because the test schema was nowhere to be found, so it simply resulted in a "schema not found" error. Perhaps we could put it back if we could find the original schema somewhere, but it's useless as it is.
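A minimal sketch of such a small test (the level 'summary' is an assumption about one of the GUC's accepted values):

    set explain_memory_verbosity = summary;
    explain analyze select count(*) from generate_series(1, 1000);
    reset explain_memory_verbosity;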
-
Committed by David Sharp
Rely on the credentials being in the environment rather than explicitly passing them. Add a comment to document the requirement. See also: 0d840bb2
-
Committed by Bhuvnesh Chaudhary
When we create a Left Anti Semi Join relation and the left side is a dummy relation, it implies that the join relation is also a dummy relation.

Signed-off-by: Omer Arap <oarap@pivotal.io>
-
Committed by Ashwin Agrawal
QE_READER and QE_ENTRY_DB_SINGLETON are not expected to allocate transaction IDs. They are just helper sub-processes of the WRITER and should be using its xid.
-
Committed by Ashwin Agrawal
SET default_tablespace should ideally be a read-only transaction and shouldn't allocate an xid. Currently get_tablespace_oid() acquires a lock, intended for other callers, to protect against concurrent changes, and ends up assigning a transaction ID to take the lock. Setting the GUC doesn't need to acquire the lock; worse, doing so creates a problem, as the multiple readers and the writer in a gang all end up assigning independent xids when this GUC is dispatched. This happens because SET runs as QE_AUTO_COMMIT_IMPLICIT for the writer and hence has deferred allocation of the transaction ID. The filespace ICW tests detected this case, since without the fix they hit the assertion FailedAssertion("!(!((proc->xid) != ((TransactionId) 0)))", File: "procarray.c").

The simplest scenario that demonstrates the problem is:

- DROP TABLE a;
- CREATE TABLE a(a INT, b INT);
- INSERT INTO a VALUES (generate_series(1,2), generate_series(3,4)); -- generates 1 reader and 1 writer gang
- SET default_tablespace='pg_default';
-
Committed by Ashwin Agrawal
The commit xlog record carries the information about objects that we rely on for crash recovery until the post-commit persistent object work is done, hence we cannot allow a checkpoint in between. So, use the inCommit flag to achieve that. This makes it consistent with FinishPreparedTransaction() as well. Without the fix, dangling persistent table entries and on-disk files are left behind by the following sequence:

- Start a transaction to drop a table
- `RecordTransactionCommit()`
- Checkpoint before `AtEOXact_smgr()`
- Crash/recover
-
Committed by Chuck Litzell
* docs: add Enum datatype
* docs: add a release note for the enum datatype
* remove the optimizer support statement
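An illustrative use of the datatype the docs describe (the names are made up; DISTRIBUTED BY is the usual Greenplum addition to the standard PostgreSQL enum syntax):

    create type mood as enum ('sad', 'ok', 'happy');
    create table person (name text, current_mood mood)
    distributed by (name);
    insert into person values ('Moe', 'happy');
    select * from person where current_mood > 'sad';  -- enums are ordered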
-