- 09 Mar 2017, 8 commits
-
-
Committed by Christopher Hajas
Authors: Chris Hajas and Todd Sedano
-
Committed by Christopher Hajas
read_bytes will be negative if read() returns a negative value on error. Therefore, this should be a signed int.
-
Committed by Christopher Hajas
This is a backup-breaking failure and the dump should exit rather than simply breaking from the loop.
-
Committed by Ashwin Agrawal
These .sql files assumed that the machines running the tests would have their timezone set to PST/PDT. This commit explicitly sets the timezone to PST in the .sql files and updates the .ans files to match this new behavior.
-
This resolves #1880. Thanks to Lirong Jian for the initial PR #1952.

Details:
* Fixed a string-comparison typo in cdbexplain
* Fixed the typo for UNINITIALIZED_SORT
* Added a new method to convert a sort space type string to an enum
-
Committed by Heikki Linnakangas
These were not very exciting tests, because ORCA falls back to the Postgres planner for these queries. But after adding a WHERE clause to the query, it's a reasonable test case of partition pruning in general, so add one such test case to the main test suite.
-
Committed by Heikki Linnakangas
These tests are from the legacy cdbfast test suite, from writable_ext_tbl/test_* files. They're not very exciting, and we already had tests for some variants of these commands, but looking at the way all these privileges are handled in the code, perhaps it's indeed best to have tests for many different combinations. The tests run in under a second, so it's not too much of a burden.

Compared to the original tests, I removed the SELECTs and INSERTs that tested that you can also read/write the external tables after successfully creating one. Those don't seem very useful; they all basically test that the owner of an external table can read/write it. By getting rid of those statements, these tests don't need a live gpfdist server to be running, which makes them a lot simpler and faster to run.

Also move the existing tests on these privileges from the 'external_table' test to the new file.
-
Committed by Shoaib Lari
The gpcheckcat 'inconsistent' check should skip the pg_index.indcheckxmin column for now because, due to the HOT feature, the value of this column can differ between the master and the segments. A longer-term fix will resolve the underlying issue that causes the indcheckxmin column value to differ between the master and the segments.
-
- 08 Mar 2017, 14 commits
-
-
Committed by Omer Arap
-
Committed by Pengzhou Tang
rolresgroup was added to pg_authid in previous commits; we should also add it to pg_authid.

Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Kenan Yao
The same restriction applies as with the CREATE ROLE syntax: only a superuser can be assigned to admin_group.

    -- assign the role r1 to resource group g1
    ALTER ROLE r1 RESOURCE GROUP g1;

Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Kenan Yao
There are two default resource groups, default_group and admin_group. Roles are assigned to one of them, depending on whether they are superusers, unless a group is explicitly specified. Only a superuser can be assigned to admin_group.

    -- create a role r1 and assign it to resgroup g1
    CREATE ROLE r1 RESOURCE GROUP g1;

Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Eamon
[ci skip]
-
Committed by Haozhou Wang
dblink is a built-in module of PostgreSQL; this patch enables it in GPDB.

1. Enable compiling dblink with GPDB
2. Update and enable dblink test cases

Known issues:

1. The following functions are not supported by GPDB and are disabled temporarily due to changed GPDB behavior: dblink_send_query(), dblink_is_busy(), dblink_get_result()
2. Anonymous connections and connections by name are not supported in a nested statement with both write and read, e.g. INSERT INTO tblname SELECT * FROM dblink(['connName'],'SELECT * FROM otherdb.tblname') as tbl; Currently, only connections with a connection string are supported in such statements.

Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
Signed-off-by: Jianxia Chen <jchen@pivotal.io>
Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by David Sharp
v22 of the pull_request resource removes the default behavior of making shallow clones. A shallow clone will not fetch tags, since no tagged commits are walked over during the fetch. We need tags to be present to build and determine the version string.

Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by Jingyi Mei
Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by Jane Beckman
* Transaction id support
* Removed extra line

[ci skip]
-
Committed by Marbin Tan
- Also remove unwanted test: the abort test should be skipped, based on feedback from the Storage team.

Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Marbin Tan
-
Committed by Marbin Tan
-
Committed by Heikki Linnakangas
These things are covered by the 'transactions' test, and the 'uao_dml/uao_dml_column' test, in the main test suite.
-
Committed by Bhunvesh Chaudhary
When `explain` traces a VAR to a CONST, it assumes (rightly so) that it's most likely the only constant being projected at that operator, e.g.

```
EXPLAIN SELECT a AS c FROM (SELECT 1 AS a) AS bar ORDER BY 1;
                   QUERY PLAN
------------------------------------------------
 Sort  (cost=0.03..0.04 rows=1 width=4)
   Sort Key: (1)
   ->  Result  (cost=0.00..0.01 rows=1 width=0)
 Optimizer status: legacy query optimizer
(4 rows)
```

For a query containing the `VALUES` construct, the planner would generate a `ValuesScan`, whereas ORCA generates an `Append` with each constant as a `Result` node underneath. Notice how the ORCA plan shape breaks the assumption of the `EXPLAIN` rendering logic, resulting in somewhat confusing output like this:

```
explain select * from foo, (values (1), (2)) f(i) where foo.a=f.i;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Gather Motion 3:1  (slice1; segments: 3)  (cost=0.00..431.00 rows=1 width=8)
   ->  Hash Join  (cost=0.00..431.00 rows=1 width=8)
         Hash Cond: (1) = a
         ->  Append  (cost=0.00..0.00 rows=1 width=4)
               ->  Result  (cost=0.00..0.00 rows=1 width=4)
               ->  Result  (cost=0.00..0.00 rows=1 width=4)
               ->  Result  (cost=0.00..0.00 rows=1 width=4)
               ->  Result  (cost=0.00..0.00 rows=1 width=4)
         ->  Hash  (cost=431.00..431.00 rows=1 width=4)
               ->  Table Scan on foo  (cost=0.00..431.00 rows=1 width=4)
 Settings:  optimizer=on
 Optimizer status: PQO version 2.7.0
(12 rows)
```

In the above case, `(1)` is less than helpful because that is not the only constant in the relation. This commit renders a `VAR` node that traces back to a `CONST` with a non-empty `resname` using the `resname` instead of the constant value.

An unintended (but good) consequence of this change is that some constants are now rendered as their projected column instead of the constant value, e.g.:

```
EXPLAIN SELECT a AS c FROM (SELECT 1 AS a) AS bar ORDER BY 1;
                   QUERY PLAN
------------------------------------------------
 Sort  (cost=0.03..0.04 rows=1 width=4)
   Sort Key: c
   ->  Result  (cost=0.00..0.01 rows=1 width=0)
 Optimizer status: legacy query optimizer
(4 rows)
```

Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
Signed-off-by: Bhunvesh Chaudhary <bchaudhary@pivotal.io>
-
- 07 Mar 2017, 18 commits
-
-
Committed by Daniel Gustafsson
The TINC plperl91 tests in package/language were mostly duplicates of what is already tested in src/pl/plperl (especially since the sources were properly synced recently). There was also massive duplication within the tests, testing every single datatype and various combinations thereof. This leaves some superuser tests for plperlu which still hold some value due to them operating with different users.
-
Committed by Daniel Gustafsson
The Naive Bayes functionality is deprecated, and removed, in 5.0 in favour of using MADlib. Remove the test.
-
Committed by Daniel Gustafsson
This only tested that functions in various languages returning SETOF could be called with SELECT func() as well as SELECT * FROM func(). We have ample coverage of this in ICW already so remove.
-
Committed by Daniel Gustafsson
These tests were killed a few years ago when they started failing on error message output. The verdict back then was that they weren't at all interesting to keep running anyway, so they were skipped but not removed. Remove them from the repo.
-
Committed by Heikki Linnakangas
These tests were almost redundant with existing tests, but there were a couple of things they did test:

* gp_create_table_random_default_distribution = on works, and
* you cannot add a PRIMARY KEY to a randomly distributed table.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
You can't do it, but it would still be good to have a test case for it. Spotted while reviewing legacy "cdbfast" tests - there was a test for this there.
-
Committed by Heikki Linnakangas
A long time ago, they apparently weren't in the catalogs, but they're not going to suddenly disappear from there now, any more than any other function. The functions themselves are called elsewhere in the test file, so if they are missing or broken, the test will fail that way.
-
Committed by Heikki Linnakangas
This allows us to have exactly the same error message and hint as the traditional planner produces. That makes testing easier, as you don't need a different expected output file for ORCA and non-ORCA, and it allows for more structured errors anyway. Use the new function for the case of trying to read from a WRITABLE external table. There was no test for that in the main test suite previously; there was one in the gpfdist suite, but that's not really the right place, as that error is caught the same way regardless of the protocol. While we're at it, re-word the error message and change the error code to follow the Postgres error message style guide.
-
Committed by Heikki Linnakangas
Like in commit 08f1d8e8. Float8, time, timestamp, and timestamptz have the same typlen, typbyval, typtype, typalign and typstorage attributes as int8. It is therefore sufficient to run these tests with int8. Likewise, date has the same type attributes as int4, so the tests on int4 cover the same ground.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
The original TINC tests from which these were copied tested specifically that ORCA can use Dynamic Index Scans for these queries. To stay closer to that spirit, even when running without ORCA, try to use index scans when possible. It exercises more codepaths in the planner anyway, and lets us notice more easily if the planner loses the ability to use an index for some reason.
-
Committed by Omer Arap
-
Committed by yanchaozhong
`gpinitsystem`, `gpstart` and `gpstop` had different prompts before:

    $ gpinitsystem -c gpinitsystem_config
    ...
    Continue with Greenplum creation Yy/Nn> n
    ...

    $ gpstart
    ...
    Continue with Greenplum instance startup Yy|Nn (default=N):
    > n
    ...

    $ gpstop
    ...
    Continue with Greenplum instance shutdown Yy|Nn (default=N):
    > n
    ...

This commit makes them consistent:

    $ gpinitsystem -c gpinitsystem_config
    ...
    Continue with Greenplum creation Yy|Nn (default=N):
    > n
-
Committed by Tom Meyer
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Heikki Linnakangas
* Re-enable UPDATE tests
* After statements that fail on GPDB but not in upstream, or vice versa, add fixup statements to restore the test table's state to what it is in upstream, so that the expected output for subsequent tests matches upstream.
* Change row order in expected output to match upstream.
-
Committed by Ashwin Agrawal
Rename checkpoint.c to checkpointer.c, move the code from bgwriter.c to checkpointer.c, and rename most of the corresponding data structures to reflect the clear ownership and association. This brings the code as close as possible to PostgreSQL 9.2.

References to related PostgreSQL commits:

commit 806a2aee: Split work of bgwriter between 2 processes: bgwriter and checkpointer.
commit bf405ba8: Add new file for checkpointer.c
commit 8f28789b: Rename BgWriterShmem/Request to CheckpointerShmem/Request
commit d843589e5ab361dd4738dab5c9016e704faf4153: Fix management of pendingOpsTable in auxiliary processes.
-
-