- 08 Mar 2017, 11 commits
-
-
Committed by Kenan Yao

There are two default resource groups, default_group and admin_group. Roles are assigned to one of them depending on whether they are superusers, unless a group is explicitly specified. Only a superuser can be assigned to admin_group.

```
-- create a role r1 and assign it to resgroup g1
CREATE ROLE r1 RESOURCE GROUP g1;
```

Signed-off-by: Ning Yu <nyu@pivotal.io>
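As a sketch of how this fits together: the group itself must exist before a role can be bound to it. The group name, role names, and limit values below are illustrative, and the WITH options are an assumption about the resource-group creation syntax, not taken from this commit:

```sql
-- Hypothetical setup: create a resource group, then bind a role to it.
-- The cpu/memory limit values are placeholders.
CREATE RESOURCE GROUP g1 WITH (cpu_rate_limit=20, memory_limit=20);

-- Explicit assignment at role creation (syntax from the commit message):
CREATE ROLE r1 RESOURCE GROUP g1;

-- With no RESOURCE GROUP clause, a non-superuser lands in default_group
-- and a superuser in admin_group.
CREATE ROLE r2;              -- goes to default_group
CREATE ROLE boss SUPERUSER;  -- goes to admin_group
```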
-
Committed by Eamon
[ci skip]
-
Committed by Haozhou Wang

dblink is a built-in module of PostgreSQL; this patch enables it in GPDB:
1. Enable compiling dblink with GPDB
2. Update and enable dblink test cases

Known issues:
1. The following functions are not supported by GPDB and are disabled temporarily due to changed GPDB behavior:
   dblink_send_query()
   dblink_is_busy()
   dblink_get_result()
2. Anonymous connections and connections by name are not supported in a nested statement with both write and read, e.g.:
   INSERT INTO tblname SELECT * FROM dblink(['connName'],'SELECT * FROM otherdb.tblname') as tbl;
   Currently, only connections with a connection string are supported in such statements.

Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
Signed-off-by: Jianxia Chen <jchen@pivotal.io>
Signed-off-by: Adam Lee <ali@pivotal.io>
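A hedged sketch of the supported pattern from the known-issue note above: in a nested write-plus-read statement, pass dblink a full connection string rather than a named or anonymous connection. Table, database, and host names here are illustrative placeholders:

```sql
-- Works: explicit connection string inside a nested INSERT ... SELECT.
-- dblink returns a record set, so a column definition list is required.
INSERT INTO tblname
SELECT *
FROM dblink('dbname=otherdb host=localhost',
            'SELECT a, b FROM tblname') AS tbl(a int, b text);

-- Not supported in such statements (per the commit): a named connection,
-- e.g. dblink('connName', '...') after dblink_connect('connName', ...).
```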
-
Committed by David Sharp

v22 of the pull_request resource removes the default behavior of making shallow clones. A shallow clone will not fetch tags, since no tagged commits are walked over during the fetch. We need tags to be present to build and determine the version string.

Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by Jingyi Mei

Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by Jane Beckman

* Transaction id support
* Removed extra line

[ci skip]
-
Committed by Marbin Tan

Also remove an unwanted test: the abort test should be skipped, based on feedback from the Storage team.

Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Marbin Tan
-
Committed by Marbin Tan
-
Committed by Heikki Linnakangas

These things are covered by the 'transactions' test and the 'uao_dml/uao_dml_column' test in the main test suite.
-
Committed by Bhunvesh Chaudhary

When `explain` traces a VAR to a CONST, it assumes (rightly so) that it's most likely the only constant being projected at that operator, e.g.

```
EXPLAIN SELECT a AS c FROM (SELECT 1 AS a) AS bar ORDER BY 1;
                   QUERY PLAN
------------------------------------------------
 Sort  (cost=0.03..0.04 rows=1 width=4)
   Sort Key: (1)
   ->  Result  (cost=0.00..0.01 rows=1 width=0)
 Optimizer status: legacy query optimizer
(4 rows)
```

For a query containing the `VALUES` construct, the planner generates a `ValuesScan`, whereas ORCA generates an `Append` with each constant as a `Result` node underneath. Notice how the ORCA plan shape breaks the assumption of the `EXPLAIN` rendering logic, resulting in somewhat confusing output like this:

```
explain select * from foo, (values (1), (2)) f(i) where foo.a=f.i;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Gather Motion 3:1  (slice1; segments: 3)  (cost=0.00..431.00 rows=1 width=8)
   ->  Hash Join  (cost=0.00..431.00 rows=1 width=8)
         Hash Cond: (1) = a
         ->  Append  (cost=0.00..0.00 rows=1 width=4)
               ->  Result  (cost=0.00..0.00 rows=1 width=4)
               ->  Result  (cost=0.00..0.00 rows=1 width=4)
               ->  Result  (cost=0.00..0.00 rows=1 width=4)
               ->  Result  (cost=0.00..0.00 rows=1 width=4)
         ->  Hash  (cost=431.00..431.00 rows=1 width=4)
               ->  Table Scan on foo  (cost=0.00..431.00 rows=1 width=4)
 Settings: optimizer=on
 Optimizer status: PQO version 2.7.0
(12 rows)
```

In the above case, `(1)` is less than helpful because it is not the only constant in the relation. This commit renders a `VAR` node that traces back to a `CONST` with a non-empty `resname` as the `resname` instead of the constant value.

An unintended (but good) consequence of this change is that some constants are now rendered as their projected column instead of the constant value, e.g.:

```
EXPLAIN SELECT a AS c FROM (SELECT 1 AS a) AS bar ORDER BY 1;
                   QUERY PLAN
------------------------------------------------
 Sort  (cost=0.03..0.04 rows=1 width=4)
   Sort Key: c
   ->  Result  (cost=0.00..0.01 rows=1 width=0)
 Optimizer status: legacy query optimizer
(4 rows)
```

Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
Signed-off-by: Bhunvesh Chaudhary <bchaudhary@pivotal.io>
-
- 07 Mar 2017, 27 commits
-
-
Committed by Daniel Gustafsson

The TINC plperl91 tests in package/language were mostly duplicates of what is already tested in src/pl/plperl (especially since the sources were properly synced recently). There was also massive duplication within the tests, testing every single datatype and various combinations thereof. This leaves some superuser tests for plperlu, which still hold some value since they operate with different users.
-
Committed by Daniel Gustafsson

The Naive Bayes functionality is deprecated, and removed in 5.0, in favour of using MADlib. Remove the test.
-
Committed by Daniel Gustafsson
This only tested that functions in various languages returning SETOF could be called with SELECT func() as well as SELECT * FROM func(). We have ample coverage of this in ICW already so remove.
-
Committed by Daniel Gustafsson

These tests were killed a few years ago when they started failing on error message output. The verdict back then was that they weren't interesting enough to keep running anyway, so they were skipped but not removed. Remove them from the repo.
-
Committed by Heikki Linnakangas
These tests were almost redundant with existing tests, but there were a couple of things they did test: * gp_create_table_random_default_distribution = on works, and * you cannot add a PRIMARY KEY to a randomly distributed table.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
You can't do it, but it would still be good to have a test case for it. Spotted while reviewing legacy "cdbfast" tests - there was a test for this there.
-
Committed by Heikki Linnakangas

A long time ago these functions apparently weren't in the catalogs, but they're not going to suddenly disappear from there now, any more than any other function. The functions themselves are called elsewhere in the test file, so if they are missing or broken, the test will fail that way.
-
Committed by Heikki Linnakangas

This allows us to produce exactly the same error message and hint as the traditional planner does. That makes testing easier, as you don't need a different expected output file for ORCA and non-ORCA, and it allows for more structured errors anyway. Use the new function for the case of trying to read from a WRITABLE external table. There was no test for that in the main test suite previously; there was one in the gpfdist suite, but that's not really the right place, as that error is caught the same way regardless of the protocol. While we're at it, re-word the error message and change the error code to follow the Postgres error message style guide.
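For context, the newly covered error case can be reproduced with something like the following. The table name, host, port, and file path are placeholders, and the exact error wording is not quoted here since this commit rewords it:

```sql
-- A writable external table can only be INSERTed into; reading errors out.
CREATE WRITABLE EXTERNAL TABLE wet_example (id int)
    LOCATION ('gpfdist://host:8080/outfile.txt')
    FORMAT 'TEXT';

SELECT * FROM wet_example;  -- fails: cannot read from a WRITABLE external table
```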
-
Committed by Heikki Linnakangas

Like in commit 08f1d8e8. Float8, time, timestamp, and timestamptz have the same typlen, typbyval, typtype, typalign and typstorage attributes as int8, so it is sufficient to run these tests with int8. Likewise, date has the same type attributes as int4, so the tests on int4 cover the same ground.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas

The original TINC tests that these were copied from specifically tested that ORCA can use Dynamic Index Scans for these queries. To stay closer to that spirit, try to use index scans when possible even when running without ORCA. It exercises more codepaths in the planner anyway, and lets us notice more easily if the planner loses the ability to use an index for some reason.
-
Committed by Omer Arap
-
Committed by yanchaozhong

`gpinitsystem`, `gpstart` and `gpstop` used different prompts before:

```
$ gpinitsystem -c gpinitsystem_config
...
Continue with Greenplum creation Yy/Nn> n
...

$ gpstart
...
Continue with Greenplum instance startup Yy|Nn (default=N):
> n
...

$ gpstop
...
Continue with Greenplum instance shutdown Yy|Nn (default=N):
> n
...
```

This commit makes them consistent:

```
$ gpinitsystem -c gpinitsystem_config
...
Continue with Greenplum creation Yy|Nn (default=N):
> n
```
-
Committed by Tom Meyer

Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Heikki Linnakangas

* Re-enable UPDATE tests.
* After statements that fail on GPDB but not in upstream, or vice versa, add fixup statements to restore the test table's state to what it is in upstream, so that the expected output for subsequent tests matches upstream.
* Change row order in expected output to match upstream.
-
Committed by Ashwin Agrawal

Rename checkpoint.c to checkpointer.c, move the code from bgwriter.c to checkpointer.c, and rename most of the corresponding data structures to reflect the clear ownership and association. This brings it as close as possible to PostgreSQL 9.2.

Reference PostgreSQL commits:
commit 806a2aee Split work of bgwriter between 2 processes: bgwriter and checkpointer.
commit bf405ba8 Add new file for checkpointer.c
commit 8f28789b Rename BgWriterShmem/Request to CheckpointerShmem/Request
commit d843589e5ab361dd4738dab5c9016e704faf4153 Fix management of pendingOpsTable in auxiliary processes.
-
-
The deadlock occurs when a backend is about to unlink a file but finds the shared fsync request queue full. If the backend waits for the checkpointer to consume fsync requests from the queue, there is a deadlock, because the checkpointer is already waiting for this backend on `MyProc->inCommit`: a backend that is about to unlink a file is going through commit processing and must already have set `MyProc->inCommit`. This commit prevents the deadlock by having the checkpoint process absorb requests from the shared queue while waiting for transactions to commit. Fixes the issue discussed on the mailing list at: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/PHKuQPNwWs0
-
We had partially pulled the fix to separate the checkpoint and bgwriter processes, and introduced a bug where pendingOpsTable was maintained in both processes. The pendingOpsTable records pending fsync requests; only the checkpoint process should keep it, while the bgwriter should only write out dirty pages to the OS cache. Apparently upstream had this same bug, and it was fixed in d843589e5ab361dd4738dab5c9016e704faf4153. Also ensure that the background writer sweeps buffers even in its first run after a checkpoint; there is no reason to hold off until the next run, and this is how it works upstream. Fixes the issue discussed on the mailing list: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/PHKuQPNwWs0
-
The commit includes a UDF to walk dirty shared buffers, and a new fault `fault_counter` to count the number of files fsync'ed by the checkpointer process. Another new fault, `bg_buffer_sync_default_logic`, flushes all buffers in BgBufferSync() for the background writer process.
-
Committed by Heikki Linnakangas
This test case caused a deadlock, with gp_cte_sharing=on. Disable the offending test query, until the issue is fixed.
-
Committed by Ashwin Agrawal

`MyProc->inCommit` exists to protect against a checkpoint running during in-commit transactions. However, `MyProc->lxid` has to be valid as well, because `GetVirtualXIDsDelayingChkpt()` and `HaveVirtualXIDsDelayingChkpt()` require `VirtualTransactionIdIsValid()` in addition to `inCommit` to block the checkpoint process. In this fix, we defer clearing `inCommit` and `lxid` to `CommitTransaction()`.
-
Committed by Ashwin Agrawal

Originally the checkpoint checked for the xid. However, the xid is used to control transaction visibility, and it is crucial to clear it once a process is done with commit, before releasing locks. The checkpoint, though, needs to wait for `AtExat_smgr()` to clean up persistent table information, which happens after locks are released, by which point the `xid` is already cleared. Hence we use the VXID, which has no visibility impact. NOTE: upstream PostgreSQL commit f21bb9cf contains a similar fix.
-
-
First, it supports shell scripts and commands. The new syntax is:

```
! <some script>
```

This is required to run things like gpfaultinjector, gpstart, gpstop, etc.

Second, it supports utility mode with a blocking command and join. The new syntax is:

```
2U&: <sql>
2U<:
```

The above example means:
- block in utility mode on dbid 2
- join back to the previous session in utility mode on dbid 2

Also fix the exception handling to allow the test to complete, and log failed commands instead of aborting the test. This makes sure all the cleanup steps are executed and don't block subsequent tests. This also includes an init_file for diff comparison, to ignore timestamp output from gpfaultinjector.
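Combining the two new constructs, a test file fragment might look like the following. This is a hypothetical sketch: the elided gpfaultinjector arguments and the SQL are placeholders, and the exact file layout is an assumption rather than something shown in the commit:

```
! gpfaultinjector ...

2U&: SELECT * FROM some_locked_table;
2U<:
```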
-
- 06 Mar 2017, 2 commits
-
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
Installing PL languages via gppkg without doing anything to check the validity and functionality of the actual installation isn't terribly exciting, and on top of that it's also already tested in package/language.
-