- 13 Jan 2017: 9 commits
-
-
Committed by Heikki Linnakangas
Pass the EState that contains it to where it's needed, instead.
-
Committed by Dhanashree Kashid
-
Committed by Jesse Zhang
This doesn't save much time; it is just the pedant in me cringing when I looked at why we had a mandatory proprietary dependency. Alas, this probably should have been done when we first introduced the `--enable-netbackup` configure flag, but better late than never.
-
Committed by Jesse Zhang
The four BSA agents seem to (inherently, as opposed to incidentally) share the same dependencies. If that's true, we might as well declare them together in one place.
-
Committed by Jesse Zhang
Those binaries need the gpbsa shared object to build. This wasn't exposed until either:
1. we changed the build order in the previous commit; or
2. we ran a parallel build.
This commit simply adds the proper dependency declaration to all the offending targets.
-
Committed by Jesse Zhang
This should not break anything, given the declarative nature of Make. But it does (especially noticeably if you are not running a parallel `make`), because we were not declaring a dependency on the gpbsa libraries for the dump agent executables.
-
Committed by Abhijit Subramanya
When building a scan descriptor for an appendonly table, the compression type is copied from the pg_appendonly row in the relcache. When the pg_appendonly row for the AO table gets updated, the old relation object is destroyed as part of relcache invalidation, but the new relation object would still point to the original compresstype string. So appendonly_beginrangescan_internal() should use pstrdup() to copy the compression type instead of copying the pointer directly.
-
Committed by Heikki Linnakangas
The test case that's included is not for the bug that was fixed, but for an older bug with default_tablespace that doesn't seem to be covered by any existing tests. It seems good to add a test for it since we're touching the code. The bug that this actually fixes is difficult to test, hence no regression test for it is included. But to reproduce it manually, connect to a segment in utility mode and run the following query:
---
PGOPTIONS="-c gp_session_role=utility" psql -p 25432 gptest
psql (8.3.23)
Type "help" for help.

gptest=# create table t1(a int);
NOTICE: Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'a' as the Greenplum Database data distribution key for this table.
HINT: The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
CREATE TABLE
gptest=# insert into t1 values (1), (2), (3), (4);
INSERT 0 4
gptest=# create table t2 as select * from t1;
server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.
---
Fixes github issue #1511, reported with a test case by Abhijit Subramanya.
-
Committed by Abhijit Subramanya
When a superuser tries to split a partition table which was created by a non-superuser, the split should complete successfully and the resulting partitions should have their owner set to the owner of the parent table.
-
- 12 Jan 2017: 6 commits
-
-
Committed by Pengzhou Tang
To avoid a hang issue, rewrite AnalyzerWorker to follow the same logic as the Worker class. Besides, haltWork() needs to be called after join() to shut down the whole WorkerPool; fix some incorrect usage of this.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
Instead of logging errno, include the %m format specifier, which pulls in the human-readable errno translation. Also fix typos and make the error logging a bit more consistent with regards to style, string quoting and spelling. From a report by Github user laixiong in issue #1507.
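For reference, %m in PostgreSQL's ereport()/elog() expands to the strerror() text for the current errno (glibc's printf understands the same specifier). A self-contained sketch of the before/after, with made-up helper names:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* What %m expands to: the human-readable translation of the current
 * errno, which is what ereport()/elog() substitute for the specifier. */
static const char *errno_text(void)
{
    return strerror(errno);
}

/* Hypothetical logging call showing the change in this commit. */
static void log_open_failure(const char *path)
{
    /* Before: the raw errno number, opaque to the reader. */
    fprintf(stderr, "could not open file \"%s\": errno %d\n", path, errno);
    /* After: the translation, i.e. what "%m" produces. */
    fprintf(stderr, "could not open file \"%s\": %s\n", path, errno_text());
}
```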
-
Committed by Daniel Gustafsson
If the format_str variable is left uninitialized we are in trouble; the removed NULL checks were useless partly because palloc never returns NULL on failure to allocate. Remove the useless checks, and also remove the initialization on declaration, since that can subtly hide breakage in the code as it stands today.
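A sketch of the pattern (names hypothetical): leaving the variable uninitialized lets the compiler flag any branch that forgets to assign it, and a NULL check after a palloc-style allocator is dead code:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for palloc(): like PostgreSQL's allocator, it never returns
 * NULL; it reports the failure instead (palloc would ereport(ERROR)). */
static void *xalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL)
        abort();
    return p;
}

/* Hypothetical example of the pattern after the fix. */
static char *pick_format(int kind)
{
    const char *format_str;  /* deliberately not initialized: if a branch
                              * below forgets to assign it, the compiler
                              * can warn; "= NULL" would hide that. */
    switch (kind)
    {
        case 0:  format_str = "text";   break;
        case 1:  format_str = "csv";    break;
        default: format_str = "custom"; break;
    }

    char *copy = xalloc(strlen(format_str) + 1);
    /* No "if (copy == NULL)" here: xalloc, like palloc, cannot return NULL. */
    strcpy(copy, format_str);
    return copy;
}
```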
-
Committed by Dave Cramer
* Implement bytea HEX encode/decode. This is from upstream commit a2a8c7a6. The primary reason for back-patching is to allow pgadmin4 to work. Unfortunately there are a few hacks to make this work without Enum Config GUCs, mostly around setting the GUC, as the target is an integer but the GUC value is a string.
* Fixed regression tests; removed unused code.
-
Committed by Heikki Linnakangas
The code in rewriteDefine.c didn't know about AO tables, and tried to use heap_beginscan() on an AO table. It resulted in an ERROR because, even for an empty table, RelationGetNumberOfBlocks() returned 1, but since we tried to scan it like a heap table, heap_getnext() couldn't find the block. To fix, add an explicit check that you can only turn heap tables into views this way. This also forbids turning an external table into a view. (We could possibly relax that limitation, but it's a pointless feature anyway, which we only support for backwards-compatibility reasons in PostgreSQL.) Also add a test case for this.
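The shape of the new guard, sketched with hypothetical stand-ins for the pg_class storage codes (the real check lives in rewriteDefine.c and reports an error rather than returning a flag):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the relation storage types involved. */
typedef enum
{
    STORAGE_HEAP,       /* plain heap table */
    STORAGE_AO,         /* append-only table */
    STORAGE_EXTERNAL    /* external table */
} RelStorage;

/* Only plain heap tables may be turned into views this way; an AO or
 * external table would otherwise be scanned with heap_beginscan() and
 * fail when heap_getnext() cannot find the block. */
static bool can_convert_to_view(RelStorage storage)
{
    return storage == STORAGE_HEAP;
}
```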
-
- 11 Jan 2017: 8 commits
-
-
Committed by Heikki Linnakangas
Constraints are not uniquely identified by namespace and name alone. It's possible to have multiple constraints with the same name, but attached to different tables or domains. So include parent relation and domain OIDs in the key. This was uncovered by a regression test in the TINC test suite. Move the OID consistency tests from TINC into the normal regression test suite, to have test coverage for this.
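The keying change can be sketched as follows (types and field names hypothetical; the real change affects how the OID consistency checks identify a constraint):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

typedef unsigned int Oid;

/* A constraint is NOT uniquely identified by (namespace, name) alone:
 * the owning table OID and owning domain OID must be part of the key. */
typedef struct ConstraintKey
{
    Oid         namespaceId;
    const char *conname;
    Oid         relId;   /* owning table, or 0 if none */
    Oid         typId;   /* owning domain, or 0 if none */
} ConstraintKey;

static bool constraint_key_equal(const ConstraintKey *a, const ConstraintKey *b)
{
    return a->namespaceId == b->namespaceId &&
           strcmp(a->conname, b->conname) == 0 &&
           a->relId == b->relId &&
           a->typId == b->typId;
}
```

Two constraints with the same name in the same namespace, but attached to different tables, now compare as distinct.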
-
Committed by Adam Lee
-
Committed by Haozhou Wang
1. Add a dependency between a custom external table and its protocol, to keep the dumped SQL ordered correctly (CREATE PROTOCOL must execute before creating the custom external table).
2. Remove \" from the dumped location URI.
3. Support dumping multiple locations.
Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
Committed by Daniel Gustafsson
Commit 2a3df5d0 introduced broken changes to the ereport() calls in the external table command. Fix by including the proper format specifier. Also concatenate the error message to a single line while at it, to improve readability. Spotted by Heikki Linnakangas.
-
Committed by Shoaib Lari
Update the .ans file to conform to the test output for 5.0.
-
Committed by Ashwin Agrawal
We have seen failures multiple times in the pipeline at the point where failover_to_mirror scenarios check whether the database is in changetracking or not. In this case the primary is panicked and the mirror takes over, but due to the running workload, the master may see panics because 2PC errors out after retries. The test should be able to continue working in such a condition, hence add a wait for the database to come back up after the master panic.
-
Committed by Marbin Tan
* Timeout was removed from the WorkerPool on commit c7dfcc45
-
Committed by Daniel Gustafsson
When dropping an external table, the pg_exttable catalog entry must be removed as well to avoid dangling entries building up. Reverse the logic in deleting the catalog entry to make sure we delete it before scanning for a potential next one. Also fix up the tests for this as they were testing the removal of the pg_class catalog entry rather than the pg_exttable entry. Reported by Ashwin Agrawal in Github issue #1498
-
- 10 Jan 2017: 9 commits
-
-
Committed by Nikos Armenatzoglou
-
Committed by Adam Lee
Signed-off-by: Yuan Zhao <yuzhao@gmail.com>
-
Committed by Adam Lee
-
Committed by Larry Hamel
-
Committed by Daniel Gustafsson
External tables with the file or http protocols which specify more URIs than there are segments can't be queried until the cluster has been expanded. Rather than silently allowing creation, issue a warning and a hint to the user to assist troubleshooting. Writable tables error out for the same problem, but avoiding a behavior change this late in the release cycle seems like a good idea.
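The shape of the check, sketched with made-up names (the real code issues an ereport WARNING with a HINT for readable tables, while writable tables keep their existing ERROR):

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { CHECK_OK, CHECK_WARNING, CHECK_ERROR } UriCheckResult;

/* Readable file/http external tables with more URIs than segments get a
 * warning at CREATE time; writable ones error out as they did before. */
static UriCheckResult check_location_uris(int nuris, int nsegments,
                                          bool is_writable)
{
    if (nuris <= nsegments)
        return CHECK_OK;
    return is_writable ? CHECK_ERROR : CHECK_WARNING;
}
```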
-
Committed by Daniel Gustafsson
Creating an external table with duplicate location URIs is a syntax error, so move the duplicate check to the parsing step. This simplifies the external table command code as well as provides an error message with a context marker to the user. Also fix up the test sources to match the new error messages, and remove the duplicate test from gpfdist since we already test that in ICW.
-
Committed by Dave Cramer
-
Committed by Jasper Li
This was fixed in GPDB 4.3 immediately after it was brought up but was never ported to 5.0. Issue was introduced in https://github.com/greenplum-db/gpdb/commit/232eb64ad9f93dce8941f7b124a98f0c21c3350b which has the initial discussion.
-
Committed by Dhanashree Kashid
In ROLLUP() processing, when the grouping columns are processed, the target list is traversed for each grouping column to find a match. Each node in the target list is mutated in `replace_grouping_columns_mutator()` to find a match for the grouping columns. In the failing query:
```
SELECT COUNT(DISTINCT c) AS r
FROM (
  SELECT c,
         CASE WHEN (e = 2) THEN 1 END AS f,
         1 AS g
  FROM foo
) f_view
GROUP BY ROLLUP(f, g);
```
the grouping column 'g' is the constant 1. When the target entry for 'f' is being mutated, we mutate all the fields of the CaseExpr recursively. Since the `result` field of the CaseExpr is a Const with the same value as 'g', it gets considered a match, and we replace the node with null, which later results in a crash. Essentially, since we traverse all the fields of the expression, a Const node appearing anywhere in the CaseExpr with the same value as a constant grouping column will be matched. Hence the following query also fails, where one of the args of the equality operator is a Const with the same value as 'g':
```
SELECT COUNT(DISTINCT c) AS r
FROM (
  SELECT c,
         CASE WHEN (e = 1) THEN 2 END AS f,
         1 AS g
  FROM foo
) f_view
GROUP BY ROLLUP(f, g);
```
Fix:
1. Refactor `replace_grouping_columns_mutator()` into separate routines for target list replacement and qual replacement.
2. For the target list, look only at the top-level node; do not recurse.
3. For quals, the top level is always some kind of expression, so we need to recurse to find the match. Skip replacing constants here.
[#136294589]
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
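The heart of the fix can be sketched with a toy node tree (types hypothetical; the real code walks PostgreSQL Node trees): matching constants recursively lets an inner constant of a CASE expression collide with a constant grouping column, while the fixed target-list path compares only the top-level node.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy expression nodes: a constant, or a CASE whose result is another node. */
typedef enum { NODE_CONST, NODE_CASE } NodeTag;

typedef struct Node
{
    NodeTag      tag;
    int          value;   /* for NODE_CONST */
    struct Node *result;  /* for NODE_CASE */
} Node;

/* Buggy behavior: recursing into every field means a Const buried inside
 * a CASE "matches" a constant grouping column with the same value. */
static bool matches_group_const_recursive(const Node *n, int group_const)
{
    if (n == NULL)
        return false;
    if (n->tag == NODE_CONST && n->value == group_const)
        return true;
    return matches_group_const_recursive(n->result, group_const);
}

/* Fixed behavior for target-list entries: compare the top-level node only. */
static bool matches_group_const_toplevel(const Node *n, int group_const)
{
    return n != NULL && n->tag == NODE_CONST && n->value == group_const;
}
```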
-
- 09 Jan 2017: 4 commits
-
-
Committed by Adam Lee
This verifies PR #1514
-
Committed by Adam Lee
Also fix a case break due to the options keyword.
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
Committed by Adam Lee
Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Adam Lee
Put it at the tail end of the pipeline first for visibility, and then we can iterate to make it quicker.
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
- 07 Jan 2017: 4 commits
-
-
Committed by Jimmy Yih
These tests do not really give us any extra coverage different from our other TINC catalog and storage tests. Historically, they haven't flagged any persistent table issues we have seen in the field, and we were surprised they didn't flag any persistent table issues from the 8.3 merge when other TINC tests did. Therefore, it is safe to remove these tests without fear of losing coverage.
==================================================
periodic_failover: 3 hours 8 minutes (2 tests)
- skip checkpoint fault (yes/no)
- kills segments periodically while running the SQLLoadTest concurrency workload
- ~80 minutes of sleep, consisting of sleeping every 5-10 minutes before the segment-killing thread wakes up
- recover in full or recover incrementally (depends on skip checkpoint yes/no)
- gpcheckcat, wait until database is in-sync, checkmirrorintegrity
==================================================
Drop_DB: 8 minutes (1 test)
- move the database base dir and try to dropdb
- move the database base dir back and try to dropdb
- createdb the database back
- gpcheckcat, wait until database is in-sync, checkmirrorintegrity
==================================================
crash_system: 1 hour 11 minutes (2 tests)
- skip checkpoint fault (yes/no)
- test crash after checkpoint (in-sync or changetracking) while running the SQLLoadTest concurrency workload
- sleeps for 10-15 minutes while the SQLLoadTest concurrency workload runs
- if changetracking is needed, puts the system in changetracking and sleeps for another 10-15 minutes
- issues a checkpoint
- restart the database (gpstop immediate and gpstart)
- if changetracking, do incremental recovery
- gpcheckcat, wait until database is in-sync, checkmirrorintegrity
==================================================
kill_primaries: 25 minutes (2 tests + partition_table_scenario)
- skip checkpoint fault (yes/no)
- failover to mirror while running the SQLLoadTest concurrency workload
- sleeps for 2 minutes
- kills all primary segments
- restart the database (gpstop immediate and gpstart)
- run incremental recovery
- gpcheckcat, wait until database is in-sync, checkmirrorintegrity

partition_table_scenario: (1 test)
- runs the PartitionTableQueries workload
- creates a tiny number of different partition tables in a transaction but finishes without committing or aborting
- issues a checkpoint
- restart the database (gpstop immediate and gpstart)
- run incremental recovery
- gpcheckcat, wait until database is in-sync, checkmirrorintegrity
==================================================
postmaster_reset: 1 hour 15 minutes (4 tests)
- put the database in change tracking (yes/no)
- kill all or half of the segment processes while running the SQLLoadTest concurrency workload
- sleeps for 2 minutes before killing segment processes
- tries to restart the database (gpstop immediate and gpstart) in an infinite loop after killing segment processes
- run incremental recovery
- gpcheckcat, wait until database is in-sync, checkmirrorintegrity
==================================================
OOM: 1 hour 41 minutes (2 tests)
- sleeps for 40 minutes while the JoinQueryTest workload runs to create an OOM
- if load is needed, runs the SQLLoadTest concurrency workload concurrently with the JoinQueryTest workload
- assumes 20 concurrent threads in 2 iterations will produce an OOM on a VM (with a comment saying this needs to be increased for physical systems)
- runs incremental recovery and hits the database with a SQL query every 10 seconds to see if the database is up
- gpcheckcat, wait until database is in-sync, checkmirrorintegrity

The test_PT_RebuildPT.py tests are spared, as they test our PT rebuild tool. These tests will be migrated to our Behave framework in the future.
-
-
Committed by Haisheng Yuan
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
In a few cases the planner makes a wrong decision about converting a correlated subquery into a join, which results in:
1. wrong results, because the transformation was not equivalent; or
2. execution errors, because the plans contained invalid params (across slices).
This commit adds tests for those cases.
-