- 14 Jan 2017 (3 commits)
-
-
Committed by Shoaib Lari
Update the .ans file to conform to the test output for 5.0.
-
Committed by Abhijit Subramanya
This patch cleans up and fixes tests in the tincrepo/mpp/gpdb/tests/storage/basic folder:
- Fix the external table tests.
- Fix the transaction management test and remove the max_prepared_transactions test since it is no longer valid.
- Fix the xidlimits compilation.
- Fix the partition DDL tests.
-
Committed by Shoaib Lari
Update the .ans file to conform to the test output for 5.0.
-
- 13 Jan 2017 (14 commits)
-
-
Committed by Heikki Linnakangas
If a CTAS query contains an InitPlan, and the InitPlan calls a function that dispatches another query to the segments, the list of OIDs is dispatched to the segments prematurely, with the inner query. As a result, when the CTAS query itself is dispatched, the OID of the new table is not included, and you get a "no pre-assigned OID for relation" error. To fix, attach the OID of the new table to the query descriptor before executing the InitPlans. Fixes github issue #1543, reported with a test case by Abhijit Subramanya.
-
Committed by Jimmy Yih
Currently, the walsender process is not properly stopped during gpstop. The SignalSomeChildren function was changed in the Postgres 8.3 merge in commit 20ac0be1. It was previously at a Postgres 9.2 state to support sending signals to walsender as a part of our master and standby master replication.
-
Committed by Heikki Linnakangas
We don't build this as part of the CI, and we still won't pass the PL/TCL regression tests (doesn't seem like there's anything serious there at a quick glance though), but let's at least make it compile. Per github issue #1524
-
Committed by Heikki Linnakangas
For simplicity. This is less error-prone, too, in the face of future changes to ExprContext.
-
Committed by Heikki Linnakangas
DynamicTableScanInfo is an extension of EState, so always allocate it in the same memory context. The DynamicTableScanInfo.memoryContext field always pointed to es_query_cxt, so that is in fact what we always did anyway; this just removes the unnecessary abstraction, for simplicity.
-
Committed by Heikki Linnakangas
Pass the EState that contains it to where it's needed, instead.
-
Committed by Dhanashree Kashid
-
Committed by Jesse Zhang
This doesn't save much time; it is just the pedant in me cringing when I looked at why we had a mandatory proprietary dependency. Alas, this probably should have been done when we first introduced the `--enable-netbackup` configure flag, but better late than never.
-
Committed by Jesse Zhang
The four BSA agents seem to (inherently, as opposed to incidentally) share the same dependencies. If that's true, we might as well declare them together in one place.
-
Committed by Jesse Zhang
Those binaries need the gpbsa shared object to build. This wasn't exposed until either (a) we changed the order we build them in, in the previous commit, or (b) we ran a parallel build. This commit simply adds the proper dependency declaration to all the offending targets.
-
Committed by Jesse Zhang
This should not break anything, given the declarative nature of Make. But it does break (especially noticeably if you are not running a parallel `make`), because we do not declare a dependency on the gpbsa libraries for the dump agent executables.
-
Committed by Abhijit Subramanya
When building a scan descriptor for append-only tables, the compression type for the table is copied from the pg_appendonly row in the relcache. When the pg_appendonly row for the AO table gets updated, the old relation object is destroyed as part of relcache invalidation. However, the new relation object would still point to the original compresstype string. So appendonly_beginrangescan_internal() should use pstrdup() to copy the compression type instead of copying the pointer directly.
-
Committed by Heikki Linnakangas
The test case that's included is not for the bug that was fixed, but for an older bug with default_tablespace that doesn't seem to be covered by any existing tests. Seems good to add a test for it since we're touching the code. The bug that this actually fixes is difficult to test, hence no regression test for it is included. But to reproduce manually, connect to a segment in utility mode and run the following:

```
PGOPTIONS="-c gp_session_role=utility" psql -p 25432 gptest
psql (8.3.23)
Type "help" for help.

gptest=# create table t1(a int);
NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'a' as the Greenplum Database data distribution key for this table.
HINT:  The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
CREATE TABLE
gptest=# insert into t1 values (1), (2), (3), (4);
INSERT 0 4
gptest=# create table t2 as select * from t1;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
```

Fixes github issue #1511, reported with a test case by Abhijit Subramanya.
-
Committed by Abhijit Subramanya
When a superuser tries to split a partition of a table which was created by a non-superuser, the split should complete successfully and the resulting partitions should have their owner set to the owner of the parent table.
-
- 12 Jan 2017 (6 commits)
-
-
Committed by Pengzhou Tang
To avoid a hang, rewrite AnalyzerWorker to follow the same logic as the Worker class. Also, haltWork() needs to be called after join() to shut down the whole WorkerPool; fix some incorrect usages of this.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
Instead of logging the raw errno, include the %m format specifier, which pulls in the human-readable errno translation. Also fix typos and make the error logging a bit more consistent with regards to style, string quoting and spelling. From a report by Github user laixiong in issue #1507.
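For context, %m in an elog/ereport format string expands to the human-readable translation of the current errno, i.e. the equivalent of `strerror(errno)`. A portable sketch of what that buys over logging the raw number (plain C using `snprintf` + `strerror` rather than glibc's printf %m; the message text and function name are illustrative):

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Format an error message the way elog's %m does: append the
 * human-readable translation of the current errno instead of
 * just the raw integer. */
static void report_open_failure(const char *path, char *buf, size_t buflen)
{
    snprintf(buf, buflen, "could not open file \"%s\": %s",
             path, strerror(errno));
}
```

So instead of `could not open file "x": errno 2`, the log reads `could not open file "x": No such file or directory`.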
-
Committed by Daniel Gustafsson
If the format_str variable is left uninitialized we are in trouble (partly because palloc never returns NULL on failure to allocate, so the NULL checks are dead code). Remove the useless checks, and also remove the initialization at declaration, since that can hide us breaking the code subtly as it stands today.
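The reasoning for dropping the initializer can be made concrete: with no initializer, a code path that forgets to assign the variable is something compilers can flag (e.g. -Wmaybe-uninitialized / -Wsometimes-uninitialized), whereas a dummy initializer silences exactly that diagnostic. A minimal sketch, assuming a hypothetical format chooser; the names are illustrative, not the actual code:

```c
#include <string.h>

enum fmt { FMT_TEXT, FMT_CSV };

/* Deliberately no initializer on declaration: if a new enum value is
 * added and this switch forgets to handle it, uninitialized-use
 * warnings can catch the bug.  Writing
 *     const char *format_str = NULL;
 * would silence those warnings and let the breakage slip through. */
static const char *choose_format(enum fmt f)
{
    const char *format_str;

    switch (f)
    {
        case FMT_TEXT:
            format_str = "text";
            break;
        case FMT_CSV:
            format_str = "csv";
            break;
    }
    return format_str;
}
```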
-
Committed by Dave Cramer
* Implement bytea HEX encode/decode; this is from upstream commit a2a8c7a6. The primary reason for back-patching is to allow pgAdmin 4 to work. Unfortunately there are a few hacks to make this work without enum config GUCs, mostly around setting the GUC, as the target is an integer but the GUC value is a string.
* Fixed regression tests; removed unused code.
-
Committed by Heikki Linnakangas
The code in rewriteDefine.c didn't know about AO tables, and tried to use heap_beginscan() on an AO table. That resulted in an ERROR, because even for an empty table RelationGetNumberOfBlocks() returned 1, but when we tried to scan it like a heap table, heap_getnext() couldn't find the block. To fix, add an explicit check that you can only turn heap tables into views this way. This also forbids turning an external table into a view. (We could possibly relax that limitation, but it's a pointless feature anyway, which we only support for backwards-compatibility reasons in PostgreSQL.) Also add a test case for this.
-
- 11 Jan 2017 (8 commits)
-
-
Committed by Heikki Linnakangas
Constraints are not uniquely identified by namespace and name alone. It's possible to have multiple constraints with the same name, but attached to different tables or domains. So include parent relation and domain OIDs in the key. This was uncovered by a regression test in the TINC test suite. Move the OID consistency tests from TINC into the normal regression test suite, to have test coverage for this.
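To make the key change concrete, here is a sketch of the extended lookup key in plain C. The field names follow pg_constraint (conname, connamespace, conrelid, contypid), but the structs are simplified stand-ins, not the actual syscache code:

```c
#include <string.h>

typedef unsigned int Oid;

/* Simplified pg_constraint lookup key: name + namespace alone do not
 * identify a constraint, so the owning relation and domain OIDs are
 * part of the key as well. */
struct constraint_key {
    const char *conname;
    Oid         connamespace;
    Oid         conrelid;   /* owning table, or 0 if none */
    Oid         contypid;   /* owning domain, or 0 if none */
};

static int constraint_key_equal(const struct constraint_key *a,
                                const struct constraint_key *b)
{
    return strcmp(a->conname, b->conname) == 0 &&
           a->connamespace == b->connamespace &&
           a->conrelid == b->conrelid &&
           a->contypid == b->contypid;
}
```

Two constraints named `chk_positive` in the same schema but on different tables now compare as distinct, which is exactly the collision the old (namespace, name) key missed.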
-
Committed by Adam Lee
-
Committed by Haozhou Wang
1. Add a dependency between custom external tables and their protocol to keep the dumped SQL ordered correctly (CREATE PROTOCOL must execute before creating the custom external table).
2. Remove \" from the dumped location URI.
3. Support dumping multiple locations.
Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
Committed by Daniel Gustafsson
Commit 2a3df5d0 introduced broken changes to the ereport() calls in the external table command. Fix by including the proper format specifier. Also concatenate the error message into a single line while at it, to improve readability. Spotted by Heikki Linnakangas.
-
Committed by Shoaib Lari
Update the .ans file to conform to the test output for 5.0.
-
Committed by Ashwin Agrawal
We have seen failures multiple times in the pipeline at the point where the test checks whether the database is in changetracking for failover_to_mirror scenarios. In this case the primary is panicked and the mirror takes over, but due to the running workload the master may also see panics, because 2PC errors out after retries. The test should be able to continue working in such a condition, hence add wait_for_database to wait for the database to come back up after the master panic.
-
Committed by Marbin Tan
* Timeout was removed from the WorkerPool in commit c7dfcc45
-
Committed by Daniel Gustafsson
When dropping an external table, the pg_exttable catalog entry must be removed as well to avoid dangling entries building up. Reverse the logic in deleting the catalog entry to make sure we delete it before scanning for a potential next one. Also fix up the tests for this as they were testing the removal of the pg_class catalog entry rather than the pg_exttable entry. Reported by Ashwin Agrawal in Github issue #1498
-
- 10 Jan 2017 (9 commits)
-
-
Committed by Nikos Armenatzoglou
-
Committed by Adam Lee
Signed-off-by: Yuan Zhao <yuzhao@gmail.com>
-
Committed by Adam Lee
-
Committed by Larry Hamel
-
Committed by Daniel Gustafsson
External tables with the file or http protocols which specify more URIs than there are segments cannot be queried until the cluster has been expanded. Rather than silently allowing creation, issue a warning and hint to the user to assist troubleshooting. Writable tables error out for the same problem, but avoiding a behavior change this late in the release cycle seems like a good idea.
-
Committed by Daniel Gustafsson
Creating an external table with duplicate location URIs is a syntax error, so move the duplicate check to the parsing step. This simplifies the external table command code, and also provides an error message with a context marker to the user. Also fix up the test sources to match the new error messages, and remove the duplicate test from gpfdist since we already test that in ICW.
-
Committed by Dave Cramer
-
Committed by Jasper Li
This was fixed in GPDB 4.3 immediately after it was brought up but was never ported to 5.0. Issue was introduced in https://github.com/greenplum-db/gpdb/commit/232eb64ad9f93dce8941f7b124a98f0c21c3350b which has the initial discussion.
-
Committed by Dhanashree Kashid
In ROLLUP processing, when the grouping columns are processed, the target list is traversed for each grouping column to find a match. Each node in the target list is mutated in `replace_grouping_columns_mutator()` to find a match for the grouping columns. In the failing query:

```
SELECT COUNT(DISTINCT c) AS r
FROM (SELECT c,
             CASE WHEN (e = 2) THEN 1 END AS f,
             1 AS g
      FROM foo) f_view
GROUP BY ROLLUP(f, g);
```

the grouping column 'g' is a constant 1. When the target entry for 'f' is mutated, we mutate all the fields of the CaseExpr recursively. Since the `result` field in the CaseExpr is a Const with the same value as that of 'g', it is considered a match and we replace the node by NULL, which later results in a crash. Essentially, since we traverse all the fields of the expression, a Const node (appearing anywhere in the CaseExpr) with the same value as the Const grouping column will be matched. Hence the following query also fails, where one of the args of the equality operator is a Const with the same value as 'g':

```
SELECT COUNT(DISTINCT c) AS r
FROM (SELECT c,
             CASE WHEN (e = 1) THEN 2 END AS f,
             1 AS g
      FROM foo) f_view
GROUP BY ROLLUP(f, g);
```

Fix:
1. Refactor `replace_grouping_columns_mutator()` into separate routines for target list replacement and qual replacement.
2. For the target list, only look at the top-level node; do not recurse.
3. For quals, the top level is always some kind of expression, so we need to recurse to find the match. Skip replacing constants there.

[#136294589]
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
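The difference between the buggy and fixed matching strategies can be sketched with a toy expression tree in C. This is a deliberately reduced model (a CaseExpr collapsed to a single inner result field; the real mutator walks full Node trees), so the types and function names are illustrative only:

```c
#include <stddef.h>

/* Tiny stand-in expression tree: a Const leaf, or a CaseExpr wrapping
 * an inner result expression. */
enum node_kind { T_CONST, T_CASEEXPR };

struct node {
    enum node_kind kind;
    int            value;   /* for T_CONST */
    struct node   *result;  /* for T_CASEEXPR */
};

/* Buggy approach: recurse into every field, so a Const buried inside a
 * CaseExpr that merely has the same value as the Const grouping column
 * is treated as a match (and would then be replaced by NULL). */
static int matches_recursive(const struct node *n, int grouping_const)
{
    if (n->kind == T_CONST)
        return n->value == grouping_const;
    return matches_recursive(n->result, grouping_const);
}

/* Fixed approach for target-list entries: only a top-level Const can be
 * the grouping column itself, so do not recurse. */
static int matches_toplevel(const struct node *n, int grouping_const)
{
    return n->kind == T_CONST && n->value == grouping_const;
}
```

For the failing query, the CaseExpr for 'f' contains an inner Const 1, equal to the grouping column 'g': the recursive walk reports a false match, while the top-level check correctly matches only a bare Const.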
-