- 15 Aug 2018, 13 commits
-
-
Committed by Heikki Linnakangas
Don't require a USING clause when altering a column's datatype on an external table, even when there is no cast between the old and new datatypes. It would just be ignored; we're not going to rewrite any data in an external source. And conversely, don't allow a USING clause, because it would just be ignored. Fixes https://github.com/greenplum-db/gpdb/issues/3356
-
Committed by xiong-gang
The line "if (source != PGC_S_DEFAULT)" was added a long time ago, and there's no trace of why it was added except the commit message: "Don't set up interconnect too soon". 'do_connect' is true only when it's a backend process, and there's no obvious problem with setting 'gp_role' of the backend process.
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Gang Xiong <gxiong@pivotal.io>
-
Committed by xiong-gang
Upstream postgres commit e2fa76d8 extracted `set_base_rel_sizes` out of `set_base_rel_pathlists`, and path-related information doesn't exist yet in `set_base_rel_sizes`; as the FIXME indicated, childrel->cheapest_total_path->rows will not be available there.

- parent_rows += cdbpath_rows(root, childrel->cheapest_total_path);
- width_avg += cdbpath_rows(root, childrel->cheapest_total_path) * childrel->width;
+ parent_rows += childrel->rows;
+ parent_size += childrel->width * childrel->rows;

Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Gang Xiong <gxiong@pivotal.io>
-
Committed by xiong-gang
* Remove ERRCODE_GP_FEATURE_NOT_SUPPORTED and use ERRCODE_FEATURE_NOT_SUPPORTED instead
* Remove ERROR_INVALID_WINDOW_FRAME_PARAMETER and use ERRCODE_WINDOWING_ERROR instead
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Gang Xiong <gxiong@pivotal.io>
-
Committed by Heikki Linnakangas
In setop planning, we had assertions that checked that FLOW_SINGLETON flows had segindex=0. I'm not sure what segindex 0 means; is it "any"? In any case, it's possible to have an input that resides on a single QE different from 0, as evidenced by the new test query. Fixes https://github.com/greenplum-db/gpdb/issues/3807
-
Committed by Paul Guo
distributedBy is still a Node* in IntoClause, since I do not want to include the header file in primnodes.h, which is for 'primitive' node types. Previously, using Node* carried no special meaning, so we change the code to use DistributedBy explicitly, to make debugging easier.
-
Committed by Richard Guo
As in upstream, intoClause has been removed from Query. We can skip the intoClause check and just copy NULL if it is not a CTAS query.
-
Committed by Bhuvnesh Chaudhary
This reverts commit 2a38a9cd.
-
Committed by Bhuvnesh Chaudhary
As part of moving away from Hungarian notation in the GPORCA codebase, the integration points between GPORCA and GPDB in the translator have been renamed to the new convention used in GPORCA. The libraries currently updated to the new notation in GPORCA are Naucrates and GPOS. The new naming convention is a custom version of common C++ naming conventions. The style guide for this convention can be found in the GPORCA repository. Also bump the ORCA version to 2.69.0.
Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
Co-authored-by: Abhijit Subramanya <asubramanya@pivotal.io>
Co-authored-by: Sambitesh Dash <sdash@pivotal.io>
Co-authored-by: Dhanashree Kashid <dkashid@pivotal.io>
Co-authored-by: Omer Arap <oarap@pivotal.io>
-
Committed by Jesse Zhang
When SQL standard table inheritance was added upstream (by commit 2fb6cc90 in Postgres 7.1), mentioning a table in the FROM clause of a query would necessarily mean traversing the inheritance hierarchy. The need to support the (legacy, less common, but legitimate nonetheless) intent of not recursing into child tables gave rise to two things: the GUC `sql_inheritance`, which toggles the default semantics of parent tables, and the `ONLY` keyword used in front of a parent table name to explicitly skip descendant tables.

ORCA doesn't like queries that skip descendant tables: it falls back to the legacy planner as soon as it detects that intent. Way way back in Greenplum-land, when external tables were given a separate designation in relstorage (RELSTORAGE_EXTERNAL), we seem to have added code in the parser (parse analysis) so that queries on external tables *never* recurse into their child tables, regardless of what the user specifies -- either via `ONLY` or `*` in the query, or via the GUC `sql_inheritance`. Technically, that process scrubs the range table entries to hard-code "do not recurse".

The combination of those two things -- hard-coding "do not recurse" in the RTE for the analyzed parse tree, and ORCA detecting the intent of `ONLY` through the RTE -- led ORCA to *always* fall back to the planner when an external table is mentioned in the FROM clause. Commit 013a6e9d tried fixing this by *detecting harder* whether there's an external table.

The behavior of the parse analyzer hard-coding "do not recurse" in the RTE for an external table seems wrong for several reasons:
1. It seems unnecessarily defensive.
2. It doesn't seem to belong in the parser:
   a. While changing "recurse" back to "do not recurse" abounds, all other occurrences happen in the planner as an optimization for childless tables.
   b. It deprives an optimizer of the actual intent expressed by the user: because of this hardcoding, neither ORCA nor the planner has a way of knowing whether the user specified `ONLY` in the query.
   c. It deprives the user of the ability to use child tables with an external table, either deliberately or coincidentally.
   d. A corollary is that any old view created as `SELECT a,b FROM ext_table` will be perpetuated as `SELECT a,b FROM ONLY ext_table`.

This commit removes this defensive setting in the parse analyzer. As a consequence, we're able to reinstate the simpler RTE check from before commit 013a6e9d. Queries and new views will include child tables as expected.
-
-
Committed by mkiyama
-
Committed by mkiyama
-
- 14 Aug 2018, 16 commits
-
-
Committed by yanchaozhong
Because the default offset is 1000 and the default minimum primary port is 5432 (primary port + offset = mirror database port), the default minAllowedPort should be 6432.

minPort = min([seg.getSegmentPort() for seg in gpArray.getDbList()])
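The arithmetic above can be sketched as follows. This is a minimal illustration only; `mirror_port` and the constants are hypothetical names, not the utility's actual identifiers:

```python
# Illustrative sketch of the port arithmetic described above; the names
# here are hypothetical, not gpaddmirrors' actual identifiers.
DEFAULT_OFFSET = 1000            # default primary-to-mirror port offset
DEFAULT_MIN_PRIMARY_PORT = 5432  # default lowest primary port

def mirror_port(primary_port, offset=DEFAULT_OFFSET):
    """Mirror database port = primary port + offset."""
    return primary_port + offset

# With the defaults, the lowest possible mirror port is 5432 + 1000,
# which is why the default minAllowedPort should be 6432.
print(mirror_port(DEFAULT_MIN_PRIMARY_PORT))  # 6432
```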
-
Committed by yanchaozhong
The -p parameter of the gpaddmirrors command has no boundary check. If the value is out of range, the cluster system table is updated successfully, but the mirror cannot be started.

$ gpaddmirrors -p 200000
...
gpaddmirrors:node:gp6-[INFO]:- Primary instance port = 40300
...
gpaddmirrors:node:gp6-[INFO]:- Mirror instance port = 240300
...
gpaddmirrors:node:gp6-[INFO]:-Successfully updated gp_segment_configuration with mirror info
...
gpaddmirrors:node:gp6-[INFO]:-Process results...
gpaddmirrors:node:gp6-[WARNING]:-Failed to start segment. The fault prober will shortly mark it as down. Segment: node:/data/mirror/gpseg0:content=0:dbid=3:mode=r:status=d: REASON: PG_CTL failed.
pg_log: ,,,,"FATAL","22023","240300 is outside the valid range for parameter ""port"" (1 .. 65535),,,,
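A minimal sketch of the kind of boundary check this fix calls for: validate the derived mirror ports against PostgreSQL's valid `port` range before gp_segment_configuration is touched. `validate_mirror_ports` is a hypothetical name, not the actual gpaddmirrors code:

```python
# Hypothetical sketch of the missing boundary check: every derived
# mirror port must fit PostgreSQL's valid "port" range (1 .. 65535).
PG_MIN_PORT = 1
PG_MAX_PORT = 65535

def validate_mirror_ports(primary_ports, offset):
    """Return the derived mirror ports, or raise if any is out of range."""
    mirrors = [p + offset for p in primary_ports]
    for port in mirrors:
        if not PG_MIN_PORT <= port <= PG_MAX_PORT:
            raise ValueError('%d is outside the valid range for '
                             'parameter "port" (1 .. 65535)' % port)
    return mirrors

validate_mirror_ports([40300], 1000)
# validate_mirror_ports([40300], 200000) would raise here, instead of
# letting gp_segment_configuration be updated with a port like 240300
# that pg_ctl can never start.
```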
-
Committed by Richard Guo
Based on the same consideration as in commit ed0b40, this is supposed to speed up CreateDistributedSnapshot by reducing the number of cache lines that need to be fetched.
-
Committed by Richard Guo
This fixes a 'stack_base_ptr' assertion failure with '--enable-testutils', and also revises the related code to match upstream.
-
Committed by Heikki Linnakangas
The ResultRelInfos we build for the partitions in slot_get_partition() don't contain the ProjectionInfo needed to execute RETURNING. We need to look that up in the parent ResultRelInfo, and when executing it, be careful to use the "parent" version of the tuple: the one before mapping the columns for the target partition. Fixes github issue #4735.
-
Committed by Pengzhou Tang
cdbdisp_get_PQerror creates a new error data object and initializes it with the filename and function values from the QE. errdata needs a const filename and function; it does not copy them into ErrorContext. The problem was that filename and function pointed to unstable memory, so when the edata was used later, it could cause a SIGSEGV. To resolve this, copy them into the transaction context, because this error data can only be used inside the current transaction.
-
Committed by Paul Guo
Index-only scan is a new feature from the PG 9.2 merge. We did not have support for it in the eager-free related functions.
-
Committed by Pengzhou Tang
Previously, COPY used CdbDispatchUtilityStatement directly to dispatch 'COPY' statements to all QEs and then sent/received data through the primaryWriterGang. This happened to work because the primaryWriterGang is not recycled when a dispatcher state is destroyed, but it is nasty because the COPY command has logically finished at that point. This commit splits the COPY dispatching logic into two parts to make it more reasonable.
-
Committed by Pengzhou Tang
* cdbdisp_buildUtilityQueryParms
* cdbdisp_buildCommandQueryParms
-
Committed by Pengzhou Tang
Previously, cdbdisp_finishCommand did three things:
1. cdbdisp_checkDispatchResult
2. cdbdisp_getDispatchResult
3. cdbdisp_destroyDispatcherState
However, cdbdisp_finishCommand didn't make the code cleaner or more convenient to use; on the contrary, it made error handling more difficult and the code more complicated and inconsistent. This commit also resets estate->dispatcherState to NULL to avoid re-entry of cdbdisp_* functions.
-
Committed by Pengzhou Tang
Use cdbdisp_checkDispatchResult instead of CdbCheckDispatchResult, to be consistent with the cdbdisp_* functions.
-
Committed by Pengzhou Tang
Previously, CdbDispatchPlan might be called to dispatch nothing:
1. If an init plan is parallel but the main plan is not, CdbDispatchPlan is still called for the main plan.
2. If an init plan is not parallel, CdbDispatchPlan is still called for that init plan.
The reason is that DISPATCH_PARALLEL stands for the whole plan, including the main plan and init plans. This commit adds ways to tell separately which plans are parallel, to avoid unnecessary dispatching.
-
Committed by Lav Jain
-
Committed by kaknikhil
PR https://github.com/greenplum-db/gpdb/pull/5432 was merged, but the master pipeline wasn't recreated from the template. This PR generates the master pipeline from the template so that the madlib jobs can run on the master pipeline.
-
Committed by Jingyi Mei
* Add madlib_build_gppkg job to master pipeline
The current gpdb master pipeline fetches a madlib gppkg compiled against an earlier version of the gpdb code, installs the gppkg, and runs dev check in a container with the latest gpdb installed. If there is a catalog change in gpdb, the test will fail. To solve this issue, we add a job, build_madlib_gppkg, to compile the madlib gppkg from source and pass it to the downstream dev check jobs, so that madlib is always compiled and tested against the latest catalog changes.
Co-authored-by: Domino Valdano <dvaldano@pivotal.io>
-
Committed by Jimmy Yih
The distributed_transactions test contains a serializable transaction. This serializable transaction may intermittently cause the appendonly test to fail when run in the same test group. The appendonly test runs VACUUM on some appendonly tables and checks that last_sequence is nonzero in gp_fastsequence. Serializable transactions make concurrent VACUUM operations on appendonly tables exit early. To fix the contention, let's move the distributed_transactions test to another test group.

appendonly test failure diff:
*** 632,640 ****
  NormalXid | 0 | t | 0
  NormalXid | 0 | t | 1
  NormalXid | 0 | t | 2
! NormalXid | 1 | t | 0
! NormalXid | 1 | t | 1
! NormalXid | 1 | t | 2
 (6 rows)
--- 630,638 ----
  NormalXid | 0 | t | 0
  NormalXid | 0 | t | 1
  NormalXid | 0 | t | 2
! NormalXid | 1 | f | 0
! NormalXid | 1 | f | 1
! NormalXid | 1 | f | 2
 (6 rows)

Repro:
1: CREATE TABLE heap_table (a int, b int);
1: INSERT INTO heap_table SELECT i, i FROM generate_series(1,100)i;
1: CREATE TABLE ao_table WITH (appendonly=true) AS SELECT * FROM heap_table;
1: SELECT gp_segment_id, * FROM gp_dist_random('gp_fastsequence') WHERE gp_segment_id = 0;
2: BEGIN ISOLATION LEVEL SERIALIZABLE;
2: SELECT 1;
1: VACUUM ao_table; -- VACUUM exits early
1: SELECT gp_segment_id, * FROM gp_dist_random('gp_fastsequence') WHERE gp_segment_id = 0;
2: END;
1: VACUUM ao_table; -- VACUUM completes
1: SELECT gp_segment_id, * FROM gp_dist_random('gp_fastsequence') WHERE gp_segment_id = 0;
-
- 13 Aug 2018, 9 commits
-
-
Committed by Joao Pereira
At this point in time we do not support backwards scanning of tuplestores.
-
Committed by Joao Pereira
Use elog instead of ereport to align with upstream. Upstream uses elog instead of ereport for errors that should only be reachable if there is a bug in the code.
-
Committed by Joao Pereira
- Created the needed control files
- Renamed and adapted the installation script
- Removed redundant tests in regress that were just checking installation
- Added tests to ensure installation is successful
- Updated the Makefile to support the extension and added information about regression tests
- Added contrib/gp_internal_tools tests to ICW
- Removed session_level_memory_consumption from the regress test suite
-
Committed by xiong-gang
Replace the function `cdbpath_rows(root, path)` with path->rows. This is more in line with upstream 9.2 and removes a GPDB_92_MERGE_FIXME.
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Gang Xiong <gxiong@pivotal.io>
-
Committed by Richard Guo
The variable numFreeProcs was added by GPDB to accelerate HaveNFreeProcs(), but considering the argument sizes of calls to HaveNFreeProcs(), this optimization may not be worth it. In addition, the current code calculates numFreeProcs incorrectly. So remove numFreeProcs to stay the same as PostgreSQL. Relevant thread on gpdb-dev: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/NG8t5XXYQB0/7BDVoQQLBAAJ
-
Committed by Paul Guo
-
Committed by Heikki Linnakangas
We had backported the tuplesort_skiptuples() function as part of commit fd6212ce, which backported upstream support for ordered-set aggregates. Since we backported the feature, we also need to keep backporting all bugfixes that follow, until we catch up to it in the merge. In GPDB, the same fix also needs to be applied to the "mk sort" variant. Fixes github issue #5334.

Upstream commit:
commit 1def747d
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Tue Dec 24 17:13:02 2013 -0500

    Fix inadequately-tested code path in tuplesort_skiptuples().

    Per report from Jeff Davis.
-
Committed by Jinbao Chen
The parse object has been copied to subroot, and we do not free root->simple_rel_array and root->simple_rte_array, so we do not need to copy the group clause.
-
Committed by Heikki Linnakangas
The show_gp_connections_per_thread function is not needed, because the default "show" functionality for an integer GUC does the same.
-
- 11 Aug 2018, 2 commits
-
-
Committed by Lisa Owen
* docs - add missing gp-specific options to pg_dump
* qualify the options as unsupported
* use a note
-
Committed by Ashuka Xue
Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io>
-