- 03 January 2019, 8 commits
-
-
Committed by Daniel Gustafsson
Back when the initial backport of pg_upgrade was added to Greenplum, we included a small dump of a 4.3.x cluster for testing purposes. This cluster was inadequate for testing pg_upgrade back then, and is even more so today. Remove it to avoid giving the impression that these files are part of the now quite extensive pg_upgrade test harness in Greenplum. The dump also requires access to a 4.3 cluster, which is a licensed product, so remove it from the open source repository.
Discussion: https://github.com/greenplum-db/gpdb/pull/6149
Reviewed-by: Jim Doty <jdoty@pivotal.io>
Reviewed-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Daniel Gustafsson
This removes unused arguments to cdbparallelize(), tidies up incorrect comments, and converts an Insist(0) call to an elog().
Discussion: https://github.com/greenplum-db/gpdb/pull/6423
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
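For flavor, a minimal sketch of the Insist-to-elog conversion described above; the message text is hypothetical, not the actual change:

```c
/* Before: Insist(0) aborts with only a generic assertion-failure message. */
Insist(0);

/*
 * After: elog() raises a proper internal error that can carry a
 * descriptive message (text here is illustrative only).
 */
elog(ERROR, "unexpected flow type in cdbparallelize");
```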
-
Committed by Daniel Gustafsson
The segment role GP_ROLE_UNDEFINED should never be set, and cdbvars guards against that ever happening. Testing for it in the code therefore makes it look like it could indeed happen, which is misleading to the reader. Remove the test and refactor the switch() into an if-else clause with an assertion that maintains the same guarantee as the previous coding.
Discussion: https://github.com/greenplum-db/gpdb/pull/6423
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
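The shape of that refactoring, sketched below; the role enum values are real GPDB names, but the branch bodies are elided placeholders:

```c
/*
 * GP_ROLE_UNDEFINED can never be set here (cdbvars guards against it),
 * so assert the invariant instead of handling it as a switch() case.
 */
Assert(Gp_role != GP_ROLE_UNDEFINED);

if (Gp_role == GP_ROLE_DISPATCH)
{
    /* ... dispatcher-side handling ... */
}
else
{
    /* ... GP_ROLE_EXECUTE / GP_ROLE_UTILITY handling ... */
}
```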
-
Committed by Richard Guo
The variable 'maybe_local_equijoin' was added by GPDB and is no longer used.
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Mel Kiyama
* docs - update analyzedb --skip-root-partition for incremental analyze (HLL); doc update until analyzedb is updated to account for incremental analyze
* docs - update analyzedb - clarify --skip_root_stats description
-
Committed by Mel Kiyama
* docs - new GUC optimizer_enable_agg_skew_avoidance
* docs - edits for GUC optimizer_enable_agg_skew_avoidance
* docs - change GUC name to optimizer_force_agg_skew_avoidance
-
Committed by Lisa Owen
* docs - discuss the global deadlock detector
* some of the edits requested by david
* move opening paragraph to release note
* reorg content, add a bit about local deadlock
* GUC can be reloaded
* concurrent UPDATE and DELETE
-
Committed by David Kimura
This change brings back an optimization that reuses the buffer when the slot's PRIVATE_tts_memtuple is NULL and PRIVATE_tts_mtup_buf is not. Here we pass inline_toast=false into memtuple_form_to(), so the previous refactoring to detoast either MemTuple or HeapTuple in SendTuple (from commit bf5e3d5d) is maintained.
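A rough sketch of the buffer-reuse path, assuming the memtuple_form_to() signature of that era; the slot field and helper names are recalled from the GPDB tree and may not match it exactly:

```c
/*
 * If the slot has no formed memtuple but still owns an mtup buffer from a
 * previous tuple, hand that buffer and its length to memtuple_form_to()
 * so the tuple is formed in place instead of palloc'ing a new buffer.
 * inline_toast=false leaves detoasting to SendTuple, as before.
 */
if (slot->PRIVATE_tts_memtuple == NULL && slot->PRIVATE_tts_mtup_buf != NULL)
{
    uint32      destlen = slot->PRIVATE_tts_mtup_buf_len;

    memtuple_form_to(mt_bind,
                     slot_get_values(slot),
                     slot_get_isnull(slot),
                     (MemTuple) slot->PRIVATE_tts_mtup_buf,
                     &destlen,
                     false /* inline_toast */);
}
```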
-
- 31 December 2018, 7 commits
-
-
Committed by Heikki Linnakangas
This got duplicated by accident in the 9.4.20 merge (commit f2e08026).
-
Committed by Heikki Linnakangas
These calls are now made in the caller, ReadBufferExtended(); these are just leftovers.
-
Committed by Heikki Linnakangas
These happen often when we have backported some upstream commits and later merge the same commits a second time as part of the PostgreSQL merge.
-
Committed by Heikki Linnakangas
These were left unused by commit 5ab222943d.
-
Committed by Heikki Linnakangas
This was accidentally added as part of the PostgreSQL 9.4 merge. I was hacking on commit faf0ec6b at the same time as the 9.4 merge, and this slipped into the 9.4 merge branch, and from there into master with the merge.
-
Committed by Heikki Linnakangas
The code that used to create these paths was removed in commit d4ce0921, but this was left behind.
-
Committed by Heikki Linnakangas
The only place where it was read was in setting the number of segments when marking the resulting plan as "strewn". We can use the 'input_locus' directly for that, because that is where the numsegments value for the 'output_locus' came from in every case.
Reviewed-by: Ning Yu <nyu@pivotal.io>
Reviewed-by: Jesse Zhang <jzhang@pivotal.io>
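In outline, the change amounts to something like this sketch, assuming the CdbPathLocus helpers of that era (variable names illustrative):

```c
/*
 * The result is Strewn across the same segments the input came from, so
 * take the segment count straight from the input locus rather than from
 * a separately tracked field.
 */
CdbPathLocus_MakeStrewn(&output_locus,
                        CdbPathLocus_NumSegments(input_locus));
```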
-
- 29 December 2018, 2 commits
-
-
Committed by Heikki Linnakangas
We used to call some node types by different names in EXPLAIN output, depending on whether the plan was generated by ORCA or the Postgres planner. A Bitmap Heap Scan also used to be called differently when the table was an AO or AOCS table, but only in planner-generated plans. There was some historical justification for this, because these used to be different executor node types, but commit db516347 removed the last such differences. Full list of renames:
Table Scan -> Seq Scan
Append-only Scan -> Seq Scan
Append-only Columnar Scan -> Seq Scan
Dynamic Table Scan -> Dynamic Seq Scan
Bitmap Table Scan -> Bitmap Heap Scan
Bitmap Append-Only Row-Oriented Scan -> Bitmap Heap Scan
Bitmap Append-Only Column-Oriented Scan -> Bitmap Heap Scan
Dynamic Bitmap Table Scan -> Dynamic Bitmap Heap Scan
-
Committed by Ashwin Agrawal
This merges up to upstream commit 4f0bf335 (tag: REL9_4_20).
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
Co-authored-by: Jinbao Chen <jinchen@pivotal.io>
Co-authored-by: Paul Guo <pguo@pivotal.io>
Co-authored-by: Richard Guo <riguo@pivotal.io>
Co-authored-by: Shaoqi Bai <sbai@pivotal.io>
-
- 28 December 2018, 19 commits
-
-
Committed by Heikki Linnakangas
Commit 305f7bd8 removed 'gp_enable_adaptive_nestloop' (which was already unused at that point), but missed it in the config file.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
Now that filerep is gone, it shouldn't be necessary anymore. Clarify the way 'lockHolderProcPtr' is cleared, while we're at it.
-
Committed by Heikki Linnakangas
These happen often when we have backported some upstream commits and later merge the same commits a second time as part of the PostgreSQL merge.
-
Committed by Heikki Linnakangas
We don't support compiling the server on Windows, and even if we did, I think much of this wouldn't be needed.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
It was always set to the same value as 'numConns'. I guess there was some idea of also connecting to mirrors with the interconnect, but that hasn't been used for a long time.
-
Committed by Heikki Linnakangas
Include the correct header instead.
-
Committed by Heikki Linnakangas
Not sure why it was put in cdbpublic.h originally. I don't see any reason for it now.
-
Committed by Paul Guo
See the plan example below (without this patch):

regression=# explain select a.idv, b.idv from tidv a, tidv b where a.idv = b.idv;
                                                    QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
 Merge Join  (cost=10000000000.33..10000085645.93 rows=2787840 width=64)
   Merge Cond: (a.idv = b.idv)
   ->  Gather Motion 3:1  (slice1; segments: 3)  (cost=0.17..21848.17 rows=52800 width=32)
         Merge Key: a.idv
         ->  Index Only Scan using tidv_idv_idx on tidv a  (cost=0.17..20792.17 rows=17600 width=32)
   ->  Materialize  (cost=0.00..0.00 rows=0 width=0)
         ->  Materialize  (cost=0.00..0.00 rows=0 width=0)
               ->  Gather Motion 3:1  (slice2; segments: 3)  (cost=0.17..21848.17 rows=52800 width=32)
                     Merge Key: b.idv
                     ->  Index Only Scan using tidv_idv_idx on tidv b  (cost=0.17..20792.17 rows=17600 width=32)
 Optimizer: legacy query optimizer
(11 rows)

Reviewed by Richard Guo and Jinbao Chen
-
Committed by Kris Macoskey
It has been a long time since this spec file was used to build a Greenplum rpm installer. The ability to build a Greenplum rpm installer will eventually be provided by a new spec file, but that file will not live in this directory, nor does it need to be based on this older one.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
I am unsure where this script was originally used, and when. I cannot find anything currently using it for Greenplum 6.X, so I believe it is safe to delete.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
This Greenplum logo PNG is used in the top-level README of the repository. Instead of having the image live a few layers deep in the gpAux/releng directory (which we will eventually be removing), place it in close proximity to README.md itself.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
The self-extracting binary installer for Greenplum (a bash script with a tarball appended) will no longer be used for Greenplum 6.X.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Taylor Vesely
This build script appears to have been unused for quite some time.
-
Committed by Kris Macoskey
This script was once necessary when Pulse was being used for building and testing Greenplum. Pulse is no longer used, and it is not necessary to maintain any of the scripts associated with it.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Heikki Linnakangas
The 'complex_decode_double' subroutine, used in the input function for the 'complex' datatype, was copy-pasted from float8in and modified. However, those modifications broke compilation with HAVE_BUGGY_SOLARIS_STRTOD: the code inside the #ifdef referenced a non-existent 'endptr' local variable (it was called 'end_ptr'). To make maintenance of this function easier, copy-paste it verbatim from float8in and avoid making any unnecessary changes to it. In passing, fix a missing space in the error message. Fixes https://github.com/greenplum-db/gpdb/issues/5879.
Reviewed-by: David Kimura <dkimura@pivotal.io>
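The failure mode is easy to show in miniature: code inside a rarely-compiled #ifdef keeps referring to a local that was renamed everywhere else. A simplified sketch, not the actual function:

```c
#include <stdlib.h>

static double
decode_double_sketch(const char *num, char **out)
{
    char       *end_ptr;        /* renamed from float8in's 'endptr' */
    double      val = strtod(num, &end_ptr);

#ifdef HAVE_BUGGY_SOLARIS_STRTOD
    /*
     * Only compiled on platforms defining the macro, so the stale
     * reference to 'endptr' below breaks the build exactly where the
     * Solaris workaround is needed, and nowhere else.
     */
    if (endptr != num && endptr[-1] == '\0')
        endptr--;
#endif

    *out = end_ptr;
    return val;
}
```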
-
Committed by Ashwin Agrawal
The FTS message handler is a special system process that has no access to the catalog, so it must not perform any catalog access. Calling `superuser()` via `set_gp_replication_config()` or `pg_reload_conf()` results in an error for the same reason. Hence, when `am_ftshandler` is set, the process avoids calling `superuser()` and thereby avoids catalog access; everything else this process does needs no catalog access. Fixes github issue #4764.
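Presumably the guard ends up with roughly this shape (a sketch; the ereport message is invented, while am_ftshandler and superuser() are the names from the description):

```c
/*
 * The FTS message handler runs without catalog access, so skip the
 * superuser() check (which reads pg_authid) for it; the handler is
 * trusted by construction.
 */
if (!am_ftshandler && !superuser())
    ereport(ERROR,
            (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
             errmsg("must be superuser to set replication configuration")));
```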
-
Committed by Ashwin Agrawal
When FTS promotes a mirror, it marks the mirror as not in sync ('n') in gp_segment_configuration. Hence, when the promotion request is received on the mirror, it is best to reset synchronous_standby_names. If this is not done, commits after the promotion hang until the next FTS probe cycle detects the condition and resets synchronous_standby_names.
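One plausible shape of the reset, sketched with the stock GUC machinery; the actual patch may well do this differently (e.g. by rewriting the replication config file before the promotion takes effect):

```c
/*
 * FTS has already marked the mirror pair as not in sync, so no standby
 * can acknowledge a synchronous commit. Clear the GUC so commits issued
 * right after promotion don't block until the next FTS probe cycle.
 */
SetConfigOption("synchronous_standby_names", "",
                PGC_POSTMASTER, PGC_S_OVERRIDE);
```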
-
- 27 December 2018, 4 commits
-
-
Committed by Pengzhou Tang
This was brought in unexpectedly by b120194a when resolving a conflict with master.
-
Committed by Tang Pengzhou
* Do not expose system columns to users for replicated tables. For a replicated table, all replicas on the segments should always be identical, which makes gp_segment_id ambiguous. Instead of making an effort to keep gp_segment_id the same on all segments, simply hide gp_segment_id for replicated tables to keep things simpler. This commit only makes gp_segment_id invisible to users at the parse-tree transformation stage; in the underlying storage, each segment still stores a different value for the system column gp_segment_id, and operations like SPLITUPDATE may still use gp_segment_id to do an explicit redistribution motion. This is fine as long as the user-visible columns hold the same data.
* Fix up the reshuffle* test cases. The reshuffle.* cases used gp_segment_id to verify that a replicated table was actually expanded onto new segments; now that gp_segment_id is invisible to users, those queries report errors. Because a replicated table holds three copies of the data, the total count gives us enough information even without grouping by gp_segment_id. It is not 100% accurate, but it should be enough.
* Check dependencies when converting a table to a replicated table. System columns of a replicated table should not be exposed to users, and we check this at the parser stage. However, if users create views or rules involving the system columns of a hash-distributed table, and that table is later converted to a replicated table, users can still reach the system columns of the replicated table through those views and rules, because they bypass the parser-stage check. To resolve this, add a dependency check when altering a table to a replicated table; users need to drop the views or rules first. I tried adding a recheck at a later stage, such as the planner or executor, but system columns like CTID are used internally for basic DMLs like UPDATE/DELETE and are added even before views and rules are rewritten, so a recheck would block basic DMLs too.
-
Committed by Paul Guo
There is a case where a postmaster process is killed with SIGKILL and thus has no chance to remove its temporary files /tmp/.s.PGSQL.${PORT}*, which the demo-cluster creation script uses to check port usability. In that case, re-creating the demo cluster fails, but the error message "Check to see if the port is free by using: 'netstat -an | grep 16432'" is misleading, since the port is apparently not in use. Fix this by providing more information to users.
Reviewed by Ashwin Agrawal
-
Committed by Karen Huddleston
The repo name was changed after the github migration. The previous paths will not work correctly for new clones of the repository.
Authored-by: Karen Huddleston <khuddleston@pivotal.io>
-