- 19 Jan 2019, 2 commits
-
Committed by Sambitesh Dash
-
Committed by Jacob Champion
The GPDB-specific constant PQPING_MIRROR_READY, which indicates that a mirror is ready for replication, was not handled in pg_isready. Additionally, the value we selected for PQPING_MIRROR_READY might at some point in the future conflict with upstream libpq, which would be a pain to untangle. Try to avoid that situation by increasing the value.
Co-authored-by: Shoaib Lari <slari@pivotal.io>
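A minimal sketch of how a pg_isready-style client might surface the extra status, assuming a GPDB-patched libpq that defines PQPING_MIRROR_READY; the connection string and exit code below are illustrative, not the committed implementation:

```c
#include <stdio.h>
#include "libpq-fe.h"

int
main(void)
{
	/* PQping probes the server without authenticating. */
	PGPing		rv = PQping("host=localhost port=5432");

	switch (rv)
	{
		case PQPING_OK:
			printf("accepting connections\n");
			return 0;
		case PQPING_REJECT:
			printf("rejecting connections\n");
			return 1;
		case PQPING_NO_RESPONSE:
			printf("no response\n");
			return 2;
		case PQPING_MIRROR_READY:	/* GPDB-specific value */
			printf("mirror is ready for replication\n");
			return 64;				/* hypothetical exit code */
		default:					/* PQPING_NO_ATTEMPT */
			printf("no attempt\n");
			return 3;
	}
}
```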
-
- 18 Jan 2019, 6 commits
-
Committed by Adam Berlin
-
Committed by David Kimura
There was a race condition where the fault could be triggered unexpectedly by a WAL sender, independently of pg_basebackup being run. We could make it more deterministic by waiting for the triggered count to be incremented, but the test as a whole didn't seem to add much value.
Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by David Kimura
Prior to this commit, the test recreated the tmp_check_* directory for each running test. This would lead to losing the datadir for a failing test if it wasn't the last one. This commit creates a new directory specific to each test and cleans up the artifacts of previous passing tests.
Co-authored-by: David Kimura <dkimura@pivotal.io>
Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Abhijit Subramanya
Co-authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Abhijit Subramanya
This commit sets the default value of the GUC optimizer_penalize_broadcast_threshold to 100000. We have seen a lot of cases where a plan with a Broadcast motion was chosen due to underestimation of cardinality, when a Redistribute motion would have been better. So this commit penalizes broadcast when the number of rows is greater than 100000, so that Redistribute is favored in those cases. We have tested the change on the perf pipeline and do not see any regression.
Co-authored-by: Chris Hajas <chajas@pivotal.io>
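A rough sketch of the thresholding idea, for illustration only; the function name and penalty factor are hypothetical, not the planner code touched by this commit:

```c
/*
 * Inflate the cost of a Broadcast motion once the estimated row count
 * exceeds the threshold GUC, so that a Redistribute motion wins the
 * cost comparison for large (possibly underestimated) inputs.
 */
static double
broadcast_motion_cost(double base_cost, double est_rows, int penalize_threshold)
{
	if (penalize_threshold > 0 && est_rows > (double) penalize_threshold)
		return base_cost * 1000.0;	/* hypothetical penalty factor */

	return base_cost;
}
```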
-
Committed by Heikki Linnakangas
I looked up this issue in the old JIRA instance:

> MPP-8014: bitmap indexes create entries in gp_distribution_policy
>
> postgres=# \d bar
>       Table "public.bar"
>  Column |  Type   | Modifiers
> --------+---------+-----------
>  i      | integer |
> Distributed by: (i)
>
> postgres=# create index bitmap_idx on bar using bitmap(i);
> CREATE INDEX
> postgres=# select localoid::regclass, * from gp_distribution_policy;
>           localoid          | localoid | attrnums
> ----------------------------+----------+----------
>  bar                        |    16398 | {1}
>  pg_bitmapindex.pg_bm_16415 |    16416 |
> (2 rows)

So the problem was that we created a gp_distribution_policy entry for the auxiliary heap table of the bitmap index. We no longer do that; this bug was fixed 9 years ago. But the test we have in mpp8014 would not fail even if the bug reappeared! Let's remove the test, as it's useless in its current form. It would be nice to have a proper test for that bug, but it doesn't seem very likely to reappear any time soon, so it doesn't seem worth the effort.

Fixes https://github.com/greenplum-db/gpdb/issues/6315
-
- 17 Jan 2019, 13 commits
-
Committed by Adam Berlin
-
Committed by Daniel Gustafsson
"remoteHost" is not a parameter to ReadPostmasterTempFile.remote(); fix by just passing the value. The ReadPostmasterTempFile __init__ does, however, have a remoteHost parameter, which is likely where the confusion occurred. This codepath was rarely executed, but when reached it would produce the following error message:

[ERROR]:-Failed to stop standby. Attempting forceful termination of standby process
[CRITICAL]:-gpstop failed. (Reason='remote() got an unexpected keyword argument 'remoteHost'') exiting...

Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Daniel Gustafsson
This removes a duplicate import and a few set-but-never-used variables from the gpload.py code, as well as the including_defaults token, which was clearly unused. Also fixes a few typos while in there, one of which is in a user-facing error message.
Reviewed-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Ning Yu
Resource groups come with a view, gp_toolkit.gp_resgroup_status, to get the running status of all the resource groups. The CPU and memory usages are displayed in JSON format, which is hard for humans to parse and understand. To make this more user friendly we now provide two new views on the status, where the CPU and memory usages are flattened into multiple columns. They also group the status by segment or host, so it is easier to find out the usages at different levels.
Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
-
Committed by ZhangJackey
pg_upgrade upgrades the database in utility mode, but we cannot get the correct numsegments in utility mode. Now we calculate the segment count from gp_segment_configuration when running as a utility-mode QD process, so pg_upgrade can use it.
-
Committed by ZhangJackey
If the cluster is in expansion mode, there must be some partial tables whose numsegments does not match the cluster size. Now we check the expansion status via the tables' numsegments when running gpexpand; gpexpand will raise an error if there are partial tables.
-
Committed by Daniel Gustafsson
The gp_ignore_error_table GUC will be deprecated in Greenplum 6 and removed in Greenplum 7. Highlight this in the documentation.
Discussion: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/mzYcVk_G5Uw
Reviewed-by: David Yozie <dyozie@pivotal.io>
-
Committed by Daniel Gustafsson
The INTO ERROR TABLE syntax has been deprecated since Greenplum 5 shipped.
Discussion: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/mzYcVk_G5Uw
Reviewed-by: David Yozie <dyozie@pivotal.io>
-
Committed by Adam Berlin
-
Committed by Adam Berlin
- Assumes a single-node setup for CI. The test generates its own cluster.
-
Committed by Adam Berlin
- Manages situations where the replication slot already exists.
-
Committed by David Kimura
Increase the timeout for promotion from 10 seconds to 30 seconds. We noticed in failing tests that promotion took longer than 10 seconds.
Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Georgios Kokolatos
vacuum_rel already performs these tests, and all current codepaths pass through it. The block was decorated with a GPDB_93_MERGE_FIXME tag.
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
- 16 Jan 2019, 19 commits
-
Committed by Heikki Linnakangas
The 'nloops' counts showed one too many. Here's how it happened:

1. In the QE processes, cdbexplain_sendExecStats() was called while a node's 'running' flag was true. It called InstrEndLoop() on the node, which incremented 'nloops' and reset 'running' to false.
2. In the stats that were collected and sent to the QD, 'running' was recorded as 'true', and 'nloops' as the incremented value.
3. In the QD, the stats were received from the QE and installed into the local Instrumentation structs.
4. The QD called ExplainNode(), which saw that running=true, because that's the value received from the QE, and it called InstrEndLoop() again.

Fix by not including the 'running' flag in the Instrumentation information that is sent from QE to QD. The stats sent reflect the situation after InstrEndLoop() has already been called.

Fixes https://github.com/greenplum-db/gpdb/issues/5854
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
Reviewed-by: Pengzhou Tang <ptang@pivotal.io>
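A hedged illustration of the double-count in terms of the executor's Instrumentation struct; this sketches the mechanism only, it is not the committed code, and the transport step is omitted:

```c
#include "postgres.h"
#include "executor/instrument.h"

/*
 * QE side: finish the current loop before shipping stats to the QD.
 * InstrEndLoop() only bumps nloops while instr->running is true, so if
 * the QD later sees running = true and calls InstrEndLoop() again, the
 * same loop gets counted twice.  Hence: never transmit running = true.
 */
static void
qe_send_node_stats(Instrumentation *instr)
{
	InstrEndLoop(instr);		/* nloops++, running reset to false */

	instr->running = false;		/* the shipped stats describe a finished loop */
	/* ... serialize *instr and send it to the QD (transport omitted) ... */
}
```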
-
Committed by Hubert Zhang
This reverts commit b95059a8. Based on Heikki's comments, we still need more discussion on the hook positions. For the heap table part, work continues on pgsql-hackers to define the appropriate set of hooks for PostgreSQL, which we can then adopt in GPDB. For the AO/CO part, discussion continues on the original PR and the gpdb-dev list.
-
Committed by Ning Yu
The resource group tests on views were sensitive to cluster size: the answer file was generated on a default demo cluster, but the pipeline jobs for resgroup use a 2-segment cluster, so the tests fail in the pipeline. Fixed by adding a filter to only check the status of the master.
-
Committed by Heikki Linnakangas
Because it's faster and compresses better. Like with the old zlib code, if libzstd is not available, fall back to no compression. There was debate on whether we should fall back to zlib if that's available, but maintaining alternative code is not free. Zstandard is so much faster than zlib that anyone running a production system really should build with it. A developer's laptop is a different story, and falling back to no compression is fine for that.
Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/8w7vRuaqJ6c/uHpOJXwyEgAJ
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
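A minimal sketch of the compile-time fallback described above, assuming a HAVE_LIBZSTD configure macro; the macro and function names are illustrative, not the actual GPDB symbols:

```c
#include <string.h>

#ifdef HAVE_LIBZSTD
#include <zstd.h>

/* Compress a block with zstd; level 1 keeps the CPU cost modest. */
static size_t
compress_block(void *dst, size_t dst_cap, const void *src, size_t src_size)
{
	return ZSTD_compress(dst, dst_cap, src, src_size, 1);
}
#else

/* No libzstd at build time: store the block uncompressed
 * (caller guarantees dst_cap >= src_size). */
static size_t
compress_block(void *dst, size_t dst_cap, const void *src, size_t src_size)
{
	memcpy(dst, src, src_size);
	return src_size;
}
#endif
```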
-
Committed by Ning Yu
Resource groups come with a view, gp_toolkit.gp_resgroup_status, to get the running status of all the resource groups. The CPU and memory usages are displayed in JSON format, which is hard for humans to parse and understand. To make this more user friendly we now provide two new views on the status, where the CPU and memory usages are flattened into multiple columns. They also group the status by segment or host, so it is easier to find out the usages at different levels.
Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
-
Committed by Ning Yu
The resource group id of the current proc is stored in a local variable accessible only in resgroup.c, but this information can also be of interest in other contexts, so an API, GetMyResGroupId(), is provided to retrieve it.
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Hao Wang <haowang@pivotal.io>
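A sketch of the accessor pattern this describes; GetMyResGroupId() is the API named by the commit, while the file-local variable name below is assumed for illustration:

```c
#include "postgres.h"

/* File-local state in resgroup.c (name assumed): the resource group the
 * current backend is assigned to, or InvalidOid if none. */
static Oid	MyProcResGroupId = InvalidOid;

/* Exported accessor so other subsystems can read the assignment. */
Oid
GetMyResGroupId(void)
{
	return MyProcResGroupId;
}
```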
-
Committed by Hubert Zhang
The diskquota extension needs two kinds of hooks:
1. hooks to detect active tables when tables are being modified, and
2. hooks to cancel a query whose quota limit is reached.
These two kinds of hooks are described in detail in the wiki: https://github.com/greenplum-db/gpdb/wiki/Greenplum-Diskquota-Design#design-of-diskquota
They correspond to two components: the Quota Enforcement Operator and the Quota Change Detector.
Co-authored-by: Haozhou Wang <hawang@pivotal.io>
Co-authored-by: Hao Wu <gfphoenix78@gmail.com>
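For orientation, a hedged sketch of how an extension typically consumes such hooks, using the standard ExecutorStart_hook as a stand-in for the GPDB hooks this commit adds; the quota-check helper is hypothetical:

```c
#include "postgres.h"
#include "fmgr.h"
#include "executor/executor.h"

PG_MODULE_MAGIC;

static ExecutorStart_hook_type prev_ExecutorStart = NULL;

/* Hypothetical helper: consult the extension's shared quota map. */
static bool
quota_exceeded_for(QueryDesc *queryDesc)
{
	(void) queryDesc;
	return false;				/* placeholder */
}

/* Quota Enforcement Operator: refuse to run a query once its quota is hit. */
static void
diskquota_ExecutorStart(QueryDesc *queryDesc, int eflags)
{
	if (quota_exceeded_for(queryDesc))
		ereport(ERROR, (errmsg("disk quota exceeded")));

	if (prev_ExecutorStart)
		prev_ExecutorStart(queryDesc, eflags);
	else
		standard_ExecutorStart(queryDesc, eflags);
}

void
_PG_init(void)
{
	prev_ExecutorStart = ExecutorStart_hook;
	ExecutorStart_hook = diskquota_ExecutorStart;
}
```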
-
Committed by Chuck Litzell
* Docs - update docs to note that system columns are unavailable in queries on replicated tables.
* Edits from reviewers
-
Committed by Chuck Litzell
* docs - replicated tables don't support updatable cursors
* Revert change that DECLARE FOR UPDATE is not supported with replicated tables
-
Committed by David Yozie
-
Committed by David Yozie
-
Committed by Alexandra Wang
To "incrementally" recover the old primary as a mirror at a later time via pg_rewind, all xlog must be preserved from the point of divergence. Hence, a replication slot must be created at promote time. So this commit adds logic to the FTS promote message handling to create a physical replication slot, and also sets the restart_lsn of the slot to start preserving xlog right away.
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
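A hedged sketch of the idea in terms of the backend's replication slot API; the slot name and exact call sequence are assumptions for illustration, not the committed promote-handling code:

```c
#include "postgres.h"
#include "replication/slot.h"

/*
 * At promote time, create a persistent physical slot and reserve WAL
 * immediately, so the restart_lsn is pinned now rather than at the
 * first standby feedback message.
 */
static void
create_slot_for_old_primary(void)
{
	ReplicationSlotCreate("old_primary_slot",	/* name assumed for illustration */
						  false,				/* physical, not database-specific */
						  RS_PERSISTENT);
	ReplicationSlotReserveWal();				/* pin restart_lsn right away */
	ReplicationSlotRelease();
}
```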
-
Committed by Andres Freund
When creating a physical slot it's often useful to immediately reserve the current WAL position instead of only doing so after the first feedback message arrives. That e.g. allows slots to guarantee that all the WAL for a base backup will be available afterwards. Logical slots already have to reserve WAL during creation, so generalize that logic into being usable for both physical and logical slots. Catversion bump because of the new parameter.
Author: Gurjeet Singh
Reviewed-By: Andres Freund
Discussion: CABwTF4Wh_dBCzTU=49pFXR6coR4NW1ynb+vBqT+Po=7fuq5iCw@mail.gmail.com
-
Committed by Alexandra Wang
pg_rewind --slot is mutually exclusive with --source-pgdata.
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Alexandra Wang
Properly clean up after the replication slot behave test. The issue is that gpstart will implicitly rebalance the cluster when synced segment pairs are not in their preferred roles, but this functionality is broken with WAL replication. For more info: https://github.com/greenplum-db/gpdb/pull/6659
Co-authored-by: David Kimura <dkimura@pivotal.io>
Co-authored-by: Adam Berlin <aberlin@pivotal.io>
-
Committed by David Kimura
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by David Kimura
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-