Greenplum / Gpdb
Commit b07005c2
Authored Nov 09, 2017 by Lisa Owen
Committed by dyozie on Nov 09, 2017

docs - memory implications when RGs active and failover occurs (#3827)

Parent: 45d7af51
Showing 2 changed files with 20 additions and 1 deletion

gpdb-doc/dita/admin_guide/wlmgmt_intro.xml (+1, -1)
gpdb-doc/dita/ref_guide/config_params/guc-list.xml (+19, -0)
gpdb-doc/dita/admin_guide/wlmgmt_intro.xml
@@ -112,7 +112,7 @@
     fail. Use the following formula to calculate a safe value for
     <varname>gp_vmem_protect_limit</varname>; provide the <codeph>gp_vmem_rq</codeph>
     value you calculated earlier.</p>
-<p><codeblock>gp_vmem_protect_limit = gp_vmem / max_acting_primary_segments</codeblock></p>
+<p><codeblock>gp_vmem_protect_limit = gp_vmem_rq / max_acting_primary_segments</codeblock></p>
     <p>where <codeph>max_acting_primary_segments</codeph> is the maximum number of
     primary segments that could be running on a host when mirror segments are activated
     due to a host or segment failure.</p>
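The division in the corrected formula can be sketched as follows. This is an illustrative sketch only; the helper name and the sample values (128 units of `gp_vmem_rq`, 11 acting primaries) are assumptions, not values from the commit.

```python
def gp_vmem_protect_limit(gp_vmem_rq, max_acting_primary_segments):
    # Divide the memory available to Greenplum on a host (gp_vmem_rq,
    # calculated earlier in the guide) evenly across the worst-case
    # number of primaries that could be acting on that host after
    # mirror activation.
    return gp_vmem_rq / max_acting_primary_segments

# Illustrative values only: planning for up to 11 acting primaries
# yields a smaller (safer) per-segment limit than planning for 8.
limit = gp_vmem_protect_limit(128, 11)
```

Sizing for the worst-case `max_acting_primary_segments` is what keeps the host from over-committing memory when mirrors are promoted.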
gpdb-doc/dita/ref_guide/config_params/guc-list.xml
@@ -4967,6 +4967,25 @@
     </tbody>
     </tgroup>
     </table>
+    <note>When resource group-based resource management is active, the memory allotted to
+      a segment host is equally shared by active primary segments. Greenplum Database assigns
+      memory to primary segments when the segment takes the primary role. The initial memory
+      allotment to a primary segment does not change, even in a failover situation. This may
+      result in a segment host utilizing more memory than the
+      <codeph>gp_resource_group_memory_limit</codeph> setting permits.
+      <p>For example, suppose your Greenplum Database cluster is utilizing the default
+        <codeph>gp_resource_group_memory_limit</codeph> of <codeph>0.7</codeph> and a
+        segment host named <codeph>seghost1</codeph> has 4 primary segments and 4 mirror
+        segments. Greenplum Database assigns each primary segment on <codeph>seghost1</codeph>
+        <codeph>(0.7 / 4 = 0.175)</codeph> of overall system memory. If failover occurs and
+        two mirrors on <codeph>seghost1</codeph> fail over to become primary segments,
+        each of the original 4 primaries retains its memory allotment of <codeph>0.175</codeph>,
+        and the two new primary segments are each allotted <codeph>(0.7 / 6 = 0.116)</codeph>
+        of system memory. <codeph>seghost1</codeph>'s overall memory allocation
+        in this scenario is</p>
+      <p><codeblock>0.7 + (0.116 * 2) = 0.932</codeblock></p>
+      <p>which is above the fraction configured in the
+        <codeph>gp_resource_group_memory_limit</codeph> setting.</p></note>
 </body>
 </topic>
 <topic id="gp_resource_manager">
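The over-commit arithmetic in the added note can be worked through directly. A minimal sketch of the <codeph>seghost1</codeph> scenario from the diff (exact fractions rather than the note's rounded 0.116, so the total comes out slightly above the quoted 0.932):

```python
# seghost1 example: 4 original primaries, 2 mirrors promoted after failover.
gp_resource_group_memory_limit = 0.7
original_primaries = 4
failed_over_mirrors = 2

# Original primaries keep the allotment assigned when they took the
# primary role: 0.7 / 4 = 0.175 each.
per_original = gp_resource_group_memory_limit / original_primaries

# Newly promoted primaries are assigned memory based on the number of
# acting primaries at promotion time: 0.7 / 6 each.
per_new = gp_resource_group_memory_limit / (original_primaries + failed_over_mirrors)

# Total fraction of system memory now allocated on seghost1 (~0.933),
# which exceeds the configured 0.7 limit.
total = original_primaries * per_original + failed_over_mirrors * per_new
```

The key point the note makes is that existing allotments are never rescaled on failover, so the host-level sum can exceed <codeph>gp_resource_group_memory_limit</codeph>.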