Commit 630b6a96 authored by David Yozie

Docs - removing unused file

Parent 4168f324
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic id="topic_c32_qfv_r4">
<title>Hardware and Platform</title>
<body>
<p>In an MPP shared-nothing environment, overall response time for a query is measured by the
completion time for all segments. Query time is limited by the slowest-running segment.
While it is not enforced, it is strongly recommended that all segment hosts have
identical configurations. </p>
<section>
<title>CPU</title>
<p>Choose dual-socket servers with high clock rates and multi-core CPUs for the Greenplum
Database master and segment hosts. When all of the segment hosts have the same or similar
configuration, it is easier to achieve uniform performance across the segments. </p>
</section>
<section>
<title>Disk Storage</title>
<p>Choose a hardware RAID storage system with 8 to 24 disks on the master and 16 to 24 disks
on the segment hosts. Controllers from LSI Corporation are used most frequently. SAS disks
generally provide better performance than SATA and are recommended.</p>
<p>Optimal I/O throughput is 2 GB per second read and 1 GB per second write. </p>
<p>RAID should be configured to allow for a hot spare disk to reduce downtime in the event
of a single disk failure. RAID 5, RAID 6, and RAID 1 have all been used. Select the RAID
scheme that causes the least performance degradation during both disk failure and RAID
rebuild scenarios. </p>
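<p>One way to verify that a system approaches these throughput targets is the
<codeph>gpcheckperf</codeph> utility, which includes disk I/O and memory bandwidth tests
and is typically run as the <codeph>gpadmin</codeph> user. This is a minimal sketch; the
host file name and data directory paths below are placeholders for your environment.
<codeblock>$ gpcheckperf -f hostfile_segments -d /data1 -d /data2 -r ds</codeblock></p>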
</section>
<section>
<title>Network</title>
<p>Network throughput per host should be at least 10 gigabits per second, and 20 gigabits
per second is preferred, using two 10-gigabit network interfaces on each host. If two
10-gigabit interfaces are used, configuring interface bonding provides a single IP
address per host for sending and receiving data, as sketched below.</p>
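<p>On RHEL, interface bonding is configured through a bond device file under
<codeph>/etc/sysconfig/network-scripts/</codeph>, with each physical 10-gigabit interface
enslaved via <codeph>MASTER=bond0</codeph> and <codeph>SLAVE=yes</codeph> in its own ifcfg
file. The following is a minimal sketch only; the IP address, netmask, and bonding mode are
assumptions to adapt to your network (802.3ad link aggregation also requires switch support).
<codeblock># /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.0.0.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad miimon=100"</codeblock></p>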
<p>Disk patrolling and other periodically scheduled disk maintenance tasks should be either
disabled or the schedule should be controlled to prevent delays in the I/O subsystem at
unexpected times. </p>
</section>
<section>
<title>Memory</title>
<p>A minimum of 256 GB of RAM per host is recommended.</p>
<p>Large amounts of RAM per host can be helpful if you plan to run queries with a high degree
of concurrency. Up to 1 TB of RAM per host can improve performance with some workloads.</p>
<p>See <xref href="#topic_c32_qfv_r4/os_mem_config" format="dita"/>, <xref
href="#topic_c32_qfv_r4/shared_mem_config" format="dita"/>, <xref
href="sysconfig.xml#segment_mem_config" format="dita"/>, and <xref
href="workloads.xml#topic_hhc_z5w_r4" format="dita"/> for additional memory-related best
practices.</p>
</section>
<section>
<title>Preferred Operating System</title>
<p>Red Hat Enterprise Linux (RHEL) is the preferred operating system. See the <i>Greenplum
Database Release Notes</i> to find supported operating systems for a particular
release.</p>
</section>
<section>
<title>File System</title>
<p>XFS is the best practice file system for Greenplum Database data directories. XFS should be
mounted with the following mount options:
<codeblock>rw,noatime,inode64,allocsize=16m</codeblock></p>
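<p>For example, an <codeph>/etc/fstab</codeph> entry for an XFS data volume might look like
the following; the device name and mount point are placeholders.
<codeblock>/dev/sdb1 /data xfs rw,noatime,inode64,allocsize=16m 0 0</codeblock></p>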
</section>
<section>
<title>Port Configuration</title>
<p><codeph>ip_local_port_range</codeph> should be set so that it does not conflict with the
Greenplum Database port ranges. For example:
<codeblock>net.ipv4.ip_local_port_range = 3000 65535
PORT_BASE=2000
MIRROR_PORT_BASE=2100
REPLICATION_PORT_BASE=2200
MIRROR_REPLICATION_PORT_BASE=2300</codeblock></p>
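<p>The <codeph>net.ipv4.ip_local_port_range</codeph> setting is persisted in
<codeph>/etc/sysctl.conf</codeph> and loaded with <codeph>sysctl -p</codeph>; the
<codeph>PORT_BASE</codeph> values are Greenplum initialization parameters. For example:
<codeblock># echo "net.ipv4.ip_local_port_range = 3000 65535" >> /etc/sysctl.conf
# sysctl -p</codeblock></p>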
</section>
<section>
<title>I/O Configuration</title>
<p> blockdev read-ahead size should be set to 16384 on the devices that contain data
directories.</p>
<codeblock>/sbin/blockdev --getra /dev/sdb
16384</codeblock>
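<p>The read-ahead value can be set with <codeph>blockdev --setra</codeph>. Because the
setting does not persist across reboots, it is commonly added to
<codeph>/etc/rc.local</codeph> or an equivalent startup script; the device name below is a
placeholder.
<codeblock># /sbin/blockdev --setra 16384 /dev/sdb</codeblock></p>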
<p>The deadline I/O scheduler should be set for all data directory devices. </p>
<codeblock># cat /sys/block/sdb/queue/scheduler
noop anticipatory [deadline] cfq</codeblock>
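<p>The scheduler can be changed at runtime by writing to the device's
<codeph>scheduler</codeph> file, or set for all devices at boot by adding
<codeph>elevator=deadline</codeph> to the kernel command line. For example:
<codeblock># echo deadline > /sys/block/sdb/queue/scheduler</codeblock></p>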
<p>The operating system limits on open files and processes should be increased in the
<codeph>/etc/security/limits.conf</codeph> file.
<codeblock>* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072</codeblock></p>
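<p>After editing <codeph>limits.conf</codeph>, the effective limits in a new login session
can be checked with <codeph>ulimit</codeph>:
<codeblock># ulimit -n
65536
# ulimit -u
131072</codeblock></p>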
<p>Enable core file output to a known location and make sure <codeph>limits.conf</codeph>
allows core files.
<codeblock>kernel.core_pattern = /var/core/core.%h.%t
# grep core /etc/security/limits.conf
* soft core unlimited</codeblock></p>
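<p>The directory named in <codeph>kernel.core_pattern</codeph> must exist and be writable by
the processes that may dump core. One approach, sketched here for the example pattern above,
is a sticky world-writable directory:
<codeblock># mkdir -p /var/core
# chmod 1777 /var/core</codeblock></p>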
</section>
<section id="os_mem_config">
<title>OS Memory Configuration</title>
<p>The Linux sysctl <codeph>vm.overcommit_memory</codeph> and
<codeph>vm.overcommit_ratio</codeph> variables affect how the operating system manages
memory allocation. These variables should be set as follows: </p>
<p><codeph>vm.overcommit_memory</codeph> determines the method the OS uses to decide how
much memory can be allocated to processes. It should always be set to 2, which is the only
safe setting for the database. </p>
<p><codeph>vm.overcommit_ratio</codeph> is the percent of RAM that is used for application
processes. The default, 50, is the recommended setting.</p>
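<p>In <codeph>/etc/sysctl.conf</codeph> these recommendations correspond to the following
two lines:
<codeblock>vm.overcommit_memory = 2
vm.overcommit_ratio = 50</codeblock></p>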
</section>
<section id="shared_mem_config">
<title>Shared Memory Settings </title>
<p>Greenplum Database uses shared memory to communicate between <codeph>postgres</codeph>
processes that are part of the same <codeph>postgres</codeph> instance. The following shared
memory settings should be set in <codeph>sysctl</codeph> and are rarely modified.</p>
<codeblock>kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000</codeblock>
</section>
<section id="gpcheck">
<title>Validate the Operating System </title>
<p>Run <codeph>gpcheck</codeph> (as root) to validate the operating system configuration.
See <codeph>gpcheck</codeph> in the <i>Greenplum Database Utility Guide</i> for usage
details.</p>
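<p>For example, to validate all hosts listed in a host file (the file name and host names
below are placeholders):
<codeblock># gpcheck -f hostfile_gpcheck -m mdw -s smdw</codeblock></p>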
</section>
</body>
</topic>