diff --git a/gpdb-doc/dita/admin_guide/highavail/topics/g-enabling-high-availability-features.xml b/gpdb-doc/dita/admin_guide/highavail/topics/g-enabling-high-availability-features.xml index 25b57f53642cc7749c660ba9dce8a06fb04500c1..01dcefffb381355c26615708bfecb6e7bc293b2b 100644 --- a/gpdb-doc/dita/admin_guide/highavail/topics/g-enabling-high-availability-features.xml +++ b/gpdb-doc/dita/admin_guide/highavail/topics/g-enabling-high-availability-features.xml @@ -8,7 +8,14 @@ configured. -

For information about the utilities that are used to enable high - availability, see the Greenplum Database Utility Guide.

+ When data loss is not acceptable for a Pivotal Greenplum Database cluster, master and segment mirroring must be + enabled in order for the cluster to be supported by + Pivotal. Without mirroring, system and data availability is not guaranteed; Pivotal will make best efforts to restore a cluster in this + case. For information about master and segment mirroring, see About Redundancy and Failover. +

For information about the utilities that are used to enable high availability, see the + Greenplum Database Utility Guide.

diff --git a/gpdb-doc/dita/admin_guide/intro/about_ha.xml b/gpdb-doc/dita/admin_guide/intro/about_ha.xml index e21a8db86890d5c81c595f2d1dafc47d16194c91..56d1b735643f2655255937a5d47f12a012757bce 100644 --- a/gpdb-doc/dita/admin_guide/intro/about_ha.xml +++ b/gpdb-doc/dita/admin_guide/intro/about_ha.xml @@ -1,9 +1,7 @@ - - About Redundancy and Failover in Greenplum Database This topic provides a high-level overview of Greenplum Database high availability features. @@ -14,10 +12,13 @@ .

- + When data loss is not acceptable for a Pivotal + Greenplum Database cluster, master and segment mirroring must be enabled in order for the cluster to be supported by Pivotal. Without + mirroring, system and data availability is not guaranteed; Pivotal + will make best efforts to restore a cluster in this case.
About Segment Mirroring -

When you deploy your Greenplum Database system, you can configure mirror segments. Mirror segments allow database queries to fail over to a backup segment if the primary segment becomes unavailable. Mirroring is strongly recommended for production systems and @@ -35,34 +36,27 @@

shows how table data is distributed across segments when spread mirroring is configured.

- Spread Mirroring in Greenplum Database - -
-
Segment Failover and Recovery

When mirroring is enabled in a Greenplum Database system, the system will automatically fail over to the mirror segment if a primary copy becomes unavailable. A Greenplum Database system can remain operational if a segment instance or host goes down as long as all the data is available on the remaining active segments.

-

If the master cannot connect to a segment instance, it marks that segment instance as down in the Greenplum Database system catalog and brings up the mirror segment in its place. A failed segment instance will remain out of operation until an administrator takes steps to bring that segment back online. An administrator can recover a failed segment while the system is up and running. The recovery process copies over only the changes that were missed while the segment was out of operation.
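The recovery steps described above are performed with the segment recovery management utility, gprecoverseg (see the Greenplum Database Utility Guide). A minimal sketch of the typical invocations, shown dry-run with echo so it is self-contained; verify the flags against the Utility Guide for your release:

```shell
# Dry-run sketch; on a real cluster these commands are run directly
# on the master host, without the echo.
echo '$ gprecoverseg      # incremental: copy only changes missed while the segment was down'
echo '$ gprecoverseg -F   # full recovery: recopy the entire segment'
echo '$ gprecoverseg -r   # rebalance: return segments to their preferred roles'
```

After recovery completes, the rebalance step restores the original primary/mirror role assignments.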

-

If you do not have mirroring enabled, the system will automatically shut down if a segment instance becomes invalid. You must recover all failed segments before operations can continue.

-
About Master Mirroring

You can also optionally deploy a backup or mirror of the master instance on a @@ -70,30 +64,24 @@ the event that the primary master host becomes nonoperational. The standby master is kept up to date by a transaction log replication process, which runs on the standby master host and synchronizes the data between the primary and standby master hosts.

-

If the primary master fails, the log replication process stops, and the standby master can be activated in its place. Upon activation of the standby master, the replicated logs are used to reconstruct the state of the master host at the time of the last successfully committed transaction. The activated standby master effectively becomes the Greenplum Database master, accepting client connections on the master port (which must be set to the same port number on the master host and the backup master host).

-

Since the master does not contain any user data, only the system catalog tables need to be synchronized between the primary and backup copies. When these tables are updated, changes are automatically copied over to the standby master to ensure synchronization with the primary master.

- Master Mirroring in Greenplum Database - - +
-
About Interconnect Redundancy -

The interconnect refers to the inter-process communication between the segments and the network infrastructure on which this communication relies. You can achieve a highly available interconnect by deploying dual 10-Gigabit Ethernet switches on your network diff --git a/gpdb-doc/dita/admin_guide/managing/startstop.xml b/gpdb-doc/dita/admin_guide/managing/startstop.xml index 1c038de75c77342dcc3d59fe368353f37008fa69..998ba420db9a182c8a93f0c7de9cd7f348b023c6 100644 --- a/gpdb-doc/dita/admin_guide/managing/startstop.xml +++ b/gpdb-doc/dita/admin_guide/managing/startstop.xml @@ -3,36 +3,36 @@ PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd"> Starting and Stopping Greenplum Database - In a Greenplum Database DBMS, the database server instances (the master - and all segments) are started or stopped across all of the hosts in the system in such a way - that they can work together as a unified DBMS. + In a Greenplum Database DBMS, the database server instances (the master and all + segments) are started or stopped across all of the hosts in the system in such a way that they + can work together as a unified DBMS. -

Because a Greenplum Database system is distributed across many machines, the - process for starting and stopping a Greenplum Database system is different than - the process for starting and stopping a regular PostgreSQL DBMS.

+

Because a Greenplum Database system is distributed across many machines, the process for + starting and stopping a Greenplum Database system is different from the process for starting + and stopping a regular PostgreSQL DBMS.

Use the gpstart and gpstop utilities to start and stop - Greenplum Database, respectively. These utilities are located in the - $GPHOME/bin directory on your Greenplum Database master - host.

- -

Do not issue a KILL command to end any Postgres process. Instead, use the - database command pg_cancel_backend().

-
-

For information about gpstart and gpstop, see the Greenplum Database Utility Guide.

+ Greenplum Database, respectively. These utilities are located in the + $GPHOME/bin directory on your Greenplum Database master host.

+ Do not issue a kill command to end any Postgres process. + Instead, use the database command pg_cancel_backend().

Issuing a + kill -9 or kill -11 might introduce database + corruption. If Pivotal Greenplum Database corruption occurs, + Pivotal will make best efforts to restore the cluster, but a root cause analysis cannot be + performed.
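The safe alternative described above can be sketched as follows: look up the backend process ID (for example, in pg_stat_activity) and cancel it with pg_cancel_backend() rather than sending a signal with kill. The PID below is a placeholder, and the statement is echoed rather than executed so the sketch is self-contained:

```shell
# Placeholder PID; in practice, find the session to cancel by querying
# pg_stat_activity on the master.
pid=12345
sql="SELECT pg_cancel_backend(${pid});"
echo "$sql"
# The statement would normally be run through psql, for example:
#   psql -d postgres -c "$sql"
```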

+

For information about gpstart and gpstop, see the + Greenplum Database Utility Guide.

Starting Greenplum Database Start an initialized Greenplum Database system by running the gpstart utility on the master instance. - - Use the gpstart utility to start a Greenplum Database - system that has already been initialized by the gpinitsystem utility, but - has been stopped by the gpstop utility. The gpstart - utility starts Greenplum Database by starting all the Postgres database - instances on the Greenplum Database cluster. gpstart - orchestrates this process and performs the process in parallel. + Use the gpstart utility to start a Greenplum Database system that + has already been initialized by the gpinitsystem utility, but has been + stopped by the gpstop utility. The gpstart utility starts + Greenplum Database by starting all the Postgres database instances on the Greenplum Database + cluster. gpstart orchestrates this process and performs the process in + parallel. Run gpstart on the master host to start Greenplum Database: @@ -51,8 +51,7 @@ then restart Greenplum Database after the shutdown completes. - To restart Greenplum Database, enter the following command on the - master host: + To restart Greenplum Database, enter the following command on the master host: $ gpstop -r @@ -62,8 +61,8 @@ Reloading Configuration File Changes Only - Reload changes to Greenplum Database configuration files without - interrupting the system. + Reload changes to Greenplum Database configuration files without interrupting the + system. The gpstop utility can reload changes to the pg_hba.conf configuration file and to runtime parameters in @@ -125,13 +124,13 @@ Stopping Greenplum Database - The gpstop utility stops or restarts your Greenplum Database system and always runs on the master host. When activated, - gpstop stops all postgres processes in the system, - including the master and all segment instances. The gpstop utility uses a - default of up to 64 parallel worker threads to bring down the Postgres instances that make - up the Greenplum Database cluster. 
The system waits for any active - transactions to finish before shutting down. To stop Greenplum Database - immediately, use fast mode. + The gpstop utility stops or restarts your Greenplum Database system + and always runs on the master host. When activated, gpstop stops all + postgres processes in the system, including the master and all segment + instances. The gpstop utility uses a default of up to 64 parallel worker + threads to bring down the Postgres instances that make up the Greenplum Database cluster. + The system waits for any active transactions to finish before shutting down. To stop + Greenplum Database immediately, use fast mode. To stop Greenplum Database: diff --git a/gpdb-doc/dita/ref_guide/feature_summary.xml b/gpdb-doc/dita/ref_guide/feature_summary.xml index 9affd05119341450a3d6492b2461a04322c9ac2f..261108f95ba4e3041323497a9fd66605daae2692 100644 --- a/gpdb-doc/dita/ref_guide/feature_summary.xml +++ b/gpdb-doc/dita/ref_guide/feature_summary.xml @@ -281,7 +281,7 @@ NEXT 10 ROWS ONLY;

Greenplum ALTER OPERATOR CLASS - NO + YES @@ -470,7 +470,7 @@ NEXT 10 ROWS ONLY;

Greenplum CREATE OPERATOR CLASS - NO + YES @@ -576,8 +576,8 @@ NEXT 10 ROWS ONLY;

Greenplum Unsupported Clauses / Options:

SCROLL

FOR UPDATE [ OF column [, ...] ]

Limitations:

Cursors cannot be - backward-scrolled. Forward scrolling is supported.

PL/pgSQL does - not have support for updatable cursors.

+ backward-scrolled. Forward scrolling is supported.

PL/pgSQL does not have + support for updatable cursors.

DELETE @@ -656,7 +656,7 @@ NEXT 10 ROWS ONLY;

Greenplum DROP OPERATOR CLASS - NO + YES diff --git a/gpdb-doc/dita/ref_guide/system_catalogs/catalog_ref.xml b/gpdb-doc/dita/ref_guide/system_catalogs/catalog_ref.xml index bdc6fdf3e358707547c8b8fe816035d716392d3f..f9213aed12c5c27572e2259708b7975b6b5b83a4 100644 --- a/gpdb-doc/dita/ref_guide/system_catalogs/catalog_ref.xml +++ b/gpdb-doc/dita/ref_guide/system_catalogs/catalog_ref.xml @@ -4,12 +4,17 @@ System Catalog Reference -

This reference describes the Greenplum Database system catalog tables and - views. System tables prefixed with gp_ relate to the parallel features of - Greenplum Database. Tables prefixed with pg_ are either - standard PostgreSQL system catalog tables supported in Greenplum Database, or - are related to features Greenplum that provides to enhance - PostgreSQL for data warehousing workloads. Note that the global system catalog for Greenplum Database resides on the master instance.

+

This reference describes the Greenplum Database system catalog tables and views. System + tables prefixed with gp_ relate to the parallel features of Greenplum + Database. Tables prefixed with pg_ are either standard PostgreSQL system + catalog tables supported in Greenplum Database, or are related to features that Greenplum + provides to enhance PostgreSQL for data warehousing workloads. Note that the global system + catalog for Greenplum Database resides on the master instance.

+ Changes to Pivotal Greenplum Database system + catalog tables or views are not supported. If a catalog table or view is changed by the customer, the Pivotal Greenplum + Database cluster is not supported, and the cluster must be reinitialized and restored by the customer.
  • diff --git a/gpdb-doc/dita/utility_guide/admin_utilities/util_ref.xml b/gpdb-doc/dita/utility_guide/admin_utilities/util_ref.xml index c2530051c7d473fb6c3255ee91665e45c0b32d24..c3c6a3987bb731f3b77b8c998b41329bbf486546 100644 --- a/gpdb-doc/dita/utility_guide/admin_utilities/util_ref.xml +++ b/gpdb-doc/dita/utility_guide/admin_utilities/util_ref.xml @@ -4,10 +4,11 @@ Management Utility Reference -

    This reference describes the command-line management utilities provided with Greenplum Database. Greenplum Database uses the standard PostgreSQL - client and server programs and provides additional management utilities for administering a - distributed Greenplum Database DBMS. Greenplum Database management - utilities reside in $GPHOME/bin.When referencing IPv6 addresses in +

    This reference describes the command-line management utilities provided with Greenplum + Database. Greenplum Database uses the standard PostgreSQL client and server programs and + provides additional management utilities for administering a distributed Greenplum Database + DBMS. Greenplum Database management utilities reside in + $GPHOME/bin. When referencing IPv6 addresses in gpfdist URLs or when using numeric IP addresses instead of hostnames in any management utility, always enclose the IP address in brackets. For command prompt use, the best practice is to escape any brackets or put them inside quotation marks. For example, @@ -137,10 +138,10 @@ Backend Server Programs -
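The bracketing rule above can be illustrated with a short sketch (the address, port, and file name are hypothetical placeholders, not values from this guide): the IPv6 address inside a gpfdist URL is enclosed in brackets, and the whole URL is quoted on the command line so the shell does not interpret the brackets:

```shell
# Brackets are required around the IPv6 address; quoting the string
# keeps the shell from treating [ and ] as pattern characters.
url="gpfdist://[2620:0:170:610::11]:8080/rank.txt"
echo "$url"
```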

    The following standard PostgreSQL server management programs are provided with Greenplum Database and reside in $GPHOME/bin. They are modified to - handle the parallelism and distribution of a Greenplum Database system. You - access these programs only through the Greenplum Database management tools and - utilities.

    +

    The following standard PostgreSQL server management programs are provided with Greenplum + Database and reside in $GPHOME/bin. They are modified to handle the + parallelism and distribution of a Greenplum Database system. You access these programs only + through the Greenplum Database management tools and utilities.

    Greenplum Database Backend Server Programs @@ -160,8 +161,8 @@ initdb This program is called by gpinitsystem when - initializing a Greenplum Database array. It is used internally to - create the individual segment instances and the master instance. + initializing a Greenplum Database array. It is used internally to create the + individual segment instances and the master instance. @@ -179,12 +180,12 @@ gpsyncmaster - This is the Greenplum program that - starts the gpsyncagent process on the standby master host. - Administrators do not call this program directly, but do so through the management - scripts that initialize and/or activate a standby master for a Greenplum Database system. This process is responsible for keeping the - standby master up to date with the primary master via a transaction log replication - process. + This is the Greenplum program that starts the + gpsyncagent process on the standby master host. Administrators do + not call this program directly, but do so through the management scripts that + initialize and/or activate a standby master for a Greenplum Database system. This + process is responsible for keeping the standby master up to date with the primary + master via a transaction log replication process. , @@ -205,8 +206,9 @@ pg_ctl This program is called by gpstart and - gpstop when starting or stopping a Greenplum Database array. It is used internally to stop and start the individual segment instances - and the master instance in parallel and with the correct options. + gpstop when starting or stopping a Greenplum Database array. It + is used internally to stop and start the individual segment instances and the master + instance in parallel and with the correct options. , @@ -215,7 +217,11 @@ pg_resetxlog - Not used in Greenplum Database + DO NOT USE

    Warning: This program might cause data loss + or cause data to become unavailable. If this program is used, the Pivotal Greenplum Database cluster is not supported. The + cluster must be reinitialized and restored by the + customer.

    N/A @@ -233,9 +239,9 @@ postmaster postmaster starts the postgres - database server listener process that accepts client connections. In Greenplum Database, a postgres database listener process - runs on the Greenplum master Instance and on each - Segment Instance. + database server listener process that accepts client connections. In Greenplum + Database, a postgres database listener process runs on the + Greenplum master Instance and on each Segment Instance. In Greenplum Database, you use and to