Unverified · Commit 89f53441 authored by David Yozie, committed by GitHub

Add notes to qualify lack of Large Object support. (#6798)

* Add notes to qualify lack of large object support.

* Replacing large object nonsupport note with more general description and link to postgresql docs
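For context, the PostgreSQL large object facility that the added notes refer to is driven through server-side functions such as the ones below. This is an illustrative sketch only (the file path and OID are hypothetical placeholders); per the notes this commit adds, these calls are exactly what Greenplum Database does not support:

```sql
-- Create an empty large object and return its OID
-- (PostgreSQL only; flagged as unsupported in Greenplum Database)
SELECT lo_create(0);

-- Import a server-side file into a new large object
-- ('/tmp/example.dat' is a hypothetical path)
SELECT lo_import('/tmp/example.dat');

-- Remove a large object by OID (12345 is a placeholder)
SELECT lo_unlink(12345);
```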
Parent 0c24af63
@@ -198,14 +198,16 @@ NEXT 10 ROWS ONLY; </codeblock><p>Greenplum
<topic id="topic8" xml:lang="en">
<title id="ik264019">Greenplum and PostgreSQL Compatibility</title>
<body>
<p>Greenplum Database is based on PostgreSQL 8.3 with additional features from newer
PostgreSQL releases. To support the distributed nature and typical workload of a Greenplum
Database system, some SQL commands have been added or modified, and there are a few
PostgreSQL features that are not supported. Greenplum has also added features not found in
PostgreSQL, such as physical data distribution, parallel query optimization, external
tables, resource queues, and enhanced table partitioning. For full SQL syntax and
references, see the <xref href="sql_commands/sql_ref.xml#topic1" type="topic" format="dita"
/>.</p>
<p>Greenplum Database is based on PostgreSQL 9.4. To support the distributed nature and
typical workload of a Greenplum Database system, some SQL commands have been added or
modified, and there are a few PostgreSQL features that are not supported. Greenplum has also
added features not found in PostgreSQL, such as physical data distribution, parallel query
optimization, external tables, resource queues, and enhanced table partitioning. For full
SQL syntax and references, see the <xref href="sql_commands/sql_ref.xml#topic1" type="topic"
format="dita"/>.<note>Greenplum Database does not support the PostgreSQL <xref
href="https://www.postgresql.org/docs/9.4/largeobjects.html" format="html"
scope="external">large object facility</xref> for streaming user data that is stored in
large-object structures.</note></p>
<table id="ik213423">
<title>SQL Support in Greenplum Database</title>
<tgroup cols="3">
......
@@ -104,6 +104,12 @@ IS '<varname>text</varname>'</codeblock>
<plentry>
<pt><varname>large_object_oid</varname></pt>
<pd>The OID of the large object. </pd>
<pd>
<note>Greenplum Database does not support the PostgreSQL <xref
href="https://www.postgresql.org/docs/9.4/largeobjects.html" format="html"
scope="external">large object facility</xref> for streaming user data that is stored
in large-object structures.</note>
</pd>
</plentry>
<plentry>
<pt>PROCEDURAL</pt>
......
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="topic1" xml:lang="en"><title id="gt143896">pg_largeobject</title><body><p>The <codeph>pg_largeobject</codeph> system catalog table holds the data
<topic id="topic1" xml:lang="en"><title id="gt143896">pg_largeobject</title><body>
<note>Greenplum Database does not support the PostgreSQL <xref
href="https://www.postgresql.org/docs/9.4/largeobjects.html" format="html"
scope="external">large object facility</xref> for streaming user data that is stored
in large-object structures.</note><p>The <codeph>pg_largeobject</codeph> system catalog table holds the data
making up 'large objects'. A large object is identified by an OID
assigned when it is created. Each large object is broken into segments
or 'pages' small enough to be conveniently stored as rows in <codeph>pg_largeobject</codeph>.
......
@@ -80,7 +80,12 @@
<pd>Include large objects in the dump. This is the default behavior except when
<codeph>--schema</codeph>, <codeph>--table</codeph>, or
<codeph>--schema-only</codeph> is specified, so the <codeph>-b</codeph> switch is
only useful to add large objects to selective dumps. </pd>
only useful to add large objects to selective dumps.<note>Greenplum Database does not
support the PostgreSQL <xref
href="https://www.postgresql.org/docs/9.4/largeobjects.html" format="html"
scope="external">large object facility</xref> for streaming user data that is
stored in large-object structures.</note>
</pd>
</plentry>
<plentry>
<pt>--binary-upgrade</pt>
......
@@ -275,11 +275,6 @@
disable triggers on user tables before inserting the data then emits commands to re-enable
them after the data has been inserted. If the restore is stopped in the middle, the system
catalogs may be left in the wrong state.</p>
<p><codeph>pg_restore</codeph> cannot restore large objects selectively,
for instance only those for a single table. If an archive contains large
objects, then all large objects will be restored, or none will be
restored if they are excluded by the <codeph>-L</codeph>,
<codeph>-t</codeph>, or another option.</p>
<p>See also the <codeph>pg_dump</codeph> documentation for details on limitations of
<codeph>pg_dump</codeph>. </p>
<p>Once restored, it is wise to run <codeph>ANALYZE</codeph> on each restored table so the
......
@@ -469,6 +469,12 @@ testdb=#</codeblock>
<pt>\dl</pt>
<pd>This is an alias for <codeph>\lo_list</codeph>, which shows a list of large
objects.</pd>
<pd>
<note>Greenplum Database does not support the PostgreSQL <xref
href="https://www.postgresql.org/docs/9.4/largeobjects.html" format="html"
scope="external">large object facility</xref> for streaming user data that is stored
in large-object structures.</note>
</pd>
</plentry>
<plentry>
<pt>\dn [<varname>schema_pattern</varname>] | \dn+
......
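The `pg_largeobject` catalog described in the diff above can be inspected directly in PostgreSQL. A minimal sketch, assuming a PostgreSQL database in which large objects exist (in Greenplum Database the table exists but the facility is unsupported, per the notes added by this commit):

```sql
-- Each large object is broken into pageno-ordered rows keyed by
-- loid (the object's OID); length(data) shows bytes per page
SELECT loid, pageno, length(data) AS bytes
FROM pg_largeobject
ORDER BY loid, pageno;
```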