Commit 799de598 authored by Chuck Litzell, committed by David Yozie

Docs postgresql 9.4 merge ga (#7208)

* REASSIGN OWNED. edits. Remove qualification that it doesn't change the database ownership

* ALTER FUNCTION. set from current clause description revision

* COMMENT. capitalize proper noun.

* COPY. describe PROGRAM option.

* CREATE AGGREGATE. Implements new syntax.

* CREATE FUNCTION. edits.

* DROP ROLE. small edit, link to other commands

* DROP USER. trivial edit.

* PREPARE. small edits

* REASSIGN OWNED. Trivial edits.

* REINDEX. trivial edit.

* pg_dump. use --quote-all-identifiers for cross-version dumps

* Additional edits

* Updates from review

* Updates from review
Parent 5c5c8742
@@ -139,7 +139,8 @@ RESET ALL</codeblock></section>
<codeph>RESET</codeph> is used, the function-local setting is removed, and the
function executes with the value present in its environment. Use <codeph>RESET
ALL</codeph> to clear all function-local settings. <codeph>SET FROM CURRENT</codeph>
saves the value of the parameter that is current when <codeph>ALTER FUNCTION</codeph> is
executed as the value to be applied when the function is entered.</pd>
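<pd>For example, the following sketch pins the function to the <codeph>search_path</codeph>
in effect when the <codeph>ALTER FUNCTION</codeph> command is run (the function name is
illustrative only):<codeblock>ALTER FUNCTION report_totals(int) SET search_path FROM CURRENT;</codeblock></pd>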
</plentry>
<plentry>
<pt>RESTRICT</pt>
......
@@ -1023,8 +1023,7 @@ where <varname>action</varname> is one of:
<codeblock>ALTER TABLE distributors ADD CONSTRAINT zipchk CHECK
(char_length(zipcode) = 5);</codeblock>
<p>To add a check constraint only to a table and not to its children:</p>
<codeblock>ALTER TABLE distributors ADD CONSTRAINT zipchk CHECK (char_length(zipcode) = 5) NO INHERIT
;</codeblock>
<codeblock>ALTER TABLE distributors ADD CONSTRAINT zipchk CHECK (char_length(zipcode) = 5) NO INHERIT;</codeblock>
<p>(The check constraint will not be inherited by future children, either.)</p>
<p>Remove a check constraint from a table and all of its children:</p>
......
@@ -177,7 +177,7 @@ COMMENT ON SERVER myserver IS 'my foreign server';
COMMENT ON TABLE my_schema.my_table IS 'Employee Information';
COMMENT ON TABLESPACE my_tablespace IS 'Tablespace for indexes';
COMMENT ON TEXT SEARCH CONFIGURATION my_config IS 'Special word filtering';
COMMENT ON TEXT SEARCH DICTIONARY swedish IS 'Snowball stemmer for Swedish language';
COMMENT ON TEXT SEARCH PARSER my_parser IS 'Splits text into words';
COMMENT ON TEXT SEARCH TEMPLATE snowball IS 'Snowball stemmer';
COMMENT ON TRIGGER my_trigger ON my_table IS 'Used for RI';
......
@@ -66,6 +66,10 @@ IGNORE EXTERNAL PARTITIONS</codeblock></p>
as <codeph>gpfdist</codeph>, which is useful for high speed data loading. </p>
<note type="warning"> Use of the <codeph>ON SEGMENT</codeph> clause is recommended for expert
users only.</note>
<p>When <codeph>PROGRAM</codeph> is specified, the server executes the given command and reads
from the standard output of the program, or writes to the standard input of the program. The
command must be specified from the viewpoint of the server, and be executable by the <codeph>gpadmin</codeph>
user.</p>
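<p>For example, a minimal sketch of loading from a compressed file with
<codeph>PROGRAM</codeph> (the table and file path are illustrative only):</p>
<codeblock>COPY mytable FROM PROGRAM 'gunzip -c /data/load/mytable.csv.gz' WITH (FORMAT csv);</codeblock>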
<p>When <codeph>STDIN</codeph> or <codeph>STDOUT</codeph> is specified, data is transmitted
via the connection between the client and the master. <codeph>STDIN</codeph> and
<codeph>STDOUT</codeph> cannot be used with the <codeph>ON SEGMENT</codeph> clause.</p>
......
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE topic
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="topic1"><title id="bm20941">CREATE AGGREGATE</title><body><p id="sql_command_desc">Defines a new aggregate function.</p><section id="section2"><title>Synopsis</title><codeblock id="sql_command_synopsis">CREATE [ORDERED] AGGREGATE <varname>name</varname> (<varname>input_data_type</varname> [ , ... ])
      ( SFUNC = <varname>sfunc</varname>,
        STYPE = <varname>state_data_type</varname>
        [, COMBINEFUNC = <varname>combinefunc</varname>]
        [, FINALFUNC = <varname>ffunc</varname>]
        [, INITCOND = <varname>initial_condition</varname>]
        [, SORTOP = <varname>sort_operator</varname>] )</codeblock></section><section id="section3"><title>Description</title><p><codeph>CREATE AGGREGATE</codeph> defines a new
<topic id="topic1"><title id="bm20941">CREATE AGGREGATE</title><body><p id="sql_command_desc">Defines a new aggregate function.</p><section id="section2"
><title>Synopsis</title><varname>argname</varname><codeblock id="sql_command_synopsis">CREATE AGGREGATE <varname>name</varname> ( [ <varname>argmode</varname> ] [ ] <varname>arg_data_type</varname> [ , ... ] ) (
SFUNC = <varname>sfunc</varname>,
STYPE = <varname>state_data_type</varname>
[ , SSPACE = <varname>state_data_size</varname> ]
[ , FINALFUNC = <varname>ffunc</varname> ]
[ , FINALFUNC_EXTRA ]
[ , COMBINEFUNC = <varname>combinefunc</varname> ]
[ , SERIALFUNC = <varname>serialfunc</varname> ]
[ , DESERIALFUNC = <varname>deserialfunc</varname> ]
[ , INITCOND = <varname>initial_condition</varname> ]
[ , MSFUNC = <varname>msfunc</varname> ]
[ , MINVFUNC = <varname>minvfunc</varname> ]
[ , MSTYPE = <varname>mstate_data_type</varname> ]
[ , MSSPACE = <varname>mstate_data_size</varname> ]
[ , MFINALFUNC = <varname>mffunc</varname> ]
[ , MFINALFUNC_EXTRA ]
[ , MINITCOND = <varname>minitial_condition</varname> ]
[ , SORTOP = <varname>sort_operator</varname> ]
)
CREATE AGGREGATE <varname>name</varname> ( [ [ <varname>argmode</varname> ] [ <varname>argname</varname> ] <varname>arg_data_type</varname> [ , ... ] ]
ORDER BY [ <varname>argmode</varname> ] [ <varname>argname</varname> ] <varname>arg_data_type</varname> [ , ... ] ) (
SFUNC = <varname>sfunc</varname>,
STYPE = <varname>state_data_type</varname>
[ , SSPACE = <varname>state_data_size</varname> ]
[ , FINALFUNC = <varname>ffunc</varname> ]
[ , FINALFUNC_EXTRA ]
[ , COMBINEFUNC = <varname>combinefunc</varname> ]
[ , SERIALFUNC = <varname>serialfunc</varname> ]
[ , DESERIALFUNC = <varname>deserialfunc</varname> ]
[ , INITCOND = <varname>initial_condition</varname> ]
[ , HYPOTHETICAL ]
)
or the old syntax
CREATE AGGREGATE <varname>name</varname> (
BASETYPE = <varname>base_type</varname>,
SFUNC = <varname>sfunc</varname>,
STYPE = <varname>state_data_type</varname>
[ , SSPACE = <varname>state_data_size</varname> ]
[ , FINALFUNC = <varname>ffunc</varname> ]
[ , FINALFUNC_EXTRA ]
[ , COMBINEFUNC = <varname>combinefunc</varname> ]
[ , SERIALFUNC = <varname>serialfunc</varname> ]
[ , DESERIALFUNC = <varname>deserialfunc</varname> ]
[ , INITCOND = <varname>initial_condition</varname> ]
[ , MSFUNC = <varname>msfunc</varname> ]
[ , MINVFUNC = <varname>minvfunc</varname> ]
[ , MSTYPE = <varname>mstate_data_type</varname> ]
[ , MSSPACE = <varname>mstate_data_size</varname> ]
[ , MFINALFUNC = <varname>mffunc</varname> ]
[ , MFINALFUNC_EXTRA ]
[ , MINITCOND = <varname>minitial_condition</varname> ]
[ , SORTOP = <varname>sort_operator</varname> ]
)</codeblock></section><section id="section3"><title>Description</title><p><codeph>CREATE AGGREGATE</codeph> defines a new
aggregate function. Some basic and commonly-used aggregate functions such as
<codeph>count</codeph>, <codeph>min</codeph>, <codeph>max</codeph>, <codeph>sum</codeph>,
<codeph>avg</codeph> and so on are already provided in Greenplum Database. If one defines
new types or needs an aggregate function not already provided, then <codeph>CREATE
AGGREGATE</codeph> can be used to provide the desired features.</p>
<p>If a schema name is given (for example, <codeph>CREATE AGGREGATE myschema.myagg
...</codeph>) then the aggregate function is created in the specified schema. Otherwise it
is created in the current schema. </p>
<p>An aggregate function is identified by its name and input data types. Two aggregate
functions in the same schema can have the same name if they operate on different input
types. The name and input data types of an aggregate function must also be distinct from the
name and input data types of every ordinary function in the same schema. This behavior is
identical to overloading of ordinary function names. See <codeph><xref
href="CREATE_FUNCTION.xml#topic1"/></codeph>.</p>
<p>A simple aggregate function is made from one, two, or three ordinary functions (which must
be <codeph>IMMUTABLE</codeph> functions): </p>
<ul id="ul_d5c_5yl_dhb">
<li>a state transition function <varname>sfunc</varname></li>
<li>an optional final calculation function <varname>ffunc</varname></li>
<li>an optional combine function <varname>combinefunc</varname></li>
</ul>
<p>These functions are used as
follows:</p><codeblock><varname>sfunc</varname>( internal-state, next-data-values ) ---&gt; next-internal-state
<varname>ffunc</varname>( internal-state ) ---&gt; aggregate-value
<varname>combinefunc</varname>( internal-state, internal-state ) ---&gt; next-internal-state</codeblock>
<p>Greenplum Database creates a temporary variable of data type <varname>stype</varname> to
hold the current internal state of the aggregate function. At each input row, the aggregate
argument values are calculated and the state transition function is invoked with the current
state value and the new argument values to calculate a new internal state value. After all
the rows have been processed, the final function is invoked once to calculate the aggregate
return value. If there is no final function then the ending state value is returned
as-is.</p><p>You can specify <codeph><varname>combinefunc</varname></codeph> as a method for optimizing
aggregate execution. By specifying <codeph><varname>combinefunc</varname></codeph>, the
aggregate can be executed in parallel on segments first and then on the master. When a
two-level execution is performed, <codeph><varname>sfunc</varname></codeph> is executed on
the segments to generate partial aggregate results, and
<codeph><varname>combinefunc</varname></codeph> is executed on the master to aggregate
the partial results from segments. If single-level aggregation is performed, all the rows
are sent to the master and <codeph><varname>sfunc</varname></codeph> is applied to the
rows.</p><p>Single-level aggregation and two-level aggregation are equivalent execution strategies. Either
type of aggregation can be implemented in a query plan. When you implement the functions
<codeph>combinefunc</codeph> and <codeph>sfunc</codeph>, you must ensure that the
invocation of <codeph>sfunc</codeph> on the segment instances followed by
<codeph>combinefunc</codeph> on the master produces the same result as single-level
aggregation that sends all the rows to the master and then applies only the
<codeph>sfunc</codeph> to the rows.</p><p>An aggregate function can
provide an optional initial condition, an initial value for the internal state value. This
is specified and stored in the database as a value of type text, but it must be a valid
external representation of a constant of the state value data type. If it is not supplied
@@ -62,24 +119,60 @@ ffunc( internal-state ) ---&gt; aggregate-value</codeblock><p>You
values. This is useful for implementing aggregates like <codeph>max</codeph>. Note that this
behavior is only available when <varname>state_data_type</varname> is the same as the first
<varname>input_data_type</varname>. When these types are different, you must supply a non-null initial
condition or use a nonstrict transition function.</p><p>If the state transition function is not declared <codeph>STRICT</codeph>, then it will be called
unconditionally at each input row, and must deal with <codeph>NULL</codeph> inputs and
<codeph>NULL</codeph> transition values for itself. This allows the aggregate author to
have full control over the aggregate's handling of <codeph>NULL</codeph> values.</p><p>If the final function is declared
<codeph>STRICT</codeph>, then it will not be called when the ending state value is
<codeph>NULL</codeph>; instead a <codeph>NULL</codeph> result will be returned
automatically. (This is the normal behavior of <codeph>STRICT</codeph> functions.) In any
case the final function has the option of returning a <codeph>NULL</codeph> value. For
example, the final function for <codeph>avg</codeph> returns <codeph>NULL</codeph> when it
sees there were zero input rows.</p>
<p>
Sometimes it is useful to declare the final function as taking not just
the state value, but extra parameters corresponding to the aggregate's
input values. The main reason for doing this is if the final function
is polymorphic and the state value's data type would be inadequate to
pin down the result type. These extra parameters are always passed as
<codeph>NULL</codeph> (and so the final function must not be strict when
the <codeph>FINALFUNC_EXTRA</codeph> option is used), but nonetheless they
are valid parameters. The final function could for example make use
of <codeph>get_fn_expr_argtype</codeph> to identify the actual argument type
in the current call.
</p>
<p> An aggregate can optionally support <i>moving-aggregate mode</i>, as described in <xref
href="https://www.postgresql.org/docs/9.4/xaggr.html#XAGGR-MOVING-AGGREGATES"
scope="external" format="html">Moving-Aggregate Mode</xref> in the PostgreSQL
documentation. This requires specifying the <codeph>MSFUNC</codeph>,
<codeph>MINVFUNC</codeph>, and <codeph>MSTYPE</codeph> parameters, and optionally the
<codeph>MSSPACE</codeph>, <codeph>MFINALFUNC</codeph>,
<codeph>MFINALFUNC_EXTRA</codeph>, and <codeph>MINITCOND</codeph> parameters. Except for
<codeph>MINVFUNC</codeph>, these parameters work like the corresponding simple-aggregate
parameters without <codeph>M</codeph>; they define a separate implementation of the
aggregate that includes an inverse transition function. </p>
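<p>A minimal sketch of an aggregate supporting moving-aggregate mode, adapted from the
PostgreSQL documentation (the aggregate name is illustrative; <codeph>int4pl</codeph> and
<codeph>int4mi</codeph> are the built-in <codeph>int4</codeph> addition and subtraction
functions):</p>
<codeblock>CREATE AGGREGATE my_sum (int4)
(
    SFUNC = int4pl,
    STYPE = int4,
    MSFUNC = int4pl,
    MINVFUNC = int4mi,
    MSTYPE = int4
);</codeblock>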
<p>The syntax with <codeph>ORDER BY</codeph> in the parameter list creates a special type of
aggregate called an <i>ordered-set aggregate</i>; or if <codeph>HYPOTHETICAL</codeph> is
specified, then a <i>hypothetical-set aggregate</i> is created. These aggregates operate
over groups of sorted values in order-dependent ways, so that specification of an input sort
order is an essential part of a call. Also, they can have <i>direct</i> arguments, which are
arguments that are evaluated only once per aggregation rather than once per input row.
Hypothetical-set aggregates are a subclass of ordered-set aggregates in which some of the
direct arguments are required to match, in number and data types, the aggregated argument
columns. This allows the values of those direct arguments to be added to the collection of
aggregate-input rows as an additional "hypothetical" row. </p>
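<p>For example, the built-in ordered-set aggregate <codeph>percentile_disc</codeph> takes one
direct argument and one aggregated argument supplied through <codeph>WITHIN GROUP</codeph>
(the table and column names are illustrative only):</p>
<codeblock>SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households;</codeblock>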
<p>Single argument aggregate functions, such as min or
max, can sometimes be optimized by looking into an index instead of scanning every input
row. If this aggregate can be so optimized, indicate it by specifying a sort operator. The
basic requirement is that the aggregate must yield the first element in the sort ordering
induced by the operator; in other
words:</p><codeblock>SELECT <varname>agg</varname>(<varname>col</varname>) FROM <varname>tab</varname>; </codeblock>
<p>must be equivalent to:</p>
<codeblock>SELECT <varname>col</varname> FROM <varname>tab</varname> ORDER BY <varname>col</varname> USING <varname>sortop</varname> LIMIT 1;</codeblock><p>Further
assumptions are that the aggregate function ignores <codeph>NULL</codeph> inputs, and that
it delivers a <codeph>NULL</codeph> result if and only if there were no non-null inputs.
Ordinarily, a data type's <codeph>&lt;</codeph> operator is the proper sort operator for
@@ -90,39 +183,175 @@ ffunc( internal-state ) ---&gt; aggregate-value</codeblock><p>You
<p> To be able to create an aggregate function, you must have <codeph>USAGE</codeph> privilege
on the argument types, the state type, and the return type, as well as
<codeph>EXECUTE</codeph> privilege on the transition and final functions. </p>
</section><section id="section5"><title>Parameters</title><parml><plentry><pt><varname>name</varname></pt><pd>The name (optionally schema-qualified) of the aggregate function
to create.</pd></plentry><plentry><pt><varname>input_data_type</varname></pt><pd>An input data type on which this aggregate function operates. To
create a zero-argument aggregate function, write * in place of the list
of input data types. An example of such an aggregate is count(*).</pd></plentry><plentry><pt><varname>sfunc</varname></pt><pd>The name of the state transition function to be called for each input
row. For an N-argument aggregate function, the <varname>sfunc</varname> must take
N+1 arguments, the first being of type <varname>state_data_type</varname> and the
rest matching the declared input data types of the aggregate. The function
must return a value of type <varname>state_data_type</varname>. This function takes
the current state value and the current input data values, and returns
the next state value.</pd></plentry><plentry><pt><varname>state_data_type</varname></pt><pd>The data type for the aggregate state value.</pd></plentry>
<plentry><pt><varname>combinefunc</varname></pt><pd>The name of a combine function. This is a function
of two arguments, both of type <varname>state_data_type</varname>. It must return
a value of <varname>state_data_type</varname>. A combine function takes two transition
state values and returns a new transition state value representing the
combined aggregation. In Greenplum Database, if the result of the aggregate
function is computed in a segmented fashion, the combine
function is invoked on the individual internal states in order to combine
them into an ending internal state.</pd><pd>Note that this function is also called in hash aggregate mode within
a segment. Therefore, if you call this aggregate function without a combine
function, hash aggregate is never chosen. Since hash aggregate is efficient,
consider defining combine function whenever possible.</pd></plentry>
<plentry><pt><varname>ffunc</varname></pt><pd>The name of the final function called to compute the aggregate result after all input rows have
been traversed. The function must take a single argument of type
</section><section id="section5"><title>Parameters</title>
<parml>
<plentry>
<pt><varname>name</varname></pt>
<pd>The name (optionally schema-qualified) of the aggregate function to create.</pd>
</plentry>
<plentry>
<pt><varname>argmode</varname></pt>
<pd>The mode of an argument: <codeph>IN</codeph> or <codeph>VARIADIC</codeph>. (Aggregate
functions do not support <codeph>OUT</codeph> arguments.) If omitted, the default is
<codeph>IN</codeph>. Only the last argument can be marked
<codeph>VARIADIC</codeph>.</pd>
</plentry>
<plentry>
<pt><varname>argname</varname></pt>
<pd>The name of an argument. This is currently only useful for documentation purposes. If
omitted, the argument has no name.</pd>
</plentry>
<plentry>
<pt><varname>arg_data_type</varname></pt>
<pd>An input data type on which this aggregate function operates. To create a
zero-argument aggregate function, write <codeph>*</codeph> in place of the list of
argument specifications. (An example of such an aggregate is <codeph>count(*)</codeph>.)
</pd>
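<pd>A minimal sketch of such an aggregate, assuming the built-in <codeph>int8inc</codeph>
transition function (the aggregate name is illustrative
only):<codeblock>CREATE AGGREGATE mycount (*)
(
    SFUNC = int8inc,
    STYPE = int8,
    INITCOND = '0'
);</codeblock></pd>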
</plentry>
<plentry>
<pt><varname>base_type</varname></pt>
<pd>In the old syntax for <codeph>CREATE AGGREGATE</codeph>, the input data type is
specified by a <codeph>basetype</codeph> parameter rather than being written next to the
aggregate name. Note that this syntax allows only one input parameter. To define a
zero-argument aggregate function with this syntax, specify the <codeph>basetype</codeph>
as <codeph>"ANY"</codeph> (not <codeph>*</codeph>). Ordered-set aggregates cannot be
defined with the old syntax. </pd>
</plentry>
<plentry>
<pt><varname>sfunc</varname></pt>
<pd>The name of the state transition function to be called for each input row. For a
normal N-argument aggregate function, the <varname>sfunc</varname> must take N+1
arguments, the first being of type <varname>state_data_type</varname> and the rest
matching the declared input data types of the aggregate. The function must return a
value of type <varname>state_data_type</varname>. This function takes the current state
value and the current input data values, and returns the next state value.</pd>
<pd>For ordered-set (including hypothetical-set) aggregates, the state transition function
receives only the current state value and the aggregated arguments, not the direct
arguments. Otherwise it is the same. </pd>
</plentry>
<plentry>
<pt><varname>state_data_type</varname></pt>
<pd>The data type for the aggregate state value.</pd>
</plentry>
<plentry>
<pt><varname>state_data_size</varname></pt>
<pd>The approximate average size (in bytes) of the aggregate's state value. If this
parameter is omitted or is zero, a default estimate is used based on the
<varname>state_data_type</varname>. The planner uses this value to estimate the memory
required for a grouped aggregate query. Large values of this parameter discourage use of
hash aggregation.</pd>
</plentry>
<plentry>
<pt><varname>ffunc</varname></pt>
<pd>The name of the final function called to compute the aggregate result after all input
rows have been traversed. The function must take a single argument of type
<codeph>state_data_type</codeph>. The return data type of the aggregate is defined as
the return type of this function. If <codeph><varname>ffunc</varname></codeph> is not specified, then the
ending state value is used as the aggregate result, and the return type is
<codeph>state_data_type</codeph>. </pd>
<pd>For ordered-set (including hypothetical-set) aggregates, the final function receives
not only the final state value, but also the values of all the direct arguments.</pd>
<pd>If <codeph>FINALFUNC_EXTRA</codeph> is specified, then in addition to the final state
value and any direct arguments, the final function receives extra NULL values
corresponding to the aggregate's regular (aggregated) arguments. This is mainly useful
to allow correct resolution of the aggregate result type when a polymorphic aggregate is
being defined. </pd>
</plentry>
<plentry>
<pt><varname>serialfunc</varname></pt>
<pd> An aggregate function whose <varname>state_data_type</varname> is
<codeph>internal</codeph> can participate in parallel aggregation only if it has a
<varname>serialfunc</varname> function, which must serialize the aggregate state into
a <codeph>bytea</codeph> value for transmission to another process. This function must
take a single argument of type <codeph>internal</codeph> and return type
<codeph>bytea</codeph>. A corresponding <varname>deserialfunc</varname> is also
required. </pd>
</plentry>
<plentry>
<pt><varname>deserialfunc</varname></pt>
<pd> Deserialize a previously serialized aggregate state back into
<varname>state_data_type</varname>. This function must take two arguments of types
<codeph>bytea</codeph> and <codeph>internal</codeph>, and produce a result of type
<codeph>internal</codeph>. (Note: the second, <codeph>internal</codeph> argument is
unused, but is required for type safety reasons.) </pd>
</plentry>
<plentry>
<pt><varname>initial_condition</varname></pt>
<pd> The initial setting for the state value. This must be a string constant in the form
accepted for the data type <varname>state_data_type</varname>. If not specified, the
state value starts out null. </pd>
</plentry>
<plentry>
<pt><varname>msfunc</varname></pt>
<pd> The name of the forward state transition function to be called for each input row in
moving-aggregate mode. This is exactly like the regular transition function, except that
its first argument and result are of type <varname>mstate_data_type</varname>, which
might be different from <varname>state_data_type</varname>. </pd>
</plentry>
<plentry>
<pt><varname>minvfunc</varname></pt>
<pd> The name of the inverse state transition function to be used in moving-aggregate
mode. This function has the same argument and result types as <varname>msfunc</varname>,
but it is used to remove a value from the current aggregate state, rather than add a
value to it. The inverse transition function must have the same strictness attribute as
the forward state transition function. </pd>
</plentry>
<plentry>
<pt><varname>mstate_data_type</varname></pt>
<pd> The data type for the aggregate's state value, when using moving-aggregate mode.
</pd>
</plentry>
<plentry>
<pt><varname>mstate_data_size</varname></pt>
<pd> The approximate average size (in bytes) of the aggregate's state value, when using
moving-aggregate mode. This works the same as <varname>state_data_size</varname>. </pd>
</plentry>
<plentry>
<pt><varname>mffunc</varname></pt>
<pd> The name of the final function called to compute the aggregate's result after all
input rows have been traversed, when using moving-aggregate mode. This works the same as
<varname>ffunc</varname>, except that its first argument's type is
<varname>mstate_data_type</varname> and extra dummy arguments are specified by writing
<codeph>MFINALFUNC_EXTRA</codeph>. The aggregate result type determined by
<varname>mffunc</varname> or <varname>mstate_data_type</varname> must match that
determined by the aggregate's regular implementation. </pd>
</plentry>
<plentry>
<pt><varname>minitial_condition</varname></pt>
<pd> The initial setting for the state value, when using moving-aggregate mode. This works
the same as <varname>initial_condition</varname>. </pd>
</plentry>
<plentry>
<pt><varname>sort_operator</varname></pt>
<pd> The associated sort operator for a <codeph>MIN</codeph>- or <codeph>MAX</codeph>-like
aggregate. This is just an operator name (possibly schema-qualified). The operator is
assumed to have the same input data types as the aggregate (which must be a
single-argument normal aggregate). </pd>
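<pd>For example, a sketch of a MAX-like aggregate that assumes the built-in
<codeph>int4larger</codeph> function (the aggregate name is illustrative
only):<codeblock>CREATE AGGREGATE my_max (int4)
(
    SFUNC = int4larger,
    STYPE = int4,
    SORTOP = &gt;
);</codeblock></pd>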
</plentry>
<plentry>
<pt><varname>HYPOTHETICAL</varname></pt>
<pd> For ordered-set aggregates only, this flag specifies that the aggregate arguments are
to be processed according to the requirements for hypothetical-set aggregates: that is,
the last few direct arguments must match the data types of the aggregated
(<codeph>WITHIN GROUP</codeph>) arguments. The <codeph>HYPOTHETICAL</codeph> flag has
no effect on run-time behavior, only on parse-time resolution of the data types and
collations of the aggregate's arguments. </pd>
</plentry>
<plentry>
<pt><varname>combinefunc</varname></pt>
<pd>The name of a combine function. This is a function of two arguments, both of type
<varname>state_data_type</varname>. It must return a value of
<varname>state_data_type</varname>. A combine function takes two transition state
values and returns a new transition state value representing the combined aggregation.
In Greenplum Database, if the result of the aggregate function is computed in a
segmented fashion, the combine function is invoked on the individual internal states in
order to combine them into an ending internal state.</pd>
<pd>Note that this function is also called in hash aggregate mode within a segment.
Therefore, if the aggregate function is defined without a combine function, hash
aggregate is never chosen. Since hash aggregate is efficient, consider defining a combine
function whenever possible.</pd>
</plentry>
</parml></section>
<section id="section6"><title>Notes</title>
<p>The ordinary functions used to define a new aggregate function must
@@ -153,16 +382,16 @@ ordered aggregate, using the syntax:
<codeph>COMBINEFUNC</codeph>.</p>
</section><section id="section7"><title>Example</title><p>The following simple example creates an aggregate function that computes
the sum of two columns. </p><p>Before creating the aggregate function, create two functions that are used as the
<codeph><varname>sfunc</varname></codeph> and
<codeph><varname>combinefunc</varname></codeph> functions of the aggregate function. </p><p>This function is specified as the <codeph><varname>sfunc</varname></codeph> function in the
aggregate function.</p><codeblock>CREATE FUNCTION mysfunc_accum(numeric, numeric, numeric)
  RETURNS numeric
   AS 'select $1 + $2 + $3'
   LANGUAGE SQL
   IMMUTABLE
   RETURNS NULL ON NULL INPUT;</codeblock><p>This function is specified as the <codeph><varname>combinefunc</varname></codeph> function in the
aggregate function.</p><codeblock>CREATE FUNCTION mycombine_accum(numeric, numeric )
  RETURNS numeric
   AS 'select $1 + $2'
   LANGUAGE SQL
@@ -187,8 +416,8 @@ Aggregate (cost=1.10..1.11 rows=1 width=32)
     width=32)
    -&gt; Aggregate (cost=1.04..1.05 rows=1 width=32)
      -&gt; Seq Scan on t1 (cost=0.00..1.03 rows=2 width=8)
(4 rows)</codeblock></section><section id="section8"><title>Compatibility</title><p><codeph>CREATE AGGREGATE</codeph> is a Greenplum Database language extension.
The SQL standard does not provide for user-defined aggregate functions.</p></section><section id="section9"><title>See Also</title><p><codeph><xref href="ALTER_AGGREGATE.xml#topic1" type="topic" format="dita"/></codeph>,
(4 rows)</codeblock></section><section id="section8"><title>Compatibility</title><p><codeph>CREATE AGGREGATE</codeph> is a Greenplum Database language extension. The SQL standard
does not provide for user-defined aggregate functions.</p></section><section id="section9"><title>See Also</title><p><codeph><xref href="ALTER_AGGREGATE.xml#topic1" type="topic" format="dita"/></codeph>,
<codeph><xref href="./DROP_AGGREGATE.xml#topic1" type="topic" format="dita"/></codeph>,
<codeph><xref href="./CREATE_FUNCTION.xml#topic1" type="topic" format="dita"
/></codeph></p></section></body></topic>
@@ -290,8 +290,9 @@ SELECT foo();</codeblock><p>In
<pd>The <codeph>SET</codeph> clause applies a value to a session configuration
parameter when the function is entered. The configuration parameter is
restored to its prior value when the function exits. <codeph>SET FROM
CURRENT</codeph> saves the value of the parameter that is current when
<codeph>CREATE FUNCTION</codeph> is executed as the value to be applied
when the function is entered. </pd>
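<pd>For example, a sketch that captures the <codeph>search_path</codeph> in effect when the
function is defined (the function and table names are illustrative
only):<codeblock>CREATE FUNCTION report_count() RETURNS bigint
    AS 'SELECT count(*) FROM reports'
    LANGUAGE SQL
    SET search_path FROM CURRENT;</codeblock></pd>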
</plentry>
<plentry>
<pt><varname>definition</varname></pt>
@@ -308,7 +309,11 @@ SELECT foo();</codeblock><p>In
dynamically loadable object, and <varname>link_symbol</varname> is the name
of the function in the C language source code. If the link symbol is
omitted, it is assumed to be the same as the name of the SQL function being
defined. It is recommended to locate shared libraries either relative to
defined. The C names
of all functions must be different, so you must give overloaded SQL
functions different C names (for example, use the argument types as
part of the C names).
It is recommended to locate shared libraries either relative to
<codeph>$libdir</codeph> (which is located at
<codeph>$GPHOME/lib</codeph>) or through the dynamic library path (set
by the <codeph>dynamic_library_path</codeph> server configuration
......
@@ -3,12 +3,12 @@
PUBLIC "-//OASIS//DTD DITA Composite//EN" "ditabase.dtd">
<topic id="topic1"><title id="dd20941">DROP ROLE</title><body><p id="sql_command_desc">Removes a database role.</p><section id="section2"><title>Synopsis</title><codeblock id="sql_command_synopsis">DROP ROLE [IF EXISTS] <varname>name</varname> [, ...]</codeblock></section><section id="section3"><title>Description</title><p><codeph>DROP ROLE</codeph> removes the specified role(s). To drop a
superuser role, you must be a superuser yourself. To drop non-superuser
roles, you must have <codeph>CREATEROLE</codeph> privilege.</p><p>A role cannot be removed if it is still referenced in any database; an error will be raised if
so. Before dropping the role, you must drop all the objects it owns (or reassign their
ownership) and revoke any privileges the role has been granted on other objects. The
<codeph><xref href="REASSIGN_OWNED.xml#topic1">REASSIGN OWNED</xref></codeph> and
<codeph><xref href="DROP_OWNED.xml#topic1">DROP OWNED</xref></codeph> commands can be
useful for this purpose. </p><p>However, it is not necessary to remove role memberships involving
the role; <codeph>DROP ROLE</codeph> automatically revokes any memberships
of the target role in other roles, and of other roles in the target role.
The other roles are not dropped nor otherwise affected.</p></section><section id="section4"><title>Parameters</title><parml><plentry><pt>IF EXISTS</pt><pd>Do not throw an error if the role does not exist. A notice is issued
......
@@ -10,14 +10,13 @@
issued, the prepared statement is planned and executed. This division
of labor avoids repetitive parse analysis work, while allowing
the execution plan to depend on the specific parameter values supplied.</p>
<p>Prepared statements can take parameters: values that are substituted into the statement when it
is executed. When creating the prepared statement, refer to parameters by position, using
<codeph>$1</codeph>, <codeph>$2</codeph>, etc. A corresponding list of parameter data
types can optionally be specified. When a parameter's data type is not specified or is
declared as unknown, the type is inferred from the context in which the parameter is first
used (if possible). When executing the statement, specify the actual values for these
parameters in the <codeph>EXECUTE</codeph> statement.</p><p>Prepared statements only last for the duration of the current database session. When the session
ends, the prepared statement is forgotten, so it must be recreated before being used again.
This also means that a single prepared statement cannot be used by multiple simultaneous
database clients; however, each client can create their own prepared statement to use.
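<p>For example, a hypothetical prepared <codeph>INSERT</codeph> and its execution (the table
<codeph>foo</codeph> is illustrative only):</p>
<codeblock>PREPARE fooplan (int, text, bool, numeric) AS
    INSERT INTO foo VALUES ($1, $2, $3, $4);
EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);</codeblock>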
@@ -31,11 +30,10 @@ the statement is relatively simple to plan and rewrite but relatively
expensive to execute, the performance advantage of prepared statements
will be less noticeable.</p></section><section id="section4"><title>Parameters</title><parml><plentry><pt><varname>name</varname></pt><pd>An arbitrary name given to this particular prepared statement. It
must be unique within a single session and is subsequently used to execute
or deallocate a previously prepared statement.</pd></plentry><plentry><pt><varname>datatype</varname></pt><pd>The data type of a parameter to the prepared statement. If the data type of a particular
parameter is unspecified or is specified as unknown, it will be inferred from the
context in which the parameter is first used. To refer to the parameters in the prepared
statement itself, use <codeph>$1</codeph>, <codeph>$2</codeph>, etc. </pd></plentry><plentry><pt><varname>statement</varname></pt><pd>Any <codeph>SELECT</codeph>, <codeph>INSERT</codeph>, <codeph>UPDATE</codeph>,
<codeph>DELETE</codeph>, or <codeph>VALUES</codeph> statement.</pd></plentry></parml></section><section id="section5"><title>Notes</title>
<p>If a prepared statement is executed enough times, the server may eventually decide to save
and re-use a generic plan rather than re-planning each time. This will occur immediately if
......
@@ -12,6 +12,7 @@
<section id="section3">
<title>Description</title>
<p><codeph>REASSIGN OWNED</codeph> changes the ownership of database objects owned by any of
the <varname>old_role</varname>s to <varname>new_role</varname>. </p>
</section>
<section id="section4">
@@ -38,10 +39,10 @@
<p><codeph>REASSIGN OWNED</codeph> requires privileges on both the
source role(s) and the target role.</p>
<p>The <xref href="DROP_OWNED.xml#topic1"><codeph>DROP OWNED</codeph></xref> command is an
alternative that simply drops all of the database objects owned by one or more roles.
<codeph>DROP OWNED</codeph> requires privileges only on the source role(s).</p>
<p>The <codeph>REASSIGN OWNED</codeph> command does not affect any privileges granted to the
old roles for objects that are not owned by them. Use <codeph>DROP OWNED</codeph> to revoke
those privileges.</p>
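<p>For example, a typical sequence before dropping roles (the role names are illustrative
only):</p>
<codeblock>REASSIGN OWNED BY sales, marketing TO admin;
DROP OWNED BY sales, marketing;</codeblock>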
</section>
<section id="section6">
@@ -52,7 +53,7 @@
</section>
<section id="section7">
<title>Compatibility</title>
<p>The <codeph>REASSIGN OWNED</codeph> command is a Greenplum Database extension. </p>
</section>
<section id="section8">
<title>See Also</title>
......
@@ -55,7 +55,7 @@
<codeph>REINDEX</codeph> locks out writes but not reads of the index's parent table. It
also takes an exclusive lock on the specific index being processed, which will block reads
that attempt to use that index. In contrast, <codeph>DROP INDEX</codeph> momentarily takes
an exclusive lock on the parent table, blocking both writes and reads. The subsequent
<codeph>CREATE INDEX</codeph> locks out writes but not reads; since the index is not
there, no read will attempt to use it, meaning that there will be no blocking but reads may
be forced into expensive sequential scans. </p>
......
@@ -83,8 +83,12 @@
<pt>-b | --blobs</pt>
<pd>Include large objects in the dump. This is the default behavior except when
<codeph>--schema</codeph>, <codeph>--table</codeph>, or
<codeph>--schema-only</codeph> is specified. The <codeph>-b</codeph> switch is
only useful to add large objects to dumps
where a specific schema or table has been requested. Note that
blobs are considered data and therefore will be included when
<codeph>--data-only</codeph> is used, but not when <codeph>--schema-only</codeph> is.
<note>Greenplum Database does not
support the PostgreSQL <xref
href="https://www.postgresql.org/docs/9.4/largeobjects.html" format="html"
scope="external">large object facility</xref> for streaming user data that is
@@ -153,7 +157,7 @@
href="./pg_restore.xml#topic1" type="topic" format="dita"/></codeph>. The tar
format is compatible with the directory format; extracting a tar-format archive
produces a valid directory-format archive. However, the tar format does not support
compression. Also, when using the tar format, the
relative order of table data items cannot be changed during restore.</pd>
</plentry>
<plentry>
@@ -375,6 +379,12 @@
<pd> To exclude data for all tables in the database, see <codeph>--schema-only</codeph>.
</pd>
</plentry>
<plentry>
<pt><codeph>--if-exists</codeph></pt>
<pd> Use conditional commands (i.e. add an <codeph>IF EXISTS</codeph> clause) when
cleaning database objects. This option is not valid unless <codeph>--clean</codeph> is
also specified. </pd>
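<pd>For example (the database name is illustrative only):
<codeblock>pg_dump --clean --if-exists mydb &gt; mydb.sql</codeblock></pd>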
</plentry>
<plentry>
<pt>--inserts</pt>
<pd>Dump data as <codeph>INSERT</codeph> commands (rather than <codeph>COPY</codeph>).
@@ -563,17 +573,26 @@
<codeph>pg_dump</codeph> emits commands to disable triggers on user tables before
inserting the data and commands to re-enable them after the data has been inserted. If the
restore is stopped in the middle, the system catalogs may be left in the wrong state.</p>
<p>Members of <codeph>tar</codeph> archives are limited to a size less than 8 GB. (This is an
inherent limitation of the <codeph>tar</codeph> file format.) Therefore this format cannot
be used if the textual representation of any one table exceeds that size. The total size of
a tar archive and any of the other output formats is not limited, except possibly by the
operating system.</p>
<p>The dump file produced by <codeph>pg_dump</codeph> does not contain the statistics used by
the optimizer to make query planning decisions. Therefore, it is wise to run
<codeph>ANALYZE</codeph> after restoring from a dump file to ensure optimal
performance.</p>
<p>The database activity of <codeph>pg_dump</codeph> is normally collected by the statistics
collector. If this is undesirable, you can set parameter <codeph>track_counts</codeph> to
false via <codeph>PGOPTIONS</codeph> or the <codeph>ALTER USER</codeph> command.</p>
<p>Because <codeph>pg_dump</codeph> may be used to transfer data to newer versions of
Greenplum Database, the output of <codeph>pg_dump</codeph> can be expected to load into
Greenplum Database versions newer than <codeph>pg_dump</codeph>'s version.
<codeph>pg_dump</codeph> can also dump from Greenplum Database versions older than its own
version. However, <codeph>pg_dump</codeph> cannot dump from Greenplum Database versions
newer than its own major version; it will refuse to even try, rather than risk making an
invalid dump. Also, it is not guaranteed that <codeph>pg_dump</codeph>'s output can be
loaded into a server of an older major version — not even if the dump was taken from a
server of that version. Loading a dump file into an older server may require manual editing
of the dump file to remove syntax not understood by the older server. Use of the
<codeph>--quote-all-identifiers</codeph> option is recommended in cross-version cases, as
it can prevent problems arising from varying reserved-word lists in different Greenplum
Database versions.</p>
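<p>For example, a hypothetical cross-version dump (the database and file names are
illustrative only):</p>
<codeblock>pg_dump --quote-all-identifiers --format=custom --file=mydb.dump mydb</codeblock>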
</section>
<section id="section8">
<title>Examples</title>
......